Java tests: Massively generate mock objects by running your code

Hello all :) In the JVM debug mode you can inspect the objects that are present when you run your code.
This means one could create a utility that generates a dump of those objects as ready-to-use mocks (or close enough, hopefully). Those mocks would span the entire scope of the program run, and this would help greatly in building extensive test coverage.
Because lazy is good, I was wondering if such a utility is currently available.
Best regards

I don't know of a way to do this from a memory/heap dump or from debug mode but...
If you want to serialize arbitrary Java objects to and from files for use in tests, you can use XStream. You could then load them in your unit tests with ease.
You can also use standard Java serialization if your objects are all serializable.
To collect the data in the first place you can create an aspect using AspectJ or Spring-AOP or similar. I've done something similar in the past and it worked really well.
One word of caution though: if you are doing this, then any refactoring of your objects requires refactoring of the test data. This is easier with XStream, as it's XML files you are dealing with.
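As a minimal sketch of the "dump and reuse" idea using only the JDK (XStream would work the same way, just writing XML instead of binary), here is how you could serialize an object graph captured at runtime and rebuild it in a test. The `Customer` class is a hypothetical stand-in for whatever objects you inspect in the debugger.

```java
import java.io.*;

public class FixtureDump {
    // Hypothetical domain object; stands in for whatever your running code creates.
    static class Customer implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        final int orders;
        Customer(String name, int orders) { this.name = name; this.orders = orders; }
    }

    // Serialize any Serializable object graph to bytes (a file, in real use).
    static byte[] dump(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(o);
        }
        return bos.toByteArray();
    }

    // Rebuild the object graph, e.g. inside a unit test.
    static Object load(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Customer original = new Customer("Alice", 3);
        Customer restored = (Customer) load(dump(original));
        System.out.println(restored.name + " " + restored.orders); // prints "Alice 3"
    }
}
```

The refactoring caveat above applies directly: renaming or removing a field in `Customer` invalidates previously dumped fixtures.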

Related

Mocking out legacy libraries

I'm working on a legacy project that has many external dependencies, and the classes are so tightly coupled that it's almost impossible to unit test. I know the best way to address this would be a major refactor, but at the moment we don't have the luxury to do so: the project has virtually 0 tests, so we are very concerned about breaking things.
What we are trying to achieve at the moment is to quickly come up with unit / component tests and progressively refactor as we work on the project. For component tests I'm thinking of having some kind of wrapper on the existing classes to 'record' the input and output, then persist it to a physical file. When we run the tests, it will return output based on the input.
The way I'm thinking of achieving that is to store the method name and input parameters as the key and the output as the value. The output will be serialized upon 'record' and deserialized during the test.
This approach seems to be able to cater for some cases, but I foresee a lot of complications later on. E.g. I might face issues serializing certain objects, and I might also have difficulties passing object references back from "out" parameters.
So here comes my question. Are there any libraries that do all these things? I would never have considered doing this manually if there were a library for it. By the way, I'm using Java.
Thanks
Instead of doing low-level unit tests, I would start with tests for the smallest units you can divide now, i.e. possibly just one. You can capture the data input and output using a serialization library which doesn't require the objects to be marked as Serializable, e.g. Jackson.
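The record/replay scheme the question describes can be sketched with only the JDK (the answer suggests Jackson, which would store JSON instead of the `byte[]` used here; all class and method names below are hypothetical):

```java
import java.io.*;
import java.util.*;

// Minimal record/replay store: key = method name + arguments, value = serialized result.
// A sketch only; a real version would persist the map to a file (or JSON via Jackson).
public class RecordReplayStore {
    private final Map<String, byte[]> recordings = new HashMap<>();

    private static String key(String method, Object... args) {
        return method + Arrays.deepToString(args);
    }

    // Call while "recording" against the real dependency.
    public void record(String method, Object result, Object... args) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(result);
        }
        recordings.put(key(method, args), bos.toByteArray());
    }

    // Call from the test double while "replaying".
    public Object replay(String method, Object... args) throws IOException, ClassNotFoundException {
        byte[] bytes = recordings.get(key(method, args));
        if (bytes == null) throw new IllegalStateException("no recording for " + key(method, args));
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        }
    }
}
```

Usage would look like `store.record("findUser", user, 42)` during the recording run and `store.replay("findUser", 42)` inside the test. Note this inherits the serialization limitations the question anticipates.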

"Object Breakpoint" - How to debug access to a specific Object in a large code base with complex dynamic behavior?

Every once in a while I'm in the Eclipse Debug mode, and wish I could simply pick the Object that I am currently inspecting/watching, put some kind of "Object Breakpoint" on it, and step to the next line of code that accesses it.
Now, I know that I can put breakpoints on Classes, but I usually have hundreds or even thousands of instances in memory, most of which have a long life time. They often go in and out of frameworks. They are wrapped into Collections, filtered and unwrapped again. In short: a regular, large application.
Usually I still find the problem by looking for rare features of that Object, using conditional method breakpoints and a lot of informed guessing. However, I think I sometimes could be much faster if I had something like the described feature.
What I found after some searching is the Debug Proxy (scroll down to the examples). It is a container class that uses Java's reflection API to make itself look like the contained Object, so you can use it in place of the contained Object in your application. Being an InvocationHandler, the DebugProxy can now "intercept" invocations of methods on the contained Object.
Using the proxy for actual debugging is as easy as adding this line to your application.
IMyObject myObject = (IMyObject) DebugProxy.newInstance(new MyObject());
I can then set breakpoints inside the DebugProxy's source code.
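For readers who don't want to scroll through the linked examples, a minimal self-contained version of the idea using `java.lang.reflect.Proxy` might look like this (interface and class names are illustrative):

```java
import java.lang.reflect.*;

public class DebugProxyDemo {
    interface IMyObject { String greet(String who); }

    static class MyObject implements IMyObject {
        public String greet(String who) { return "hello " + who; }
    }

    // Wraps any object; every interface call passes through invoke(),
    // which is where you would set your breakpoint.
    static class DebugProxy implements InvocationHandler {
        private final Object target;
        private DebugProxy(Object target) { this.target = target; }

        static Object newInstance(Object target) {
            return Proxy.newProxyInstance(
                    target.getClass().getClassLoader(),
                    target.getClass().getInterfaces(),
                    new DebugProxy(target));
        }

        @Override
        public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
            // <-- breakpoint here: fires on every access to the wrapped object
            System.out.println("intercepted: " + m.getName());
            return m.invoke(target, args);
        }
    }

    public static void main(String[] args) {
        IMyObject o = (IMyObject) DebugProxy.newInstance(new MyObject());
        System.out.println(o.greet("world")); // "intercepted: greet", then "hello world"
    }
}
```

This also makes the second limitation below concrete: the returned proxy implements `IMyObject` but is not a `MyObject`, so it cannot be down-cast.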
However, there are at least two problems with this approach.
It works but it is still a hack, and there are a lot of features missing, such as filtering options.
The Proxy-Object cannot be down-cast to the implementing class.
The second problem is the more serious one. I was able to use the DebugProxy with classes generated by EMF, and there is no problem following the Object throughout the framework. However, when I try to debug code that doesn't use interfaces for all interesting classes, the DebugProxy quickly fails.
Does anybody know about alternatives?
Maybe the Eclipse JDT Debugger already has such a feature and I simply don't see it!?
I know there is the Java instrumentation API, and frameworks such as AspectJ. Could these be used to get a practical solution?
I added basic filtering to the DebugProxy and modified the output so Eclipse Console View shows a link to the calling line of code:
Problem number two remains unsolved, though. I put up the source code on GitHub. Maybe somebody will come up with something.
A completely different way to approach this would be to automatically add breakpoints with conditions comparing the current hashCode() with the HashCode of the Object in question. This may not be too difficult for someone who knows more about the JDT internals.
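As a small illustration of that last idea: `System.identityHashCode` may be a safer basis for the breakpoint condition than `hashCode()`, since it cannot be overridden and is stable for a given instance. The sketch below shows the check you would paste into a conditional breakpoint (the noted value is whatever you copied from the Variables view):

```java
public class IdentityBreakpointDemo {
    public static void main(String[] args) {
        Object suspect = new Object();
        // Value you would note while paused in the debugger:
        int noted = System.identityHashCode(suspect);

        // Later, as a conditional-breakpoint expression on some line:
        //   System.identityHashCode(this) == noted
        Object current = suspect;
        boolean hit = System.identityHashCode(current) == noted;
        System.out.println(hit); // prints "true" for the same instance
    }
}
```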

Are there any Java runtime exploring tools?

Are there any Java runtime exploration tools? I mean tools that allow invoking concrete objects' methods in a running application (to check for correct behavior). It would also be nice if I could substitute the object with another object of the same type, perhaps instantiated from scratch or deserialized from some source. As far as I've seen, the usual debuggers are limited in these possibilities, though I don't claim to have checked all of them.
I would suggest bytecode manipulation frameworks like ASM or BCEL for this purpose.
I would use basic JUnit and EasyMock for creating different input mock objects to check the behavior of your class in different situations. Then in the end you have a nice set of unit tests to maintain your codebase.
You should be able to achieve this at a higher level than direct bytecode manipulation using AOP tools such as AspectJ. Here are a couple of pointers:
http://www.ibm.com/developerworks/java/library/j-aspectj2/
http://danielroop.com/blog/2007/10/04/jmock-and-aspectj/

Reflection in unit tests for checking code coverage

Here's the scenario. I have VO (Value Object) or DTO objects that are just containers for data. When I take those and split them apart for saving into a DB that (for lots of reasons) doesn't map elegantly to the VOs, I want to test whether each field is successfully created in the database and successfully read back in to rebuild the VO.
Is there a way I can test that my tests cover every field in the VO? I had an idea about using reflection to iterate through the fields of the VOs as part of the solution, but maybe you guys have solved the problem before?
I want this test to fail when I add fields in the VO, and don't remember to add checks for it in my tests.
dev environment:
Using JUnit, Hibernate/Spring, and Eclipse
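The reflection idea from the question could be sketched like this: compare every declared field of the original and reloaded VO, so that a newly added field the persistence code silently ignores makes the test fail. `PersonVO` is a hypothetical example class.

```java
import java.lang.reflect.Field;
import java.util.Objects;

public class FieldCoverageCheck {
    // Returns the name of the first field that differs, or null if all match.
    static String firstMismatch(Object original, Object reloaded) throws IllegalAccessException {
        for (Field f : original.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            if (!Objects.equals(f.get(original), f.get(reloaded))) {
                return f.getName();
            }
        }
        return null;
    }

    // Hypothetical VO; a field the DB mapping forgets shows up as a mismatch.
    static class PersonVO {
        String name;
        Integer age;
    }

    public static void main(String[] args) throws Exception {
        PersonVO original = new PersonVO();
        original.name = "Alice";
        original.age = 30;

        PersonVO reloaded = new PersonVO();
        reloaded.name = "Alice";
        // imagine 'age' was never written to / read from the DB:
        System.out.println(firstMismatch(original, reloaded)); // prints "age"
    }
}
```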
Keep it simple: write one test per VO/DTO:
fill the VO/DTO with test data
save it
(optional: check everything has been correctly saved at the database level, using pure JDBC)
load it
check that the loaded VO/DTO and the original one matches
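The steps above could be sketched as a test like the following, with a hypothetical in-memory DAO standing in for the real Hibernate-backed one (the copy in `save` simulates the persistence round trip):

```java
import java.util.HashMap;
import java.util.Map;

public class RoundTripTest {
    // Hypothetical VO with value equality, so the final check is meaningful.
    static class PersonVO {
        String name;
        int age;
        public boolean equals(Object o) {
            if (!(o instanceof PersonVO)) return false;
            PersonVO p = (PersonVO) o;
            return age == p.age && name.equals(p.name);
        }
        public int hashCode() { return name.hashCode() * 31 + age; }
    }

    // Stand-in for the real DAO; stores a copy to mimic a real save/load cycle.
    static class InMemoryPersonDao {
        private final Map<Long, PersonVO> table = new HashMap<>();
        void save(long id, PersonVO vo) {
            PersonVO copy = new PersonVO();
            copy.name = vo.name;
            copy.age = vo.age;
            table.put(id, copy);
        }
        PersonVO load(long id) { return table.get(id); }
    }

    public static void main(String[] args) {
        PersonVO original = new PersonVO();       // 1. fill the VO with test data
        original.name = "Alice";
        original.age = 30;

        InMemoryPersonDao dao = new InMemoryPersonDao();
        dao.save(1L, original);                   // 2. save it
        PersonVO loaded = dao.load(1L);           // 3. load it

        // 4. check that the loaded VO and the original match
        if (!original.equals(loaded)) throw new AssertionError("round trip lost data");
        System.out.println("ok"); // prints "ok"
    }
}
```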
Production code will evolve, and tests will need to be maintained as well. Making tests as simple as possible, even if they are repetitive, is IMHO the best approach. Over-engineering the tests or the testing framework itself to make tests generic (e.g. by reading fields with reflection and filling the VO/DTO automatically) leads to several problems:
writing the tests takes more time
bugs might be introduced in the tests themselves
maintenance of the tests is harder because they are more sophisticated
tests are harder to evolve, e.g. the generic code may not work for new kinds of VO/DTO that differ slightly from the others and are introduced later (it's just an example)
tests cannot easily be used as examples of how the production code works
Test and production code are very different in nature. In production code, you try to avoid duplication and maximize reuse. Production code can be complicated, because it is tested. In tests, on the other hand, you should aim for simplicity, and duplication is OK. If a duplicated portion is broken, the test will fail anyway.
When production code changes, this may require several tests to be trivially changed, with the downside that tests are then seen as boring pieces of code. But I think that's the way they should be.
If I however got your question wrong, just let me know.
I would recommend Cobertura for this task.
You will get a complete code-coverage report after you run your tests, and if you use the cobertura-check Ant task you can add checks for the coverage and stop the Ant build with the haltonfailure property.
You could make it part of the validation of the VO. If a field isn't set when you use its getter, it can throw an exception.

Dumping "real-world" scenarios out for unit testing

I'm currently debugging some fairly complex persistence code, and trying to increase test coverage whilst I'm at it.
Some of the bugs I'm finding against the production code require large, and very specific object graphs to reproduce.
While technically I could sit and write out buckets of instantiation code in my tests to reproduce the specific scenarios, I'm wondering if there are tools that can do this for me?
I guess specifically I'd like to be able to dump out an object as it is in my debugger frame (probably to XML), then use something to load in the XML and create the object graph for unit testing (e.g. XStream etc.).
Can anyone recommend tools or techniques which are useful in this scenario?
I've done this sort of thing using ObjectOutputStream, but XML should work fine. You need to be working with a serializable tree. You might try JAXB or xStream, etc., too. I think it's pretty straightforward. If you have a place in your code that builds the structure in a form that would be good for your test, inject the serialization code there, and write everything to a file. Then, remove the injected code. Then, for the test, load the XML. You can stuff the file into the classpath somewhere. I usually use a resources or config directory, and get a stream with Thread.currentThread().getContextClassLoader().getResourceAsStream(name). Then deserialize the stuff, and you're good to go.
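The load side of the approach described above might look like the following sketch; in a real test the stream would come from `getResourceAsStream` as described, while here a `ByteArrayInputStream` stands in so the example is self-contained (the resource name is hypothetical):

```java
import java.io.*;

public class FixtureLoader {
    // Deserialize a fixture from any stream; in a test, obtain the stream with
    // Thread.currentThread().getContextClassLoader().getResourceAsStream("fixtures/graph.ser")
    static Object loadFixture(InputStream in) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(in)) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Build a stand-in "resource" in memory; a real test would read a
        // file placed in a resources or config directory on the classpath.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new java.util.ArrayList<>(java.util.Arrays.asList("a", "b")));
        }
        Object fixture = loadFixture(new ByteArrayInputStream(bos.toByteArray()));
        System.out.println(fixture); // prints "[a, b]"
    }
}
```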
XStream is of use here. It'll allow you to dump practically any POJO to/from XML without having to implement interfaces/annotate etc. The only headache I've had is with inner classes (since it'll try and serialise the referenced outer class).
I guess all your data is persisted in a database. You can use a test-data generation tool to fill your database with test data, then export that data as SQL scripts and preload it before your integration test starts.
You can use DBUnit to preload data in your unit test, it has also a number of options to verify database structure/data before test starts. http://www.dbunit.org/
For test data generation in a database there are a number of commercial tools you can use. I don't know of any good free tool that can handle features like predefined lists of data, random data with a predefined distribution, foreign key usage from other tables, etc.
I don't know about Java specifically, but if you change the implementations of your classes, you may no longer be able to deserialize old unit test data (which was serialized from older versions of the classes). So in the future you may need to put some effort into migrating your unit test data whenever you change your class definitions.
