In more complex unit tests, I often require a certain set of Rules to be present. Some of these Rules have dependencies on one another. As the ordering is relevant, I use RuleChains for that. All good so far.
This, however, is duplicated in most tests (with the occasional additional rule being used). Not only does this duplication feel unnecessary and cumbersome, it also needs to be adjusted in many places when an additional Rule should be integrated.
What I would like to have is a Rule of Rules, i.e. a (predefined) Rule that contains or aggregates other (application & test specific) Rules.
I'll give an example of how this currently looks:
public LogRule logRule = new LogRule();
public ConfigurationRule configurationRule = new ConfigurationRule();
public DatabaseConnectionRule dbRule = new DatabaseConnectionRule();
public ApplicationSpecificRule appRule = new ApplicationSpecificRule();

@Rule
public RuleChain chain = RuleChain.outerRule(logRule)
                                  .around(configurationRule)
                                  .around(dbRule)
                                  .around(appRule);
Assume that the given Rules depend on each other, e.g. the ApplicationSpecificRule requires that the DatabaseConnectionRule is executed first in order to establish a connection, that the ConfigurationRule has initialized an empty configuration, and so on.
Also assume that for this (rather complex test) all rules are actually required.
The only solution I could come up with so far is to create factory methods that return a predefined RuleChain:
public class ApplicationSpecificRule extends ExternalResource
{
    public static RuleChain basicSet()
    {
        return RuleChain.outerRule(new LogRule())
                        .around(new ConfigurationRule())
                        .around(new DatabaseConnectionRule())
                        .around(new ApplicationSpecificRule());
    }
}
In a test this can then be used as follows:
@Rule
public RuleChain chain = ApplicationSpecificRule.basicSet();
With that, the duplication is removed and additional Rules can easily be integrated. One could even add test-specific Rules to that RuleChain. However, one can't access the contained Rules when they are required for additional setup (assume you need the ApplicationSpecificRule in order to create some domain object, etc.).
Ideally this would be extended to also support other predefined sets, e.g. an advancedSet that builds on top of the basicSet of Rules.
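Since RuleChain.around() returns a new chain, such a layered factory could be a one-liner; here is a sketch (AdvancedRule is just a hypothetical placeholder for whatever extra rule the advanced set adds):

public static RuleChain advancedSet()
{
    // reuse the basic set and nest the additional rule innermost;
    // AdvancedRule is a made-up placeholder
    return basicSet().around(new AdvancedRule());
}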
Can this be somehow simplified? Is it a good idea in the first place or am I somehow misusing Rules? Would it help to restructure the tests?
Thoughts?
The TestRule interface has only one method, so it's quite easy to define your own custom rule that delegates to a RuleChain and keeps references to the other rules:
public class BasicRuleChain implements TestRule {

    private final RuleChain delegate;
    private final DatabaseConnectionRule databaseConnectionRule
            = new DatabaseConnectionRule();

    public BasicRuleChain() {
        delegate = RuleChain.outerRule(new LogRule())
                            .around(new ConfigurationRule())
                            .around(databaseConnectionRule)
                            .around(new ApplicationSpecificRule());
    }

    @Override
    public Statement apply(Statement base, Description description) {
        return delegate.apply(base, description);
    }

    public Connection getConnection() {
        return databaseConnectionRule.getConnection();
    }
}
Doesn't get much simpler than that, does it? The only thing that would make it even simpler is to just use an instance instead of a factory, since you don't need fresh instances all the time.
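For completeness, usage in a test would then look like this (a sketch; the test body is made up):

@Rule
public BasicRuleChain chain = new BasicRuleChain();

@Test
public void usesTheDatabase() {
    // the aggregating rule doubles as the access point to its inner rules
    Connection connection = chain.getConnection();
    // ... create domain objects, run assertions, etc.
}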
I have a huge Part source file that I have to touch in one place. It violates a lot of principles, so I would like to extract at least the function I had to modify, which is a @UIEventTopic handler. There are no tests, and I would like to add them here so I know I do not break existing functionality.
I would like to move away from this:
public class MyPart {
    ...
    @Inject
    @Optional
    public void event(@UIEventTopic(EVENT) EventParam p) {
        ...
    }
}
To something like this:
public class MyPart {
    ...
}

public class MyEventHandler {
    @Inject
    @Optional
    public void event(@UIEventTopic(EVENT) EventParam p, MyPart part) {
        ...
    }
}
With the Eclipse DI I see no easy way of creating an instance of the handler class. It cannot be a @Singleton because it is tied to a Part, which can have multiple instances, and adding the handler to the IEclipseContext in @PostConstruct is ugly because it adds a circular dependency between the part and the handler. Is there some magic by which I can enforce the instantiation through the e4xmi files, or some alternative way?
My current solution is to extract just the functionality into a utility bean, return the data, and set it on the part, but this is also not too nice (it requires a lot of additional null checks, ifs, etc.).
I am not entirely sure that I understand your question; however, this is how I would proceed:
Extract Delegate
Move the code in event() to MyEventHandler so that MyPart fully delegates the event handling:
public class MyPart {
    @Inject
    @Optional
    public void event(@UIEventTopic(EVENT) EventParam param) {
        new MyEventHandler().handleEvent(this, param);
    }
}

class MyEventHandler {
    void handleEvent(MyPart part, EventParam param) {
        // all code from event() goes here
    }
}
This should be a safe-enough refactoring to do without having tests - and in the end, you don't have a choice as there are no tests.
Ensure the Status Quo
Now I would write tests for handleEvent(), mocking the required methods of MyPart, and thus make sure that I won't break existing behavior.
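Such a test might look roughly like this (a sketch assuming Mockito; getSelection() and showResult() are hypothetical methods of MyPart, and EventParam is assumed to be directly constructible):

public class MyEventHandlerTest {

    @Test
    public void handleEventUpdatesPart() {
        // mock only the methods of MyPart that handleEvent actually touches
        MyPart part = mock(MyPart.class);
        when(part.getSelection()).thenReturn("current selection");

        new MyEventHandler().handleEvent(part, new EventParam());

        // verify the observable effect on the part
        verify(part).showResult(anyString());
    }
}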
Implement new Feature
After that I would be able to make the desired changes to MyEventHandler::handleEvent in a test driven manner.
Clean Up
Then I would extract an interface from MyPart that has only those methods required for MyEventHandler to do its work. If said interface gets too big, it would indicate that there is more refactoring left to do.
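For illustration, the extracted interface could start out as small as this (the method names are placeholders for whatever MyEventHandler actually needs):

interface EventHandlingView {
    String getSelection();
    void showResult(String text);
}

class MyEventHandler {
    void handleEvent(EventHandlingView view, EventParam param) {
        // works against the narrow interface instead of the whole part
    }
}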
How can I test the following code?
class Class1 {
    public InjectedClass injectedClass;

    Object method1() {
        Object returnValue = injectedClass.someMethod();
        // other logic
        return returnValue;
    }

    Object method2() {
        Object resultValue = method1();
        return resultValue;
    }
}
My application was developed in Java. I use JUnit and Mockito.
To test method1() I can create a mock for InjectedClass and mock the logic of someMethod().
But how does one properly test method2()? Do I need to create a mock for method1()?
UPDATE:
Let me demonstrate with a real example.
public class Application {

    @Inject
    DAOFacade facade;

    // method 1
    public ReturnDTO getDTO(LiveServiceRequestParam requestParam) throws AffiliateIdentityException {
        ReturnDTO returnDTO = new ReturnDTO();
        CoreProductRepository repo = recognizeProduct(ProdCodeTypeEnum.MPN, null, vendorBound);
        if (repo != null) {
            // logic to fill some fields in returnDTO
        }
        return returnDTO;
    }

    // method 2
    CoreProductRepository recognizeProduct(ProdCodeTypeEnum paramType, String prodCode, List<Integer> vendors) {
        CoreProductRepository coreProductRepository = null;
        switch (paramType) {
            case MPN:
                coreProductRepository = facade.findByAlternativeMPN(prodCode, vendors);
                break;
            case EAN:
                coreProductRepository = facade.findByEan(prodCode, vendors);
                break;
            case DESCRIPTION:
                coreProductRepository = facade.findByName(prodCode, vendors);
                break;
        }
        return coreProductRepository;
    }
}
So, to test recognizeProduct I mock the DAOFacade. But I also want to test the getDTO method, which uses recognizeProduct.
You don't need to mock out your recognizeProduct method. As long as the DAOFacade is mocked, the behavior is known and deterministic, so the results of both getDTO and recognizeProduct can be verified.
It can also be argued that you don't even need to test recognizeProduct specifically, because it is not public, so there is no contract to enforce. As long as the behavior of getDTO is being tested and verified, your API is working as far as the user is concerned. The details of the implementation aren't important.
In a way, testing recognizeProduct specifically is counter-productive: it hurts the maintainability and reliability of your code rather than helping it, because it makes any refactoring or reorganization harder to achieve, even if it does not affect the externally visible behavior in any way.
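A getDTO test along those lines could look like this (a sketch; the stubbed repository, the request parameter, and the asserted fields are made up):

@Test
public void getDTOFillsFieldsWhenProductIsRecognized() throws Exception {
    // the DAOFacade is the only external dependency that needs mocking
    DAOFacade facade = mock(DAOFacade.class);
    when(facade.findByAlternativeMPN(any(), anyList()))
        .thenReturn(new CoreProductRepository());

    Application app = new Application();
    app.facade = facade; // package-private field; assign directly or inject

    ReturnDTO dto = app.getDTO(new LiveServiceRequestParam());

    assertNotNull(dto);
    // plus assertions on the fields that the "fill some fields" logic sets
}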
If the methods are defined as shown in your example, they are package private. So, if you create a test in the same package (though normally in a test directory), you will be able to access those methods and test them.
That said, if you can refactor or rewrite the class to be more easily testable, that might be a good idea - if you really do have to test the results of the internal methods and can't just test the public ones.
You should focus your test effort on public methods' return values and not on internal implementation.
Focusing on internal implementation causes tests to be harder to maintain, since a basic refactoring not affecting the return value will probably require changing your tests.
Sometimes it is impossible to avoid testing internal implementation, since some methods return nothing and you need to "assert" something. In this case it seems you return something at some point; I'd focus on testing that.
It seems to me you have a (sadly common) misunderstanding of the word test; it does not mean 'execute from a test case'.
Testing means supplying a range of inputs and asserting that the corresponding outputs are correct. 99% of the time that means checking return codes or object state; occasionally you have to use mocks to properly test a pure-output interface.
If you do that for the public methods, and the private methods are fully covered to the required standard, job done. If there is uncovered code in private methods, either use it to identify and add a missing test case, or delete it.
In the event you feel there would be something useful lost by deleting unreachable private code, make it public, or move it out to another class.
I have an application with a class registered as a message listener that receives messages from a queue, checks that each is of the correct class type (in public void onMessage(Message message)), and sends it to another class that converts it to a string and writes the line to a log file (in public void handleMessage(MessageType m)). How would you write unit tests for this?
If you can use Mockito in combination with JUnit, your test could look like this:
@Test
public void onMessage_Success() throws Exception {
    // Arrange
    Message message = aMessage().withContent("...").create();
    File mockLogFile = mock(File.class);
    MessageHandler mockMessageHandler = mock(MessageHandler.class);
    when(mockMessageHandler.handleMessage(any(MessageType.class)))
        .thenReturn("somePredefinedTestOutput");
    when(mockMessageHandler.getLogFile()).thenReturn(mockLogFile);

    MessageListener sut = spy(new MessageListener());
    Whitebox.setInternalState(sut, "messageHandler", mockMessageHandler);
    // or simply sut.setMessageHandler(mockMessageHandler); if a setter exists

    // Act
    sut.onMessage(message);

    // Assert
    assertThat(mockLogFile, contains("your desired content"));
    verify(sut, times(1)).handleMessage(any(Message.class));
}
Note that this is just a simple example of how you could test this. There are probably plenty of other ways to test the functionality. The example above showcases a typical builder pattern for the generation of default messages, which accepts certain values for testing. Moreover, I have not really spelled out the Hamcrest matcher for the contains method on the mockLogFile.
As @Keppil also mentioned in his comment, it makes sense to create multiple test cases which vary slightly in the arrange and assert parts and where the bad cases are tested.
What I probably didn't explain enough is that the getLogFile() method (which with high certainty has another name in your application) of MessageHandler should return the reference to the file used by your MessageHandler instance to store the actual log messages. Therefore, it is probably better to define this mockMessageHandler as spy(new MessageHandler()) instead of mock(MessageHandler.class), although this means that the unit test is actually an integration test, as the interaction of two classes is tested at the same time.
But overall, I hope you got the idea: use mock(Class) to generate default implementations for dependencies your system under test (SUT) requires, or spy(Instance) if you want to include a real-world object instead of one that only returns null values. You can influence the return value of mocked objects with when(...).thenReturn(...) or .thenThrow(...), or with the do...().when(...) form (e.g. doThrow(...).when(...)) for void methods.
If you have dependency injection into private fields in place, you should use Whitebox.setInternalState(...) to inject the values into the SUT or mock classes when no public or package-private setter methods are available (package-private works if your test classes reuse the package structure of the classes under test).
Further, verify(...) lets you verify that a certain method was invoked while executing the SUT. This is quite handy in a scenario like this one, where the actual assertion isn't that trivial.
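A compact illustration of those idioms (Service is a made-up interface, not part of the question's code):

interface Service {
    int compute();
    void close() throws IOException;
}

@Test
public void mockitoIdioms() throws Exception {
    Service service = mock(Service.class);

    when(service.compute()).thenReturn(42);            // stub a return value
    doThrow(new IOException()).when(service).close();  // do...().when() stubbing for a void method

    service.compute();                                 // normally invoked indirectly via the SUT

    verify(service, times(1)).compute();               // verify the interaction took place
}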
I have written some code which I thought was quite well-designed, but then I started writing unit tests for it and stopped being so sure.
It turned out that in order to write some reasonable unit tests, I need to change some of my variables' access modifiers from private to default, i.e. expose them (only within the package, but still...).
Here is a rough overview of my code in question. It is supposed to be some sort of address validation framework that enables address validation by different means, e.g. validating addresses via some external webservice, against data in a DB, or by any other source. So I have the notion of a Module, which is just this: a separate way to validate addresses. I have an interface:
interface Module {
    public void init(InitParams params);
    public ValidationResponse validate(Address address);
}
There is some sort of factory that, based on request or session state, chooses a proper module:
class ModuleFactory {
    Module selectModule(HttpRequest request) {
        Module module = chooseModule(request);   // analyze request and choose a module
        module.init(createInitParams(request));  // init module
        return module;
    }
}
And then I have written a Module that uses some external webservice for validation, implemented like this:
class WebServiceModule implements Module {

    private WebServiceFacade webservice;

    public void init(InitParams params) {
        webservice = new WebServiceFacade(createParamsForFacade(params));
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = webservice.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
So basically I have this WebServiceFacade, which is a wrapper over the external web service, and my module calls this facade, processes its response and returns some framework-standard response.
I want to test whether WebServiceModule processes responses from the external web service correctly. Obviously, I can't call the real web service in unit tests, so I'm mocking it. But then again, in order for the module to use my mocked web service, the field webservice must be accessible from the outside. This breaks my design, and I wonder if there is anything I could do about it. Obviously, the facade cannot be passed in the init parameters, because ModuleFactory does not and should not know that it is needed.
I have read that dependency injection might be the answer to such problems, but I can't see how. I have not used any DI frameworks before, like Guice, so I don't know if one could easily be used in this situation. But maybe it could?
Or maybe I should just change my design?
Or screw it and make this unfortunate field package private (but leaving a sad comment like // default visibility to allow testing (oh well...) doesn't feel right)?
Bah! While I was writing this, it occurred to me that I could create a WebServiceProcessor which takes a WebServiceFacade as a constructor argument, and then test just the WebServiceProcessor. That would be one solution to my problem. What do you think about it? I have one problem with that, because then my WebServiceModule would be sort of useless, just delegating all its work to other components; I would say, one layer of abstraction too far.
Yes, your design is wrong. You should use dependency injection instead of new ... inside your class (which is also called a "hardcoded dependency"). Inability to easily write a test is a perfect indicator of a wrong design (read about the "Listen to your tests" paradigm in Growing Object-Oriented Software, Guided by Tests).
BTW, using reflection or a dependency-breaking framework like PowerMock is a very bad practice in this case and should be your last resort.
I agree with what yegor256 said, and would like to suggest that the reason you ended up in this situation is that you have assigned multiple responsibilities to your modules: creation and validation. This goes against the Single Responsibility Principle and effectively limits your ability to test creation separately from validation.
Consider constraining the responsibility of your "modules" to creation alone. When they only have this responsibility, the naming can be improved as well:
interface ValidatorFactory {
    public Validator createValidator(InitParams params);
}
The validation interface becomes separate:
interface Validator {
    public ValidationResponse validate(Address address);
}
You can then start by implementing the factory:
class WebServiceValidatorFactory implements ValidatorFactory {
    public Validator createValidator(InitParams params) {
        return new WebServiceValidator(new ProdWebServiceFacade(createParamsForFacade(params)));
    }
}
This factory code becomes hard to unit test, since it explicitly references prod code, so keep this implementation very concise. Put any logic (like createParamsForFacade) on the side, so that you can test it separately.
The web service validator itself only gets the responsibility of validation, and takes in the façade as a dependency, following the Inversion of Control (IoC) principle:
class WebServiceValidator implements Validator {

    private final WebServiceFacade facade;

    public WebServiceValidator(WebServiceFacade facade) {
        this.facade = facade;
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = facade.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
Since WebServiceValidator is not controlling the creation of its dependencies anymore, testing becomes a breeze:
@Test
public void aTest() {
    WebServiceValidator validator = new WebServiceValidator(new MockWebServiceFacade());
    ...
}
This way you have effectively inverted the control of the creation of the dependencies: Inversion of Control (IoC)!
Oh, and by the way, write your tests first. This way you will naturally gravitate towards a testable solution, which is usually also the best design. I think that this is due to the fact that testing requires modularity, and modularity is coincidentally the hallmark of good design.
I'm experimenting with Java annotation processors. I'm able to write integration tests using the JavaCompiler API (in fact, I'm using "hickory" at the moment). I can run the compile process and analyse the output. The problem: a single test runs for about half a second, even without any code in my annotation processor. This is way too long to use it in TDD style.
Mocking away the dependencies seems very hard to me (I would have to mock out the entire javax.lang.model.element package). Has anyone succeeded in writing unit tests for an annotation processor (Java 6)? If not, what would be your approach?
This is an old question, but it seems that the state of annotation processor testing hasn't gotten any better, so we released Compile Testing today. The best docs are in package-info.java, but the general idea is that there is a fluent API for testing compilation output when run with an annotation processor. For example,
ASSERT.about(javaSource())
      .that(JavaFileObjects.forResource("HelloWorld.java"))
      .processedWith(new MyAnnotationProcessor())
      .compilesWithoutError()
      .and().generatesSources(JavaFileObjects.forResource("GeneratedHelloWorld.java"));
tests that the processor generates a file that matches GeneratedHelloWorld.java (golden file on the class path). You can also test that the processor produces error output:
JavaFileObject fileObject = JavaFileObjects.forResource("HelloWorld.java");
ASSERT.about(javaSource())
      .that(fileObject)
      .processedWith(new NoHelloWorld())
      .failsToCompile()
      .withErrorContaining("No types named HelloWorld!").in(fileObject).onLine(23).atColumn(5);
This is obviously a lot simpler than mocking, and unlike typical integration tests, all of the output is stored in memory.
You're right: mocking the annotation processing API (with a mock library like EasyMock) is painful. I tried this approach and it broke down pretty rapidly. You have to set up too many method call expectations. The tests become unmaintainable.
A state-based test approach worked reasonably well for me. I had to implement the parts of the javax.lang.model.* API I needed for my tests. (That was only < 350 lines of code.)
This is the part of a test that initializes the javax.lang.model objects. After the setup, the model should be in the same state as in the Java compiler implementation.
DeclaredType typeArgument = declaredType(classElement("returnTypeName"));
DeclaredType validReturnType = declaredType(interfaceElement(GENERATOR_TYPE_NAME), typeArgument);
TypeParameterElement typeParameter = typeParameterElement();
ExecutableElement methodExecutableElement = Model.methodExecutableElement(name, validReturnType, typeParameter);
The static factory methods are defined in the class Model, which implements the javax.lang.model.* classes. For example, declaredType. (All unsupported operations will throw exceptions.)
public static DeclaredType declaredType(final Element element, final TypeMirror... argumentTypes) {
    return new DeclaredType() {
        @Override public Element asElement() {
            return element;
        }
        @Override public List<? extends TypeMirror> getTypeArguments() {
            return Arrays.asList(argumentTypes);
        }
        @Override public String toString() {
            return format("DeclareTypeModel[element=%s, argumentTypes=%s]",
                    element, Arrays.toString(argumentTypes));
        }
        @Override public <R, P> R accept(TypeVisitor<R, P> v, P p) {
            return v.visitDeclared(this, p);
        }
        @Override public boolean equals(Object obj) { throw new UnsupportedOperationException(); }
        @Override public int hashCode() { throw new UnsupportedOperationException(); }
        @Override public TypeKind getKind() { throw new UnsupportedOperationException(); }
        @Override public TypeMirror getEnclosingType() { throw new UnsupportedOperationException(); }
    };
}
The rest of the test verifies the behavior of the class under test.
Method actual = new Method(environment(), methodExecutableElement);
Method expected = new Method(...);
assertEquals(expected, actual);
You can have a look at the source code of the Quickcheck @Samples and @Iterables source code generator tests. (The code is not optimal yet. The Method class has too many parameters, and the Parameter class is not tested in its own test but as part of the Method test. It should illustrate the approach nevertheless.)
Good luck!
jOOR is a small Java reflection library that also provides simplified access to the in-memory Java compilation API in javax.tools.JavaCompiler. We added support for this to unit test jOOQ's annotation processors. You can easily write unit tests like this:
@Test
public void testCompileWithAnnotationProcessors() {
    AProcessor p = new AProcessor();

    try {
        Reflect.compile(
            "org.joor.test.FailAnnotationProcessing",
            "package org.joor.test; " +
            "@A " +
            "public class FailAnnotationProcessing { " +
            "}",
            new CompileOptions().processors(p)
        ).create().get();
        Assert.fail();
    }
    catch (ReflectException expected) {
        assertFalse(p.processed);
    }
}
The above example has been taken from this blog post.
I was in a similar situation, so I created the Avatar library. It won't give you the performance of a pure unit test with no compilation, but if used correctly you shouldn't see much of a performance hit.
Avatar lets you write a source file, annotate it, and convert it to elements in a unit test. This allows you to unit test methods and classes which consume Element objects, without manually invoking javac.
I ran into the same problem a while ago and found this question. Although the other answers provided are decent, I felt that there was still room for improvement. Based on the other answers to this question, I created Elementary, a suite of JUnit 5 extensions that provide a real annotation processing environment for unit tests.
Most libraries test annotation processors by running them. However, most annotation processors are pretty complex and broken into more fine-grained components. It is not feasible to test individual components by running the annotation processor. Instead, we make the annotation processing environment available to these tests.
The following code snippet illustrates how to test a Lint component:
import com.karuslabs.elementary.junit.Cases;
import com.karuslabs.elementary.junit.Tools;
import com.karuslabs.elementary.junit.ToolsExtension;
import com.karuslabs.elementary.junit.annotations.Case;
import com.karuslabs.elementary.junit.annotations.Introspect;
import com.karuslabs.utilitary.type.TypeMirrors;

@ExtendWith(ToolsExtension.class)
@Introspect
class ToolsExtensionExampleTest {

    Lint lint = new Lint(Tools.typeMirrors());

    @Test
    void lint_string_variable(Cases cases) {
        var first = cases.one("first");
        assertTrue(lint.lint(first));
    }

    @Test
    void lint_method_that_returns_string(Cases cases) {
        var second = cases.get(1);
        assertFalse(lint.lint(second));
    }

    @Case("first") String first;
    @Case String second() { return ""; }
}

class Lint {

    final TypeMirrors types;
    final TypeMirror expectedType;

    Lint(TypeMirrors types) {
        this.types = types;
        this.expectedType = types.type(String.class);
    }

    public boolean lint(Element element) {
        if (!(element instanceof VariableElement)) {
            return false;
        }

        var variable = (VariableElement) element;
        return types.isSameType(expectedType, variable.asType());
    }
}
By annotating the test class with @Introspect and the test cases with @Case, we can declare test cases in the same file as the tests. The corresponding Element representation of a test case can be retrieved in a test via Cases.
If anyone is interested, I wrote an article, The Problem with Annotation Processors, that details the problems with unit testing annotation processors.
I have used http://hg.netbeans.org/core-main/raw-file/default/openide.util.lookup/test/unit/src/org/openide/util/test/AnnotationProcessorTestUtils.java though this is based on java.io.File for simplicity and so has the performance overhead you complain about.
Thomas's suggestion of mocking the whole JSR 269 environment would lead to a pure unit test. You might instead want to write more of an integration test which checks how your processor actually runs inside javac, giving more assurance that it is correct, while merely avoiding disk files. Doing this would require you to write a mock JavaFileManager, which is unfortunately not as easy as it seems, and I have no examples handy, but you should not need to mock other things like the Element interfaces.
One option is to bundle all tests in one class. Half a second for compiling etc. is then a constant for a given set of tests, and the real time per test is negligible, I assume.
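As a sketch of the idea, the compiler run can be hoisted into a @BeforeClass method so that all test methods share one compilation (compileWithProcessor and CompilationResult are hypothetical helpers, not a real API):

public class MyProcessorBatchTest {

    private static CompilationResult result; // hypothetical holder for the captured output

    @BeforeClass
    public static void compileOnce() throws Exception {
        // pay the half-second javac start-up cost exactly once
        result = compileWithProcessor(new MyAnnotationProcessor(), "TestInput.java");
    }

    // each test method then only inspects the captured output
    @Test
    public void generatesExpectedSource() {
        assertTrue(result.generatedSources().contains("GeneratedHelloWorld"));
    }

    @Test
    public void emitsNoErrors() {
        assertTrue(result.errors().isEmpty());
    }
}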