I'm experimenting with Java annotation processors. I'm able to write integration tests using the "JavaCompiler" API (in fact I'm using "hickory" at the moment). I can run the compile process and analyse the output. The problem: a single test runs for about half a second even without any code in my annotation processor. This is way too long to use it in TDD style.
Mocking away the dependencies seems very hard to me (I would have to mock out the entire "javax.lang.model.element" package). Has anyone succeeded in writing unit tests for an annotation processor (Java 6)? If not, what would be your approach?
This is an old question, but it seems that the state of annotation processor testing hasn't gotten any better, so we released Compile Testing today. The best docs are in package-info.java, but the general idea is that there is a fluent API for testing compilation output when run with an annotation processor. For example,
ASSERT.about(javaSource())
    .that(JavaFileObjects.forResource("HelloWorld.java"))
    .processedWith(new MyAnnotationProcessor())
    .compilesWithoutError()
    .and().generatesSources(JavaFileObjects.forResource("GeneratedHelloWorld.java"));
tests that the processor generates a file that matches GeneratedHelloWorld.java (golden file on the class path). You can also test that the processor produces error output:
JavaFileObject fileObject = JavaFileObjects.forResource("HelloWorld.java");
ASSERT.about(javaSource())
    .that(fileObject)
    .processedWith(new NoHelloWorld())
    .failsToCompile()
    .withErrorContaining("No types named HelloWorld!").in(fileObject).onLine(23).atColumn(5);
This is obviously a lot simpler than mocking, and unlike typical integration tests, all of the output is stored in memory.
You're right: mocking the annotation processing API (with a mock library like EasyMock) is painful. I tried this approach and it broke down pretty rapidly. You have to set up too many method call expectations, and the tests become unmaintainable.
A state-based test approach worked reasonably well for me. I had to implement the parts of the javax.lang.model.* API I needed for my tests. (That was less than 350 lines of code.)
This is the part of a test that sets up the javax.lang.model objects. After the setup, the model should be in the same state as in the Java compiler implementation.
DeclaredType typeArgument = declaredType(classElement("returnTypeName"));
DeclaredType validReturnType = declaredType(interfaceElement(GENERATOR_TYPE_NAME), typeArgument);
TypeParameterElement typeParameter = typeParameterElement();
ExecutableElement methodExecutableElement = Model.methodExecutableElement(name, validReturnType, typeParameter);
The static factory methods are defined in the class Model, which implements the javax.lang.model.* classes; for example, declaredType below. (All unsupported operations throw exceptions.)
public static DeclaredType declaredType(final Element element, final TypeMirror... argumentTypes) {
    return new DeclaredType() {
        @Override public Element asElement() {
            return element;
        }
        @Override public List<? extends TypeMirror> getTypeArguments() {
            return Arrays.asList(argumentTypes);
        }
        @Override public String toString() {
            return format("DeclaredTypeModel[element=%s, argumentTypes=%s]",
                element, Arrays.toString(argumentTypes));
        }
        @Override public <R, P> R accept(TypeVisitor<R, P> v, P p) {
            return v.visitDeclared(this, p);
        }
        @Override public boolean equals(Object obj) { throw new UnsupportedOperationException(); }
        @Override public int hashCode() { throw new UnsupportedOperationException(); }
        @Override public TypeKind getKind() { throw new UnsupportedOperationException(); }
        @Override public TypeMirror getEnclosingType() { throw new UnsupportedOperationException(); }
    };
}
The rest of the test verifies the behavior of the class under test.
Method actual = new Method(environment(), methodExecutableElement);
Method expected = new Method(..);
assertEquals(expected, actual);
You can have a look at the source code of the Quickcheck @Samples and @Iterables source code generator tests. (The code is not optimal yet. The Method class has too many parameters and the Parameter class is not tested in its own test but as part of the Method test. It should illustrate the approach nevertheless.)
Good luck!
jOOR is a small Java reflection library that also provides simplified access to the in-memory Java compilation API in javax.tools.JavaCompiler. We added support for this to unit test jOOQ's annotation processors. You can easily write unit tests like this:
@Test
public void testCompileWithAnnotationProcessors() {
    AProcessor p = new AProcessor();
    try {
        Reflect.compile(
            "org.joor.test.FailAnnotationProcessing",
            "package org.joor.test; " +
            "@A " +
            "public class FailAnnotationProcessing { " +
            "}",
            new CompileOptions().processors(p)
        ).create().get();
        Assert.fail();
    }
    catch (ReflectException expected) {
        assertFalse(p.processed);
    }
}
The above example has been taken from this blog post.
I was in a similar situation, so I created the Avatar library. It won't give you the performance of a pure unit test with no compilation, but if used correctly you shouldn't see much of a performance hit.
Avatar lets you write a source file, annotate it, and convert it to elements in a unit test. This allows you to unit test methods and classes which consume Element objects, without manually invoking javac.
I ran into the same problem awhile ago and found this question. Although the other answers provided are decent, I felt that there was still room for improvement. Based on the other answers to this question, I created Elementary, a suite of JUnit 5 extensions that provide a real annotation processing environment for unit tests.
Most libraries test annotation processors by running them. However, most annotation processors are pretty complex and broken into finer-grained components. It is not feasible to test individual components by running the annotation processor. Instead, we make the annotation processing environment available to these tests.
The following code snippet illustrates how to test a Lint component:
import com.karuslabs.elementary.junit.Cases;
import com.karuslabs.elementary.junit.Tools;
import com.karuslabs.elementary.junit.ToolsExtension;
import com.karuslabs.elementary.junit.annotations.Case;
import com.karuslabs.elementary.junit.annotations.Introspect;
import com.karuslabs.utilitary.type.TypeMirrors;
@ExtendWith(ToolsExtension.class)
@Introspect
class ToolsExtensionExampleTest {
    Lint lint = new Lint(Tools.typeMirrors());

    @Test
    void lint_string_variable(Cases cases) {
        var first = cases.one("first");
        assertTrue(lint.lint(first));
    }

    @Test
    void lint_method_that_returns_string(Cases cases) {
        var second = cases.get(1);
        assertFalse(lint.lint(second));
    }

    @Case("first") String first;
    @Case String second() { return ""; }
}
class Lint {
    final TypeMirrors types;
    final TypeMirror expectedType;

    Lint(TypeMirrors types) {
        this.types = types;
        this.expectedType = types.type(String.class);
    }

    public boolean lint(Element element) {
        if (!(element instanceof VariableElement)) {
            return false;
        }
        var variable = (VariableElement) element;
        return types.isSameType(expectedType, variable.asType());
    }
}
By annotating the test class with @Introspect and test cases with @Case, we can declare test cases in the same file as the tests. The corresponding Element representation of the test cases can be retrieved by a test using Cases.
If anyone is interested, I wrote an article, The Problem with Annotation Processors that details the problems with unit testing annotation processors.
I have used http://hg.netbeans.org/core-main/raw-file/default/openide.util.lookup/test/unit/src/org/openide/util/test/AnnotationProcessorTestUtils.java though this is based on java.io.File for simplicity and so has the performance overhead you complain about.
Thomas's suggestion of mocking the whole JSR 269 environment would lead to a pure unit test. You might instead want to write more of an integration test which checks how your processor actually runs inside javac, giving more assurance that it is correct, while merely avoiding disk files. Doing this would require you to write a mock JavaFileManager, which is unfortunately not as easy as it seems and I have no examples handy, but you should not need to mock other things like Element interfaces.
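For what it's worth, here is a minimal sketch of that in-memory direction using only javax.tools types (illustrative only; a real replacement has to cover more of the JavaFileManager contract):

import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

import javax.tools.FileObject;
import javax.tools.ForwardingJavaFileManager;
import javax.tools.JavaFileManager;
import javax.tools.JavaFileObject;
import javax.tools.SimpleJavaFileObject;
import javax.tools.StandardJavaFileManager;

// Keeps compiler output in memory instead of writing class files to disk.
class InMemoryFileManager extends ForwardingJavaFileManager<StandardJavaFileManager> {
    final Map<String, ByteArrayOutputStream> generated = new HashMap<>();

    InMemoryFileManager(StandardJavaFileManager delegate) {
        super(delegate);
    }

    @Override
    public JavaFileObject getJavaFileForOutput(JavaFileManager.Location location,
            String className, JavaFileObject.Kind kind, FileObject sibling) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        generated.put(className, bytes);
        URI uri = URI.create("mem:///" + className.replace('.', '/') + kind.extension);
        return new SimpleJavaFileObject(uri, kind) {
            @Override
            public OutputStream openOutputStream() {
                return bytes;
            }
        };
    }
}

Pass an instance wrapping compiler.getStandardFileManager(...) to JavaCompiler.getTask(...); generated class files then end up in the map instead of on disk.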
An option is to bundle all tests in one class. Half a second for compiling etc. is then a constant for a given set of tests; the real per-test time is negligible, I assume.
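For illustration, a hedged sketch of that idea with the plain javax.tools API (the file name, processor name and assertions are made up); the compilation runs once and every test only inspects the captured result:

import static org.junit.Assert.assertTrue;

import java.util.Arrays;

import javax.tools.DiagnosticCollector;
import javax.tools.JavaCompiler;
import javax.tools.JavaFileObject;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;

import org.junit.BeforeClass;
import org.junit.Test;

public class MyProcessorBatchTest {
    static DiagnosticCollector<JavaFileObject> diagnostics;
    static boolean success;

    // The ~0.5s compiler cost is paid once for all tests in this class.
    @BeforeClass
    public static void compileOnce() {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        diagnostics = new DiagnosticCollector<>();
        StandardJavaFileManager fm = compiler.getStandardFileManager(diagnostics, null, null);
        Iterable<? extends JavaFileObject> sources =
                fm.getJavaFileObjects("src/test/resources/HelloWorld.java"); // hypothetical path
        JavaCompiler.CompilationTask task =
                compiler.getTask(null, fm, diagnostics, null, null, sources);
        task.setProcessors(Arrays.asList(new MyAnnotationProcessor())); // your processor
        success = task.call();
    }

    @Test
    public void compiles() {
        assertTrue(success);
    }

    @Test
    public void producesNoDiagnostics() {
        assertTrue(diagnostics.getDiagnostics().isEmpty());
    }
}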
I have a Java app with Maven.
JUnit for tests, with the Failsafe and Surefire plugins.
I have more than 2000 integration tests.
To speed up the test run, I use the Failsafe JVM fork to run my tests in parallel.
I have some heavy test classes, and they typically run at the end of my test execution, which slows down my CI verify process.
The Failsafe runOrder=balanced would be a good option for me, but I can't use it because of the JVM fork.
Renaming the test classes, or moving them to another package and running them alphabetically, is not an option.
Any suggestion how I can run my slow test classes at the beginning of the verify process?
In JUnit 5 (from version 5.8.0 onwards) test classes can be ordered too.
src/test/resources/junit-platform.properties:
# ClassOrderer$OrderAnnotation sorts classes based on their @Order annotation
junit.jupiter.testclass.order.default=org.junit.jupiter.api.ClassOrderer$OrderAnnotation
Other JUnit built-in class orderer implementations:
org.junit.jupiter.api.ClassOrderer$ClassName
org.junit.jupiter.api.ClassOrderer$DisplayName
org.junit.jupiter.api.ClassOrderer$Random
For other ways (besides the junit-platform.properties file) to set configuration parameters, see the JUnit 5 user guide; one example follows below.
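For example, since you are on Maven, the same parameter can also be passed through the Surefire configuration instead of a properties file (a sketch; assumes Surefire 2.22+ with the JUnit Platform provider):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <properties>
            <configurationParameters>
                junit.jupiter.testclass.order.default=org.junit.jupiter.api.ClassOrderer$OrderAnnotation
            </configurationParameters>
        </properties>
    </configuration>
</plugin>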
You can also provide your own orderer. It must implement the ClassOrderer interface:
package foo;

import java.util.Collections;

import org.junit.jupiter.api.ClassOrderer;
import org.junit.jupiter.api.ClassOrdererContext;

public class MyOrderer implements ClassOrderer {
    @Override
    public void orderClasses(ClassOrdererContext context) {
        Collections.shuffle(context.getClassDescriptors());
    }
}
junit.jupiter.testclass.order.default=foo.MyOrderer
Note that @Nested test classes cannot be ordered by a ClassOrderer.
Refer to the JUnit 5 documentation and the ClassOrderer API docs to learn more about this.
I gave the combination of answers I found a try:
Running JUnit4 Test classes in specified order
Running JUnit Test in parallel on Suite Level
The second answer is based on these classes of this github project, which is available under the BSD-2 license.
I defined a few test classes:
public class LongRunningTest {
    @Test
    public void test() {
        System.out.println(Thread.currentThread().getName() + ":\tlong test - started");
        long time = System.currentTimeMillis();
        do {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
            }
        } while (System.currentTimeMillis() - time < 1000);
        System.out.println(Thread.currentThread().getName() + ":\tlong test - done");
    }
}
@Concurrent
public class FastRunningTest1 {
    @Test
    public void test1() {
        try {
            Thread.sleep(250);
        } catch (InterruptedException e) {
        }
        System.out.println(Thread.currentThread().getName() + ":\tfrt1-test1 - done");
    }
    // +7 more repetitions of the same method
}
Then I defined the test suites:
(FastRunningTest2 is a copy of the first class with adjusted output)
@SuiteClasses({LongRunningTest.class, LongRunningTest.class})
@RunWith(Suite.class)
public class SuiteOne {}

@SuiteClasses({FastRunningTest1.class, FastRunningTest2.class})
@RunWith(Suite.class)
public class SuiteTwo {}

@SuiteClasses({SuiteOne.class, SuiteTwo.class})
@RunWith(ConcurrentSuite.class)
public class TopLevelSuite {}
When I execute the TopLevelSuite I get the following output:
TopLevelSuite-1-thread-1: long test - started
FastRunningTest1-1-thread-4: frt1-test4 - done
FastRunningTest1-1-thread-2: frt1-test2 - done
FastRunningTest1-1-thread-1: frt1-test1 - done
FastRunningTest1-1-thread-3: frt1-test3 - done
FastRunningTest1-1-thread-5: frt1-test5 - done
FastRunningTest1-1-thread-3: frt1-test6 - done
FastRunningTest1-1-thread-1: frt1-test8 - done
FastRunningTest1-1-thread-5: frt1-test7 - done
FastRunningTest2-2-thread-1: frt2-test1 - done
FastRunningTest2-2-thread-2: frt2-test2 - done
FastRunningTest2-2-thread-5: frt2-test5 - done
FastRunningTest2-2-thread-3: frt2-test3 - done
FastRunningTest2-2-thread-4: frt2-test4 - done
TopLevelSuite-1-thread-1: long test - done
TopLevelSuite-1-thread-1: long test - started
FastRunningTest2-2-thread-5: frt2-test8 - done
FastRunningTest2-2-thread-2: frt2-test6 - done
FastRunningTest2-2-thread-1: frt2-test7 - done
TopLevelSuite-1-thread-1: long test - done
This basically shows that the LongRunningTest is executed in parallel to the FastRunningTests. The default number of threads used for parallel execution, defined by the Concurrent annotation, is 5, which can be seen in the output of the parallel execution of the FastRunningTests.
The downside is that these threads are not shared between FastRunningTest1 and FastRunningTest2.
This behaviour shows that it is "somewhat" possible to do what you want to do (whether that works with your current setup is a different question). I am also not sure whether it is actually worth the effort, as you need to prepare those test suites manually (or write something that auto-generates them), and you need to add the Concurrent annotation to all those classes (maybe with a different number of threads for each class).
As this basically shows that it is possible to define the execution order of classes and to trigger their parallel execution, it should also be possible to get the whole process to use only one thread pool (but I am not sure what the implications of that would be).
As the whole concept is based on a ThreadPoolExecutor, using a PriorityBlockingQueue that gives long-running tasks a higher priority would get you closer to your ideal outcome of executing the long-running tests first.
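A rough sketch of that idea with plain JDK classes (illustrative; wiring it into a JUnit runner scheduler is left out):

import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A runnable carrying a priority; higher priority leaves the queue first.
class PrioritizedTask implements Runnable, Comparable<PrioritizedTask> {
    final int priority;
    final Runnable delegate;

    PrioritizedTask(int priority, Runnable delegate) {
        this.priority = priority;
        this.delegate = delegate;
    }

    @Override
    public void run() {
        delegate.run();
    }

    @Override
    public int compareTo(PrioritizedTask other) {
        // Reversed so that higher priority sorts first in the queue.
        return Integer.compare(other.priority, this.priority);
    }
}

class PriorityPoolDemo {
    public static void main(String[] args) {
        // Single worker, so the queued tasks demonstrate the ordering.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new PriorityBlockingQueue<>());

        pool.execute(new PrioritizedTask(0, () -> System.out.println("first submitted")));
        // These two wait in the queue; the slow suite (priority 10) runs before
        // the fast one (priority 1), regardless of submission order.
        pool.execute(new PrioritizedTask(1, () -> System.out.println("fast suite")));
        pool.execute(new PrioritizedTask(10, () -> System.out.println("slow suite")));

        // Use execute(), not submit(): submit() wraps tasks in FutureTask,
        // which is not Comparable, and the priority queue would throw.
        pool.shutdown();
    }
}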
I experimented a bit more and implemented my own custom suite runner and JUnit runner. The idea behind it is to have your JUnit runner submit the tests into a queue which is handled by a single ThreadPoolExecutor. Because I didn't implement a blocking operation in the RunnerScheduler#finished() method, I ended up with a solution where the tests from all classes were passed to the queue before the execution even started. (That might look different if there are more test classes and methods involved.)
At least it proves the point that you can mess with JUnit at this level if you really want to.
The code of my PoC is a bit messy and too lengthy to put here, but if someone is interested I can push it to a GitHub project.
In our project we created a few marker interfaces, for example:
public interface SlowTestsCategory {}
and put them into JUnit's @Category annotation on the test classes with slow tests:
@Category(SlowTestsCategory.class)
After that we created some special tasks for Gradle to run tests by category, or a few categories in a custom order:
task unitTest(type: Test) {
    description = 'description.'
    group = 'groupName'

    useJUnit {
        includeCategories 'package.SlowTestsCategory'
        excludeCategories 'package.ExcludedCategory'
    }
}
This solution relies on Gradle, but maybe it'll be helpful for you.
Let me summarize everything before I provide a recommendation.
Integration tests are slow. This is fine and it's natural.
The CI build doesn't run tests that assume deployment of a system, since there is no deployment in CI. We care about deployment in the CD process.
So I assume your integration tests don't assume deployment.
The CI build runs unit tests first. Unit tests are extremely fast because they use only RAM.
We have good and quick feedback from unit tests.
At this moment we are sure we don't have a problem getting quick feedback. But we still want to run integration tests faster.
I would recommend the following solutions:
1. Improve the actual tests. Quite often they are not effective and can be sped up significantly.
2. Run integration tests in the background (i.e. don't wait for real-time feedback from them). It's natural for them to be much slower than unit tests.
3. Split the integration tests into groups and run them separately if you need feedback from some of them faster (see the sketch after this list).
4. Run integration tests in different JVMs, not in different threads within the same JVM! In this case you don't care about thread safety, and you should not have to care about it.
5. Run integration tests on different machines, and so on.
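For step 3, a sketch of one way to do the split with JUnit 5 tags (the class name is made up; Surefire/Failsafe 2.22+ map tags to the groups property):

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Mark the heavyweight integration tests with a tag...
@Tag("slow")
class HeavyIntegrationTest {
    @Test
    void bigScenario() {
        // ...
    }
}

...and run them as a separate group, e.g. mvn verify -Dgroups=slow for the slow suite, and -DexcludedGroups=slow for the fast feedback run.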
I have worked with many different projects (some of them had a CI build running for 48 hours) and the first 3 steps were enough (even for crazy cases). Step #4 is rarely needed if you have good tests. Step #5 is for very specific situations.
You see that my recommendation relates to the process and not to the tool, because the problem is in the process.
Quite often people ignore the root cause and try to tune the tool (Maven in this case). They get cosmetic improvements, but with a high maintenance cost for the created solution.
There is a solution for that from version 5.8.0-M1 of JUnit.
Basically you need to create your own orderer. I did something like the following.
Here is an annotation which you will use on your test classes:
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
public @interface TestClassesOrder {
    int value() default Integer.MAX_VALUE;
}
Then you need to create a class which implements org.junit.jupiter.api.ClassOrderer:
import java.util.Collections;
import java.util.Comparator;

import org.junit.jupiter.api.ClassDescriptor;
import org.junit.jupiter.api.ClassOrderer;
import org.junit.jupiter.api.ClassOrdererContext;

public class AnnotationTestsOrderer implements ClassOrderer {
    @Override
    public void orderClasses(ClassOrdererContext context) {
        Collections.sort(context.getClassDescriptors(), new Comparator<ClassDescriptor>() {
            @Override
            public int compare(ClassDescriptor o1, ClassDescriptor o2) {
                TestClassesOrder a1 = o1.getTestClass().getDeclaredAnnotation(TestClassesOrder.class);
                TestClassesOrder a2 = o2.getTestClass().getDeclaredAnnotation(TestClassesOrder.class);
                // Classes without the annotation go last.
                if (a1 == null) {
                    return 1;
                }
                if (a2 == null) {
                    return -1;
                }
                return Integer.compare(a1.value(), a2.value());
            }
        });
    }
}
To get this working you need to tell JUnit which class to use for ordering the class descriptors. Create the file "junit-platform.properties" in your resources folder; it needs just one line naming your orderer class:
junit.jupiter.testclass.order.default=org.example.tests.AnnotationTestsOrderer
Now you can use your orderer annotation like the @Order annotation, but at class level:
@TestClassesOrder(1)
class Tests {...}

@TestClassesOrder(2)
class MainTests {...}

@TestClassesOrder(3)
class EndToEndTests {...}
I hope that this will help someone.
You can use annotations in JUnit 5 to set the test order you wish to use.
From JUnit 5's user guide:
https://junit.org/junit5/docs/current/user-guide/#writing-tests-test-execution-order
import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;
@TestMethodOrder(OrderAnnotation.class)
class OrderedTestsDemo {

    @Test
    @Order(1)
    void nullValues() {
        // perform assertions against null values
    }

    @Test
    @Order(2)
    void emptyValues() {
        // perform assertions against empty values
    }

    @Test
    @Order(3)
    void validValues() {
        // perform assertions against valid values
    }
}
Upgrading to JUnit 5 can be done fairly easily, and the documentation linked at the beginning of this post contains all the information you might need.
I have a method like this one:
public void foo(@Nonnull String value) {...}
I would like to write a unit test to make sure foo() throws an NPE when value is null, but I can't, since the compiler refuses to compile the unit test when static null pointer flow analysis is enabled in the IDE.
How do I make this test compile (in Eclipse with "Enable annotation-based null analysis" enabled)?
@Test(expected = NullPointerException.class)
public void test() {
    T inst = ...
    inst.foo(null);
}
Note: In theory, the compiler's static null analysis should prevent cases like that. But there is nothing stopping someone from writing another module with static flow analysis turned off and calling the method with null.
Common case: a big messy old project without flow analysis. I start by annotating some utility module. In that case, I'll have existing or new unit tests which check how the code behaves for all the modules which don't use flow analysis yet.
My guess is that I'd have to move those tests into an unchecked module and move them around as I spread flow analysis. That would work and fit well into the philosophy, but it would be a lot of manual work.
To put it another way: I can't easily write a test which says "success when the code doesn't compile" (I'd have to put code pieces into files, invoke the compiler from unit tests, check the output for errors ... not pretty). So how can I easily test that the code fails as it should when callers ignore @Nonnull?
Hiding null within a method does the trick:
public void foo(@NonNull String bar) {
    Objects.requireNonNull(bar);
}
/** Trick the Java flow analysis to allow passing <code>null</code>
 * for @Nonnull parameters.
 */
@SuppressWarnings("null")
public static <T> T giveNull() {
    return null;
}

@Test(expected = NullPointerException.class)
public void testFoo() {
    foo(giveNull());
}
The above compiles fine (and yes, I double-checked: when using foo(null), my IDE gives me a compile error, so null checking is enabled).
In contrast to the solution given via comments, the above has the nice side effect of working for any kind of parameter type (but it might require Java 8 to always get the type inference right).
And yes, the test passes (as written above), and fails when commenting out the Objects.requireNonNull() line.
Why not just use plain old reflection?
try {
    YourClass.class.getMethod("foo", String.class).invoke(someInstance, (Object) null);
    fail("Expected InvocationTargetException with nested NPE");
} catch (InvocationTargetException e) {
    if (e.getCause() instanceof NullPointerException) {
        return; // success
    }
    throw e; // let the test fail
}
Note that this can break unexpectedly when refactoring (if you rename the method, change the order of the method parameters, or move the method to a new type).
Using assertThrows from the JUnit Jupiter assertions, I was able to test this:
public MethodName(@NonNull final Param1 dao) {....
assertThrows(IllegalArgumentException.class, () -> new MethodName(null));
Here design by contract comes into the picture: you cannot pass a null parameter to a method annotated with a not-null argument.
You can use a field which you initialize and then set to null in a setup method:
private String nullValue = ""; // set to null in clearNullValue()

@Before
public void clearNullValue() {
    nullValue = null;
}

@Test(expected = NullPointerException.class)
public void test() {
    T inst = ...
    inst.foo(nullValue);
}
As in GhostCat's answer, the compiler is unable to know whether and when clearNullValue() is called, and has to assume that the field is not null.
I have Java code that reads version information from the manifest of its JAR:
public class Version {
    public String getVersion() {
        // Use the Java Package API to return information specified in the manifest of this JAR.
        return getClass().getPackage().getImplementationVersion();
    }
}
How do I run a JUnit test for this code?
It fails in the development build (in Eclipse) since there is no JAR file yet.
It fails in the production build (in Gradle) since there is no JAR file yet.
You always need to mock the dependencies for your unit testing. The boundary is to unit test your code, not the JAR itself. The Mockito framework is good, and there are other frameworks that do the job.
Chances are that this can't be properly mocked (and thus: not unit tested). The point is that you are actually calling a method on "this". But you can't test some object ... and mock it at the same time.
You see, if your production code looked like this:
public String getVersion() {
    return someObject.getClass().....
}
then you could create a mock object and insert it into your Version class. But even then, the method getClass() is final within java.lang.Object, and therefore you can't mock it anyway.
[ Reasonable mocking frameworks like EasyMock or Mockito work by extending classes and overriding the methods you want to control. There are frameworks like PowerMock that do byte code manipulation and allow for this kind of mocking, but you should never ever use such libraries, as they have really bad side effects (like breaking most coverage libraries). ]
What might work:
class Version {
    private final Package packageForVersionCheck;

    public Version() {
        this(Version.class.getPackage());
    }

    Version(Package somePackage) {
        this.packageForVersionCheck = somePackage;
    }

    public String getVersion() {
        return this.packageForVersionCheck.getImplementationVersion();
    }
}
Now you can use dependency injection to provide a "mocked" package that returns the desired string. But well, that looks like a lot of code for almost no gain.
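If you go that route, the test could look roughly like this (a sketch; it assumes your mocking framework can mock java.lang.Package, which is not final but has no visible constructor):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class VersionTest {

    @Test
    public void returnsImplementationVersionFromPackage() {
        // Assumption: Mockito can subclass/instantiate java.lang.Package here.
        // If your setup rejects this, fall back to the functional test below.
        Package pkg = mock(Package.class);
        when(pkg.getImplementationVersion()).thenReturn("1.2.3");

        assertEquals("1.2.3", new Version(pkg).getVersion());
    }
}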
Long story short: sometimes you simply can't write a reasonable unit test. Then do the next best thing: create a "functional" test that is automatically executed in a "customer-like" setup, and make sure that you have an automated setup to run such tests, too.
I want to test that a specific method produces the expected result, but to do that I need to manipulate the input in the test as well.
class ToTest {
    public String produceResponse(String input) {
        // ....
        encryptedIds = encryptIds(input)
        output = doStuff(input, encryptedIds)
    }

    public encryptIds(input) {
        ....
    }
}
In my test I need to check that produceResponse actually produces the expected response.
In order to do that, I have to encrypt the IDs in the input.
My question is: should I rewrite encryptIds in the test (so that I would have more control over the result), or should I call encryptIds from the class itself?
Is there a better approach to solve this? I don't like that my test knows what happens in the specific flow.
If I understand correctly, you would like to test produceResponse() with known encryptedIds as input.
You could do that without refactoring the code, but it would probably be a good idea to refactor it, so that's what I'm going to explain:
class ToTest {
    private IdEncryptor encryptor;

    public ToTest(IdEncryptor encryptor) {
        this.encryptor = encryptor;
    }

    public String produceResponse(String input) {
        String[] encryptedIds = encryptor.encryptIds(input);
        return doStuff(input, encryptedIds);
    }
}
Now you can unit-test IdEncryptor to test that it produces correct encrypted IDs based on a String input.
And to test the ToTest class, you can mock the IdEncryptor so that whatever input it receives, it produces the encryptedIds you desire. For example, with Mockito:
IdEncryptor mockEncryptor = mock(IdEncryptor.class);
when(mockEncryptor.encryptIds(any(String.class))).thenReturn(new String[] {"a", "b"});

ToTest toTest = new ToTest(mockEncryptor);
String response = toTest.produceResponse("input");
// expect that the response is what you expect given "a", "b" as input of doStuff()
Never copy any production code into the unit test, as it will get outdated at some point.
If both methods are public, they are part of the public API, so:
you should first unit test the correct behavior of the encryptIds(String) method
then unit test the produceResponse(String) method, which internally uses the already-tested encryptIds(String) method
If encryptIds(String) were not part of the public API:
then it is an internal implementation/helper method, which is not unit-testable
produceResponse(String) is then responsible for the encryption as a side effect:
you can still test encryptIds(String) if you make it package private (no modifier)
you can also change the implementation of encryptIds(String) for testing purposes only
Is encrypting IDs something that is integral to your system or not? As it stands, this class takes some input and produces some output, and as far as your test is concerned that is what's important, no more, no less.
What is the impact of not performing the encryption? If your doStuff method will just fail when it doesn't happen, then it is an internal detail of your class-under-test, and I wouldn't have the tests care about it at all. If it's a step that absolutely must be performed, then I would refactor the code to verify that it has happened, maybe using a mock as @jb-nizet answered.
As for the general case of duplicating production code in tests: as @Crazyjavahacking stated, you should not do this, but I have no issue with using production code from a test, maybe not at the unit level, but definitely higher up the system. E.g. when testing writing to a DB, I will use the reading code to verify the write happened correctly, but I will also have independent tests to verify the reading path.
How can I test the following code?
class1 {
    public InjectedClass injectedClass;

    method1() {
        returnValue = injectedClass.someMethod;
        // another logic
    }

    method2() {
        resultValue = method1();
    }
}
My application is developed in Java. I use JUnit and Mockito.
To test method1() I can create a mock for InjectedClass and mock the logic of someMethod().
But how does one properly test method2()? Do I need to create a mock for method1()?
UPDATE:
Let me demonstrate a real example.
public class Application {
    @Inject
    DAOFacade facade;

    // method 1
    public ReturnDTO getDTO(LiveServiceRequestParam requestParam) throws AffiliateIdentityException {
        ReturnDTO returnDTO = new ReturnDTO();
        CoreProductRepository repo = recognizeProduct(ProdCodeTypeEnum.MPN, null, vendorBound);
        if (repo != null) {
            // logic to fill some fields in returnDTO
        }
        return returnDTO;
    }

    // method 2
    CoreProductRepository recognizeProduct(ProdCodeTypeEnum paramType, String prodCode, List<Integer> vendors) {
        CoreProductRepository coreProductRepository = null;
        switch (paramType) {
            case MPN:
                coreProductRepository = facade.findByAlternativeMPN(prodCode, vendors);
                break;
            case EAN:
                coreProductRepository = facade.findByEan(prodCode, vendors);
                break;
            case DESCRIPTION:
                coreProductRepository = facade.findByName(prodCode, vendors);
                break;
        }
        return coreProductRepository;
    }
}
So, to test recognizeProduct I mock the DAOFacade. But I also want to test the getDTO method, which uses the recognizeProduct method.
You don't need to mock your recognizeProduct method. As long as the DAOFacade is mocked, the behavior is known and deterministic, so the results of both getDTO and recognizeProduct can be verified.
It can also be argued that you don't even need to test recognizeProduct specifically, because it is not public, so there is no contract to enforce. As long as the behavior of getDTO is tested and verified, your API works as far as the user is concerned. The details of the implementation aren't important.
In a way, testing recognizeProduct specifically is counter-productive: it hurts the maintainability and reliability of your code rather than helping it, because it makes any refactoring or reorganization harder to achieve, even when it does not affect the externally visible behavior in any way.
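To make that concrete, a rough sketch based on the classes in the question (Mockito 2 style; the assertions are placeholders, and the test must live in the same package as Application to reach the package-private members):

import static org.junit.Assert.assertNotNull;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ApplicationTest {

    @Test
    public void getDTO_fillsFieldsWhenTheProductIsRecognized() throws Exception {
        Application application = new Application();
        application.facade = mock(DAOFacade.class); // field is package-visible

        when(application.facade.findByAlternativeMPN(any(), any()))
                .thenReturn(mock(CoreProductRepository.class));

        // The snippet above never reads requestParam, so null is enough here;
        // adapt this if your real getDTO uses it.
        ReturnDTO dto = application.getDTO(null);

        assertNotNull(dto);
        // ...assert the fields that getDTO is supposed to fill
    }
}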
If the methods are defined as shown in your example, they are package private. So if you create a test in the same package (though normally in a test directory), you will be able to access those methods and test them.
That said, if you can refactor or rewrite the class to be more easily testable, that might be a good idea, if indeed you have to test the results of the internal methods and can't just test the public ones.
You should focus your test effort on the return values of public methods, not on the internal implementation.
Focusing on internal implementation makes tests harder to maintain, since a basic refactoring that does not affect the return value will probably require changing your tests.
Sometimes it is impossible to avoid testing internal implementation, since some methods return nothing and you need to "assert" something. In this case it seems you return something at some point; I'd focus on testing that.
It seems to me you have a (sadly common) misunderstanding of the word test; it does not mean 'execute from a test case'.
Testing means supplying a range of inputs and asserting that the corresponding outputs are correct. 99% of the time that means checking return codes or object state; occasionally you have to use mocks to properly test a pure-output interface.
If you do that for the public methods, and the private methods are fully covered to the required standard, the job is done. If there is uncovered code in private methods, either use it to identify and add a missing test case, or delete it.
In the event you feel something useful would be lost by deleting unreachable private code, make it public, or move it out to another class.