JMockit: Mocked APIs are getting reverted after some time - java

I am using JMockit to mock System.currentTimeMillis().
The first few invocations return the mocked time, but after some time it starts returning the original time.
When I run the same test after disabling the JIT, it runs perfectly fine.

You obviously have an important dependency on the current time inside one or more of your components. In this case you should express this dependency with an interface:
public interface TimeService {
    long currentTimeMillis();
}
In your real code you have an implementation that uses the System method:
public final class SystemTimeService implements TimeService {
    @Override
    public long currentTimeMillis() {
        return System.currentTimeMillis();
    }
}
Note that with Java 8 you can reduce some code to express it more clearly (thanks @Holger):
public interface TimeService {
    TimeService DEFAULT = System::currentTimeMillis;

    long currentTimeMillis();
}
Your classes that depend on this time service should look like this:
public final class ClassThatDependsOnTimeService {
    private final TimeService timeService;

    public ClassThatDependsOnTimeService(TimeService timeService) {
        this.timeService = timeService;
    }

    // other features omitted
}
Now they can be fed with
TimeService timeService = new SystemTimeService();
ClassThatDependsOnTimeService someObject = new ClassThatDependsOnTimeService(timeService);
or (Java 8):
ClassThatDependsOnTimeService someObject = new ClassThatDependsOnTimeService(TimeService.DEFAULT);
or with any dependency injection framework or whatever.
In your tests you do not mock the method System.currentTimeMillis; instead you mock the interface TimeService and inject the mock into the depending classes.
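For illustration, here is a minimal sketch of such a test using a hand-rolled fake instead of a mocking library (the test class and its assertion target are hypothetical):
import org.junit.Test;

public class ClassThatDependsOnTimeServiceTest {

    // Hand-rolled fake: time only moves when the test says so.
    static final class FixedTimeService implements TimeService {
        private long now;
        FixedTimeService(long start) { this.now = start; }
        @Override public long currentTimeMillis() { return now; }
        void advance(long millis) { now += millis; }
    }

    @Test
    public void timeDependentBehaviour() {
        FixedTimeService time = new FixedTimeService(0L);
        ClassThatDependsOnTimeService someObject = new ClassThatDependsOnTimeService(time);

        time.advance(2_000); // "two seconds pass" without sleeping and without fighting the JIT

        // ...assert on someObject's time-dependent behaviour here...
    }
}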

This happens because the JIT optimizer in the JVM does not check for redefined methods (redefinition is done through a different subsystem in the JVM). So, eventually the JVM decides to optimize the code containing the call to System.currentTimeMillis(), inlining the call so that it executes the actual native method directly. At that point the optimizer should check whether currentTimeMillis() is currently redefined and abandon the inlining if it is, but unfortunately the JDK engineers failed to account for this possibility.
If you really need to invoke a mocked System.currentTimeMillis() that many times, the only workaround is indeed to run with -Xint (which is not such a bad idea, as it usually reduces the total execution time of the test run).
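If your tests run through Maven, one way to apply that workaround is Surefire's argLine parameter (a sketch; merge it into your existing plugin configuration):
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <!-- run the forked test JVM in interpreted mode so the JIT never inlines the mocked method -->
        <argLine>-Xint</argLine>
    </configuration>
</plugin>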

Related

Equivalent to @DirtiesContext(...) for Surefire + JUnit?

I'm using the maven-surefire-plugin with junit 4.1.4. I have a unit test which relies on a 3rd-party class that internally uses a static { ... } code block to initialize some variables. For one test, I need to change one of these variables, but only for certain tests. I'd like this block to be re-executed between tests, since it picks up a value the first time it runs.
When testing, it seems that Surefire instantiates the test class once, so the static { ... } code block is never processed again.
This means the unit tests that change values required for testing are ignored, because the class has already been initialized.
Note: the class uses System.loadLibrary(...); from what I've found, it can't be rewritten to use instance initialization, so static is the (rare, but) proper usage here.
I found a similar solution for the Spring Framework which uses the @DirtiesContext(...) annotation, allowing the programmer to mark classes or methods as "dirty" so that a new context (or, in many cases, a new JVM) is initialized between tests.
How do you do the same thing as @DirtiesContext(...), but with the maven-surefire-plugin?
public class MyTests {
    @Test
    public void test1() {
        assertThat(MyClass.THE_VALUE, is("something-default"));
    }

    @Test
    public void test2() {
        System.setProperty("foo.bar", "something-else");
        assertThat(MyClass.THE_VALUE, is("something-else"));
        // ^-- this assert fails
        // value still "something-default"
    }
}
public class MyClass {
    public static final String THE_VALUE;

    static {
        String value;
        if (System.getProperty("foo.bar") != null) {
            value = System.getProperty("foo.bar"); // set to "something-else"
        } else {
            value = "something-default";
        }
        THE_VALUE = value;
    }
}
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>4.1.2</version>
</plugin>
Static initialization blocks in Java are something that can't easily be handled by JUnit. In general, static state doesn't play nicely with unit-testing concepts.
So, assuming you can't touch this code, your options are:
Option 1:
Spawn a new JVM for each test - this will work, but it might be overkill because it hurts performance.
If you follow this path, you will need to configure the Surefire plugin with:
forkCount=1
reuseForks=false
According to the Surefire plugin documentation, this combination will execute each test class in its own JVM process; a configuration sketch follows below.
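A minimal sketch of that configuration (version omitted; keep whatever your build already declares - note these two parameters require Surefire 2.14 or newer):
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <forkCount>1</forkCount>
        <reuseForks>false</reuseForks>
    </configuration>
</plugin>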
Option 2:
Create the class with a different class loader for every test.
Basically, in Java, a class com.foo.A loaded by ClassLoader M is totally different from the same class com.foo.A loaded by ClassLoader N.
This is somewhat hacky but should work.
The overhead is much smaller than in option 1, but you'll have to understand how to incorporate new class loaders into the testing infrastructure; a sketch follows below.
For more information about creating a custom class loader, read for example this tutorial.
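A minimal sketch of the idea, assuming the class under test is compiled to target/classes (the path, class name and field name are placeholders):
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Paths;

public class FreshClassLoaderExample {

    static Object readStaticField(String className, String fieldName) throws Exception {
        URL[] classpath = { Paths.get("target/classes").toUri().toURL() };
        // Parent is null, so the class is defined anew by this loader instead of
        // being delegated to the application class loader.
        try (URLClassLoader isolated = new URLClassLoader(classpath, null)) {
            // initialize=true re-runs the static { ... } block for the new Class object
            Class<?> fresh = Class.forName(className, true, isolated);
            return fresh.getField(fieldName).get(null);
        }
    }
}
One caveat for this particular question: the JVM allows a native library to be loaded by only one class loader at a time, so if the static block calls System.loadLibrary(...), the second loader's attempt will fail with an UnsatisfiedLinkError.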

Spock + Spring - Stubs returned from stubbed @SpringBean always return null

I am attempting to use Spock to create an integration test around a Spring application. As it is not a Spring Boot application and the #SpringBootTest annotation interfered significantly with the app's initialization, I am using a minimal configuration.
I specifically need to stub a service in my app that returns objects of type Message; in the actual app these objects would come from a third-party vendor's library and they cannot be instantiated or subclassed, nor do their members have setters, so my only option is to create Stubs for them. However, with this current configuration (I've simplified the test significantly just to get the gist across):
@ContextConfiguration([TestSetup]) // supplies actual Spring beans including some JPA repos
class LogicSpec extends Specification {
    @SpringBean
    RestService restService = Stub()

    @Autowired
    ServiceUnderTest sut

    @Autowired
    SomeJPARepo repository

    def 'should do some business logic'() {
        given:
        Message m = Stub(Message) {
            getStatus() >> "stubbed status"
        }
        restService.getMessage(_ as String) >> m

        when:
        sut.businessMethod()

        then:
        // just checking for side effects that `businessMethod` causes, no mocks being matched against
        repository.findAll().every { it.processed == true }
    }
}
Internally, the ServiceUnderTest.businessMethod() is using the Message object like this:
restService.getMessage(sid).getStatus() // should be "stubbed status"; always evaluates to null
However, every method of the Message stub always returns null, regardless of whether I have defined a behavior for it. The Message objects must return specific values from their getters for the test to work. I would prefer not to have to declare every Message stub as its own @SpringBean; I need to eventually expand the test to use several different Message stub objects. I don't need mocks or spies because the number of invocations of RestService's methods doesn't matter; I just need it to emit proper stubs for ServiceUnderTest to chew on. Apologies if this question is unusual or I've missed something obvious; I'm slightly oblivious to Spock's notion of lifecycle, and the waters have been especially muddied by the addition of the Spring extension.
I discovered the answer soon after writing this, but just for posterity: the third-party Message class is declared final and thus can't be subclassed; Spock was creating stubs for it but silently failing to add the overridden mock methods. I ended up using PowerMockito to remove this limitation; however, this interfered with collecting test-coverage metrics, so I instead used a wrapper class that can be mocked, and used it everywhere in my code the original Message class appeared:
public class MessageWrapper {
    public MessageWrapper(Message from) {...}
}
It's an extra bit of a headache, but it was necessary because test coverage was required in this case. There also seems to be a promising Spock-specific mocking utility that will mock final classes, but I haven't tested it, nor do I know whether it interferes with collecting coverage metrics the way PowerMockito does.
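For posterity, a fleshed-out sketch of that wrapper (getStatus() is the only member shown in the question; delegate whatever else your code needs):
// Mockable seam around the final, vendor-supplied Message class.
public class MessageWrapper {

    private final Message delegate;

    public MessageWrapper(Message from) {
        this.delegate = from;
    }

    // Non-final delegating getters can be overridden by Spock stubs.
    public String getStatus() {
        return delegate.getStatus();
    }
}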

How to properly test an internal method of a class in Java

How can I test the following code?
class Class1 {
    public InjectedClass injectedClass;

    Object method1() {
        Object returnValue = injectedClass.someMethod();
        // further logic
        return returnValue;
    }

    Object method2() {
        return method1();
    }
}
My application is developed in Java, and I use JUnit and Mockito.
To test method1() I can create a mock for InjectedClass and mock the behavior of someMethod().
But how does one properly test method2()? Do I need to create a mock for method1()?
UPDATE:
Let me demonstrate with a real example.
public class Application {
    @Inject
    DAOFacade facade;

    // method 1
    public ReturnDTO getDTO(LiveServiceRequestParam requestParam) throws AffiliateIdentityException {
        ReturnDTO returnDTO = new ReturnDTO();
        CoreProductRepository repo = recognizeProduct(ProdCodeTypeEnum.MPN, null, vendorBound);
        if (repo != null) {
            // logic to fill some fields in returnDTO
        }
        return returnDTO;
    }

    // method 2
    CoreProductRepository recognizeProduct(ProdCodeTypeEnum paramType, String prodCode, List<Integer> vendors) {
        CoreProductRepository coreProductRepository = null;
        switch (paramType) {
            case MPN:
                coreProductRepository = facade.findByAlternativeMPN(prodCode, vendors);
                break;
            case EAN:
                coreProductRepository = facade.findByEan(prodCode, vendors);
                break;
            case DESCRIPTION:
                coreProductRepository = facade.findByName(prodCode, vendors);
                break;
        }
        return coreProductRepository;
    }
}
So, to test recognizeProduct I mock DAOFacade. But I also want to test the getDTO method, which uses recognizeProduct.
You don't need to mock out your recognizeProduct method. As long as the DAOFacade is mocked, the behavior is known and deterministic, so the results of both getDTO and recognizeProduct can be verified; a sketch of such a test follows below.
It can also be argued that you don't even need to test recognizeProduct specifically, because it is not public, so there is no contract to enforce. As long as the behavior of getDTO is tested and verified, your API is working as far as the user is concerned. The details of the implementation aren't important.
In a way, testing recognizeProduct specifically is counter-productive: it hurts the maintainability and reliability of your code rather than helping, because it makes any refactoring or reorganization harder to achieve, even if the change does not affect the externally visible behavior in any way.
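A sketch of such a test, assuming JUnit 4 and Mockito (the no-arg constructors for CoreProductRepository and LiveServiceRequestParam are placeholders):
import static org.junit.Assert.assertNotNull;
import static org.mockito.ArgumentMatchers.anyList;
import static org.mockito.ArgumentMatchers.isNull;
import static org.mockito.Mockito.when;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class ApplicationTest {

    @Mock
    DAOFacade facade;

    @InjectMocks
    Application application;

    @Test
    public void getDTOFillsFieldsWhenProductIsRecognized() throws Exception {
        // The mocked facade makes recognizeProduct deterministic...
        when(facade.findByAlternativeMPN(isNull(), anyList()))
                .thenReturn(new CoreProductRepository());

        // ...so getDTO, which calls recognizeProduct internally, can be verified directly.
        ReturnDTO dto = application.getDTO(new LiveServiceRequestParam());

        assertNotNull(dto);
        // assert the fields that getDTO is supposed to fill
    }
}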
If the methods are defined as shown in your example, they are package-private. So if you create a test in the same package (though normally in a test directory), you will be able to access those methods and test them.
That said, if you can refactor or rewrite the class to be more easily testable, that might be a good idea, if indeed you have to test the results of the internal methods and can't just test the public ones.
You should focus your test effort on the return values of public methods, not on internal implementation.
Focusing on internal implementation makes tests harder to maintain, since a basic refactoring that doesn't affect the return value will probably require changing your tests.
Sometimes testing internal implementation is impossible to avoid, since some methods return nothing and you need to "assert" something. In this case it seems you return something at some point; I'd focus on testing that.
It seems to me you have a (sadly common) misunderstanding of the word test; it does not mean 'execute from a test case'.
Testing means supplying a range of inputs and asserting that the corresponding outputs are correct. 99% of the time that means checking return values or object state; occasionally you have to use mocks to properly test a pure-output interface.
If you do that for the public methods, and the private methods are fully covered to the required standard, the job is done. If there is uncovered code in private methods, either use it to identify and add a missing test case, or delete it.
In the event you feel something useful would be lost by deleting unreachable private code, make it public, or move it out to another class.

How can I make a JUnit test wait?

I have a JUnit test in which I want to wait for a period of time synchronously. My JUnit test looks like this:
@Test
public void testExpires() {
    SomeCacheObject sco = new SomeCacheObject();
    sco.putWithExpiration("foo", 1000);
    // WAIT FOR 2 SECONDS
    assertNull(sco.getIfNotExpired("foo"));
}
I tried Thread.currentThread().wait(), but it throws an IllegalMonitorStateException (as expected).
Is there some trick to it or do I need a different monitor?
How about Thread.sleep(2000); ? :)
Thread.sleep() works in most cases, but usually if you're waiting, you are actually waiting for a particular condition or state to occur, and Thread.sleep() does not guarantee that whatever you're waiting for has actually happened.
If you are waiting on a REST request, for example, it may usually return in 5 seconds; but if you set your sleep to 5 seconds, the day your request takes 10 seconds your test will fail.
To remedy this, Jayway has a great utility called Awaitility which is perfect for ensuring that a specific condition occurs before you move on.
It has a nice fluent API as well:
await().until(() -> yourConditionIsMet());
https://github.com/jayway/awaitility
In case your static code analyzer (such as SonarQube) complains, but you cannot think of another way than sleeping, you may try a hack like:
Awaitility.await().pollDelay(Durations.ONE_SECOND).until(() -> true);
It's conceptually incorrect, but it is the same as Thread.sleep(1000).
The best way, of course, is to pass a Callable with your appropriate condition rather than the constant true used here; a condition-based sketch follows below.
https://github.com/awaitility/awaitility
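For illustration, the same call gated on a real condition instead of the constant (queue stands in for whatever object your test observes):
Awaitility.await()
          .atMost(Durations.TEN_SECONDS)
          .until(() -> queue.isEmpty());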
You can use the java.util.concurrent.TimeUnit class, which internally uses Thread.sleep. The syntax looks like this:
@Test
public void testExpires() throws InterruptedException {
    SomeCacheObject sco = new SomeCacheObject();
    sco.putWithExpiration("foo", 1000);
    TimeUnit.SECONDS.sleep(2);
    assertNull(sco.getIfNotExpired("foo"));
}
TimeUnit makes the intended unit explicit and readable; you can use HOURS, MINUTES, SECONDS, and so on.
If you absolutely must generate a delay in a test, CountDownLatch is a simple solution. In your test class, declare:
private final CountDownLatch waiter = new CountDownLatch(1);
and in the test where needed:
waiter.await(1000 * 1000, TimeUnit.NANOSECONDS); // 1ms
It should go without saying, but keep wait times small and don't accumulate waits in too many places.
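Putting both pieces together, a compact sketch (assuming JUnit 4):
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.junit.Test;

public class DelayTest {

    private final CountDownLatch waiter = new CountDownLatch(1);

    @Test
    public void waitsBrieflyBeforeAsserting() throws InterruptedException {
        // Nothing ever counts the latch down, so await() simply times out after ~1 ms.
        waiter.await(1000 * 1000, TimeUnit.NANOSECONDS);
        // ...assert whatever the delay was needed for...
    }
}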
Mockito (which is already provided via transitive dependencies in Spring Boot projects) has a couple of ways to wait for asynchronous events, or rather conditions, to happen.
A simple pattern which currently works very well for us is:
// ARRANGE – instantiate mocks, set up test conditions
// ACT – the action to test, followed by:
Mockito.verify(myMockOrSpy, timeout(5000).atLeastOnce()).delayedStuff();
// execution pauses until `delayedStuff()` is called – or fails after the timeout
// ASSERT – assertThat(...)
Two slightly more complex yet more sophisticated approaches are described in this article by @fernando-cejas.
My urgent advice regarding the current top answers given here: you want your tests to
finish as fast as possible
have consistent results, independent of the test environment (non-"flaky")
... so just don't be silly and use Thread.sleep() in your test code.
Instead, have your production code use dependency injection (or, a little "dirtier", expose some mockable/spyable methods), then use Mockito, Awaitility, ConcurrentUnit or others to ensure asynchronous preconditions are met before assertions happen.
You could also use the CountDownLatch object like explained here.
There is a general problem: it's hard to mock time. Also, it's really bad practice to place long-running/waiting code in a unit test.
So, to make a scheduling API testable, I used an interface with a real and a mock implementation, like this:
public interface Clock {
    long getCurrentMillis();
    void sleep(long millis) throws InterruptedException;
}

public static class SystemClock implements Clock {
    @Override
    public long getCurrentMillis() {
        return System.currentTimeMillis();
    }

    @Override
    public void sleep(long millis) throws InterruptedException {
        Thread.sleep(millis);
    }
}

public static class MockClock implements Clock {
    private final AtomicLong currentTime = new AtomicLong(0);

    public MockClock() {
        this(System.currentTimeMillis());
    }

    public MockClock(long currentTime) {
        this.currentTime.set(currentTime);
    }

    @Override
    public long getCurrentMillis() {
        // advance a little on every read so repeated reads never see a frozen clock
        return currentTime.addAndGet(5);
    }

    @Override
    public void sleep(long millis) {
        // no real blocking – sleeping just moves the mock time forward
        currentTime.addAndGet(millis);
    }
}
With this, you can imitate time in your test:
@Test
public void testExpiration() {
    MockClock clock = new MockClock();
    SomeCacheObject sco = new SomeCacheObject(); // assumed to obtain its time from the injected clock
    sco.putWithExpiration("foo", 1000);
    clock.sleep(2000); // "wait" for 2 seconds without blocking
    assertNull(sco.getIfNotExpired("foo"));
}
An advanced multi-threading mock for Clock is much more complex, of course, but you can build it with ThreadLocal references and a good time-synchronization strategy, for example.
Using Thread.sleep in a test is not good practice. It creates brittle tests that can fail unpredictably depending on the environment ("Passes on my machine!") or on load. Don't rely on timing (use mocks), or use a library such as Awaitility for asynchronous testing.
Dependency: testImplementation 'org.awaitility:awaitility:3.0.0'
await().pollInterval(Duration.FIVE_SECONDS)
       .atLeast(Duration.FIVE_SECONDS)
       .atMost(Duration.FIVE_SECONDS)
       .untilAsserted(() -> {
           // your assertion
       });

How to write automated unit tests for java annotation processor?

I'm experimenting with Java annotation processors. I'm able to write integration tests using the JavaCompiler (in fact I'm using hickory at the moment). I can run the compile process and analyse the output. The problem: a single test runs for about half a second, even without any code in my annotation processor. This is way too long to use it in TDD style.
Mocking away the dependencies seems very hard (I would have to mock out the entire javax.lang.model.element package). Has someone succeeded in writing unit tests for an annotation processor (Java 6)? If not... what would be your approach?
This is an old question, but it seems that the state of annotation-processor testing hadn't gotten any better, so we released Compile Testing today. The best docs are in package-info.java, but the general idea is that there is a fluent API for testing compilation output when run with an annotation processor. For example:
ASSERT.about(javaSource())
      .that(JavaFileObjects.forResource("HelloWorld.java"))
      .processedWith(new MyAnnotationProcessor())
      .compilesWithoutError()
      .and().generatesSources(JavaFileObjects.forResource("GeneratedHelloWorld.java"));
tests that the processor generates a file that matches GeneratedHelloWorld.java (a golden file on the class path). You can also test that the processor produces error output:
JavaFileObject fileObject = JavaFileObjects.forResource("HelloWorld.java");
ASSERT.about(javaSource())
      .that(fileObject)
      .processedWith(new NoHelloWorld())
      .failsToCompile()
      .withErrorContaining("No types named HelloWorld!").in(fileObject).onLine(23).atColumn(5);
This is obviously a lot simpler than mocking and unlike typical integration tests, all of the output is stored in memory.
You're right that mocking the annotation-processing API (with a mock library like EasyMock) is painful. I tried this approach and it broke down pretty rapidly. You have to set up too many method-call expectations. The tests become unmaintainable.
A state-based test approach worked reasonably well for me. I had to implement the parts of the javax.lang.model.* API I needed for my tests. (That was under 350 lines of code.)
This is the part of a test that initializes the javax.lang.model objects. After the setup, the model should be in the same state as in the Java compiler implementation:
DeclaredType typeArgument = declaredType(classElement("returnTypeName"));
DeclaredType validReturnType = declaredType(interfaceElement(GENERATOR_TYPE_NAME), typeArgument);
TypeParameterElement typeParameter = typeParameterElement();
ExecutableElement methodExecutableElement = Model.methodExecutableElement(name, validReturnType, typeParameter);
The static factory methods are defined in the class Model, which implements the javax.lang.model.* classes. For example, declaredType. (All unsupported operations throw exceptions.)
public static DeclaredType declaredType(final Element element, final TypeMirror... argumentTypes) {
    return new DeclaredType() {
        @Override public Element asElement() {
            return element;
        }
        @Override public List<? extends TypeMirror> getTypeArguments() {
            return Arrays.asList(argumentTypes);
        }
        @Override public String toString() {
            return format("DeclareTypeModel[element=%s, argumentTypes=%s]",
                    element, Arrays.toString(argumentTypes));
        }
        @Override public <R, P> R accept(TypeVisitor<R, P> v, P p) {
            return v.visitDeclared(this, p);
        }
        @Override public boolean equals(Object obj) { throw new UnsupportedOperationException(); }
        @Override public int hashCode() { throw new UnsupportedOperationException(); }
        @Override public TypeKind getKind() { throw new UnsupportedOperationException(); }
        @Override public TypeMirror getEnclosingType() { throw new UnsupportedOperationException(); }
    };
}
The rest of the test verifies the behavior of the class under test.
Method actual = new Method(environment(), methodExecutableElement);
Method expected = new Method(..);
assertEquals(expected, actual);
You can have a look at the source code of the tests for Quickcheck's @Samples and @Iterables source-code generator. (The code is not optimal yet. The Method class has too many parameters, and the Parameter class is not tested in its own test but as part of the Method test. It should illustrate the approach nevertheless.)
Good luck!
jOOR is a small Java reflection library that also provides simplified access to the in-memory Java compilation API in javax.tools.JavaCompiler. We added support for this to unit test jOOQ's annotation processors. You can easily write unit tests like this:
@Test
public void testCompileWithAnnotationProcessors() {
    AProcessor p = new AProcessor();
    try {
        Reflect.compile(
            "org.joor.test.FailAnnotationProcessing",
            "package org.joor.test; " +
            "@A " +
            "public class FailAnnotationProcessing { " +
            "}",
            new CompileOptions().processors(p)
        ).create().get();
        Assert.fail();
    }
    catch (ReflectException expected) {
        assertFalse(p.processed);
    }
}
The above example has been taken from this blog post
I was in a similar situation, so I created the Avatar library. It won't give you the performance of a pure unit test with no compilation, but if used correctly you shouldn't see much of a performance hit.
Avatar lets you write a source file, annotate it, and convert it to elements in a unit test. This allows you to unit test methods and classes which consume Element objects, without manually invoking javac.
I ran into the same problem a while ago and found this question. Although the other answers are decent, I felt there was still room for improvement. Based on the other answers to this question, I created Elementary, a suite of JUnit 5 extensions that provide a real annotation-processing environment for unit tests.
Most libraries test annotation processors by running them. However, most annotation processors are pretty complex and broken into finer-grained components. It is not feasible to test individual components by running the entire annotation processor. Instead, we make the annotation-processing environment available to these tests.
The following code snippet illustrates how to test a Lint component:
import com.karuslabs.elementary.junit.Cases;
import com.karuslabs.elementary.junit.Tools;
import com.karuslabs.elementary.junit.ToolsExtension;
import com.karuslabs.elementary.junit.annotations.Case;
import com.karuslabs.elementary.junit.annotations.Introspect;
import com.karuslabs.utilitary.type.TypeMirrors;

@ExtendWith(ToolsExtension.class)
@Introspect
class ToolsExtensionExampleTest {
    Lint lint = new Lint(Tools.typeMirrors());

    @Test
    void lint_string_variable(Cases cases) {
        var first = cases.one("first");
        assertTrue(lint.lint(first));
    }

    @Test
    void lint_method_that_returns_string(Cases cases) {
        var second = cases.get(1);
        assertFalse(lint.lint(second));
    }

    @Case("first") String first;
    @Case String second() { return ""; }
}

class Lint {
    final TypeMirrors types;
    final TypeMirror expectedType;

    Lint(TypeMirrors types) {
        this.types = types;
        this.expectedType = types.type(String.class);
    }

    public boolean lint(Element element) {
        if (!(element instanceof VariableElement)) {
            return false;
        }
        var variable = (VariableElement) element;
        return types.isSameType(expectedType, variable.asType());
    }
}
By annotating the test class with @Introspect and test cases with @Case, we can declare test cases in the same file as the tests. The corresponding Element representation of a test case can be retrieved by a test using Cases.
If anyone is interested, I wrote an article, The Problem with Annotation Processors, that details the problems with unit testing annotation processors.
I have used http://hg.netbeans.org/core-main/raw-file/default/openide.util.lookup/test/unit/src/org/openide/util/test/AnnotationProcessorTestUtils.java though this is based on java.io.File for simplicity and so has the performance overhead you complain about.
Thomas's suggestion of mocking the whole JSR 269 environment would lead to a pure unit test. You might instead want to write more of an integration test which checks how your processor actually runs inside javac, giving more assurance that it is correct, while merely avoiding disk files. Doing this would require you to write a mock JavaFileManager, which is unfortunately not as easy as it seems, and I have no examples handy, but you should not need to mock other things like the Element interfaces.
One option is to bundle all tests into one class. The half second for compiling etc. is then a constant for a given set of tests; the real per-test time is negligible, I assume.
