I'm using TestNG for my unit tests and I'd like to check exception messages. OK, @Test(expectedExceptionsMessageRegExp = ...) is exactly what I need, right? Well, at the same time I'd like to externalize my messages so they aren't mixed with my code. I'm loosely following a guide by Brian Goetz, so my exception code looks like
throw new IllegalArgumentException(MessageFormat.format(
EXCEPTIONS.getString(EX_NOT_A_VALID_LETTER), c));
Works perfectly for me, except these two things don't exactly mix. I can't write
@Test(dataProvider = "getInvalidLetters",
      expectedExceptions = {IllegalArgumentException.class},
      expectedExceptionsMessageRegExp = regexize(EXCEPTIONS.getString(EX_NOT_A_VALID_LETTER)))
Here, regexize is a function that is supposed to replace {0}-style placeholders with .* (roughly along the lines of the sketch below). However, this fails with an "element value must be a constant expression" error. That makes sense, since the value is needed at compile time. But what are the possible workarounds?
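A minimal sketch of such a helper, purely for illustration (the real one is not shown here): it quotes the literal parts of the MessageFormat pattern and replaces each {n} placeholder with .*.
// Illustrative regexize: quote literal text and turn {0}, {1}, ... into ".*".
static String regexize(String messagePattern) {
    String[] literals = messagePattern.split("\\{\\d+\\}", -1);
    StringBuilder regex = new StringBuilder();
    for (int i = 0; i < literals.length; i++) {
        regex.append(java.util.regex.Pattern.quote(literals[i]));
        if (i < literals.length - 1) {
            regex.append(".*");
        }
    }
    return regex.toString();
}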
I can imagine a test code generator that would replace these constructs with real message regexps, but it would be a pain to integrate it with IDE, SCM, build tools and so on.
Another option is to use try-catch and check the exception message manually, as in the sketch below. But this is ugly.
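For completeness, that manual variant would look roughly like this (validateLetter is a hypothetical method under test):
@Test(dataProvider = "getInvalidLetters")
public void rejectsInvalidLetter(char c) {
    try {
        validateLetter(c); // hypothetical call under test
        fail("expected IllegalArgumentException");
    } catch (IllegalArgumentException e) {
        assertEquals(e.getMessage(),
                MessageFormat.format(EXCEPTIONS.getString(EX_NOT_A_VALID_LETTER), c));
    }
}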
Lastly, I think it should be possible to hack TestNG with something like
@Test(expectedExceptionsMessageBundle = "bundle.name.goes.here",
      expectedExceptionsMessageLocaleProvider = "functionReturningListOfLocales",
      expectedExceptionsMessageKey = "MESSAGE_KEY_GOES_HERE")
This would be a great thing, really. Except that it won't be the same TestNG that Maven fetches for me from the repo. Another option is to implement this, contribute a patch to TestNG and wait for it to be released. I'm seriously considering this option now, but maybe there's an easier way? Haven't I missed something obvious? I can't possibly be the only one with this issue!
Or maybe I'm externalizing my messages in the wrong way. But a guy like Brian Goetz can't be wrong, now can he? Or did I get him wrong?
Update
Based on the answer given here, I've made a tutorial on the topic, covering some pitfalls, especially when using NetBeans 8.1.
Why not use an annotation transformer here?
You will be able to do something like:
@LocalizedException(expectedExceptionsMessageBundle = "bundle.name.goes.here",
                    expectedExceptionsMessageLocaleProvider = "functionReturningListOfLocales",
                    expectedExceptionsMessageKey = "MESSAGE_KEY_GOES_HERE")
@Test(dataProvider = "getInvalidLetters",
      expectedExceptions = {IllegalArgumentException.class})
public void test() {
    // ...
}
Where the annotation transformer will look like:
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;

public class LocalizedExceptionTransformer implements IAnnotationTransformer {

    @Override
    public void transform(ITestAnnotation annotation, Class testClass,
                          Constructor testConstructor, Method testMethod) {
        if (testMethod != null) {
            LocalizedException le = testMethod.getAnnotation(LocalizedException.class);
            if (le != null) {
                String regexp = regexize(le);
                annotation.setExpectedExceptionsMessageRegExp(regexp);
            }
        }
    }
}
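The LocalizedException annotation is not an existing TestNG type, so it has to be declared alongside the transformer. A minimal sketch of how it could look (the attribute names simply mirror the proposal above):
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker annotation read by LocalizedExceptionTransformer above.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface LocalizedException {
    String expectedExceptionsMessageBundle();
    String expectedExceptionsMessageLocaleProvider() default "";
    String expectedExceptionsMessageKey();
}
Also note that annotation transformers cannot be registered with the @Listeners annotation; they have to be declared as listeners in testng.xml or passed via the -listener command-line option.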
Related
I could not find many resources on my question, so I guess there is no easy resolution.
We use JodaTime in our codebase and I wish to forbid (or at least warn) using some methods from this library as they are prone to errors (around timezone management).
I already tried the Reflections library, without success, due to an issue whose fix has not been released yet.
We used to have a custom Sonar rule to handle this, but it is not supported by SonarCloud, so I am looking for another way.
Do you have any leads on how to handle this?
I would recommend using ArchUnit for this, which allows you to specify restrictions such as this as unit tests:
public class DisallowedMethodsTest {
@Test
public void forbidJodaTimeMethods()
{
JavaClasses importedClasses = new ClassFileImporter().importPackages("your.base.package");
ArchRule rule = noClasses().should()
.callMethodWhere(target(name("disallowedMethodName"))
.and(target(owner(assignableTo(DateTime.class)))))
.because("Your reasons");
rule.check(importedClasses);
}
}
If you are looking for something that works in a unit test environment, Jeroen Steenbeeke's answer might be helpful.
If you are looking for something that works in a production environment, you'll need a hook.
In case you cannot require partners to use java.lang.reflect.Proxy to construct the related objects, I'd recommend having a look at AspectJ if you are working on a regular Java project, or Xposed if you are working on an Android project.
Both of them can add restrictions without modifying the existing codebase or program flow.
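For example, with AspectJ an aspect roughly like the following could flag forbidden calls at runtime. This is only a sketch: the zone-less DateTime.now() overload is just an illustrative target, not necessarily the method you want to forbid.
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class ForbiddenJodaCallsAspect {

    // Intercept every call to the zone-less DateTime.now() overload.
    @Before("call(org.joda.time.DateTime org.joda.time.DateTime.now())")
    public void rejectZonelessNow(JoinPoint joinPoint) {
        // Fail hard; logging a warning instead works just as well.
        throw new IllegalStateException(
                "Forbidden JodaTime call at " + joinPoint.getSourceLocation());
    }
}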
I solved this kind of problem by writing an interceptor like the following, as explained at https://docs.oracle.com/javaee/7/tutorial/interceptors002.htm:
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;
public class MethodCallTracerInterceptor {
@AroundInvoke
Object intercept(InvocationContext context)
throws Exception
{
Method method = context.getMethod();
String methodClass = method.getDeclaringClass().getName();
String methodName = method.getName();
if (methodClass.equals("myClass") && methodName.equals("myMethod")) {
//TODO Raise an exception or log a warning.
}
return context.proceed();
}
}
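To actually trace anything, the interceptor has to be bound to the beans whose calls you want to check. A sketch, assuming MyService is a CDI/EJB managed bean so the interceptor actually fires:
import javax.interceptor.Interceptors;

public class MyService {

    @Interceptors(MethodCallTracerInterceptor.class)
    public void myMethod() {
        // Calls to this method now pass through MethodCallTracerInterceptor.
    }
}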
I have this method that I am using in a NetBeans plugin:
public static SourceCodeFile getCurrentlyOpenedFile() {
MainProjectManager mainProjectManager = new MainProjectManager();
Project openedProject = mainProjectManager.getMainProject();
/* Get Java file currently displaying in the IDE if there is an opened project */
if (openedProject != null) {
TopComponent activeTC = TopComponent.getRegistry().getActivated();
DataObject dataLookup = activeTC.getLookup().lookup(DataObject.class);
File file = FileUtil.toFile(dataLookup.getPrimaryFile()); // Currently opened file
// Check if the opened file is a Java file
if (FilenameUtils.getExtension(file.getAbsoluteFile().getAbsolutePath()).equalsIgnoreCase("java")) {
return new SourceCodeFile(file);
} else {
return null;
}
} else {
return null;
}
}
Basically, using NetBeans API, it detects the file currently opened by the user in the IDE. Then, it loads it and creates a SourceCodeFile object out of it.
Now I want to unit test this method using JUnit. The problem is that I don't know how to test it.
Since it doesn't receive any arguments, I can't test how it behaves given wrong input. I also thought about manipulating openedProject in order to test the method's behaviour for different values of that object, but as far as I know, I can't manipulate a local variable like that from JUnit. I also cannot check what the method returns, because in a unit test it will always return null, since no opened file is detected in NetBeans.
So, my question is: how can I approach the unit testing of this method?
Well, your method does take parameters, "between the lines":
MainProjectManager mainProjectManager = new MainProjectManager();
Project openedProject = mainProjectManager.getMainProject();
basically fetches the object to work on.
So the first step would be to change that method signature, to:
public static SourceCodeFile getCurrentlyOpenedFile(Project project) {
...
Of course, that object isn't used, except for that null check. So the next level would be to have a distinct method like
SourceCodeFile lookup(DataObject dataLookup) {
In other words: your real problem is that you wrote hard-to-test code. The "default" answer is: you have to change your production code to make it easier to test.
For example, by ripping it apart and putting the different aspects into smaller helper methods.
You see, that last method, lookup(), takes a parameter, and now it becomes possible to think up test cases for it. You will probably have to use a mocking framework such as Mockito to pass mocked instances of that DataObject class within your test code.
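A minimal sketch of that direction; the helper names isJavaFile and toSourceCodeFile are illustrative, not taken from the original code:
import java.io.File;

import org.apache.commons.io.FilenameUtils;
import org.junit.Test;

import static org.junit.Assert.assertNull;

public class SourceCodeFileTest {

    // Pure logic extracted into small, parameterized helpers that need no IDE state.
    static boolean isJavaFile(File file) {
        return FilenameUtils.getExtension(file.getAbsolutePath()).equalsIgnoreCase("java");
    }

    static SourceCodeFile toSourceCodeFile(File file) {
        return isJavaFile(file) ? new SourceCodeFile(file) : null;
    }

    // These can now be tested without NetBeans running at all:
    @Test
    public void nonJavaFileYieldsNull() {
        assertNull(toSourceCodeFile(new File("notes.txt")));
    }
}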
Long story short: there are no detours here. You can't test your code (in reasonable ways) as it is currently structured. Re-structure your production code, then all your ideas about "when I pass X, then Y should happen" can work out.
Disclaimer: yes, theoretically, you could test the above code by heavily relying on frameworks like PowerMock(ito) or JMockit. These frameworks allow you to control (mock) calls to static methods, or to new(). So they would give you full control over everything in your method. But that would basically force your tests to know everything that is going on in the method under test. Which is a really bad thing.
Here is a snippet of my code. I want to force call the catch block with WakeupException.
public void run() {
    try {
        while (true) {
            LOGGER.logp(Level.INFO, CLASS_NAME, "run()", "Attempting to Poll");
            ConsumerRecords<String, String> records = consumer.poll(10000);
            if (records.count() == 0) {
                LOGGER.logp(Level.INFO, CLASS_NAME, "run()", "No Response. Invalid Topic");
                break;
            } else if (records.count() > 0) {
                LOGGER.logp(Level.INFO, CLASS_NAME, "run()", "Response Received");
            }
        }
    } catch (WakeupException e) {
        consumer.close();
    }
}
Here is what I tried:
@Test(expected = WakeupException.class)
public void failRun() throws WakeupException, IOException {
KafkaConsumerForTests consumerThread3;
consumerThread3 = Mockito.mock(KafkaConsumerForTests.class);
doThrow(new WakeupException()).when(consumerThread3).run();
//Mockito.when(consumerThread2.run()).thenThrow(new WakeupException());
consumerThread3.run();
}
I just want to trigger the WakeupException so that I get line coverage for that block of code. What should I do? This is a void method, by the way. I'm open to suggestions involving PowerMock as well.
After seeing the code, I am quite sure that the call we want to mock is consumer.poll(...). I am not an expert in Kafka, so take everything from here with a grain of salt. Seeing that consumer is an attribute of the class under test, it should be possible to inject a mocked instance into the class under test and have it throw the WakeupException we need. Instead of (or in addition to; your decision) mocking the class under test, we create a mock of the consumer and stub its poll(...) method to throw the desired WakeupException when called. So instead of mocking the call to consumerThread3.run(), we mock the call to consumer.poll(...).
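A sketch of that idea, assuming the consumer can be injected into the class under test (the constructor shown here is an assumption, not part of the original code):
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.errors.WakeupException;
import org.junit.Test;
import org.mockito.Mockito;

public class KafkaConsumerForTestsTest {

    @Test
    public void runClosesConsumerOnWakeup() {
        @SuppressWarnings("unchecked")
        Consumer<String, String> consumer = Mockito.mock(Consumer.class);
        // Make the first poll throw, which is what triggers the catch-block.
        Mockito.when(consumer.poll(Mockito.anyLong())).thenThrow(new WakeupException());

        KafkaConsumerForTests thread = new KafkaConsumerForTests(consumer); // assumed constructor
        thread.run();

        // Verify the behaviour of the catch-block rather than just its line coverage.
        Mockito.verify(consumer).close();
    }
}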
A remark on your question: "I just want to call the WakeupException so that I get line coverage" - This should never be the reason to write a test. A test should test behaviour. If there is no behaviour to test (which is rarely the case), do not write a test.
OP edited the question and added some additional information. I am quite confident that the first paragraph of this post should answer the question. The other paragraphs were written before OP added the relevant code in the try-block. They are written on a more abstract level. The interested reader may read them, but this is not necessary to understand the answer.
Please enjoy!
We want to verify the behaviour of the catch-block. In productive code, something in the try-block would throw the corresponding Exception triggering the catch-block. Thus, in order to test the catch-block, we should mock something in the try-block to throw said Exception.
If mocking a call within the block seems impossible, that may be due to the fact that the code was not developed test-driven. You see, an upside of Test-Driven Development is that you intrinsically generate testable code. If we are stuck with untestable/hard-to-test code, we have two (or maybe three) options:
Leave it as is, do not test it. This can be a valid answer if there is no behaviour to test.
Rewrite the code, make it testable. Depending on the structure of your project this could take from five minutes up to two weeks or more; it is hard to say without knowing the codebase.
Use unconventional tools. Normal mocking frameworks like Mockito have certain limitations, e.g. Mockito does not support mocking static or final methods. Other tools, like PowerMock, aim to eliminate those limitations. But be warned: PowerMock operates on the bytecode level. This means that
we are not necessarily testing the bytecode we use in production
this can screw with other tools, e.g. JaCoCo.
Those tools should only be your last resort and should be used sparingly.
While creating new scenarios, I only want to run the scenario I am currently working on. For this purpose I want to use the Meta: @skip tag before my scenarios. As I found out, I have to use the embedder to configure the meta filters, so I tried:
configuredEmbedder().useMetaFilters(Arrays.asList("-skip"));
but actually this still has no effect on my test scenarios. I used it in the constructor of my SerenityStories test suite definition. Here is the complete code of this class:
public class AcceptanceTestSuite extends SerenityStories {
@Managed
WebDriver driver;
public AcceptanceTestSuite() {
System.setProperty("webdriver.chrome.driver", "D:/files/chromedriver/chromedriver.exe");
System.setProperty("chrome.switches", "--lang=en");
System.setProperty("restart.browser.each.scenario", "true");
configuredEmbedder().useMetaFilters(Arrays.asList("-skip"));
runSerenity().withDriver("chrome");
}
@Override
public Configuration configuration() {
Configuration configuration = super.configuration();
Keywords keywords = new LocalizedKeywords(DEFAULTSTORYLANGUAGE);
Properties properties = configuration.storyReporterBuilder().viewResources();
properties.setProperty("encoding", "UTF-8");
configuration.useKeywords(keywords)
.useStoryParser(new RegexStoryParser(keywords, new ExamplesTableFactory(new LoadFromClasspath(this.getClass()))))
.useStoryLoader(new UTF8StoryLoader()).useStepCollector(new MarkUnmatchedStepsAsPending(keywords))
.useDefaultStoryReporter(new ConsoleOutput(keywords)).storyReporterBuilder().withKeywords(keywords).withViewResources(properties);
return configuration;
}
}
Is this the wrong place or have I missed something? Still all scenarios are executed.
EDIT:
I changed the following classes, and now I think that it "works":
public AcceptanceTestSuite() {
System.setProperty("webdriver.chrome.driver", "D:/files/chromedriver/chromedriver.exe");
System.setProperty("chrome.switches", "--lang=de");
System.setProperty("restart.browser.each.scenario", "true");
this.useEmbedder(configuredEmbedder());
runSerenity().withDriver("chrome");
}
@Override
public Embedder configuredEmbedder() {
final Embedder embedder = new Embedder();
embedder.embedderControls()
.useThreads(1)
.doGenerateViewAfterStories(true)
.doIgnoreFailureInStories(false)
.doIgnoreFailureInView(false)
.doVerboseFailures(true);
final Configuration configuration = configuration();
embedder.useConfiguration(configuration);
embedder.useStepsFactory(stepsFactory());
embedder.useMetaFilters(Arrays.asList("-skip"));
return embedder;
}
But now I get the message [pool-1-thread-1] INFO net.serenitybdd.core.Serenity - TEST IGNORED, but the scenario is still executed. Only on the results page do I get the info that this scenario is ignored (but still executed). Is there a way to SKIP the scenario so it won't run at all?
I could not make it work using configuredEmbedder(), but I got what I wanted by adding -Dmetafilter="+working -finished" to my mvn run configuration and using the tag @working for scenarios I'm working on and want to run, and @finished for scenarios I don't want to execute. I still have to change the run configuration whenever I want to change the meta tags, so it is not very comfortable, but I get what I was looking for.
As long as you document it well (some doc in https://github.com/serenity-bdd/the-serenity-book would be brilliant), I think as a JBehave/Serenity user you are well enough placed to decide which option makes the most sense.
Investigation
I debugged the serenity-jbehave classes, trying to understand why setting
configuredEmbedder().useMetaFilters(Collections.singletonList("-skip"))
is not working, no matter where I put it within my class extending SerenityStories. I found the strategic place in the code where the metaFilters we define in our class on the ExtendedEmbedder#embedder are overwritten by the settings coming from serenity-jbehave.
This method is SerenityReportingRunner#createPerformableTree:
private PerformableTree createPerformableTree(List<CandidateSteps> candidateSteps, List<String> storyPaths) {
ExtendedEmbedder configuredEmbedder = this.getConfiguredEmbedder();
configuredEmbedder.useMetaFilters(getMetaFilters());
BatchFailures failures = new BatchFailures(configuredEmbedder.embedderControls().verboseFailures());
PerformableTree performableTree = configuredEmbedder.performableTree();
RunContext context = performableTree.newRunContext(getConfiguration(), candidateSteps,
configuredEmbedder.embedderMonitor(), configuredEmbedder.metaFilter(), failures);
performableTree.addStories(context, configuredEmbedder.storyManager().storiesOfPaths(storyPaths));
return performableTree;
}
This line changes the set metaFilters:
configuredEmbedder.useMetaFilters(getMetaFilters());
It overrides the current metaFilters value.
Going further up the call chain, we get to the logic that defines where the metaFilters are read from, i.e. where we can actually set them.
SerenityReportingRunner#createPerformableTree
↓
SerenityReportingRunner#getMetaFilters
↓
SerenityReportingRunner#getMetafilterSetting
This is the method we need!
private String getMetafilterSetting() {
Optional<String> environmentMetafilters = getEnvironmentMetafilters();
Optional<String> annotatedMetafilters = getAnnotatedMetafilters(testClass);
Optional<String> thucAnnotatedMetafilters = getThucAnnotatedMetafilters(testClass);
return environmentMetafilters.orElse(annotatedMetafilters.orElse(thucAnnotatedMetafilters.orElse("")));
}
As we can see here, the metaFilters can be defined in three places, which override each other. In order of decreasing priority, they are:
The value of the metafilter VM property (exactly all lowercase!).
The value of the net.serenitybdd.jbehave.annotations.Metafilter annotation on our SerenityStories class.
The value of the net.thucydides.jbehave.annotations.Metafilter annotation on our SerenityStories class. This annotation is deprecated, but kept for backwards compatibility.
Solutions that work with the current serenity-jbehave version
I've tried/debugged all three options; they work and override each other as described above.
1. Use environment metafilter property
Added this to my JVM run arguments:
-Dmetafilter=-skip
2. Use the modern #Metafilter annotation
import net.serenitybdd.jbehave.SerenityStories;
import net.serenitybdd.jbehave.annotations.Metafilter;
#Metafilter("-skip")
public class Acceptance extends SerenityStories {
3. Use the deprecated #Metafilter annotation
import net.serenitybdd.jbehave.SerenityStories;
import net.thucydides.jbehave.annotations.Metafilter;
#Metafilter("-skip") // warned as deprecated
public class Acceptance extends SerenityStories {
The solution for my current project is to use the current @Metafilter("-skip") annotation on my test class, so as not to depend on (or have to change) the VM properties of a particular Jenkins/local dev execution.
Possible pull request to make
https://github.com/serenity-bdd/serenity-core/issues/95 — here the Serenity maintainers suggested that I submit a PR with this fix, since they are not focused on Serenity + JBehave at the moment.
I understand where to make the changes (in the code chain described above), but I don't know what the overriding logic should be:
— MetaFilters from configuredEmbedder override any of ENV/annotation MetaFilters.
OR
— Any ENV/annotation MetaFilters override Metafilters from configuredEmbedder
OR
— MetaFilters from configuredEmbedder are merged with ENV/annotation MetaFilters. This option would require a merging priority.
Any suggestions?
Whatever the fix, I would prefer to add explicit logging of how the overriding works to SerenityReportingRunner#getMetafilterSetting, since the current behaviour is really non-obvious and took a lot of time to investigate.
I'm experimenting with Java annotation processors. I'm able to write integration tests using the JavaCompiler API (in fact I'm using "hickory" at the moment). I can run the compile process and analyse the output. The problem: a single test runs for about half a second even without any code in my annotation processor. This is way too long to use it TDD-style.
Mocking away the dependencies seems very hard to me (I would have to mock out the entire javax.lang.model.element package). Has anyone succeeded in writing unit tests for an annotation processor (Java 6)? If not... what would your approach be?
This is an old question, but it seems that the state of annotation processor testing hasn't gotten any better, so we released Compile Testing today. The best docs are in package-info.java, but the general idea is that there is a fluent API for testing compilation output when run with an annotation processor. For example,
ASSERT.about(javaSource())
.that(JavaFileObjects.forResource("HelloWorld.java"))
.processedWith(new MyAnnotationProcessor())
.compilesWithoutError()
.and().generatesSources(JavaFileObjects.forResource("GeneratedHelloWorld.java"));
tests that the processor generates a file that matches GeneratedHelloWorld.java (golden file on the class path). You can also test that the processor produces error output:
JavaFileObject fileObject = JavaFileObjects.forResource("HelloWorld.java");
ASSERT.about(javaSource())
.that(fileObject)
.processedWith(new NoHelloWorld())
.failsToCompile()
.withErrorContaining("No types named HelloWorld!").in(fileObject).onLine(23).atColumn(5);
This is obviously a lot simpler than mocking and unlike typical integration tests, all of the output is stored in memory.
You're right, mocking the annotation processing API (with a mock library like EasyMock) is painful. I tried this approach and it broke down pretty rapidly. You have to set up too many method call expectations. The tests become unmaintainable.
A state-based test approach worked reasonably well for me. I had to implement the parts of the javax.lang.model.* API I needed for my tests. (That was less than 350 lines of code.)
This is the part of a test that sets up the javax.lang.model objects. After the setup, the model should be in the same state as in the Java compiler implementation.
DeclaredType typeArgument = declaredType(classElement("returnTypeName"));
DeclaredType validReturnType = declaredType(interfaceElement(GENERATOR_TYPE_NAME), typeArgument);
TypeParameterElement typeParameter = typeParameterElement();
ExecutableElement methodExecutableElement = Model.methodExecutableElement(name, validReturnType, typeParameter);
The static factory methods are defined in the class Model, which implements the javax.lang.model.* interfaces. For example, declaredType. (All unsupported operations throw exceptions.)
public static DeclaredType declaredType(final Element element, final TypeMirror... argumentTypes) {
    return new DeclaredType() {
        @Override public Element asElement() {
            return element;
        }
        @Override public List<? extends TypeMirror> getTypeArguments() {
            return Arrays.asList(argumentTypes);
        }
        @Override public String toString() {
            return format("DeclareTypeModel[element=%s, argumentTypes=%s]",
                    element, Arrays.toString(argumentTypes));
        }
        @Override public <R, P> R accept(TypeVisitor<R, P> v, P p) {
            return v.visitDeclared(this, p);
        }
        @Override public boolean equals(Object obj) { throw new UnsupportedOperationException(); }
        @Override public int hashCode() { throw new UnsupportedOperationException(); }
        @Override public TypeKind getKind() { throw new UnsupportedOperationException(); }
        @Override public TypeMirror getEnclosingType() { throw new UnsupportedOperationException(); }
    };
}
The rest of the test verifies the behavior of the class under test.
Method actual = new Method(environment(), methodExecutableElement);
Method expected = new Method(..);
assertEquals(expected, actual);
You can have a look at the source code of the Quickcheck @Samples and @Iterables source code generator tests. (The code is not optimal yet. The Method class has too many parameters and the Parameter class is not tested in its own test but as part of the Method test. It should illustrate the approach nevertheless.)
Good luck!
jOOR is a small Java reflection library that also provides simplified access to the in-memory Java compilation API in javax.tools.JavaCompiler. We added support for this to unit test jOOQ's annotation processors. You can easily write unit tests like this:
@Test
public void testCompileWithAnnotationProcessors() {
AProcessor p = new AProcessor();
try {
Reflect.compile(
"org.joor.test.FailAnnotationProcessing",
"package org.joor.test; " +
"#A " +
"public class FailAnnotationProcessing { " +
"}",
new CompileOptions().processors(p)
).create().get();
Assert.fail();
}
catch (ReflectException expected) {
assertFalse(p.processed);
}
}
The above example has been taken from this blog post
I was in a similar situation, so I created the Avatar library. It won't give you the performance of a pure unit test with no compilation, but if used correctly you shouldn't see much of a performance hit.
Avatar lets you write a source file, annotate it, and convert it to elements in a unit test. This allows you to unit test methods and classes which consume Element objects, without manually invoking javac.
I ran into the same problem a while ago and found this question. Although the other answers provided are decent, I felt that there was still room for improvement. Based on the other answers to this question, I created Elementary, a suite of JUnit 5 extensions that provide a real annotation processing environment for unit tests.
Most libraries test annotation processors by running them. However, most annotation processors are pretty complex and broken into more fine-grained components. It is not feasible to test individual components by running the annotation processor. Instead, we make the annotation processing environment available to these tests.
The following code snippet illustrates how to test a Lint component:
import com.karuslabs.elementary.junit.Cases;
import com.karuslabs.elementary.junit.Tools;
import com.karuslabs.elementary.junit.ToolsExtension;
import com.karuslabs.elementary.junit.annotations.Case;
import com.karuslabs.elementary.junit.annotations.Introspect;
import com.karuslabs.utilitary.type.TypeMirrors;
@ExtendWith(ToolsExtension.class)
@Introspect
class ToolsExtensionExampleTest {
    Lint lint = new Lint(Tools.typeMirrors());

    @Test
    void lint_string_variable(Cases cases) {
        var first = cases.one("first");
        assertTrue(lint.lint(first));
    }

    @Test
    void lint_method_that_returns_string(Cases cases) {
        var second = cases.get(1);
        assertFalse(lint.lint(second));
    }

    @Case("first") String first;
    @Case String second() { return ""; }
}
class Lint {
final TypeMirrors types;
final TypeMirror expectedType;
Lint(TypeMirrors types) {
this.types = types;
this.expectedType = types.type(String.class);
}
public boolean lint(Element element) {
if (!(element instanceof VariableElement)) {
return false;
}
var variable = (VariableElement) element;
return types.isSameType(expectedType, variable.asType());
}
}
By annotating the test class with @Introspect and test cases with @Case, we can declare test cases in the same file as the tests. The corresponding Element representation of the test cases can be retrieved by a test using Cases.
If anyone is interested, I wrote an article, The Problem with Annotation Processors that details the problems with unit testing annotation processors.
I have used http://hg.netbeans.org/core-main/raw-file/default/openide.util.lookup/test/unit/src/org/openide/util/test/AnnotationProcessorTestUtils.java though this is based on java.io.File for simplicity and so has the performance overhead you complain about.
Thomas's suggestion of mocking the whole JSR 269 environment would lead to a pure unit test. You might instead want to write more of an integration test which checks how your processor actually runs inside javac, giving more assurance it is correct, but merely want to avoid disk files. Doing this would require you to write a mock JavaFileManager, which is unfortunately not as easy as it seems and I have no examples handy, but you should not need to mock other things like Element interfaces.
An option is to bundle all tests into one class. The half second for compiling etc. is then a constant cost for the whole set of tests, and the real per-test time is negligible, I assume.
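A sketch of that idea; CompilerHarness, CompilationResult and MyAnnotationProcessor are placeholders for whatever compilation helper and processor you already use, not a real library:
import org.junit.BeforeClass;
import org.junit.Test;

public class MyProcessorTest {

    // Holds the output of the single compiler run shared by all tests in this class.
    private static CompilationResult result;

    @BeforeClass
    public static void compileOnce() {
        // Pay the ~0.5 s compilation cost a single time for the whole class.
        result = CompilerHarness.compile("test-sources/HelloWorld.java", new MyAnnotationProcessor());
    }

    @Test
    public void generatesExpectedSource() {
        // assert against result ...
    }

    @Test
    public void reportsNoErrors() {
        // assert against result ...
    }
}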