How can I make a JUnit test wait?

I have a JUnit test that I want to wait for a period of time synchronously. My JUnit test looks like this:
@Test
public void testExpires() {
    SomeCacheObject sco = new SomeCacheObject();
    sco.putWithExpiration("foo", 1000);
    // WAIT FOR 2 SECONDS
    assertNull(sco.getIfNotExpired("foo"));
}
I tried Thread.currentThread().wait(), but it throws an IllegalMonitorStateException (as expected).
Is there some trick to it or do I need a different monitor?

How about Thread.sleep(2000); ? :)
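Applied to the test above, that reads as follows (a minimal sketch; the InterruptedException is simply allowed to propagate):

@Test
public void testExpires() throws InterruptedException {
    SomeCacheObject sco = new SomeCacheObject();
    sco.putWithExpiration("foo", 1000);
    Thread.sleep(2000); // block the test thread for 2 seconds
    assertNull(sco.getIfNotExpired("foo"));
}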

Thread.sleep() could work in most cases, but usually when you're waiting, you are actually waiting for a particular condition or state to occur. Thread.sleep() does not guarantee that whatever you're waiting for has actually happened.
If you are waiting on a REST request, for example, it may usually return in 5 seconds, but if you set your sleep to 5 seconds, then the day your request comes back in 10 seconds your test is going to fail.
To remedy this, Jayway has a great utility called Awaitility which is perfect for ensuring that a specific condition occurs before you move on.
It has a nice fluent API as well:
await().until(() -> {
    return yourConditionIsMet();
});
https://github.com/jayway/awaitility
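Applied to the cache example from the question, a condition-based wait might look like this (a sketch; the org.awaitility package applies to recent Awaitility versions, older ones use com.jayway.awaitility):

import static java.util.concurrent.TimeUnit.SECONDS;
import static org.awaitility.Awaitility.await;

// poll until the entry has actually expired, failing after at most 5 seconds
await().atMost(5, SECONDS).until(() -> sco.getIfNotExpired("foo") == null);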

In case your static code analyzer (like SonarQube) complains, but you cannot think of another way than sleeping, you may try a hack like:
Awaitility.await().pollDelay(Durations.ONE_SECOND).until(() -> true);
It's conceptually incorrect, but it is the same as Thread.sleep(1000).
The best way, of course, is to pass a Callable with your appropriate condition, rather than the true I used here.
https://github.com/awaitility/awaitility

You can use the java.util.concurrent.TimeUnit class, which internally uses Thread.sleep. The syntax should look like this:
@Test
public void testExpires() throws InterruptedException {
    SomeCacheObject sco = new SomeCacheObject();
    sco.putWithExpiration("foo", 1000);
    TimeUnit.SECONDS.sleep(2); // wait for 2 seconds
    assertNull(sco.getIfNotExpired("foo"));
}
This class makes the time unit explicit and readable. You can use HOURS/MINUTES/SECONDS.

If it is an absolute must to generate a delay in a test, CountDownLatch is a simple solution. In your test class declare:
private final CountDownLatch waiter = new CountDownLatch(1);
and in the test where needed:
waiter.await(1000 * 1000, TimeUnit.NANOSECONDS); // 1 ms
Maybe unnecessary to say, but keep wait times small and don't accumulate waits in too many places.
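Assembled into the question's test, the approach looks like this (a sketch, using a 2-second timeout to match the original example):

private final CountDownLatch waiter = new CountDownLatch(1);

@Test
public void testExpires() throws InterruptedException {
    SomeCacheObject sco = new SomeCacheObject();
    sco.putWithExpiration("foo", 1000);
    waiter.await(2, TimeUnit.SECONDS); // nothing counts the latch down, so this just waits 2 s
    assertNull(sco.getIfNotExpired("foo"));
}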

Mockito (which is already provided via transitive dependencies in Spring Boot projects) has a couple of ways to wait for asynchronous events, or for conditions to happen.
A simple pattern which currently works very well for us is:
// ARRANGE – instantiate Mocks, setup test conditions
// ACT – the action to test, followed by:
Mockito.verify(myMockOrSpy, timeout(5000).atLeastOnce()).delayedStuff();
// further execution paused until `delayedStuff()` is called – or fails after timeout
// ASSERT – assertThat(...)
Two slightly more complex yet more sophisticated approaches are described in this article by @fernando-cejas.
My urgent advice regarding the current top answers given here: you want your tests to
finish as fast as possible
have consistent results, independent of the test environment (non-"flaky")
... so just don't be silly by using Thread.sleep() in your test code.
Instead, have your production code use dependency injection (or, a little "dirtier", expose some mockable/spyable methods), then use Mockito, Awaitility, ConcurrentUnit or others to ensure asynchronous preconditions are met before assertions happen.

You could also use the CountDownLatch object, as explained here.
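The idea, as a minimal sketch (startAsyncWork is a hypothetical stand-in for whatever triggers your callback): the test blocks until the asynchronous code counts the latch down, or fails when the timeout expires.

import static org.junit.Assert.assertTrue;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.junit.Test;

public class AsyncCallbackTest {

    @Test
    public void callbackFires() throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        startAsyncWork(latch::countDown); // signal completion from the async code
        // await() returns false if the latch never reaches zero within the timeout
        assertTrue("callback never fired", latch.await(2, TimeUnit.SECONDS));
    }

    private void startAsyncWork(Runnable callback) {
        new Thread(callback).start(); // stand-in for the real asynchronous API
    }
}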

There is a general problem: it's hard to mock time. Also, it's really bad practice to place long-running/waiting code in a unit test.
So, for making a scheduling API testable, I used an interface with a real and a mock implementation like this:
public interface Clock {
    long getCurrentMillis();
    void sleep(long millis) throws InterruptedException;
}

public static class SystemClock implements Clock {
    @Override
    public long getCurrentMillis() {
        return System.currentTimeMillis();
    }
    @Override
    public void sleep(long millis) throws InterruptedException {
        Thread.sleep(millis);
    }
}

public static class MockClock implements Clock {
    private final AtomicLong currentTime = new AtomicLong(0);

    public MockClock() {
        this(System.currentTimeMillis());
    }
    public MockClock(long currentTime) {
        this.currentTime.set(currentTime);
    }

    @Override
    public long getCurrentMillis() {
        // advance a little on every read, so polling loops make progress
        return currentTime.addAndGet(5);
    }
    @Override
    public void sleep(long millis) {
        currentTime.addAndGet(millis);
    }
}
With this, you could imitate time in your test (assuming SomeCacheObject reads time through the injected Clock):
@Test
public void testExpiration() {
    MockClock clock = new MockClock();
    SomeCacheObject sco = new SomeCacheObject(clock); // inject the mock clock
    sco.putWithExpiration("foo", 1000);
    clock.sleep(2000); // "wait" for 2 seconds
    assertNull(sco.getIfNotExpired("foo"));
}
An advanced multi-threading mock for Clock is much more complex, of course, but you can make it with ThreadLocal references and a good time synchronization strategy, for example.

Using Thread.sleep in a test is not good practice. It creates brittle tests that can fail unpredictably depending on the environment ("Passes on my machine!") or on load. Don't rely on timing (use mocks), or use libraries such as Awaitility for asynchronous testing.
Dependency: testImplementation 'org.awaitility:awaitility:3.0.0'
await().pollInterval(Duration.FIVE_SECONDS).atLeast(Duration.FIVE_SECONDS).atMost(Duration.FIVE_SECONDS).untilAsserted(() -> {
    // your assertion
});

Related

Executing JUnit test classes in order [duplicate]

I have a Java app built with Maven.
JUnit for tests, with the failsafe and surefire plugins.
I have more than 2000 integration tests.
To speed up the test run, I use the failsafe jvmfork to run my tests in parallel.
I have some heavy test classes, and they typically run at the end of my test execution, which slows down my CI verify process.
The failsafe runOrder=balanced would be a good option for me, but I can't use it because of the jvmfork.
Renaming the test classes, or moving them to another package and running them alphabetically, is not an option.
Any suggestion how I can run my slow test classes at the beginning of the verify process?
In JUnit 5 (from version 5.8.0 onwards) test classes can be ordered too.
src/test/resources/junit-platform.properties:
# ClassOrderer$OrderAnnotation sorts classes based on their @Order annotation
junit.jupiter.testclass.order.default=org.junit.jupiter.api.ClassOrderer$OrderAnnotation
Other JUnit built-in class orderer implementations:
org.junit.jupiter.api.ClassOrderer$ClassName
org.junit.jupiter.api.ClassOrderer$DisplayName
org.junit.jupiter.api.ClassOrderer$Random
For other ways (besides the junit-platform.properties file) to set configuration parameters, see the JUnit 5 user guide.
You can also provide your own orderer. It must implement the ClassOrderer interface:
package foo;

public class MyOrderer implements ClassOrderer {
    @Override
    public void orderClasses(ClassOrdererContext context) {
        Collections.shuffle(context.getClassDescriptors());
    }
}

junit.jupiter.testclass.order.default=foo.MyOrderer
Note that @Nested test classes cannot be ordered by a ClassOrderer.
Refer to the JUnit 5 documentation and the ClassOrderer API docs to learn more about this.
I gave the combination of answers I found a try:
Running JUnit4 Test classes in specified order
Running JUnit Test in parallel on Suite Level
The second answer is based on these classes of this github project, which is available under the BSD-2 license.
I defined a few test classes:
public class LongRunningTest {
    @Test
    public void test() {
        System.out.println(Thread.currentThread().getName() + ":\tlong test - started");
        long time = System.currentTimeMillis();
        do {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
            }
        } while (System.currentTimeMillis() - time < 1000);
        System.out.println(Thread.currentThread().getName() + ":\tlong test - done");
    }
}

@Concurrent
public class FastRunningTest1 {
    @Test
    public void test1() {
        try {
            Thread.sleep(250);
        } catch (InterruptedException e) {
        }
        System.out.println(Thread.currentThread().getName() + ":\tfrt1-test1 - done");
    }
    // +7 more repetitions of the same method
}
Then I defined the test suites
(FastRunningTest2 is a copy of the first class with adjusted output):

@SuiteClasses({LongRunningTest.class, LongRunningTest.class})
@RunWith(Suite.class)
public class SuiteOne {}

@SuiteClasses({FastRunningTest1.class, FastRunningTest2.class})
@RunWith(Suite.class)
public class SuiteTwo {}

@SuiteClasses({SuiteOne.class, SuiteTwo.class})
@RunWith(ConcurrentSuite.class)
public class TopLevelSuite {}
When I execute the TopLevelSuite I get the following output:
TopLevelSuite-1-thread-1: long test - started
FastRunningTest1-1-thread-4: frt1-test4 - done
FastRunningTest1-1-thread-2: frt1-test2 - done
FastRunningTest1-1-thread-1: frt1-test1 - done
FastRunningTest1-1-thread-3: frt1-test3 - done
FastRunningTest1-1-thread-5: frt1-test5 - done
FastRunningTest1-1-thread-3: frt1-test6 - done
FastRunningTest1-1-thread-1: frt1-test8 - done
FastRunningTest1-1-thread-5: frt1-test7 - done
FastRunningTest2-2-thread-1: frt2-test1 - done
FastRunningTest2-2-thread-2: frt2-test2 - done
FastRunningTest2-2-thread-5: frt2-test5 - done
FastRunningTest2-2-thread-3: frt2-test3 - done
FastRunningTest2-2-thread-4: frt2-test4 - done
TopLevelSuite-1-thread-1: long test - done
TopLevelSuite-1-thread-1: long test - started
FastRunningTest2-2-thread-5: frt2-test8 - done
FastRunningTest2-2-thread-2: frt2-test6 - done
FastRunningTest2-2-thread-1: frt2-test7 - done
TopLevelSuite-1-thread-1: long test - done
Which basically shows that the LongRunningTest is executed in parallel to the FastRunningTests. The default number of threads used for parallel execution defined by the @Concurrent annotation is 5, which can be seen in the output of the parallel execution of the FastRunningTests.
The downside is that these threads are not shared between FastRunningTest1 and FastRunningTest2.
This behaviour shows that it is "somewhat" possible to do what you want to do (whether that works with your current setup is a different question).
Also I am not sure whether this is actually worth the effort,
as you need to prepare those test suites manually (or write something that autogenerates them)
and you need to add the @Concurrent annotation to all those classes (maybe with a different number of threads for each class).
As this basically shows that it is possible to define the execution order of classes and trigger their parallel execution, it should also be possible to get the whole process to use only one thread pool (but I am not sure what the implications of that would be).
As the whole concept is based on a ThreadPoolExecutor, using a PriorityBlockingQueue that gives long-running tasks a higher priority would get you closer to your ideal outcome of executing the long-running tests first, as sketched below.
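A sketch of that idea (class names are illustrative, not from the suite-runner project): a one-worker pool drains a PriorityBlockingQueue, so queued high-priority (slow) tasks run before low-priority ones.

import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A task that carries a priority so the queue can order it.
class PrioritizedTask implements Runnable, Comparable<PrioritizedTask> {
    final int priority;      // higher value = runs earlier
    final Runnable delegate;

    PrioritizedTask(int priority, Runnable delegate) {
        this.priority = priority;
        this.delegate = delegate;
    }

    @Override
    public void run() {
        delegate.run();
    }

    @Override
    public int compareTo(PrioritizedTask other) {
        return Integer.compare(other.priority, this.priority); // descending by priority
    }
}

public class PriorityPoolDemo {
    public static void main(String[] args) {
        // execute() is required here: submit() would wrap each task in a
        // FutureTask, which is not Comparable and breaks the priority queue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new PriorityBlockingQueue<>());
        pool.execute(new PrioritizedTask(1, () -> System.out.println("fast suite")));
        pool.execute(new PrioritizedTask(10, () -> System.out.println("slow suite")));
        pool.execute(new PrioritizedTask(5, () -> System.out.println("medium suite")));
        // queued tasks run highest-priority first: slow, then medium
        pool.shutdown();
    }
}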
I experimented around a bit more and implemented my own custom suite runner and JUnit runner. The idea is to have your JUnit runner submit the tests into a queue which is handled by a single ThreadPoolExecutor. Because I didn't implement a blocking operation in the RunnerScheduler#finish method, I ended up with a solution where the tests from all classes were passed to the queue before the execution even started. (That might look different if there are more test classes and methods involved.)
At least it proves the point that you can mess with JUnit at this level if you really want to.
The code of my PoC is a bit messy and too lengthy to put here, but if someone is interested I can push it into a GitHub project.
In our project we created a few marker interfaces, for example
public interface SlowTestsCategory {}
and put them into the @Category annotation of JUnit on the test classes with slow tests:
@Category(SlowTestsCategory.class)
After that we created some special Gradle tasks to run tests by category, or a few categories in a custom order:
task unitTest(type: Test) {
    description = 'description.'
    group = 'groupName'
    useJUnit {
        includeCategories 'package.SlowTestsCategory'
        excludeCategories 'package.ExcludedCategory'
    }
}
This solution is served by Gradle, but maybe it'll be helpful for you.
Let me summarize everything before I will provide a recommendation.
Integration tests are slow. This is fine and it's natural.
CI build doesn't run tests that assume deployment of a system, since there is no deployment in CI. We care about deployment in CD process.
So I assume your integration tests don't assume deployment.
CI build runs unit tests first. Unit tests are extremely fast because they use only RAM.
We have good and quick feedback from unit tests.
At this moment we are sure we don't have a problem with getting a quick feedback. But we still want to run integration tests faster.
I would recommend the following solutions:
Improve the actual tests. Quite often they are not effective and can be sped up significantly.
Run integration tests in the background (i.e. don't wait for real-time feedback from them).
It's natural for them to be much slower than unit tests.
Split integration tests into groups and run them separately if you need feedback from some of them faster.
Run integration tests in different JVMs. Not different threads within the same JVM!
In this case you don't care about thread safety, and you should not care about it.
Run integration tests on different machines, and so on.
I worked with many different projects (some of them had a CI build running for 48 hours) and the first 3 steps were enough (even for crazy cases). Step #4 is rarely needed with good tests. Step #5 is for very specific situations.
You see that my recommendation relates to the process and not to the tool, because the problem is in the process.
Quite often people ignore the root cause and try to tune the tool (Maven in this case). They get cosmetic improvements, but with a high maintenance cost for the created solution.
There is a solution for that from version 5.8.0-M1 of JUnit.
Basically you need to create your own orderer. I did something like this.
Here is an annotation which you will use on your test classes:

@Retention(RetentionPolicy.RUNTIME)
public @interface TestClassesOrder {
    int value() default Integer.MAX_VALUE;
}
Then you need to create a class which implements org.junit.jupiter.api.ClassOrderer:

public class AnnotationTestsOrderer implements ClassOrderer {
    @Override
    public void orderClasses(ClassOrdererContext context) {
        Collections.sort(context.getClassDescriptors(), new Comparator<ClassDescriptor>() {
            @Override
            public int compare(ClassDescriptor o1, ClassDescriptor o2) {
                TestClassesOrder a1 = o1.getTestClass().getDeclaredAnnotation(TestClassesOrder.class);
                TestClassesOrder a2 = o2.getTestClass().getDeclaredAnnotation(TestClassesOrder.class);
                // unannotated classes sort last; two unannotated classes must
                // compare equal to keep the comparator contract consistent
                if (a1 == null && a2 == null) {
                    return 0;
                }
                if (a1 == null) {
                    return 1;
                }
                if (a2 == null) {
                    return -1;
                }
                return Integer.compare(a1.value(), a2.value());
            }
        });
    }
}
To get it working you need to tell JUnit which class to use for ordering the descriptors. So create a file named junit-platform.properties in your resources folder. In that file you need just one line with your orderer class:
junit.jupiter.testclass.order.default=org.example.tests.AnnotationTestsOrderer
Now you can use your orderer annotation like the @Order annotation, but at class level:

@TestClassesOrder(1)
class Tests {...}

@TestClassesOrder(2)
class MainTests {...}

@TestClassesOrder(3)
class EndToEndTests {...}

I hope that this will help someone.
You can use annotations in JUnit 5 to set the test order you wish to use.
From JUnit 5's user guide:
https://junit.org/junit5/docs/current/user-guide/#writing-tests-test-execution-order
import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

@TestMethodOrder(OrderAnnotation.class)
class OrderedTestsDemo {
    @Test
    @Order(1)
    void nullValues() {
        // perform assertions against null values
    }
    @Test
    @Order(2)
    void emptyValues() {
        // perform assertions against empty values
    }
    @Test
    @Order(3)
    void validValues() {
        // perform assertions against valid values
    }
}
Upgrading to JUnit 5 can be done fairly easily, and the documentation linked at the beginning of this post contains all the information you might need.

How to schedule tasks with a delay

I need to schedule a task to run after 2 minutes. Then when the time is up I need to check if we are still ONLINE. If we are still online I simply don't do anything. If OFFLINE then I will do some work.
private synchronized void schedule(ConnectionObj connectionObj)
{
    if (connectionObj.getState() == ONLINE)
    {
        // schedule timer
    }
    else
    {
        // cancel task.
    }
}
This is the code I am considering:

@Async
private synchronized void task(ConnectionObj connectionObj)
{
    try
    {
        Thread.sleep(2000); // short time for test
    }
    catch (InterruptedException e)
    {
        e.printStackTrace();
    }
    if (connectionObj.getState() == ONLINE)
    {
        // don't do anything
    }
    else
    {
        doWork();
    }
}
For scheduling this task, should I use @Async? I may still get many more calls to schedule() while I am waiting inside the task() method.
Does Spring Boot have something like a thread that I create each time schedule() gets called, so that this becomes easy?
I am looking for something similar to postDelayed() from Android: how to use postDelayed() correctly in android studio?
I'm not sure about an exclusively Spring Boot solution, since it isn't something that I work with.
However, you can use ScheduledExecutorService, which is in the base Java environment. For your usage, it would look something like this:

@Async
private synchronized void task(ConnectionObj connectionObj)
{
    // note: creating a new pool per call leaks threads; in real code,
    // share one ScheduledExecutorService and shut it down eventually
    Executors.newScheduledThreadPool(1).schedule(() -> {
        if (connectionObj.getState() == ONLINE)
        {
            // don't do anything
        }
        else
        {
            doWork();
        }
    }, 2, TimeUnit.MINUTES);
}
I used lambda expressions, which are explained here.
Update
Seeing as how you need to schedule tasks "on demand", @Scheduled won't help, as you mentioned. I think the simplest solution is to go for something like what @Leftist proposed.
Otherwise, as I mentioned in the comments, you can look at the Spring Boot Quartz integration to create a job and schedule it with Quartz. It will then take care of running it after the two-minute mark. It's just more code for almost the same result.
Original
For Spring Boot, you can use the built-in scheduling support. It will take care of running your code on time, on a separate thread.
As the article states, you must enable scheduling with @EnableScheduling.
Then you annotate the method you want to run with @Scheduled(..), and you can set up a fixedDelay, a cron expression, or any of the other timing options to suit your time-execution requirements.
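As a minimal sketch of that setup (bean and method names are illustrative, not from the original answer):

import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Configuration
@EnableScheduling
class SchedulingConfig {
}

@Component
class ConnectionWatcher {

    // runs every two minutes, on a scheduler thread managed by Spring
    @Scheduled(fixedDelay = 2 * 60 * 1000)
    public void checkConnection() {
        // if (connectionObj.getState() != ONLINE) { doWork(); }
    }
}

Note that @Scheduled runs on a fixed cycle; it cannot start a fresh two-minute countdown per schedule() call, which is why the on-demand variants above are needed.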

JavaEE - EJB/CDI Method Duration Mechanism

Not sure how to title this issue, but I hope the description gives a better explanation. I am looking for a way to annotate an EJB or CDI method with a custom annotation like @Duration, and to kill the method's execution if it takes longer than the given duration. I guess some pseudo-code will make everything clear:
public class myEJBorCdiBean {
    @Duration(seconds = 5)
    public List<Data> complexTask(..., ...)
    {
        while (..)
        // this takes more time than the given 5 seconds, so throw an exception
    }
}
To sum up: a method takes extremely long, and it shall throw a "time duration expired" error or something like that.
It's kind of a timeout mechanism. I don't know if there is already something like this; I am new to the Java EE world.
Thanks in advance, guys.
You are not supposed to use the threading API inside an EJB/CDI container. The EJB spec clearly states that:
The enterprise bean must not attempt to manage threads. The enterprise
bean must not attempt to start, stop, suspend, or resume a thread, or
to change a thread’s priority or name. The enterprise bean must not
attempt to manage thread groups.
Managed beans and the invocation of their business methods have to be fully controlled by the container in order to avoid corruption of their state. Depending on your use case, either offload this operation to a dedicated service (outside Java EE), or you could come up with some semi-hacking solution using an EJB @Singleton and @Schedule, so that you could periodically check some control flag. If you are running on WildFly/JBoss, you could misuse the @TransactionTimeout annotation for this: as EJB methods are by default transaction-aware, setting the timeout on the transaction will effectively control the invocation timeout on the bean method. I am not sure how this is supported on other application servers.
If async processing is an option, then EJB @Asynchronous could be of some help: see the Asynchronous tutorial, "Cancelling an asynchronous operation".
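A sketch of the @Asynchronous route (javax.ejb API; bean and method names are illustrative): the bean polls SessionContext.wasCancelCalled() so a client holding the returned Future can cancel the long-running work.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;
import javax.annotation.Resource;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;

@Stateless
public class ComplexTaskBean {

    @Resource
    private SessionContext ctx;

    @Asynchronous
    public Future<List<String>> complexTask() {
        List<String> result = new ArrayList<>();
        while (moreWorkToDo()) {
            // cooperative cancellation: true once the client called future.cancel(true)
            if (ctx.wasCancelCalled()) {
                return new AsyncResult<>(result); // return what we have so far
            }
            // ... do one unit of work and add to result ...
        }
        return new AsyncResult<>(result);
    }

    private boolean moreWorkToDo() {
        return false; // placeholder
    }
}

The client can then enforce its own deadline with future.get(5, TimeUnit.SECONDS), or abort the work with future.cancel(true).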
As general advice: do not run long-running operations in EJB/CDI. Every request will spawn a new thread; threads are a limited resource, and your app will be much harder to scale and maintain (long-running op ~= state). What happens if your server crashes during a method invocation? How would the use case work in a clustered environment? Again, it is hard to say what a better approach is without understanding your use case, but investigate the Java EE Batch API, JMS with message-driven beans, or asynchronous processing with @Asynchronous.
It is a very meaningful idea to limit a complex task to a certain execution time. In practical web computing, many users will be unwilling to wait for a complex search task to complete when its duration exceeds a maximally acceptable amount of time.
The enterprise container controls the thread pool and the allocation of CPU resources among the active threads. It does so taking into account retention times during time-consuming I/O tasks (typically disk access).
Nevertheless, it makes sense to record a task start time and, every now and then during the complex task, verify the duration of that particular task. I advise you to program a local, runnable task which picks scheduled tasks from a job queue. I have experience with this from a Java Enterprise backend application running under GlassFish.
First, the annotation definition, Duration.java:

// Duration.java
@Qualifier
@Target({ElementType.TYPE, ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
@Documented
@Retention(RetentionPolicy.RUNTIME)
public @interface Duration {
    int minutes() default 0; // default, extended from class, within path
}
Now follows the definition of the job, TimelyJob.java:

// TimelyJob.java
@Duration(minutes = 5)
public class TimelyJob {

    private LocalDateTime localDateTime = LocalDateTime.now();
    private UUID uniqueTaskIdentifier;
    private String uniqueOwnerId;

    public TimelyJob(UUID uniqueTaskIdentifier, String uniqueOwnerId) {
        this.uniqueTaskIdentifier = uniqueTaskIdentifier;
        this.uniqueOwnerId = uniqueOwnerId;
    }

    public void processUntilMins() {
        final int minutes = this.getClass().getAnnotation(Duration.class).minutes();
        while (true) {
            // do some heavy Java task for a time unit, then pause, and check total time
            // break when finished
            // time out once the deadline lies in the past (isBefore, not isAfter,
            // which would break out of the loop immediately)
            if (minutes > 0 && localDateTime.plusMinutes(minutes).isBefore(LocalDateTime.now())) {
                break;
            }
            try {
                Thread.sleep(5);
            } catch (InterruptedException e) {
                System.err.print(e);
            }
        }
        // store result data in a result class, with 'synchronized' access
    }

    public LocalDateTime getLocalDateTime() {
        return localDateTime;
    }

    public UUID getUniqueTaskIdentifier() {
        return uniqueTaskIdentifier;
    }

    public String getUniqueOwnerId() {
        return uniqueOwnerId;
    }
}
The Runnable task that executes the timed jobs, TimedTask.java, is implemented as follows:

// TimedTask.java
public class TimedTask implements Runnable {

    private LinkedBlockingQueue<TimelyJob> jobQueue = new LinkedBlockingQueue<TimelyJob>();

    public void setJobQueue(TimelyJob job) {
        this.jobQueue.add(job);
    }

    @Override
    public void run() {
        while (true) {
            try {
                TimelyJob nextJob = jobQueue.take();
                nextJob.processUntilMins();
                Thread.sleep(100);
            } catch (InterruptedException e) {
                System.err.print(e);
            }
        }
    }
}
and in separate code, the starting of the TimedTask:

public void initJobQueue() {
    new Thread(new TimedTask()).start();
}

This functionality effectively implements a batch-job scheduler in Java, using annotations to control the task's time limit.

JMockit: Mocked APIs are getting reverted after some time

I am using JMockit to mock System.currentTimeMillis().
A few invocations return the mocked time, but after some time it starts returning the original time.
When I run the same test after disabling the JIT, it runs perfectly fine.
You obviously have an important dependency on the current time inside one or more of your components. In this case you should express this dependency with an interface:
public interface TimeService {
    long currentTimeMillis();
}
In your real code you have an implementation that uses the System method:

public final class SystemTimeService implements TimeService {
    @Override
    public long currentTimeMillis() {
        return System.currentTimeMillis();
    }
}
Note, with Java 8 you can reduce some code to express it more clearly (thanks @Holger):

public interface TimeService {
    TimeService DEFAULT = System::currentTimeMillis;
    long currentTimeMillis();
}
Your classes that depend on this time service should look like this:

public final class ClassThatDependsOnTimeService {
    private final TimeService timeService;

    public ClassThatDependsOnTimeService(TimeService timeService) {
        this.timeService = timeService;
    }
    // other features omitted
}
Now they can be fed with
TimeService timeService = new SystemTimeService();
ClassThatDependsOnTimeService someObject = new ClassThatDependsOnTimeService(timeService);
or (Java 8):
ClassThatDependsOnTimeService someObject = new ClassThatDependsOnTimeService(TimeService.DEFAULT);
or with any dependency injection framework or whatever.
In your tests you do not mock the method System.currentTimeMillis; you mock the interface TimeService and inject the mock into the depending classes.
This happens because the JIT optimizer in the JVM does not check for redefined methods (redefinition is done through a different subsystem in the JVM). So, eventually the JVM decides to optimize the code containing the call to System.currentTimeMillis(), inlining the call to the native Java method so that it starts executing the actual native method directly. At this point, the optimizer should check whether currentTimeMillis() is currently redefined and abandon the inlining in case it is. But, unfortunately, the JDK engineers failed to account for this possibility.
If you really need to invoke a mocked System.currentTimeMillis() that many times, the only workaround is indeed to run with -Xint (which is not such a bad idea, as it usually reduces the total execution time of the test run).

Is there a way to make integration tests fail quickly when middleware fails?

Our test environment has a variety of integration tests that rely on middleware (CMS platform, underlying DB, Elasticsearch index).
They're automated and we manage our middleware with Docker, so we don't have issues with unreliable networks. However, sometimes our DB crashes and our test fails.
The problem is that this failure is detected through a litany of org.hibernate.exception.JDBCConnectionException messages, which come about via a timeout. When that happens, we end up with hundreds of tests failing with this exception, each one taking many seconds to fail. As a result, it takes an age for our tests to complete. Indeed, we generally just kill these builds manually when we realise what has happened.
My question: In a Maven-driven Java testing environment, is there a way to direct the build system to watch out for specific kinds of Exceptions and kill the whole process, should they arrive (or reach some kind of threshold)?
We could watchdog our containers and kill the build process that way, but I'm hoping there's a cleaner way to do it with maven.
If you use TestNG instead of JUnit, there are other possibilities to define tests as dependent on other tests.
For example, as others mentioned above, you can have a method that checks your database connection and declare all other tests as dependent on this method:
@Test
public void serverIsReachable() {}

@Test(dependsOnMethods = { "serverIsReachable" })
public void queryTestOne() {}
With this, if the serverIsReachable test fails, all other tests which depend on it will be skipped and not marked as failed. Skipped methods will be reported as such in the final report, which is important since skipped methods are not necessarily failures. But since your initial test serverIsReachable failed, the build will fail completely.
The positive effect is that none of your other tests will be executed, so the suite should fail very fast.
You could also extend this logic with groups. Let's say your database queries are used by some domain-logic tests afterwards; you can declare each database test with a group, like
@Test(groups = { "jdbc" })
public void queryTestOne() {}
and declare your domain-logic tests as dependent on these tests, with
@Test(dependsOnGroups = { "jdbc.*" })
public void domainTestOne() {}
TestNG will therefore guarantee the order of execution of your tests.
Hope this helps to make your tests a bit more structured. For more info, have a look at the TestNG dependency documentation.
I realize this is not exactly what you are asking for, but it could nonetheless help to speed up the build:
JUnit assumptions let a test pass as skipped when an assumption fails. You could have an assumption like assumeTrue(db.isReachable()) that would skip those tests when a timeout is reached.
In order to actually speed things up and not repeat this over and over, you could put this in a @ClassRule, since:
A failing assumption in a @Before or @BeforeClass method will have the same effect as a failing assumption in each @Test method of the class.
Of course you would then have to mark your build as unstable in another way, but that should be easily doable; a sketch follows.
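A sketch of that combination (JUnit 4; isDatabaseReachable() is a hypothetical stand-in for a fast connectivity probe): the ExternalResource runs once per class, and its failing assumption skips the whole class.

import org.junit.Assume;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.rules.ExternalResource;

public class DatabaseDependentTest {

    @ClassRule
    public static final ExternalResource DB_AVAILABLE = new ExternalResource() {
        @Override
        protected void before() {
            // a failing assumption here marks every test in the class as skipped
            Assume.assumeTrue("database unreachable", isDatabaseReachable());
        }
    };

    private static boolean isDatabaseReachable() {
        return true; // placeholder; replace with a cheap probe such as "SELECT 1"
    }

    @Test
    public void queryTest() {
        // runs only when the assumption above held
    }
}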
I don't know if you can fail-fast the build itself, or even want to, since the administrative aspects of the build may not then complete, but you could do this:
In all your test classes that depend on the database (or their parent classes, because something like this is inheritable), add this:
@BeforeClass
public static void testJdbc() throws Exception {
    Executors.newSingleThreadExecutor()
            .submit(new Callable<Object>() {
                public Object call() throws Exception {
                    // execute the simplest SQL you can, e.g. "SELECT 1"
                    return null;
                }
            })
            .get(100, TimeUnit.MILLISECONDS);
}
If the simple JDBC query fails to return within 100 ms, the entire test class won't run and will show as a failure to the build.
Make the wait time as small as you can while still being reliable.
One thing you could do is write a new test runner which stops if such an error occurs. Here is an example of what that might look like:

import org.junit.internal.AssumptionViolatedException;
import org.junit.runner.Description;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;
import org.junit.runners.model.Statement;

public class StopAfterSpecialExceptionRunner extends BlockJUnit4ClassRunner {

    private boolean failedWithSpecialException = false;

    public StopAfterSpecialExceptionRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected void runChild(final FrameworkMethod method, RunNotifier notifier) {
        Description description = describeChild(method);
        if (failedWithSpecialException || isIgnored(method)) {
            notifier.fireTestIgnored(description);
        } else {
            runLeaf(methodBlock(method), description, notifier);
        }
    }

    @Override
    protected Statement methodBlock(FrameworkMethod method) {
        return new FeedbackIfSpecialExceptionOccurs(super.methodBlock(method));
    }

    private class FeedbackIfSpecialExceptionOccurs extends Statement {

        private final Statement next;

        public FeedbackIfSpecialExceptionOccurs(Statement next) {
            super();
            this.next = next;
        }

        @Override
        public void evaluate() throws Throwable {
            try {
                next.evaluate();
            } catch (AssumptionViolatedException e) {
                throw e;
            } catch (SpecialException e) {
                // remember the failure so all subsequent tests are skipped
                StopAfterSpecialExceptionRunner.this.failedWithSpecialException = true;
                throw e;
            }
        }
    }
}
Then annotate your test classes with @RunWith(StopAfterSpecialExceptionRunner.class).
Basically what this does is check for a certain exception (here it's SpecialException, an exception I wrote myself); if it occurs, the runner fails the test that threw it and skips all following tests. You could of course limit that to tests annotated with a specific annotation if you liked.
It is also possible that similar behavior could be achieved with a Rule, and if so, that may be a lot cleaner; a sketch follows.
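For completeness, a sketch of that Rule-based variant (not from the original answer; SpecialException is the same custom exception as above): a shared flag is set on the first SpecialException, and every later test is skipped via an AssumptionViolatedException.

import org.junit.AssumptionViolatedException;
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class StopOnSpecialExceptionRule implements TestRule {

    // shared across instances, since JUnit creates a fresh rule per test
    private static volatile boolean failedWithSpecialException = false;

    @Override
    public Statement apply(final Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                if (failedWithSpecialException) {
                    // reported as skipped, not failed
                    throw new AssumptionViolatedException("skipping: SpecialException seen earlier");
                }
                try {
                    base.evaluate();
                } catch (SpecialException e) {
                    failedWithSpecialException = true;
                    throw e;
                }
            }
        };
    }
}

Each test class would then declare @Rule public final StopOnSpecialExceptionRule stopRule = new StopOnSpecialExceptionRule(); and needs no custom runner.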
