How to write multi-threaded unit tests? - java

I'd like to know if there are any unit testing frameworks that make it easy to write multi-threaded tests.
I would imagine something like:
invoke a special test method from n threads at the same time, m times each. After all test threads have finished, an assertion method is invoked in which some constraints are validated.
My current approach is to create Thread objects inside a JUnit test method, manually loop over the real test cases inside each run() method, wait for all the threads, and then validate the assertions. But with this approach I end up with a large block of boilerplate for each test.
What are your experiences?

There is ConTest, and also GroboUtils.
I used GroboUtils many years ago, and it did the job. ConTest is newer and would be my preferred starting point now: rather than just relying on trial and error, its instrumentation forces specific interleavings of the threads, providing a deterministic test. In contrast, GroboUtils' MultiThreadedTestRunner simply runs the tests and hopes the scheduler produces an interleaving that causes the threading bug to appear.
EDIT: See also ConcuTest which also forces interleavings and is free.

There is also MultithreadedTC by Bill Pugh of FindBugs fame.

Just using the concurrency libraries would simplify your code. You can turn your boilerplate code into one method.
Something like
public static void runAll(int times, Runnable... tests) {
}
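A sketch of how that helper might be filled in; this is my own illustration, not an existing library method. Two CountDownLatch gates start all the threads together and wait for them to finish, and the first failure from any thread is rethrown:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public final class ConcurrentTests {

    // Runs each test 'times' times on its own thread; all threads start together.
    public static void runAll(final int times, Runnable... tests) throws InterruptedException {
        final CountDownLatch startGate = new CountDownLatch(1);
        final CountDownLatch endGate = new CountDownLatch(tests.length);
        final List<Throwable> failures =
                Collections.synchronizedList(new ArrayList<Throwable>());

        for (final Runnable test : tests) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        startGate.await(); // line all threads up before starting
                        for (int i = 0; i < times; i++) {
                            test.run();
                        }
                    } catch (Throwable t) {
                        failures.add(t);
                    } finally {
                        endGate.countDown();
                    }
                }
            }).start();
        }

        startGate.countDown(); // release every thread at once
        endGate.await();       // wait for all of them to finish

        if (!failures.isEmpty()) {
            throw new AssertionError(failures.get(0));
        }
    }
}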

Related

JUnit concurrent access to synchronizedSet

I have a problem with running JUnit tests on my server. When I run the tests on my machine, there is no problem at all. When I run them on the server, they fail "sometimes": roughly 60% of attempts pass and 40% fail.
I am using Mockito. My test starts by mocking some replies using a MessageListener, mapping every request to a response; under the hood I use Collections.synchronizedSet(new HashSet<>()), which is thread-safe (every modification of my synchronizedSet happens inside a synchronized(mySynchronizedSet){....} block). Then I use REST Assured to get the response of a particular REST endpoint and assert some values.
When a test fails and I look at the stack trace, I see that one of my mappings (always for the same object) didn't work: there is no mapping between this specific request and its response in my collection, so naturally I get null when requesting that endpoint.
I am using Jenkins to automate compilation and test runs, and on a failure all I get is the stack trace (or my println output otherwise); no debugging facilities are available.
It sounds like a concurrency problem to me: it seems my collection does not have time to get ready before REST Assured requests the endpoint. I've tried locks, sleeps, and other simple Java concurrency solutions, but they don't help, and the probabilistic nature of this problem has led me to a dead end.
Every thought will be appreciated.
Judging by what you said, it seems you have a misunderstanding of how things work in three specific areas.
First
and most obvious (and I apologize for even mentioning this; I only do so because I gather you're still learning, and apologies if I've misread that): you aren't compiling with Jenkins. You're compiling with whatever JDK flavor is installed on the machine (Oracle, Apple, GCJ, etc.). Jenkins is an automation tool that facilitates the tedious jobs you expect to run regularly. I only mention this because many college students nowadays use IDEs in their introductory classes and can't distinguish between the compiler, the runtime, and the IDE.
Secondly
by using a thread-safe library, you don't automatically make everything you do inherently thread-safe. Consider the following example:
final Map<Object, Object> foo = Collections.synchronizedMap(new HashMap<>());
final String bar = "bar";
foo.put(bar, new Object());

new Thread(new Runnable() {
    @Override
    public void run() {
        foo.remove(bar);
    }
}).start();

new Thread(new Runnable() {
    @Override
    public void run() {
        if (foo.containsKey(bar)) {
            foo.get(bar).toString();
        }
    }
}).start();
There is no guarantee that the second thread's call to #get(Object) will happen before or after the first thread's call to #remove(Object). Consider that
the second thread could call #containsKey(Object)
then the first thread obtains CPU time and calls #remove(Object)
then the second thread now has CPU time and calls #get(Object)
at this point, the value returned from #get(Object) will be null, and the call to #toString() will result in a NullPointerException. You say you're using a Set, so this example with a Map is mainly to prove a point: just because you're using a thread-safe collection doesn't automatically make everything you do thread-safe. I imagine there are things you are doing with your set that match this sort of behavior, but without code snippets, I can only speculate.
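For what it's worth, the fix for that kind of race is to hold the map's own lock across the whole check-then-act sequence, which is what the Collections.synchronizedMap documentation tells clients to do for any compound action:

synchronized (foo) { // one lock held across the compound action
    if (foo.containsKey(bar)) {
        foo.get(bar).toString(); // safe: no other thread can remove bar in between
    }
}

The same applies to a synchronizedSet: wrapping each individual call is not enough; the whole check-plus-use must happen inside one synchronized block.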
And Lastly
You should be careful with how you write JUnit tests. A proper JUnit test is what's called a "whitebox" test: you know everything that is happening in the test, and you are explicitly testing only what happens in the unit under test. The unit under test is just the method you are calling, not the methods that your method calls in turn. What that means is that you need a good mocking framework, so you can mock out any subsequent method calls that your unit under test may invoke. Some good frameworks are JMockit, Mockito+PowerMock, etc.
The importance of this is that your test is supposed to test your code in isolation. If you allow network access, disk access, etc., then your test may fail for reasons that have nothing to do with the code you wrote, which invalidates the test entirely. In your case, you hint at network access: imagine there is some throughput issue with your switches/routers, or that your NIC buffer fills up and can't keep pace with what your program is trying to do. Sure, that failure is bad and should be fixed, but it should be caught by "blackbox" testing. Your unit tests should be written to eliminate these sorts of issues entirely, testing only the code in the particular method under test and nothing else.
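As a small illustration of the mocking point, here is a sketch with Mockito; OrderService and PaymentGateway are made-up names, and the point is only that the collaborator is mocked so the test never leaves the unit under test:

import static org.mockito.Mockito.*;
import org.junit.Test;

public class OrderServiceTest {

    // Hypothetical collaborator and unit under test, just to show the shape.
    interface PaymentGateway {
        boolean charge(int cents);
    }

    static class OrderService {
        private final PaymentGateway gateway;
        OrderService(PaymentGateway gateway) { this.gateway = gateway; }
        void placeOrder(int cents) { gateway.charge(cents); }
    }

    @Test
    public void chargesTheGatewayExactlyOnce() {
        PaymentGateway gateway = mock(PaymentGateway.class); // no network is touched
        when(gateway.charge(100)).thenReturn(true);

        new OrderService(gateway).placeOrder(100);

        verify(gateway, times(1)).charge(100); // only our own code is exercised
    }
}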
Edit: I actually posted an answer to a separate discussion about whitebox testing that might be relevant: Is using a test entity manager a legitimate testing practice?

Junit, testing timer.schedule without relying on thread sleeping or time

I have implemented a method that executes some logic, after a certain amount of time, using a TimerTask and Timer.schedule.
I want to verify this behaviour using JUnit; however, I would like to know if there are better ways to test it, without using thread sleeping or measuring time.
Thanks.
You can use a "same thread" executor service (one that runs each task directly in the calling thread) to get around the "multiple threads" complications.
You can then test that some class A pushes tasks into such a service, and you can also use unit tests to ensure that the parameters used when pushing those tasks are what you expect them to be.
In other words: you really don't want to use unit tests to prove that scheduling works (assuming that you didn't completely re-invent the wheel and implement your own scheduling, which is something you simply should not do). You want to use unit tests to prove that your code calls the existing (well-tested) framework with the arguments you expect to see.
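For instance, a sketch with Mockito standing in for the scheduler; Reminder and remindIn are hypothetical names for "some class A" pushing a task into the service:

import static org.mockito.Mockito.*;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.junit.Test;

public class ReminderTest {

    // Hypothetical class under test: it should hand a task to the scheduler it is given.
    static class Reminder {
        private final ScheduledExecutorService scheduler;
        Reminder(ScheduledExecutorService scheduler) { this.scheduler = scheduler; }
        void remindIn(long delay, TimeUnit unit) {
            scheduler.schedule(new Runnable() {
                public void run() { /* send the reminder */ }
            }, delay, unit);
        }
    }

    @Test
    public void schedulesWithTheExpectedDelay() {
        ScheduledExecutorService scheduler = mock(ScheduledExecutorService.class);

        new Reminder(scheduler).remindIn(5, TimeUnit.SECONDS);

        // assert on the scheduling parameters, not on real elapsed time
        verify(scheduler).schedule(any(Runnable.class), eq(5L), eq(TimeUnit.SECONDS));
    }
}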

JUnit Tests: how to test results from a Thread

I call a method, passing it a parameter. If this parameter equals a particular value, a thread is started that does something repeatedly until it is stopped. On every repetition, some values are changed.
Is there any way to check these values from JUnit?
If you are spawning threads, you are not unit testing anymore; you are integration testing. Refactor your code so that the logic that changes these values can be tested without spawning a thread. If it works without spawning a thread, then it will work when spawning threads (I know I've set myself up for a lecture on that one... you will still need to make sure you properly synchronize any potentially shared variables and don't have any code that could cause a deadlock).
Without seeing the code it is difficult to suggest specific ways to test it. However, you are definitely not unit testing if you are spawning threads.
If you are trying to test whether each iteration modified the values appropriately, then call the iteration code with the expected inputs and test the expected outputs. Test each piece in isolation:
pseudo-Java code:
for (File file : files) {
    doSomething(file); // this updates some running totals or something
}
Then write some unit tests that call your doSomething() with each input you want to test, and see if the values update appropriately (mock where necessary). Then do an integration test where you let the thread spawn and check the resulting values.
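A sketch of such a unit test; RunningTotals, doSomething(), and filesProcessed() are invented names for whatever your loop body actually updates:

import static org.junit.Assert.assertEquals;
import java.io.File;
import org.junit.Test;

public class RunningTotalsTest {

    // Hypothetical accumulator extracted from the thread's run() method.
    static class RunningTotals {
        private int filesProcessed;
        void doSomething(File file) { filesProcessed++; } // stand-in for the real per-file work
        int filesProcessed() { return filesProcessed; }
    }

    @Test
    public void eachIterationUpdatesTheTotals() {
        RunningTotals totals = new RunningTotals();

        totals.doSomething(new File("a.txt")); // known inputs...
        totals.doSomething(new File("b.txt"));

        assertEquals(2, totals.filesProcessed()); // ...expected outputs, no thread involved
    }
}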

Poor JUnit test using springframework has fragile Thread.sleep() calls. How to fix?

I have recently joined a group with some severe JUnit testing issues. One problem is an 8-minute-long test! The test has several sections; each makes calls to org.springframework.context.ApplicationEventPublisher.publishEvent(),
followed by Thread.sleep() for various amounts of time, and then tests conditions.
There are several obvious problems with this approach; the timing of the Thread.sleep() calls is fragile:
tests occasionally fail on busy machines; and
tests take far too long when they do not fail.
Is the pool on which these events are handled accessible for testing, and is there a call to check whether the event cascade has quiesced?
Worth mentioning: tests that actually call external services are integration tests, not unit tests. If you're truly unit testing here, you should replace those calls with mocks. That way you can better control the values returned to your business logic and test for specific conditions. Also, as you've seen, this all but eliminates spurious failures due to external (non-code) situations. Arguably these tests aren't failing; the facility they expect to use is.
You can override the default applicationEventMulticaster by defining a bean with this id in your application context.
Instead of the default SimpleApplicationEventMulticaster, you could set a TaskExecutor on this bean to perform the event publishing asynchronously in multiple threads.
Or you could implement your own multicaster that prints out which event listener took so long or was blocking, for how long, and on which events. That could help you track down the real cause of the 8-minute test case.
Interestingly, the JavaDoc of the SimpleApplicationEventMulticaster, which is used by default by Spring when you are using ApplicationContext, states the following:
By default, all listeners are invoked in the calling thread. This allows the danger of a rogue listener blocking the entire application, but adds minimal overhead. Specify an alternative TaskExecutor to have listeners executed in different threads, for example from a thread pool.
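For example, a Java-config sketch of that override; the bean name applicationEventMulticaster is the one Spring looks for, while the choice of SimpleAsyncTaskExecutor here is just for illustration:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.ApplicationEventMulticaster;
import org.springframework.context.event.SimpleApplicationEventMulticaster;
import org.springframework.core.task.SimpleAsyncTaskExecutor;

@Configuration
public class AsyncEventConfig {

    // Spring picks up a bean with this exact name in place of its
    // default, same-thread multicaster.
    @Bean(name = "applicationEventMulticaster")
    public ApplicationEventMulticaster applicationEventMulticaster() {
        SimpleApplicationEventMulticaster multicaster = new SimpleApplicationEventMulticaster();
        multicaster.setTaskExecutor(new SimpleAsyncTaskExecutor()); // listeners run off the calling thread
        return multicaster;
    }
}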
I (intentionally) avoid Spring, so I'm not sure I can help with the specifics, but just looking at the sleep issue: you can use something like WaitFor in tempus-fugit (shameless plug) to poll for a Condition rather than "sleep and hope". It's not ideal, and usually a change in the way you test (as suggested above) is preferable, but it does give you finer-grained "waits", which are more likely to avoid race conditions and flaky tests, and it generally speeds up the test.
See the project's documentation for details and post back if you find it useful!
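If you'd rather not add the dependency, the idea is small enough to hand-roll. This is a sketch of the same "poll for a condition" pattern, not the tempus-fugit API itself:

import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public final class Poll {

    // Polls the condition until it holds or the deadline passes, instead of
    // sleeping for a fixed, hopeful amount of time.
    public static void waitUntil(BooleanSupplier condition, long timeoutMillis)
            throws InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new TimeoutException("condition not met within " + timeoutMillis + "ms");
            }
            Thread.sleep(10); // short poll interval, not a guess at the total wait
        }
    }
}

A test then replaces a fixed Thread.sleep(8000) with something like Poll.waitUntil(() -> listener.eventCount() == expected, 2000), where eventCount() stands for whatever observable state the test is waiting on.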

What are some strategies to unit test a scheduler?

This post started out as "What are some common patterns in unit testing multi-threaded code?", but I found some other discussions on SO that generally agreed that "It is Hard (TM)" and "It Depends (TM)". So I thought that reducing the scope of the question would be more useful.
Background: We are implementing a simple scheduler that gives you a way to register callbacks for job start and stop, and of course to configure the scheduling frequency. Currently, we're building a lightweight wrapper around java.util.Timer.
Aspects:
I haven't found a way to test this scheduler relying only on its public interface (something like addJob(jobSchedule, jobArgs, jobListener) and removeJob(jobId)).
How do I verify that the job was called according to the specified schedule?
You could use a recorder object that records the order, timings, and other useful details in each unit test of your scheduler. The test is simple:
create a recorder object
configure the schedule
execute a unit test
check that the recorder object is "compatible" with the schedule
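A sketch of such a recorder, assuming the jobListener parameter from the question's addJob() signature receives a callback when a job starts; JobListener and jobStarted are stand-in names for whatever the real callback interface looks like:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Stand-in for the scheduler's real callback interface.
interface JobListener {
    void jobStarted(String jobId);
}

// Records when each callback fires, so the test can assert that the
// observed timings are compatible with the configured schedule.
public class RecordingJobListener implements JobListener {

    private final List<Long> startTimes =
            Collections.synchronizedList(new ArrayList<Long>());

    @Override
    public void jobStarted(String jobId) {
        startTimes.add(System.nanoTime());
    }

    public List<Long> startTimes() {
        return new ArrayList<Long>(startTimes); // snapshot for assertions
    }
}

The test registers the recorder via addJob(...), lets the schedule run, and then asserts that consecutive start times differ by roughly the configured period.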
One thing also to remember is that you don't need to test that Timer itself works. You can write a mock version of Timer (by extending the class or using EasyMock) that simply checks that you are calling it correctly, possibly even replacing enough that you don't need threads at all. That might be more work than needed, though, if your job listener has enough callbacks to track the scheduler.
The other important thing to remember: when testing the scheduler, use custom jobs that track how the scheduler is working; when testing the scheduled jobs, call their callbacks directly rather than through the scheduler. You may have a higher-level integration test that checks both together, depending on the system.
There are many failure modes that such a scheduler could exhibit, and each would most likely require its own test case. These test cases are likely to be very different, so "it depends."
For testing concurrent software in Java in general, I recommend this presentation from JavaOne 2007: Testing Concurrent Software.
For testing that a scheduler executes jobs in accurate accordance with their schedule, I'd create an abstraction of time itself. I've done something similar in one of my projects, where I have a Time or Clock interface. The default implementation is MillisecondTime, but during testing I switch it out for a TickTime. This implementation allows my unit test to control when the time advances and by how much.
This way, you could write a test where a job is scheduled to run once every 10 ticks. Then your test just advances the tick counter and checks that the jobs run at the correct ticks.
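A sketch of what that abstraction can look like, using the names from above (the exact interface shape is up to you):

// The scheduler asks the Clock for the time instead of calling
// System.currentTimeMillis() directly.
public interface Clock {
    long now();
}

// Production default: real wall-clock time.
class MillisecondTime implements Clock {
    public long now() { return System.currentTimeMillis(); }
}

// Test implementation: the test decides when time advances and by how much.
class TickTime implements Clock {
    private long tick = 0;
    public long now() { return tick; }
    public void advance(long ticks) { tick += ticks; }
}

A test then schedules a job to run every 10 ticks, calls advance(10), and asserts that the job ran exactly once.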
A couple of ways to test concurrent code:
Run the same code many times under load; some bugs appear only occasionally, but can show up consistently if the test is repeated enough.
Store the results of the different threads/jobs in a collection such as a BlockingQueue. This allows you to check the results in the current thread and finish in a timely manner, without ugly arbitrary sleep statements (see the sketch after this list).
If you are finding concurrency difficult to test, consider refactoring your objects/components to make them easier to test.
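Here is the BlockingQueue idea as a runnable sketch; the three workers and their string results are placeholders for real jobs:

import static org.junit.Assert.assertNotNull;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import org.junit.Test;

public class WorkerResultsTest {

    @Test
    public void collectsResultsWithoutSleeping() throws InterruptedException {
        // Workers push results here; the test takes them with a bounded wait,
        // so there is no arbitrary Thread.sleep() and a hung worker fails fast.
        final BlockingQueue<String> results = new LinkedBlockingQueue<String>();

        for (int i = 0; i < 3; i++) {
            final int id = i;
            new Thread(new Runnable() {
                public void run() {
                    results.add("job-" + id + "-done"); // stand-in for real work
                }
            }).start();
        }

        for (int i = 0; i < 3; i++) {
            assertNotNull("worker did not report in time",
                          results.poll(2, TimeUnit.SECONDS));
        }
    }
}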
If the scheduler delegates to an Executor or ExecutorService to run the tasks, you could use dependency injection to remove the direct dependency on the type of Executor, and use a simple single-threaded Executor to test much of the scheduler's functionality without the complication of truly multi-threaded code. Once you've got those tests debugged, you could move on to the harder, but now substantially reduced in magnitude, task of testing thread-safety.
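A sketch of the injected test double; a "direct" Executor runs each task immediately in the calling thread, so the scheduler's logic can be exercised deterministically:

import java.util.concurrent.Executor;

// Tasks run immediately in the caller's thread: no pools, no races.
public class DirectExecutor implements Executor {
    @Override
    public void execute(Runnable command) {
        command.run();
    }
}

In the test you would construct the scheduler with new DirectExecutor() (assuming its constructor accepts an Executor), so every task runs to completion before the assertion that follows it.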
