I am trying to implement unit testing in a web application and certain parts of it use ThreadLocal.
I cannot figure out how to go about testing it.
It looks like JUnit runs all its tests on a single thread, namely the main thread.
I need to be able to assign different values to my ThreadLocal variable.
Has anyone come across such a scenario? What do you guys recommend?
GroboUtils has support for running multi-threaded tests, which will allow you to test your ThreadLocal variables.
http://groboutils.sourceforge.net/testing-junit/using_mtt.html
I would simply start threads within my unit test.
I recommend you use Futures and execute them using a ThreadPoolExecutor.
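For example, here is a minimal sketch of that approach (the test class and values are hypothetical; assumes JUnit 4 on the classpath): each task sets its own value on a shared ThreadLocal, and the returned Futures let the main thread assert that no value leaked between threads.

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.junit.Test;

public class ThreadLocalTest {
    private static final ThreadLocal<Integer> VALUE = new ThreadLocal<>();

    @Test
    public void eachThreadSeesItsOwnValue() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            final int expected = i;
            results.add(pool.submit(() -> {
                VALUE.set(expected); // each worker assigns its own value
                return VALUE.get();  // and reads back its thread-confined copy
            }));
        }
        for (int i = 0; i < 4; i++) {
            assertEquals(Integer.valueOf(i), results.get(i).get());
        }
        pool.shutdown();
    }
}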
It might be enough to adorn the respective test methods with a timeout, i.e. @Test(timeout=100). Please note the "THREAD SAFETY WARNING" in the Javadoc for Test::timeout, especially when you use @BeforeClass and @After annotations.
Actually, I have two questions, although they are a bit related:
I know that unit tests should test the public API. However, if I have a close method that closes sockets and shuts down executors, and neither the sockets nor the executors are exposed to users of this API, should I test that this is done, or only that the method executed without error? Where is the borderline between public API/behavior and implementation details?
If I test a method that performs some checks, then queues a task on an executor service, then returns a future to monitor the operation's progress, should I test both this method and the task, even if the task itself is a private method or otherwise unexposed code? Or should I instead test only the public method, but arrange for the task to be executed in the same thread by mocking the executor? In the latter case, the fact that the task is submitted to an executor using the execute() method would be an implementation detail, but the tests would wait for tasks to complete, to be able to check whether the method, along with its async part, works properly.
The only question you should ask yourself is this: will I or my colleagues be able to change the code with confidence without these tests being executed frequently? If the answer is no, write and maintain the tests, as they provide value.
With this in mind you may consider refactoring your code so that the "low level plumbing" (e.g. socket and thread management) lives in a separate module where you treat it explicitly as part of the contract that module provides.
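One concrete way to make the async part testable (all names below are hypothetical) is to inject the Executor: production wires in a real pool, while a test passes a same-thread executor and can assert on the returned future immediately.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

class Processor {
    private final Executor executor;

    Processor(Executor executor) {
        this.executor = executor; // real pool in production, inline executor in tests
    }

    CompletableFuture<String> process(String input) {
        // the preliminary checks run synchronously; only the task goes to 'executor'
        return CompletableFuture.supplyAsync(input::toUpperCase, executor);
    }
}

In a test, new Processor(Runnable::run) makes the "async" part synchronous, so the returned future is already complete when process() returns.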
I have implemented a method that executes some logic after a certain amount of time, using a TimerTask and Timer.schedule.
I want to verify this behaviour using JUnit; however, I would like to know if there are better ways to test it, without sleeping threads or measuring time.
Thanks.
You can use a "same thread" executor service to get around the "multiple threads" complications.
You can further test that some class A pushes tasks into such a service, and you can also use unit tests to ensure that the parameters used when pushing tasks are what you expect them to be.
In other words: you really don't want to use unit tests to prove that scheduling works (assuming that you didn't completely re-invent the wheel and implement your own scheduling, which is something you simply should not do). You want to use unit tests to prove that your code uses existing (well-tested) frameworks with the arguments you expect to see.
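For instance, a minimal sketch (the RecordingExecutor name is made up) that proves tasks are handed to the service with the expected arguments, without spawning any threads:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executor;

class RecordingExecutor implements Executor {
    final List<Runnable> submitted = new ArrayList<>();

    @Override
    public void execute(Runnable command) {
        submitted.add(command); // capture the task instead of running it on a thread
    }
}

A test can then assert on submitted.size(), or call submitted.get(0).run() in the test thread and check the task's effects deterministically.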
So, I have a couple of JUnit classes, each one contains a list of test methods.
Each method is independent of the others; there is no direct connection.
But we have an indirect connection: all the methods process one singleton object (a Selenium WebDriver instance; yes, I use one WebDriver instance for all my tests, because creating a new instance takes a really long time!).
And it all works fine when the test methods execute step by step in one thread. But it also takes too long,
so I decided to increase the speed.
How? I decided to run all the test methods in parallel. For this, I use Maven with a special configuration for parallel test execution.
But I think this is the source of a new problem, because as a result we have parallel method execution, but we still work with just a single WebDriver instance.
I'm trying to find the optimal solution:
I want the tests to be executed in parallel - that is really fast.
I don't want a new object to be created for every test - that is a very slow process.
What advice can you give me?
How would you have solved this problem?
Unfortunately, WebDriver is not thread-safe. IMHO, best practice is to run each test class with its own WebDriver instance in a separate thread. The optimal number of threads is:
int threadNum = Runtime.getRuntime().availableProcessors() * 2;
The execution time of my projects dropped from 30 minutes to 4.
Exactly the same method is used in the Thucydides framework.
There is no way around this. If the tests are running in parallel, you cannot use a single WebDriver instance; you must instantiate one WebDriver instance per test case.
One way to get a speedup by running the tests serially is to reuse the WebDriver object because starting up the WebDriver tends to be a step which takes a long time. Another common optimisation is to reuse the FirefoxProfile if a FirefoxDriver is being used because the creation of the profile is also slow.
If you do choose to reuse the WebDriver object, make sure you try to clean up the instance as best as possible in tearDown. For example, by clearing cookies:
driver.manage().deleteAllCookies();
Depending on where the actual performance bottlenecks are in your tests, you could do something gross like putting a synchronized wrapper around your driver, so that you still have only one, but all access to it is serialized.
You could potentially change your test to have a ThreadLocal reference to a driver so you have one driver per thread.
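A sketch of that pattern (assuming Selenium's FirefoxDriver; the holder class itself is hypothetical):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class DriverPerThread {
    private static final ThreadLocal<WebDriver> DRIVER =
            ThreadLocal.withInitial(FirefoxDriver::new);

    public static WebDriver get() {
        return DRIVER.get(); // lazily creates one driver per test thread
    }

    public static void quit() {
        DRIVER.get().quit(); // shut down this thread's browser
        DRIVER.remove();     // clear the slot so a pooled thread starts fresh
    }
}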
The Jute Maven plugin provides isolation of JUnit test methods(!) by launching them as external JVM processes; you can also define a specific JRE for the tests.
I have recently joined a group with some severe JUnit testing issues. One problem is an 8-minute-long test! The test has several sections; each makes calls to org.springframework.context.ApplicationEventPublisher.publishEvent()
followed by Thread.sleep() of various amounts of time, then tests conditions.
There are several obvious problems with this approach; the timing of the Thread.sleep() calls is fragile:
tests occasionally fail on busy machines; and
tests take far too long when they do not fail.
Is the pool on which these events are handled accessible for testing, and is there a call to see whether the event cascade has quiesced?
Worth mentioning is that test code that actually calls external services constitutes integration tests, not unit tests. If you're truly unit testing here, you should replace those calls with mocks. That way you can better control the values returned to your business logic and test for specific conditions. Also, as you've seen, this all but eliminates false positives due to external (non-code) situations. Obviously these tests aren't failing; the facility they expect to use is.
You can override the default applicationEventMulticaster by adding this bean id to your application context.
Instead of the default SimpleApplicationEventMulticaster, you could set a TaskExecutor on this bean to perform the event publishing asynchronously in multiple threads.
Or you could implement your own multicaster that prints out which event listener took so long or was blocking, for how long, and on which events. That could help you track down the real problem behind the 8-minute test case.
Interestingly, the JavaDoc of the SimpleApplicationEventMulticaster, which is used by default by Spring when you are using ApplicationContext, states the following:
By default, all listeners are invoked in the calling thread. This allows the danger of a rogue listener blocking the entire application, but adds minimal overhead. Specify an alternative TaskExecutor to have listeners executed in different threads, for example from a thread pool.
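As a sketch in Java config (the @Configuration class is hypothetical, but "applicationEventMulticaster" is the bean name Spring looks up):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.ApplicationEventMulticaster;
import org.springframework.context.event.SimpleApplicationEventMulticaster;
import org.springframework.core.task.SimpleAsyncTaskExecutor;

@Configuration
public class AsyncEventConfig {

    @Bean(name = "applicationEventMulticaster")
    public ApplicationEventMulticaster applicationEventMulticaster() {
        SimpleApplicationEventMulticaster multicaster = new SimpleApplicationEventMulticaster();
        // listeners now run off the publishing thread; substitute any TaskExecutor
        multicaster.setTaskExecutor(new SimpleAsyncTaskExecutor());
        return multicaster;
    }
}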
I (intentionally) avoid Spring, so I'm not sure I can help with the specifics, but just looking at the sleep issue: you can use something like WaitFor in tempus-fugit (shameless plug) to poll for a Condition rather than "sleep and hope". It's not ideal, and a change in the way you test (as suggested before) is usually preferable, but it does give you finer-grained waits, which are more likely to avoid race conditions / flaky tests and generally speed up the test.
See the project's documentation for details and post back if you find it useful!
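If pulling in a library is not an option, the same "poll for a condition" idea is only a few lines of plain Java (this helper is hypothetical):

import java.util.function.BooleanSupplier;

public final class Conditions {
    public static void waitFor(BooleanSupplier condition, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("condition not met within " + timeoutMillis + " ms");
            }
            Thread.sleep(10); // short poll interval instead of one long, fragile sleep
        }
    }
}

A test would then call, for example, Conditions.waitFor(() -> listener.eventCount() >= 3, 2000) with whatever condition (here a hypothetical listener) signals that the event cascade has quiesced.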
I'd like to know if there are any unit testing frameworks that make writing multi-threaded tests easy?
I would imagine something like:
invoke a special test method from n threads at the same time, m times each. After all the test threads have finished, an assertion method where some constraints are validated would be invoked.
My current approach is to create Thread objects inside a JUnit test method, manually loop over the real test cases inside each run() method, wait for all threads, and then validate the assertions. But with this, I have a large block of boilerplate code for each test.
What are your experiences?
There is ConTest, and also GroboUtils.
I used GroboUtils many years ago, and it did the job. ConTest is newer and would be my preferred starting point now, since rather than just relying on trial and error, its instrumentation forces specific interleavings of the threads, providing a deterministic test. In contrast, GroboUtils' MultiThreadedTestRunner simply runs the tests and hopes the scheduler produces an interleaving that causes the threading bug to appear.
EDIT: See also ConcuTest, which also forces interleavings and is free.
There is also MultithreadedTC by Bill Pugh of FindBugs fame.
Just using the concurrency libraries would simplify your code; you can turn your boilerplate code into one method.
Something like this (assuming the java.util.concurrent imports):
public static void runAll(int times, Runnable... tests) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(tests.length);
    for (int i = 0; i < times; i++)
        for (Runnable test : tests)
            pool.execute(test); // every test runs 'times' times, concurrently
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES); // wait for all tasks to finish
}
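Usage then collapses to one line per scenario, e.g. (with hypothetical test methods): runAll(100, this::testAdd, this::testRemove);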