JUnit concurrent access to synchronizedSet - java

I have a problem running JUnit tests on my server. When I run the tests on my machine, there is no problem at all. When I run them on the server, they fail intermittently on every server: roughly 60% of attempts pass and 40% fail.
I am using Mockito. My test starts by mocking some replies using a MessageListener and mapping every request to a response; under the hood I am using Collections.synchronizedSet(new HashSet<>()), which is thread-safe (every modification of my synchronizedSet happens inside a synchronized(mySynchronizedSet){....} block). Then I use RestAssured to get the response of a particular REST endpoint and assert some values.
When a test fails and I look at the stack trace, I see that one of my mappings (always on the same object) didn't work: there is no mapping between this specific request and response in my collection, so naturally I get null when requesting this endpoint.
I am using Jenkins to automate compiling and running the tests, and on failure I only get the stack trace (or my printlns otherwise); there are no other debugging facilities available.
It sounds like a concurrency problem to me: it seems my collection does not have time to get ready before RestAssured requests the endpoint. I've tried locks, sleeps, and other simple Java concurrency solutions, but they don't help, and the probabilistic character of this problem has led me to a dead end.
Any thoughts would be appreciated.

Judging by what you said, it seems you have a misunderstanding of how things work in three specific areas.
First
and most obvious, and I apologize for even mentioning this (I only do so because I gather you may still be learning; apologies if you're not, or if I've misread you): you aren't compiling with Jenkins, you're compiling with whatever JDK flavor you have on your machine (be it Oracle, Apple, GCJ, etc.). Jenkins is an automation tool that facilitates the tedious jobs you expect to run regularly. I only mention this because many students today use IDEs in their introductory classes and never learn to distinguish between the compiler, the runtime, and the IDE.
Secondly
using a thread-safe collection doesn't automatically make everything you do with it thread-safe. Consider the following example:
final Map<Object, Object> foo = Collections.synchronizedMap(new HashMap<>());
final String bar = "bar";
foo.put(bar, new Object());

new Thread(new Runnable() {
    @Override
    public void run() {
        foo.remove(bar);
    }
}).start();

new Thread(new Runnable() {
    @Override
    public void run() {
        if (foo.containsKey(bar)) {
            foo.get(bar).toString();
        }
    }
}).start();
There is no guarantee whether the second thread's calls to #containsKey(Object) and #get(Object) will happen before or after the first thread's call to #remove(Object). Consider this interleaving:
the second thread could call #containsKey(Object)
then the first thread obtains CPU time and calls #remove(Object)
then the second thread now has CPU time and calls #get(Object)
At this point, the value returned from #get(Object) will be null, and the call to #toString() will throw a NullPointerException. You say you're using a Set, so this example using a Map is mainly to prove a point: just because you're using a thread-safe collection doesn't automatically make everything you do with it thread-safe. I imagine some of what you are doing with your set matches this sort of behavior, but without code snippets, I can only speculate.
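The usual fix is to hold the collection's lock around the whole check-then-act sequence. A minimal sketch, reusing the foo/bar names from the example above:

synchronized (foo) {
    Object value = foo.get(bar);
    if (value != null) {       // the check and the use happen under one lock,
        value.toString();      // so no other thread can remove the entry in between
    }
}

The same reasoning applies to a Collections.synchronizedSet: any compound operation (populate-then-read from another thread, check-then-add, iterate-then-remove) has to be guarded by one synchronized block on the set itself, on both the writing and the reading side.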
And Lastly
You should be careful with how you write JUnit tests. A proper JUnit test is what's called a "whitebox" test. In other words, you know everything that is happening in the test, and you are explicitly testing everything that happens in only the unit under test. The unit under test is just the method you are calling, not the methods that are called by your method, only the method itself. What that means is that you need a good mocking framework, and you should mock out any subsequent method calls that your unit under test may invoke. Some good frameworks are JMockit, Mockito+PowerMock, etc.
The importance of this is that your test is supposed to test your isolated code. If you're allowing network access, disk access, etc., then your test may fail for reasons that have nothing to do with the code you wrote, which invalidates the test entirely. In your case, you hint at network access, so imagine that there is some throughput issue with your switches/routers, or that your NIC buffer fills up and can't keep up with what your program is trying to do. Sure, the failure is not good and should be fixed, but that belongs in "blackbox" testing. Your tests should be written so that you eliminate these sorts of issues and only test your code in the particular method under test, and nothing else.
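For example, with Mockito you can stub the collaborators your unit under test would otherwise reach over the network; the class and method names below are invented purely for illustration:

UserService service = Mockito.mock(UserService.class);       // hypothetical collaborator
Mockito.when(service.findName(42L)).thenReturn("alice");

UserResource resource = new UserResource(service);            // hypothetical unit under test
Assert.assertEquals("alice", resource.getName(42L));          // only the resource's own logic runs
Mockito.verify(service).findName(42L);                        // and we prove it delegated as expected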
Edit: I actually posted an answer to a separate discussion about whitebox testing that might be relevant: Is using a test entity manager a legitimate testing practice?

Related

unit testing asynchronous code and code without externally visible effects

Actually, I have two questions, although they are somewhat related:
I know that unit tests should test the public API. However, if I have a close method that closes sockets and shuts down executors, and neither the sockets nor the executors are exposed to users of this API, should I test that this is done, or only that the method executed without error? Where is the borderline between public API/behavior and implementation details?
If I test a method that performs some checks, then queues a task on an executor service, then returns a future to monitor the operation's progress, should I test both this method and the task, even if the task itself is a private method or otherwise unexposed code? Or should I instead test only the public method, but arrange for the task to be executed in the same thread by mocking the executor? In the latter case, the fact that the task is submitted to an executor via the execute() method would be an implementation detail, but the tests would wait for tasks to complete in order to check whether the method, along with its async part, works properly.
The only question you should ask yourself is this: will I or my colleagues be able to change the code with confidence without these tests being executed frequently? If the answer is no, write and maintain the tests, as they provide value.
With this in mind, you may consider refactoring your code so that the "low-level plumbing" (e.g. socket and thread management) lives in a separate module, where you treat it explicitly as part of the contract that module provides.
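As a rough sketch of that refactoring (all names below are invented for illustration), the socket and executor handling can sit behind a small interface, so that closing them becomes part of an explicit, mockable contract:

interface ConnectionPlumbing extends AutoCloseable {
    void open(String host, int port);
    @Override
    void close();                        // closes sockets and shuts down executors
}

class Client {
    private final ConnectionPlumbing plumbing;

    Client(ConnectionPlumbing plumbing) {
        this.plumbing = plumbing;
    }

    void shutdown() {
        plumbing.close();                // a unit test can verify this call on a mock
    }
}

The Client test then only verifies that close() is invoked on a mocked ConnectionPlumbing, while the real implementation of the interface gets its own tests, where sockets and executors are legitimately part of the observable contract.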

Junit, testing timer.schedule without relying on thread sleeping or time

I have implemented a method that executes some logic after a certain amount of time, using a TimerTask and Timer.schedule.
I want to verify this behaviour using JUnit; however, I would like to know if there are better ways to test it, without relying on thread sleeping or measuring time.
Thanks.
You can use a "own thread" excecutor service to get around the "multiple threads" complications.
You can further test that some class A pushes tasks into such a service, and you can also use unit tests to ensure that the parameters used when pushing tasks are what you expect them to be.
In other words: you really don't want to use unit tests to prove that scheduling works (assuming you didn't completely reinvent the wheel and implement your own scheduling, which is something you simply should not do). You want to use unit tests to prove that your code is using existing (well-tested) frameworks with the arguments you expect to see.
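A sketch of both ideas; the Reminder class and the five-second delay are assumptions made up for the example (static imports from org.mockito.Mockito and the Mockito matchers assumed):

// Option 1: a "same thread" executor, so submitted tasks run inline and deterministically.
Executor sameThreadExecutor = Runnable::run;

// Option 2: don't re-test scheduling itself; just verify the arguments handed to it.
ScheduledExecutorService scheduler = mock(ScheduledExecutorService.class);
Reminder reminder = new Reminder(scheduler);       // hypothetical class under test
reminder.start();
verify(scheduler).schedule(any(Runnable.class), eq(5L), eq(TimeUnit.SECONDS));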

PowerMock method intermittently invoking real method

I'm having a very strange problem with PowerMock, and I'm hoping somebody more familiar with its internals can suggest a next direction to chase.
The class under test is a Jersey Resource class. The method being tested has a wait() statement in it, and it subsequently invokes 2 static methods on a Helper class. When the first static method on the Helper class is invoked, the real method is executed, not mocked.
Nuances:
if I invoke the static method before the wait, the Mocked response is returned.
if I invoke the static method twice after the wait, the first time will execute the real method, and the second time will return the Mocked response.
if I invoke the static method once before and 5 times after the wait, the invocation before the wait will return the mock response, the first invocation after the wait will execute the real method, and all subsequent invocations will return the Mocked response.
if I debug it in my IDE debugger and put a break point on the method invocation, the mocked response is returned.
if I comment out the wait(), everything is mocked as expected.
all other mocking and stubbing and spying seems to be fine
I tried writing a test stub to demonstrate my problem to post here, but even I can't reproduce it on anything except the original class. Unfortunately, I cannot post that class so I'm putting out this request for blind advice.
Why might a wait() cause a side-effect in PowerMock like this?
Not sure if this is relevant, but the wait() is due to a method invocation that would normally set up a callback. I don't need the callback (not the point of my test), so I am simply mocking this method and no callback is set up. Since there's no notify, the wait() is simply returning after the specified time limit.
In my test framework, I am using JerseyTest 2.14 (with Grizzly container), RestAssured 2.8.0, and PowerMock 1.5.5.
Not the answer you are looking for, but I have seen more than once that this is the better answer to any PowerMock problem: don't use PowerMock.
Even the folks developing it recommend not using it.
When your production code forces you to turn to PowerMock, consider changing your production code. In other words: if you think you need PowerMock to test your code, then most of the time that means you are dealing with a bad design and implementation.
Seriously: PowerMock and its byte-code manipulation can open a can of worms. In the long run, you might be better off spending your time redesigning your "system under test" to make it testable with reasonable frameworks. For example, PowerMock breaks most frameworks for measuring code coverage. That might not hurt you today, but it may later on.
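One common shape for that redesign, sketched with invented names: wrap the static Helper calls behind an instance that can be injected, so plain Mockito is enough and byte-code manipulation is no longer needed.

// Before: the resource calls Helper.doSomething(...) statically and needs PowerMock to stub it.
// After: the resource depends on a small gateway interface instead.
interface HelperGateway {
    String lookup(String key);
}

class MyResource {
    private final HelperGateway helper;

    MyResource(HelperGateway helper) {
        this.helper = helper;
    }

    String handle(String key) {
        return helper.lookup(key);       // stubbable with plain Mockito
    }
}

The production wiring supplies an implementation that simply delegates to the old static methods, so behaviour is unchanged, but tests can now hand in a Mockito mock.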

Unit testing a nonblocking method (asynchronous testing)

I have a very simple Check class with a blocking waitForCondition() method. I want to create some unit tests for this method. First, the method should return when the condition is met. Second, the method should return when it is interrupted.
Internally the Check class has an ArrayBlockingQueue and calls its take() method, so my test is really about having coded the logic for the condition correctly (as it should be).
In the application, data for the Check class is fed by another thread via an InputData method. The InputData method executes the logic on incoming data and places a dummy object in the ArrayBlockingQueue when the condition is met. This should cause waitForCondition() to return.
So my first thought is that I can test InputData with mocking and check that the dummy object is added to the queue when the condition is met. This would require changing the design of the class, since the queue is a private data member (unless it is possible to mock private data). Instead of InputData adding directly to the queue when the condition is met, it would have to call something that could be mocked.
But then there is the problem of testing waitForCondition() itself, given that InputData is functioning correctly. It's really simple code:
try {
    myArrayBlockingQueue.take();
    return true;
} catch (InterruptedException ex) {
    return false;
}
So I'm wondering if it's worth the imagined trouble: a test which creates another thread with a Check, calls its waitForCondition(), then returns something when it's done (perhaps using an ExecutorService). The fuzzy part is how to synchronize the assertTrue(...). I found this article on asynchronous testing which looks like it might do the trick.
Summary of question:
Should I change the design to test the logic in InputData() and if so, how?
Should I leave out the test of waitForCondition() as long as InputData() is tested?
Or is it better to just do what needs to be done (a somewhat complicated unit test) and test waitForCondition() directly?
If you inject the ArrayBlockingQueue instance through the constructor of the Check class, then your test can put the appropriate value into the queue in the middle of the test.
Then you can run the unit test with a timeout, and fail if it doesn't return within 100ms or so.
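Putting the two together, a sketch of such a test (the Check constructor taking the queue is the suggested design change, not existing code):

@Test(timeout = 100)                        // fail if waitForCondition() never returns
public void returnsTrueWhenConditionIsMet() throws Exception {
    ArrayBlockingQueue<Object> queue = new ArrayBlockingQueue<>(1);
    Check check = new Check(queue);         // queue injected via the constructor
    queue.put(new Object());                // simulate InputData signalling the condition
    assertTrue(check.waitForCondition());
}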
Thanks for the nice link! I faced some similar problems, and maybe that link is a better way to go than what I did. (I'm also interested to see what other answers appear to this question - it's a good question)
If you don't mind changing (at least temporarily) your actual code (yes, this is not a usual unit-test practice!), you can do something I call "Error Injection".
In my implementation, you create a class that reads from properties (or a Map) to "do something funny" at a specific, unique point. For example, your properties might say:
myClass.myMethod.blockingQueueTake = interrupt:
myClass.myLongCalculation = throw: java.lang.ArithmeticException(Failed to converge)
In your code, you add testing lines, e.g. right before your queue.take(), add
TestSimulator.doCommand("myClass.myMethod.blockingQueueTake");
The advantage is that everything happens in real code, not in a mock, which can get really hairy. (In my case, the software was older and not written or designed for unit testing, so making a mock was very difficult.) The disadvantage is that you will probably want to remove or comment out the code afterwards. So it really isn't a continuous-integration-style unit test; it's more of a one-time, really serious reality debug. So, I admit, it's far from ideal, but it did find a bunch of bugs for me!
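A stripped-down sketch of what that TestSimulator might look like (this is my illustration of the idea, not a published library):

import java.util.Properties;

public final class TestSimulator {
    // Loaded from a test-only properties file; left empty in production, so doCommand() is a no-op.
    private static final Properties COMMANDS = new Properties();

    public static void doCommand(String key) {
        String command = COMMANDS.getProperty(key);
        if (command == null) {
            return;                                 // nothing configured: behave normally
        }
        if (command.startsWith("interrupt")) {
            Thread.currentThread().interrupt();     // makes a following take() throw InterruptedException
        } else if (command.startsWith("throw:")) {
            throw new RuntimeException(command.substring("throw:".length()).trim());
        }
    }
}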
You could also use a "test runner" class to run the asserts in a loop. The loop would run the asserts in a try/catch; the exception handler would simply try to run the asserts again until a timeout has expired. I recently wrote a blog post about this technique. The example is written in Groovy, but the concept should be easily adaptable to Java.
http://www.greenmoonsoftware.com/2013/08/asynchronous-functional-testing/
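A minimal Java version of that retry-until-timeout idea (the method name is mine):

// Re-runs the given assertions until they pass or the timeout expires.
static void assertEventually(long timeoutMillis, Runnable assertions) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (true) {
        try {
            assertions.run();                       // may throw AssertionError
            return;                                 // passed
        } catch (AssertionError failure) {
            if (System.currentTimeMillis() > deadline) {
                throw failure;                      // give up and report the last failure
            }
            Thread.sleep(50);                       // wait briefly and retry
        }
    }
}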

How to write multi-threaded unit tests?

I'd like to know if there are any unit testing frameworks that make it easy to write multi-threaded tests.
I would imagine something like:
a special test method is invoked by n threads at the same time, m times each. After all test threads have finished, an assertion method in which some constraints are validated is invoked.
My current approach is to create Thread objects inside a JUnit test method, manually loop over the real test cases inside each run() method, wait for all the threads, and then validate the assertions. With this approach, I have a large block of boilerplate code for each test.
What are your experiences?
There is ConTest, and also GroboUtils.
I used GroboUtils many years ago, and it did the job. ConTest is newer and would be my preferred starting point now, since rather than just relying on trial and error, its instrumentation forces specific interleavings of the threads, providing a deterministic test. In contrast, GroboUtils' MultiThreadedTestRunner simply runs the tests and hopes the scheduler produces an interleaving that causes the threading bug to appear.
EDIT: See also ConcuTest which also forces interleavings and is free.
There is also MultithreadedTC by Bill Pugh of FindBugs fame.
Just using the concurrency libraries would simplify your code. You can turn your boilerplate code into one method.
Something like
public static void runAll(int times, Runnable... tests) {
}
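One way that helper could look (a sketch under my own assumptions, using an ExecutorService and rethrowing the first failure):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public final class ConcurrentRunner {
    // Submits every test 'times' times to a shared pool and waits for all of them;
    // any AssertionError surfaces here wrapped in an ExecutionException.
    public static void runAll(int times, Runnable... tests) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, tests.length));
        try {
            List<Future<?>> futures = new ArrayList<>();
            for (Runnable test : tests) {
                for (int i = 0; i < times; i++) {
                    futures.add(pool.submit(test));
                }
            }
            for (Future<?> future : futures) {
                future.get();
            }
        } finally {
            pool.shutdown();
        }
    }
}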
