I'm really not sure what code to paste here. I'm including a link to my GitHub below, to the specific file with the error.
So all of a sudden, a unit test that had previously been working fine started failing, and the failure makes no sense to me whatsoever. I'm using Spring's MockMvc utility to simulate web API calls, and my tests with this tool mostly revolve around web-specific logic, such as my security rules. The security rules are super important to me in these tests; I've got unit tests for all the access rules of all my APIs.
Anyway, this test, which should be testing a successfully authenticated request, is now returning a 401, which causes the test to fail. Looking at the code, I can't find anything wrong with it. I'm passing in a valid API token. However, I don't believe that any of my logic is to blame.
The reason I say that is because I did a test. Two computers, both on the develop branch of my project. I deleted my entire .m2 from both machines, did a clean compile, and then ran the tests. On one machine, all the tests pass. On the other machine, this one test fails.
This leads me to think one of two things is happening. Either something is seriously wrong on one of the machines, or it's a test order thing, meaning something is not being properly cleaned up between my tests.
This is reinforced by the fact that if I only run this one test file (mvn clean test -Dtest=VideoFileControllerTest), it works on both machines.
So... what could it be? I'm at a loss, because I thought I was cleaning up everything properly between tests; I'm usually quite good at this. Advice and feedback would be appreciated.
https://github.com/craigmiller160/VideoManagerServer/blob/develop/src/test/kotlin/io/craigmiller160/videomanagerserver/controller/VideoFileControllerTest.kt
testAddVideoFile()
I have checked out your project and run the tests. Although I cannot pinpoint the exact cause of the failure, it indeed looks like it has something to do with a form of test (data) contamination.
The tests started to fail after I randomized their order by modifying the Maven Surefire configuration. I added the following snippet to the build section of your pom.xml in order to randomize the test order:
<build>
    ...
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
            <runOrder>random</runOrder>
        </configuration>
    </plugin>
    ...
</build>
I ran the mvn clean test command ten times using the following (Linux) bash script (if you use Windows, a similar loop can be written in PowerShell):
#!/bin/bash
for i in {1..10}
do
    mvn clean test
    if [[ "$?" -ne 0 ]] ; then # if the exit code from mvn clean test was non-zero
        echo "Error during test ${i}" >> results.txt
    else
        echo "Test ${i} went fine" >> results.txt
    fi
done
Without the plugin snippet, the results.txt file merely contained ten lines of Test x went fine, while after applying the plugin, about half of the runs failed. Unfortunately, the randomized tests all succeed when using mvn clean test -Dtest=VideoFileControllerTest, so my guess is that the contamination originates somewhere else in your code.
I hope the above gives you more insight into the test failure. I would suggest searching for the culprit by @Ignore-ing half of the active test classes and running the tests. If all tests succeed, repeat the process on the second half, and keep cutting the active tests in half until you have found the cause of the failure. Be sure to always include the failing test, though.
[edit]
You could add @DirtiesContext to the involved test classes/methods to prevent reuse of the ApplicationContext between tests.
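For example, a minimal sketch of what that could look like (assuming JUnit 4 and a Spring Boot test class; the class name here is made up):
import org.junit.Test
import org.junit.runner.RunWith
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.test.annotation.DirtiesContext
import org.springframework.test.context.junit4.SpringRunner

// Marks the ApplicationContext as dirty after each test method, so Spring
// rebuilds it instead of reusing a possibly contaminated one.
@RunWith(SpringRunner::class)
@SpringBootTest
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
class SomeContaminatingTest {

    @Test
    fun contextIsRebuiltAfterThisTest() {
        // ... test body ...
    }
}
Keep in mind that rebuilding the context after every method slows the suite down noticeably, so applying it only to the suspect classes is usually enough.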
Alright, thanks for the advice, I figured it out.
So, the main purpose of my controller tests was to validate my API logic, including authentication, which meant there was logic making static method calls to SecurityContextHolder. I had another test class that was also testing logic involving SecurityContextHolder, and it was doing this:
@Mock
private lateinit var securityContext: SecurityContext

@Before
fun setup() {
    SecurityContextHolder.setContext(securityContext)
}
So it was setting a Mockito mock object as the security context. After much investigation, I found that all my authentication logic was working fine in the test that was returning a 401 on my laptop (but not on my desktop). I also noticed that the test class with the code snippet above was running right before my controller test on my laptop, but after it on my desktop.
Furthermore, I had plenty of tests for unauthenticated calls, which is why only one test was failing: the unauthenticated test that followed it cleared the context.
The solution to this was to add the following logic to the test file from above:
@After
fun after() {
    SecurityContextHolder.clearContext()
}
This cleared the mock and got everything to work again.
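Putting the two pieces together, the shared fixture in that other test class ends up looking roughly like this (a sketch; the class name and runner are placeholders, not the actual file):
import org.junit.After
import org.junit.Before
import org.junit.Test
import org.junit.runner.RunWith
import org.mockito.Mock
import org.mockito.junit.MockitoJUnitRunner
import org.springframework.security.core.context.SecurityContext
import org.springframework.security.core.context.SecurityContextHolder

// Made-up name for the other test class that installs the mock context.
@RunWith(MockitoJUnitRunner::class)
class SecurityLogicTest {

    @Mock
    private lateinit var securityContext: SecurityContext

    @Before
    fun setup() {
        // Install the mock context for this class's tests...
        SecurityContextHolder.setContext(securityContext)
    }

    @After
    fun after() {
        // ...and always clear it so later test classes start from a clean holder.
        SecurityContextHolder.clearContext()
    }

    @Test
    fun someTestThatReadsTheMockContext() {
        // ... logic under test that calls SecurityContextHolder.getContext() ...
    }
}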
Related
Today I noticed something really strange.
When I have an implementation of the interface org.springframework.test.context.TestExecutionListener and it fails in the beforeTestClass method, the whole test class gets skipped.
I understand it's just a listener and shouldn't be required for the test to run, but the test would also fail if it got the chance to run.
So my CI build went green even though a potentially failing test never ran.
Shouldn't TestContextManager log errors instead of warnings in these cases? Or should I rework my test architecture somehow? This is kind of scary for me.
Thanks in advance.
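For reference, a minimal sketch of the setup being described (JUnit 4 with the SpringRunner; all class names are made up, and the skipped-but-green behaviour is what the question reports, not something this sketch guarantees):
import org.junit.Test
import org.junit.runner.RunWith
import org.springframework.context.annotation.Configuration
import org.springframework.test.context.ContextConfiguration
import org.springframework.test.context.TestContext
import org.springframework.test.context.TestExecutionListeners
import org.springframework.test.context.junit4.SpringRunner
import org.springframework.test.context.support.AbstractTestExecutionListener

@Configuration
open class EmptyTestConfig

// A listener that blows up before the test class runs; according to the
// question, this only gets logged as a warning and the class is skipped.
class FailingListener : AbstractTestExecutionListener() {
    override fun beforeTestClass(testContext: TestContext) {
        error("listener failed in beforeTestClass")
    }
}

@RunWith(SpringRunner::class)
@ContextConfiguration(classes = [EmptyTestConfig::class])
@TestExecutionListeners(
    listeners = [FailingListener::class],
    mergeMode = TestExecutionListeners.MergeMode.MERGE_WITH_DEFAULTS
)
class SkippedByListenerTest {

    @Test
    fun neverGetsAChanceToFail() {
        // Never executes if beforeTestClass throws.
    }
}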
I have around 200 TestNG test cases that can be executed through Maven via a suite.xml file. I want to expose these test cases as a web service (or something similar) so that anybody can call any test case from their machine and find out whether that particular functionality is working fine at that moment.
But what if no one calls the test web services for a long time? You won't know the state of your application, or whether you have any failures/regressions.
Instead, you can use
continuous integration to run the tests automatically on every code push; see Jenkins for a more complete solution, or, more hackily, create your own cron job/daemon/git hook on a server to run your tests automatically
a Maven plugin that displays the results of the last execution of the automated tests; see Surefire for an HTML report on the state of the last execution of each test
There are so many posts about running JUnit tests in a specific order and I fully understand:
Tests should not be order specific
and that the creators did this with point 1 in mind
But I have test cases that create a bunch of output files. I need the capability to have one final test that goes and collects these files, zips them up, and emails them off to someone.
Is there a way to group JUnit tests together so that I can have a "wrap up" group that goes and does this? Or is there a better way of doing this?
I am running these from Jenkins as a Maven job. I could create another job that does just that based on the previous job's output, but I would prefer to do it all in one, meaning I could run it everywhere, even from my IDE.
Maybe the @After and @AfterClass annotations are what you are looking for:
@AfterClass
public static void cleanupClass() {
    // will run once, after all tests in the class have finished
}

@After
public void cleanup() {
    // will run after every test
}
However, I would consider handling this through Jenkins if possible. In my opinion the annotations above are for cleaning up any kind of setup that was previously done in order to do the testing.
Sending these files through email does not sound like part of the testing, and therefore I would be inclined to keep it separate.
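If you do decide to keep the wrap-up inside the test run anyway, a rough sketch (JUnit 4 in Kotlin; the output directory and file names are made up) of an @AfterClass hook that zips whatever the tests produced could look like this; note that it only runs after the tests of this one class, and mailing the archive is still better left to the CI server:
import org.junit.AfterClass
import org.junit.Test
import java.io.File
import java.util.zip.ZipEntry
import java.util.zip.ZipOutputStream

class OutputWrapUpTest {

    companion object {
        // Hypothetical directory that the tests write their output files into.
        private val outputDir = File("target/test-output")

        @AfterClass
        @JvmStatic
        fun zipTestOutput() {
            val archive = File("target/test-output.zip")
            ZipOutputStream(archive.outputStream()).use { zip ->
                outputDir.listFiles()?.forEach { file ->
                    zip.putNextEntry(ZipEntry(file.name))
                    file.inputStream().use { input -> input.copyTo(zip) }
                    zip.closeEntry()
                }
            }
        }
    }

    @Test
    fun producesAnOutputFile() {
        outputDir.mkdirs()
        File(outputDir, "result.txt").writeText("some test output")
    }
}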
I guess the real problem is that you want the results and output of the tests sent via email.
Your suggestion of using a test for this threw me off onto the wrong track.
Definitely use some sort of custom Jenkins post-build hook to do this. There are some fancy plugins that let you write Groovy, which will do the trick.
Do not abuse a unit test for this. Tests (should) also run locally as part of builds, and you don't want that email being sent every time.
I have a Maven project whose tests are executed by the maven-surefire-plugin. An odd phenomenon I've observed and been dealing with is that running locally
mvn clean install
which executes my tests, results in a successful build with 0 Failures and 0 Errors.
Now when I deploy this application to our remote repo that Jenkins attempts to build, I get all sorts of random EasyMock errors, typically of the sort:
java.lang.IllegalStateException: 3 matchers expected, 4 recorded. at org.easymock.internal.ExpectedInvocation.createMissingMatchers
This is a legacy application we have inherited, and we are aware that many of these tests are flawed, if not plainly using EasyMock incorrectly, but I'm in a state where test execution gives me a successful build locally but not in Jenkins.
I know that the order of execution of these tests is not guaranteed, but I am wondering how I can introspect what is different in the Jenkins build pipeline vs. locally to help identify the issue.
Is there anything I can do to force the tests to execute the way they do locally? At this point I have simply excluded many troublesome test classes, but no matter how many times I see a Jenkins failure and either fix the problem or exclude the test class, it just goes on to complain about some other test class it didn't mention before.
Any ideas how to approach a situation like this?
I have experienced quite a similar situation, and the cause of mine turned out to be concurrency problems in the test implementations.
And, after reading your comment:
What I actually did that fixed it (like magic am I right?) is for the maven-surefire plugin, I set the property reuseForks=false, and forkCount=1C, which is just 1*(number of CPU's of machine).
... I am even more convinced that you have concurrency problems in your tests. Concurrency is not easy to diagnose, especially when your experiment runs OK on one CPU, but race conditions might arise when you run it on another system (which is usually faster or slower).
I strongly recommend reviewing your tests one by one and ensuring that each of them is logically isolated:
They should not rely on an expected previous state (files, database, etc.). Instead, they should prepare the proper setup before each execution (a sketch of this follows the examples below).
If they concurrently modify a common resource that might interfere with another test's execution (files, database, singletons, etc.), every assert must be done with as much synchronization as needed, taking into account that the initial state is unknown:
Wrong test:
// Assumes the singleton is untouched by other tests, so the size
// assertion can fail if another test has added entries concurrently.
MySingleton.getInstance().put(myObject);
assertEquals(1, MySingleton.getInstance().size());
Right test:
// Synchronizes on the shared resource and only asserts something that
// does not depend on the unknown initial state.
synchronized(MySingleton.getInstance())
{
    MySingleton.getInstance().put(myObject);
    assertTrue(MySingleton.getInstance().contains(myObject));
}
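For the first point, a minimal self-contained sketch (JUnit 4 in Kotlin, with a throw-away temp file standing in for the shared resource) of a test that prepares and removes its own state:
import org.junit.After
import org.junit.Assert.assertEquals
import org.junit.Before
import org.junit.Test
import java.io.File

class IsolatedFileTest {

    private lateinit var workFile: File

    @Before
    fun setUp() {
        // Create a fresh fixture for every test instead of relying on state
        // left behind by previously executed tests.
        workFile = File.createTempFile("isolated-test", ".txt")
        workFile.writeText("seed")
    }

    @After
    fun tearDown() {
        // Remove the fixture so later tests cannot observe it.
        workFile.delete()
    }

    @Test
    fun readsItsOwnSeedData() {
        assertEquals("seed", workFile.readText())
    }
}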
A good starting point for the review is to pick one of the failing tests and trace its execution backwards to find the root cause of the failure.
Explicitly setting the tests' order is not good practice, and I wouldn't recommend it to you even if I knew how to do it, because it would only hide the actual cause of the problem. Keep in mind that in a real production environment, the execution order is usually not guaranteed either.
JUnit test run order is non-deterministic.
Are the versions of Java and Maven the same on the 2 machines? If yes, make sure you're using the most recent maven-surefire-plugin version. Also, make sure to use a Freestyle Jenkins job with a Maven build step instead of the Maven project type. Using the proper Jenkins build type can either fix build problems outright or give you a better error so you can diagnose the actual issue.
You can turn on Maven debug logging to see the order the tests are being run in. Each test should set up (and perhaps tear down) its own test data to make sure the tests can run independently. Perhaps seeing the test order will give you some clues as to which classes depend on others inappropriately. And, if the app uses caching, ensure the cache is cleaned out between tests (or explicitly populated, depending on what the test needs to do). Also consider running the tests one package at a time to isolate the culprits; multiple surefire plugin executions might be useful.
Also check the app for classpath problems. This answer has some suggestions for cleaning the classpath.
And another possibility: Switching to a later version of JUnit might help - unless the app is using Spring 2.5.6.x. If the app is using Spring 2.5.6.x and cannot upgrade, the highest possible version of JUnit 4.x that may be used is 4.4. Later versions of JUnit are not compatible with Spring Test 2.5.6 and may lead to hard-to-diagnose test errors.
I have a set of legacy unit tests, most of which are Spring AbstractTransactionalJUnit4SpringContextTests tests, but some manage transactions on their own. Unfortunately, this seems to have introduced side-effects causing completely unrelated tests to fail when modifying the test data set, i.e., the failing test works when running it on its own (with the same initial data set), but fails when being run as part of the complete set of tests.
The tests are typically run through Maven's surefire plugin during the regular Maven build.
What I am looking for is an automated way to permute the number and order of the executed tests to figure out the culprit. A naive but pretty expensive approach would take the power set of all tests and run all possible combinations. A more optimized approach would use the existing test execution order (which is mostly random, but stable) and test all potential ordered sub-sets. I am aware that the runtime of this process may be lengthy.
Are there any tools / Maven plugins that can do this out of the box?
I don't know of a tool that does specifically what you want, but you could play around with the runOrder parameter in Maven Surefire. From that page:
Defines the order the tests will be run in. Supported values are "alphabetical", "reversealphabetical", "random", "hourly" (alphabetical on even hours, reverse alphabetical on odd hours), "failedfirst", "balanced" and "filesystem". Odd/Even for hourly is determined at the time of scanning the classpath, meaning it could change during a multi-module build.
So you could set a simple alphabetical runOrder, take the first failure, and start from there. At least you then have a predictable run order. Then you run each test that comes before the failing one together with the failing one, one at a time (using -Dincludes), to detect which one is making the failing test fail.
Then repeat the entire process for all of the failing tests. You could run this in a loop overnight or something.
Can you simply amend the tests to use a clean database copy each time? DBUnit is an excellent tool for doing this.
http://www.dbunit.org/
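As a rough sketch (JUnit 4 in Kotlin; the JDBC URL and dataset path are placeholders, not real project values), resetting the database with DBUnit before each test could look something like this:
import org.dbunit.database.DatabaseConnection
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder
import org.dbunit.operation.DatabaseOperation
import org.junit.Before
import org.junit.Test
import java.io.File
import java.sql.DriverManager

class CleanDatabaseTest {

    @Before
    fun resetDatabase() {
        // Placeholder JDBC URL and dataset file; use whatever your tests already connect to.
        val jdbc = DriverManager.getConnection("jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1")
        try {
            val dbUnitConnection = DatabaseConnection(jdbc)
            val dataSet = FlatXmlDataSetBuilder().build(File("src/test/resources/dataset.xml"))
            // CLEAN_INSERT wipes the affected tables and re-inserts the dataset,
            // so every test starts from the same known state.
            DatabaseOperation.CLEAN_INSERT.execute(dbUnitConnection, dataSet)
        } finally {
            jdbc.close()
        }
    }

    @Test
    fun worksAgainstAFreshDataset() {
        // ... assertions that can rely on the seeded data ...
    }
}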