Today I noticed something really strange.
When I have an implementation of the org.springframework.test.context.TestExecutionListener interface and it fails in its beforeTestClass method, the whole test class gets skipped.
I understand it's just a listener and should not be required for a test to run, but the test would fail too if it got the chance to run.
So my CI build went green even though it contained a potentially failing test.
Shouldn't TestContextManager log these cases as errors instead of warnings? Or should I rework my test architecture somehow? This is kind of scary to me.
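For illustration, here is a stripped-down version of the kind of listener I mean (the class name and exception are made up, and this assumes Spring 5+, where the interface methods have default implementations):

import org.springframework.test.context.TestContext;
import org.springframework.test.context.TestExecutionListener;

// Hypothetical listener: any exception thrown from beforeTestClass()
// is only logged as a warning by TestContextManager, and the whole
// test class is skipped instead of being reported as a failure.
public class BrokenSetupListener implements TestExecutionListener {

    @Override
    public void beforeTestClass(TestContext testContext) throws Exception {
        throw new IllegalStateException("listener setup failed");
    }
}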
Thanks in advance.
I recently migrated a unit test suite to JUnit 5.8.2 and Mockito 4.5.1 plus mockito-inline to allow static mocking; PowerMock was removed.
2000+ tests were migrated, and they all run successfully inside the IDE (IntelliJ), with both the IDEA and the Gradle runner.
However, when Jenkins attempts to run them, there are over 900 failed tests. Some of the exceptions thrown:
org.mockito.exceptions.misusing.MissingMethodInvocationException:
when() requires an argument which has to be 'a method call on a mock'.
For example:
when(mock.getArticles()).thenReturn(articles);
org.mockito.exceptions.misusing.WrongTypeOfReturnValue:
Boolean cannot be returned by someMethod()
someMethod() should return Date
I understand what causes these errors, as I've seen them multiple times during the migration, so this is not a duplicate asking for the solution (unless there's something different about the Jenkins environment). The code that throws these exceptions should not be throwing them, and it does not in the IDE. They are thrown exclusively in Jenkins.
An additional exception which I have never seen before is thrown as well.
org.mockito.exceptions.misusing.UnfinishedMockingSessionException:
Unfinished mocking session detected.
Previous MockitoSession was not concluded with 'finishMocking()'.
For examples of correct usage see javadoc for MockitoSession class.
Most of the exceptions are of this type.
However, the MockitoSession interface is not used anywhere in the test suite. All mocks are initialized with @ExtendWith(MockitoExtension.class).
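For illustration, the tests all follow roughly this shape (a minimal, self-contained sketch; the real tests obviously mock our own classes rather than List):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

// Mocks are created by MockitoExtension; no MockitoSession is
// opened or finished manually anywhere in the suite.
@ExtendWith(MockitoExtension.class)
class ExampleTest {

    @Mock
    private List<String> items;

    @Test
    void stubsAMockMethod() {
        when(items.size()).thenReturn(3);
        assertEquals(3, items.size());
    }
}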
I have no idea what could be causing this.
Jenkins is running the same versions of Java/JUnit/Mockito/Spring as the code in the IDE.
It seems clear to me that the different environments are causing the issue. However, what could be the difference and how would I go about finding it?
I attempted to reproduce the results locally but was unable to. Any ideas towards that are also welcome.
I'm really not sure what code to paste here. I'm including a link to my GitHub below, to the specific file with the error.
So all of a sudden, a unit test that had previously been working fine started failing, and the failure makes no sense whatsoever. I'm using Spring's MockMvc utility to simulate web API calls, and my tests with this tool mostly revolve around specific web logic, such as my security rules. The security rules are super important to me in these tests: I've got unit tests for all the access rules to all my APIs.
Anyway, this test, which should be testing a successfully authenticated request, is now returning a 401, which causes the test to fail. Looking at the code, I can't find anything wrong with it: I'm passing in a valid API token. So I don't believe that any of my logic is to blame.
The reason I say that is that I ran an experiment: two computers, both on the develop branch of my project. I deleted the entire .m2 directory from both machines, did a clean compile, and then ran the tests. On one machine, all the tests pass. On the other machine, this one test fails.
This leads me to think one of two things is happening: either something is seriously wrong on one of the machines, or it's a test-ordering issue, meaning something is not being properly cleaned up between my tests.
This is reinforced by the fact that if I run only this one test file (mvn clean test -Dtest=VideoFileControllerTest), it works on both machines.
So... what could it be? I'm at a loss, because I thought I was cleaning up everything properly between tests; I'm usually quite good at this. Advice and feedback would be appreciated.
https://github.com/craigmiller160/VideoManagerServer/blob/develop/src/test/kotlin/io/craigmiller160/videomanagerserver/controller/VideoFileControllerTest.kt
testAddVideoFile()
I have checked out your project and run the tests. Although I cannot pinpoint the exact cause of the failure, it does indeed look like some form of test (data) contamination.
The tests started to fail after I randomized their order via the Maven Surefire configuration. I added the following snippet to the build section of your pom.xml in order to randomize the test order:
<build>
  ...
  <plugins>
    ...
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <!-- run the test classes in a random order on every build -->
        <runOrder>random</runOrder>
      </configuration>
    </plugin>
    ...
  </plugins>
</build>
I ran the mvn clean test command ten times using the following (Linux) bash script (on Windows, a similar script might work in PowerShell):
#!/bin/bash
for i in {1..10}
do
    mvn clean test
    if [[ "$?" -ne 0 ]] ; then # the exit code from mvn clean test was non-zero
        echo "Error during test ${i}" >> results.txt
    else
        echo "Test ${i} went fine" >> results.txt
    fi
done
Without the plugin snippet, the results.txt file merely contained ten lines of Test x went fine, whereas after applying the plugin about half of the runs failed. Unfortunately, the randomized tests all succeed when using mvn clean test -Dtest=VideoFileControllerTest, so my guess is that the contamination originates somewhere else in your code.
I hope the above gives you more insight into the test failure. I would suggest hunting for the culprit by @Ignore-ing half of the active test classes and re-running the tests. If all tests succeed, repeat the process on the second half, and keep cutting the active tests in half until you have found the cause of the failure. Be sure to always include the failing test, though.
[edit]
You could add @DirtiesContext to the involved test classes/methods to prevent reuse of the ApplicationContext between tests.
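For example, a minimal sketch (JUnit 5 / Spring Boot style shown here; the same annotation works with JUnit 4 and SpringRunner):

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.annotation.DirtiesContext;

// The context used by this class is discarded afterwards, so the next
// test class gets a fresh ApplicationContext instead of a contaminated one.
@SpringBootTest
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
class ContaminatingTest {

    @Test
    void testThatMutatesSharedState() {
        // ...
    }
}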
Alright, thanks for the advice, I figured it out.
So, the main purpose of my controller tests was to validate my API logic, including authentication, which meant there was logic making static method calls to SecurityContextHolder. I had another test class that was also testing logic involving SecurityContextHolder, and it was doing this:
@Mock
private lateinit var securityContext: SecurityContext

@Before
fun setup() {
    SecurityContextHolder.setContext(securityContext)
}
So it was setting a Mockito mock object as the security context. After much investigation, I found that all my authentication logic was working fine in the test that was returning a 401 on my laptop (but not on my desktop). I also noticed that the test file with the code snippet above was running right before my controller test on my laptop, but after it on my desktop.
Furthermore, I had plenty of tests for unauthenticated calls, which is why only one test was failing: the unauthenticated test that followed it cleared the context.
The solution to this was to add the following logic to the test file from above:
@After
fun after() {
    SecurityContextHolder.clearContext()
}
This cleared the mock and got everything to work again.
I am running a suite of integration tests using Maven, and about 10% of the tests fail or throw an error. However, when I start the server and run the individual failed tests manually from my IDE (IntelliJ IDEA), they all pass with no problem. What could be the cause of this issue?
This is almost always caused by the tests running in an inconsistent order, or by a race condition between two tests running in parallel via forked test JVMs. If Test #1 finishes first, it passes. But if Test #2 finishes first, it leaves a test resource, such as a test database, in an alternate state, causing Test #1 to fail. This is very common with database tests, especially when one or more of them alters the database. Even in IDEA, you may find that all the tests in the com.example.FooTest class always pass when you run that class, but if you run all the tests in the com.example package, or all the tests in the project, sometimes (or even always) a test in FooTest fails.
The fix is to ensure your tests are always guaranteed a consistent state when they run. (That is a guiding principle for good unit tests.) Pay attention to test setup and tear-down via the @Before, @BeforeClass, @After, and @AfterClass annotations (or their TestNG equivalents), and I recommend Googling database unit testing best practices. For database tests, running each test in a transaction can prevent this type of issue: the database is rolled back to its starting state whether the test passes or fails. Spring has some great support for JDBC database tests. (Even if your project is not a Spring project, the classes can be very useful.) Read section 11.2.2, Unit Testing Support Classes, and take a look at the AbstractTransactionalJUnit4SpringContextTests / AbstractTransactionalTestNGSpringContextTests classes and the @TransactionConfiguration annotation (the latter being for tests that run a Spring context). There are also other database testing tools out there, such as DbUnit.
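For example, here is a minimal sketch of a transactional Spring JDBC test (the table name and context file are hypothetical, and this assumes Spring 3.2+, where the base class exposes a jdbcTemplate field):

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;

// Each test method runs inside a transaction that is rolled back when
// the method finishes, so the database always returns to its starting
// state, whether the test passes or fails.
@ContextConfiguration("classpath:test-context.xml") // hypothetical config file
public class AccountDaoTransactionalTest extends AbstractTransactionalJUnit4SpringContextTests {

    @Test
    public void insertedRowIsRolledBackAfterTheTest() {
        int before = countRowsInTable("account"); // helper from the base class
        jdbcTemplate.update("insert into account (name) values (?)", "test-user");
        assertEquals(before + 1, countRowsInTable("account"));
        // the insert is rolled back automatically after this method returns
    }
}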
I am working on a project where most of the JUnit tests are failing, and my job is to fix them and make them run. I have fixed around 200 JUnit test classes, but around 136 are still failing and I have no idea why; sometimes they fail and sometimes they pass. I tried to drill down into the problem, and it is Ehcache: the cache manager is being shut down.
Can anybody please explain why this exception occurs in JUnit testing, and why only some of the time?
Please note that we have test cases for "Action" classes as well (which deal with the servlet context), but, interestingly, all the action test classes pass.
The error message is:
java.lang.IllegalStateException: The CacheManager has been shut down. It can no longer be used.
at net.sf.ehcache.CacheManager.checkStatus(CacheManager.java:1504)
at net.sf.ehcache.CacheManager.getCacheNames(CacheManager.java:1491)
at net.sf.ehcache.CacheManager.clearAll(CacheManager.java:1526)
Some part of your code is shutting down the cache manager (probably in the tear-down of the unit test) and then trying to clear the cache. You can see this in the stack trace:
at net.sf.ehcache.CacheManager.clearAll(CacheManager.java:1526)
But once the manager is shut down, you cannot invoke operations on it. Without seeing the code of one of the unit tests, it's hard to be more specific.
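Something like this (an illustrative sketch of the misuse, not your actual code):

import net.sf.ehcache.CacheManager;
import org.junit.After;
import org.junit.Test;

// Illustrative anti-pattern: the singleton CacheManager is shut down in
// one test's tear-down, then a later clean-up step still tries to use it.
public class CacheTearDownTest {

    private final CacheManager cacheManager = CacheManager.getInstance();

    @After
    public void tearDown() {
        cacheManager.shutdown();
        // Any call after shutdown() fails with the IllegalStateException above:
        // cacheManager.clearAll();
    }

    @Test
    public void usesTheCache() {
        // ...
    }
}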
I figured out what went wrong. One of the JUnit tests, a DAO test, was taking around 40 to 50 minutes to complete. During that time the CacheManager's session timed out, and when other tests then tried to access it, I got this error.
I fixed the test, basically tuning the DAO's query to run quicker, and now everything works.
I'm currently building a CI build script for a legacy application. There are sporadic JUnit tests available, and I will be integrating an execution of all of them into the CI build. However, I'm wondering what to do with the 100-ish failures I'm encountering in the non-maintained JUnit tests. Do I:
1) Comment them out, as they appear to have reasonable, if unmaintained, business logic in them, in the hope that someone eventually uncomments them and fixes them
2) Delete them, as it's unlikely that anyone will fix them, and the commented-out code would only be ignored or remain clutter forevermore
3) Track down those who left this mess in my hands and whack them over the head with printouts of the code (which, due to the long-method smell, will be sufficiently suited to the task) while preaching the benefits of a well-maintained and unit-tested code base
If you use JUnit 4, you can annotate those tests with the @Ignore annotation.
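For example (a minimal sketch; the test name is made up):

import org.junit.Ignore;
import org.junit.Test;

public class LegacyCalculationTest {

    // JUnit 4 reports this test as skipped instead of failed.
    @Ignore("broken legacy test - tracked in the issue tracker")
    @Test
    public void calculatesLegacyTotals() {
        // ...
    }
}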
If you use JUnit 3, you can just rename the tests so their names don't start with test.
Also, try to fix the tests for any functionality you are modifying, in order not to make the code mess larger.
Follow the no-broken-windows principle and take some action towards solving the problem. If you can't fix the tests, at least:
Ignore them in the unit test runs (there are different ways to do this).
Enter as many issues as necessary and assign people to fix the tests.
Then, to prevent such a situation from happening in the future, install a plug-in similar to the Hudson Game Plugin. People get assigned points during continuous integration, e.g.:
-10 break the build <-- the worst
-1 break a test
+1 fix a test
etc.
It's a really cool tool for creating a sense of responsibility for unit tests within a team.
The failing JUnit tests indicate that either:
The source code under test has been worked on without the tests being maintained (in this case option 3 is definitely worth considering), or
You have a genuine failure.
Either way, you need to fix/review the tests/source. Since it sounds like your job is to create the CI system and not to fix the tests, in your position I would leave a time-bomb in the tests. You can get very fancy with annotated methods and a custom runner in JUnit 4 (something like @IgnoreUntil(date = "2010/09/16")), or you can simply add an if statement to the first line of each failing test:
if (isBeforeTimeBomb()) {
    return; // silently skip the test body until the deadline passes
}
where isBeforeTimeBomb() can simply check the current date against a future date of your choosing. Then you follow the advice given by others here and notify your development team that the build is green now, but is likely to explode in X days unless the time-bombed tests are fixed.
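A minimal sketch of what isBeforeTimeBomb() could look like (the deadline matches the example annotation above; pick your own date):

import java.time.LocalDate;

public class TimeBomb {

    private static final LocalDate DEADLINE = LocalDate.of(2010, 9, 16);

    // Failing tests silently return until the deadline, then start failing for real.
    public static boolean isBeforeTimeBomb() {
        return LocalDate.now().isBefore(DEADLINE);
    }
}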
Comment them out so that they can be fixed later.
Generate test coverage reports (with Cobertura, for example). The methods that were supposed to be covered by the tests you commented out will then show up as not covered by tests.
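For example, with the Cobertura Maven plugin (the version shown is just an example), add it to your pom.xml and run mvn cobertura:cobertura to generate the report:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>cobertura-maven-plugin</artifactId>
  <version>2.7</version>
</plugin>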
If they compile but fail: leave them in. That will give you a good history of test improvements over time once CI is in place. If the tests do not compile and therefore break the build, comment them out and poke the developers to fix them.
This obviously does not preclude using option 3 (hitting them over the head); you should do that anyway, regardless of what you do with the tests.
You should definitely disable them in some way for now. Whether that's done by commenting, deleting (assuming you can get them back from source control) or some other means is up to you. You do not want these failing tests to be an obstacle for people trying to submit new changes.
If there are few enough that you feel you can fix them yourself, great -- do it. If there are too many of them, then I'd be inclined to use a "crowdsourcing" approach. File a bug for each failing test. Try to assign these bugs to the actual owners/authors of the tests/tested code if possible, but if that's too hard to determine then randomly selecting is fine as long as you tell people to reassign the bugs that were mis-assigned to them. Then encourage people to fix these bugs either by giving them a deadline or by periodically notifying everyone of the progress and encouraging them to fix all of the bugs.
A CI system that is steady red is pretty worthless. The main benefit is to maintain a quality bar, and that's made much more difficult if there's no transition to mark a quality drop.
So the immediate effort should be to disable the failing tests, and create a tracking ticket/work item for each. Each of those is resolved however you do triage - if nobody cares about the test, get rid of it. If the failure represents a problem that needs to be addressed before ship, then leave the test disabled.
Once you are in this state, you can now rely on the CI system to tell you that urgent action is required - roll back the last change, or immediately put a team on fixing the problem, or whatever.
I don't know your position in the company, but if it's possible, leave them in and file the problems as errors in your ticket system. Leave it up to the developers to either fix them or remove the tests.
If that doesn't work, remove them (you have version control, right?) and close the ticket with a comment like 'removed failing JUnit tests which apparently won't be fixed', or something a bit more polite.
The point is, JUnit tests are application code and as such should work. That's what developers get paid for. If a test isn't appropriate anymore (because something that no longer exists was being tested), developers should signal that and remove the test.