I recently migrated a unit-testing suite to JUnit 5.8.2 and Mockito 4.5.1 plus mockito-inline to allow static mocking. PowerMock was removed.
2000+ tests were migrated, and they all pass when run inside the IDE (IntelliJ), with both the IntelliJ and Gradle runners.
However, when Jenkins runs them, over 900 tests fail. Some of the exceptions thrown:
org.mockito.exceptions.misusing.MissingMethodInvocationException:
when() requires an argument which has to be 'a method call on a mock'.
For example:
when(mock.getArticles()).thenReturn(articles);
org.mockito.exceptions.misusing.WrongTypeOfReturnValue:
Boolean cannot be returned by someMethod()
someMethod() should return Date
I understand what causes these errors, as I've seen them multiple times during the migration, so this is not a duplicate asking for the solution (unless there's something different about the Jenkins environment). The code that throws these exceptions should not be throwing them, and it does not in the IDE; they are thrown exclusively on Jenkins.
An additional exception which I have never seen before is thrown as well.
org.mockito.exceptions.misusing.UnfinishedMockingSessionException:
Unfinished mocking session detected.
Previous MockitoSession was not concluded with 'finishMocking()'.
For examples of correct usage see javadoc for MockitoSession class.
Most of the exceptions are of this type.
However, the MockitoSession interface is not used anywhere in the test suite. All mocks are initialized with @ExtendWith(MockitoExtension.class).
I have no idea what could be causing this.
Jenkins is running the same versions of Java/Junit/Mockito/Spring as the code in the IDE.
It seems clear to me that the difference between the environments is causing the issue. But what could that difference be, and how would I go about finding it?
I attempted to reproduce the results locally but was unable to. Any ideas towards that are also welcome.
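Since ideas for reproducing this locally are welcome: one difference that often separates CI runs from IDE runs is test-JVM forking and parallelism. As a sketch (assuming the Gradle build mentioned above; the values are purely illustrative), making these options explicit and varying them can sometimes surface the same failures locally:

```groovy
// build.gradle: illustrative values, not a recommendation
test {
    // experiment with these to match (or stress) the CI agent:
    maxParallelForks = Runtime.getRuntime().availableProcessors() // parallel worker JVMs
    forkEvery = 50   // restart the worker JVM every 50 test classes
}
```

If the failures appear once several test classes share one worker JVM, that points at state (for example, a static mock) leaking between classes.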
Related
I am making a piece of software that runs a load test using the JUnit launcher. The problem is that I keep encountering the following warning, emitted from the JUnit code base: WARNING: Error scanning files for URI jar: ... with exception java.nio.file.FileSystemAlreadyExistsException at jdk.zipfs/jdk.nio.zipfs.ZipFileSystemProvider.newFileSystem(ZipFileSystemProvider.java:102) at org.junit.platform.commons.util.CloseablePath.createForJarFileSystem(CloseablePath.java:57).
I am launching the same test across multiple threads. I believe the issue is caused by the .execute() method performing test discovery each time, causing a race condition inside the JUnit code.
I was wondering: is there currently a way to perform test discovery once in JUnit and then use the same TestPlan to execute the tests multiple times? If not, is there a fix for this problem that allows me to invoke tests packaged in .jar format (unzipping the test .jar is not feasible for my use case)?
It seems to me that JUnit no longer supports executing the same test plan repeatedly (https://github.com/junit-team/junit5/issues/2379), forcing inefficient test discovery to happen each time and leading to the FileSystemAlreadyExistsException. The following class in JUnit may not be thread-safe: https://github.com/junit-team/junit5/blob/main/junit-platform-commons/src/main/java/org/junit/platform/commons/util/CloseablePath.java
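As a sketch of the workaround idea (this is not the JUnit API; the names here are hypothetical), "do the racy step at most once per resource and share the result" can be expressed with a ConcurrentHashMap, whose computeIfAbsent runs the mapping function at most once per key even under concurrent calls:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch (not JUnit code): perform the racy
// "open file system / discover tests" step at most once per URI
// and share the result, instead of repeating it on every thread.
public class DiscoverOnce {
    private static final Map<String, Object> CACHE = new ConcurrentHashMap<>();

    // computeIfAbsent guarantees the mapping function runs at most
    // once per key, even when several threads call it concurrently
    static Object discover(String uri) {
        return CACHE.computeIfAbsent(uri, u -> new Object() /* stands in for a TestPlan */);
    }

    public static void main(String[] args) {
        Object[] seen = new Object[2];
        Thread t1 = new Thread(() -> seen[0] = discover("jar:file:/tests.jar"));
        Thread t2 = new Thread(() -> seen[1] = discover("jar:file:/tests.jar"));
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        if (seen[0] == null || seen[0] != seen[1])
            throw new AssertionError("both threads should see one shared instance");
        System.out.println("discovery ran once; instance shared");
    }
}
```

The same shape applies whether the cached value is a TestPlan or an open FileSystem; the point is that only one thread ever executes the creation step for a given URI.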
Thanks!
I have tests written in Spock. Each time I run them, IntelliJ says the configuration is wrong, but if I press Apply etc. I am able to run the tests. What can I do to stop seeing these messages?
Example of a test for which I'm getting this error message:
The configuration which opens after an attempt of running the test:
Last error message:
Any ideas how to get rid of these?
For now I have created an issue.
IntelliJ IDEA's JUnit run configuration stores the method name and parameter signature in a single string.
While this works for Java (which doesn't allow parentheses in method names), it fails for other JVM languages.
This issue is not related to Spock; it's reproducible for an arbitrary JUnit test class written in Groovy.
Follow the YouTrack ticket for updates.
I have a maven project with test execution by the maven-surefire-plugin. An odd phenomenon I've observed and been dealing with is that running locally
mvn clean install
which executes my tests, results in a successful build with 0 Failures and 0 Errors.
Now when I deploy this application to our remote repo that Jenkins attempts to build, I get all sorts of random EasyMock errors, typically of the sort:
java.lang.IllegalStateException: 3 matchers expected, 4 recorded. at org.easymock.internal.ExpectedInvocation.createMissingMatchers
This is a legacy application we inherited, and we are aware that many of these tests are flawed, if not plainly using EasyMock incorrectly. Still, I'm in a state where test execution gives me a successful build locally but not in Jenkins.
I know that the order of execution of these tests is not guaranteed, but I am wondering how I can introspect what is different in the Jenkins build pipeline vs. local to help identify the issue?
Is there anything I can do to force the tests to execute the way they do locally? At this point I have simply excluded many troublesome test classes, but it seems that no matter how many times I see a Jenkins failure and either fix the problem or exclude the test class, it just complains about some other test class it didn't mention before.
Any ideas how to approach a situation like this?
I have experienced quite a similar situation, and the cause of mine was some concurrency problems in the test implementations.
And, after reading your comment:
What I actually did that fixed it (like magic am I right?) is for the maven-surefire plugin, I set the property reuseForks=false, and forkCount=1C, which is just 1*(number of CPU's of machine).
... I am even more convinced that you have concurrency problems in your tests. Concurrency is not easy to diagnose, especially when your experiment runs OK on one CPU. But race conditions may arise when you run it on another system (which is usually faster or slower).
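For reference, the setting quoted above maps to a surefire configuration roughly like this (a sketch showing only the two relevant options; the plugin version is omitted):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- start a fresh JVM for each forked run instead of reusing it -->
    <reuseForks>false</reuseForks>
    <!-- 1C = one fork per CPU core on the machine -->
    <forkCount>1C</forkCount>
  </configuration>
</plugin>
```

Fresh JVMs hide cross-test leakage through static state, which is consistent with the concurrency/isolation diagnosis.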
I strongly recommend that you review your tests one by one and ensure that each of them is logically isolated:
They should not rely upon an expected previous state (files, database, etc). Instead, they should prepare the proper setup before each execution.
If they concurrently modify a common resource that might interfere with another test's execution (files, database, singletons, etc.), every assertion must be made with as much synchronization as needed, taking into account that the initial state is unknown:
Wrong test:
MySingleton.getInstance().put(myObject);
assertEquals(1, MySingleton.getInstance().size());
Right test:
synchronized(MySingleton.getInstance())
{
MySingleton.getInstance().put(myObject);
assertTrue(MySingleton.getInstance().contains(myObject));
}
A good starting point for the review is to take one of the failing tests and trace its execution backwards to find the root cause of the failure.
Setting the tests' order explicitly is not good practice, and I wouldn't recommend it even if I knew it was possible, because it would only hide the actual cause of the problem. Consider that in a real production environment, execution order is usually not guaranteed.
JUnit test run order is non-deterministic.
Are the versions of Java and Maven the same on the 2 machines? If yes, make sure you're using the most recent maven-surefire-plugin version. Also, make sure to use a Freestyle Jenkins job with a Maven build step instead of the Maven project type. Using the proper Jenkins build type can either fix build problems outright or give you a better error so you can diagnose the actual issue.
You can turn on Maven debug logging to see the order the tests are run in. Each test should set up (and perhaps tear down) its own test data to make sure the tests can run independently. Perhaps seeing the test order will give you some clues as to which classes depend on others inappropriately. And if the app uses caching, ensure the cache is cleaned out between tests (or explicitly populated, depending on what the test needs to do). Also consider running the tests one package at a time to isolate the culprits; multiple surefire plugin executions might be useful.
Also check the app for classpath problems. This answer has some suggestions for cleaning the classpath.
And another possibility: Switching to a later version of JUnit might help - unless the app is using Spring 2.5.6.x. If the app is using Spring 2.5.6.x and cannot upgrade, the highest possible version of JUnit 4.x that may be used is 4.4. Later versions of JUnit are not compatible with Spring Test 2.5.6 and may lead to hard-to-diagnose test errors.
I am running a suite of integration tests using maven and about 10% of the tests would fail or throw an error. However, when I start the server and run the individual failed tests manually from my IDE(intellij idea), they all pass with no problem. What could be the cause of this issue?
This is almost always caused by the unit tests running in an inconsistent order, or by a race condition between two tests running in parallel via forked tests. If Test #1 finishes first, it passes. But if Test #2 finishes first, it leaves a test resource, such as a test database, in an altered state, causing Test #1 to fail. This is very common with database tests, especially when one or more of them alter the database. Even in IDEA, you may find that all the tests in the com.example.FooTest class always pass when you run that class, but if you run all the tests in the com.example package or all tests in the project, sometimes (or even always) a test in FooTest fails.
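The shared-state pattern described above can be reproduced with a stdlib-only sketch (the class and "tests" are invented for illustration): whether the first test passes depends entirely on execution order.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration: two "tests" sharing a static resource.
// testOne's result depends on whether it runs before or after testTwo.
public class OrderDependence {
    static final List<String> db = new ArrayList<>(); // shared "test database"

    static boolean testOne() {              // implicitly assumes an empty database
        db.add("row-from-one");
        return db.size() == 1;              // passes only if it ran first
    }

    static boolean testTwo() {              // leaves residue behind for later tests
        db.add("row-from-two");
        return db.contains("row-from-two"); // passes in any order
    }

    public static void main(String[] args) {
        db.clear();                          // order A: testOne first, both pass
        if (!testOne() || !testTwo())
            throw new AssertionError("order A should pass");

        db.clear();                          // order B: testTwo first, testOne fails
        if (!testTwo())
            throw new AssertionError("testTwo passes in any order");
        if (testOne())
            throw new AssertionError("testOne should now fail: state leaked from testTwo");
        System.out.println("order-dependent failure reproduced");
    }
}
```

The fix described below amounts to making testOne prepare (or roll back) its own state instead of assuming an empty database.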
The fix is to ensure your tests are always guaranteed a consistent state when run. (That is a guiding principle for good unit tests.) You need to pay attention to test setup and tear-down via the @Before, @BeforeClass, @After, and @AfterClass annotations (or TestNG equivalents). I recommend Googling database unit testing best practices. For database tests, running each test in a transaction can prevent these types of issues: the database is rolled back to its starting state whether the test passes or fails. Spring has some great support for JDBC database tests. (Even if your project is not a Spring project, the classes can be very useful.) Read section 11.2.2 Unit Testing Support Classes and take a look at the AbstractTransactionalJUnit4SpringContextTests / AbstractTransactionalTestNGSpringContextTests classes and the @TransactionConfiguration annotation (this latter one being for running Spring contexts). There are also other database testing tools out there, such as DbUnit.
I would like to know: is there any way to set up TestNG to treat unexpected exceptions as errors rather than failures?
I tried throwing a RuntimeException in my test, and it was considered a failure, not an error.
TestNG documentation talks only about success and failure states -
http://testng.org/doc/documentation-main.html#success-failure.
I would like TestNG to behave similarly to JUnit, as described in the first question at
http://www.querycat.com/question/d1c9a200f18e6829cb06dda8eda8ad61
Thanks for your help.
Edit: Ignore what I had before. Unfortunately, I'm fairly certain the answer to your question is no, TestNG does not have this feature. The entirety of the TestNG documentation is here, so if a feature isn't listed on that page it probably doesn't exist. Remember that although TestNG is inspired by JUnit, it should not be considered a super-set of JUnit's features. The only thing I can think of if you want your test suite to catastrophically fail on an exception is to make it call System.exit(1).
You are using the wrong exception.
JUnit distinguishes between success, failure (an AssertionError thrown by assertXXX), and error (any other unexpected exception).
If you know that a method will throw an exception, you can catch it with @Test(expected=MyExpectedException.class).
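The failure-versus-error classification can be sketched with a tiny stdlib-only runner (hypothetical code, not JUnit's actual implementation): an AssertionError is reported as a failure, while any other throwable is an error.

```java
// Hypothetical mini-runner illustrating how JUnit classifies outcomes:
// an AssertionError (what assertXXX throws) counts as a "failure";
// any other unexpected throwable counts as an "error".
public class OutcomeDemo {
    static String classify(Runnable test) {
        try {
            test.run();
            return "success";
        } catch (AssertionError e) {
            return "failure"; // an assertion did not hold
        } catch (Throwable t) {
            return "error";   // unexpected exception
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(() -> {}));                                    // success
        System.out.println(classify(() -> { throw new AssertionError("boom"); })); // failure
        System.out.println(classify(() -> { throw new RuntimeException("io"); })); // error
    }
}
```

This is why throwing RuntimeException in TestNG, which has no separate "error" bucket, still surfaces as a plain failure.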