I have a maven project with test execution by the maven-surefire-plugin. An odd phenomenon I've observed and been dealing with is that running locally
mvn clean install
which executes my tests, results in a successful build with 0 Failures and 0 Errors.
Now when I deploy this application to our remote repo that Jenkins attempts to build, I get all sorts of random EasyMock errors, typically of the sort:
java.lang.IllegalStateException: 3 matchers expected, 4 recorded.
    at org.easymock.internal.ExpectedInvocation.createMissingMatchers
This is a legacy application we have inherited, and we are aware that many of these tests are flawed, if not plainly using EasyMock incorrectly, but the fact remains that test execution gives me a successful build locally and a failing one in Jenkins.
I know that the order of execution of these tests is not guaranteed, but I am wondering how I can inspect what differs between the Jenkins build pipeline and my local build to help identify the issue?
Is there anything I can do to force the tests to execute the way they do locally? At this point I have simply excluded many of the troublesome test classes, but it seems that no matter how many times I see a Jenkins failure and either fix the problem or exclude the test class, the build just complains about some other test class it never mentioned before.
Any ideas how to approach a situation like this?
I have experienced quite a similar situation, and the cause in my case was clearly concurrency problems in the test implementations.
And, after reading your comment:
What I actually did that fixed it (like magic am I right?) is for the maven-surefire plugin, I set the property reuseForks=false, and forkCount=1C, which is just 1*(number of CPU's of machine).
... I am even more convinced that you have concurrency problems in your tests. Concurrency is not easy to diagnose, especially when everything runs OK on one machine, but race conditions can arise when you run the same tests on another system (which is usually faster or slower).
I strongly recommend that you review your tests one by one and ensure that each of them is logically isolated:
They should not rely upon an expected previous state (files, database, etc). Instead, they should prepare the proper setup before each execution.
If they concurrently modify a common resource that might interfere with other tests' execution (files, database, singletons, etc.), every assert must be made with as much synchronization as needed, taking into account that the resource's initial state is unknown:
Wrong test (assumes the singleton started out empty and that no other test touches it):
MySingleton.getInstance().put(myObject);
assertEquals(1, MySingleton.getInstance().size());
Right test (locks the shared resource and asserts something that holds regardless of its initial state):
synchronized (MySingleton.getInstance())
{
    MySingleton.getInstance().put(myObject);
    assertTrue(MySingleton.getInstance().contains(myObject));
}
A good starting point for the review is to pick one of the failing tests and trace its execution backwards to find the root cause of the failure.
Explicitly setting the tests' execution order is not good practice, and I wouldn't recommend it even if I knew it was possible, because it would only hide the actual cause of the problem. Bear in mind that, in a real production environment, execution order is usually not guaranteed either.
JUnit test run order is non-deterministic.
Are the versions of Java and Maven the same on the two machines? If so, make sure you're using the most recent maven-surefire-plugin version. Also, make sure to use a Freestyle Jenkins job with a Maven build step instead of the Maven project type. Using the right Jenkins job type can either fix build problems outright or give you a better error message so you can diagnose the actual issue.
You can turn on Maven debug logging to see the order the tests are being run in. Each test should set up (and perhaps tear down) its own test data to make sure the tests can run independently. Perhaps seeing the test order will give you some clues as to which classes depend on others inappropriately. If the app uses caching, also ensure the cache is cleaned out between tests (or explicitly populated, depending on what the test needs to do). And consider running the tests one package at a time to isolate the culprits; multiple surefire plugin executions might be useful.
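For example (the test class name here is just a placeholder; -X is Maven's debug switch and -Dtest is a standard surefire property):
mvn -X test                        (prints debug output, including the order the tests are run in)
mvn test -Dtest=SomeSuspectTest    (runs a single test class in isolation)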
Also check the app for classpath problems. This answer has some suggestions for cleaning the classpath.
And another possibility: Switching to a later version of JUnit might help - unless the app is using Spring 2.5.6.x. If the app is using Spring 2.5.6.x and cannot upgrade, the highest possible version of JUnit 4.x that may be used is 4.4. Later versions of JUnit are not compatible with Spring Test 2.5.6 and may lead to hard-to-diagnose test errors.
Related
We have a multi-module Maven project that takes about 2 hours to build and we would like to speed that up by making use of concurrency.
We are aware of the -T option which (as explained e.g. here) allows using multiple threads within the same JVM for the build.
Sadly, there is a lot of legacy code (which uses a lot of global state) in the project, which makes executing multiple tests in parallel in a single JVM very hard. Removing all of these blockers from the project would be a lot of work, which we would like to avoid.
The surefire and failsafe plugins have multiple options for parallel execution; however, as I understand it, these would only parallelize the test execution itself. Also, spawning a separate JVM for each test (class) seems like overkill to me and would probably cause the build to take even longer than it does now.
Ideally, we would like to do the parallelization on the Maven reactor level and have it build each module in its own (single threaded) JVM with up to x JVMs running in parallel.
So my question is: is there a way to make maven create a separate JVM for each module build?
Alternatively, can we parallelize the build while making sure that tests (over all modules) are executed sequentially?
I am not completely sure this works, but I guess that if you use Maven Toolchains, each module will start its own forked JVM for the tests instead of reusing an already running one.
I guess it is worth a try.
I unfortunately may have two questions in one, or rather the solution may go two different ways. I have log4j loggers set up in a few classes that I unit test. When I run mvn clean install, it obviously runs those tests and in turn creates a log file (usually empty, as nothing exciting is being logged). This isn't necessarily a problem, except that Jenkins doesn't seem to like it when I do Perform Maven Release: it complains about the workspace having local changes and cites the log file before declaring failure.
I know it's the unit tests, because if I change them to integration tests or ignore them, everything works fine. But I'd like a solution, not a workaround.
Are there configurations in Jenkins that can allow me to remedy this?
Or is there a strategy for mocking or ignoring logging for Unit tests?
I don't necessarily want to ignore them, but it is interfering with creating a release.
I am not quite aware of what Perform Maven Release does, but I can suggest a couple of solutions:
Remove the log file from your source code repository (as the log file gets regenerated on every run, I don't think it should reside in your source code repository).
Add the path of the offending log file to the list of files that are ignored by version control (e.g. git uses a file called .gitignore - https://git-scm.com/docs/gitignore).
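For instance, assuming the stray file ends up in the project with a .log extension (the actual name depends on your log4j appender configuration), the .gitignore entry could be as simple as:
# ignore log files produced by unit tests
*.log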
Hope this helps!
Amit has some good ideas, and I'll suggest a few more:
Use slf4j for logging in your project, and don't bind a logging implementation during the test phase
Going one better, bind slf4j-test during the test phase, which logs to memory. Then you can also write tests against your logging to ensure it happens when you expect it to.
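A minimal sketch of that second option, assuming the uk.org.lidalia slf4j-test artifact is the only SLF4J binding on the test classpath (the OrderService class and its log message are made up for illustration; double-check the exact API of the slf4j-test fork you actually use):

// Sketch only: assumes uk.org.lidalia:slf4j-test as the sole SLF4J test binding.
import static java.util.Arrays.asList;
import static org.junit.Assert.assertEquals;
import static uk.org.lidalia.slf4jtest.LoggingEvent.info;

import org.junit.After;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import uk.org.lidalia.slf4jtest.TestLogger;
import uk.org.lidalia.slf4jtest.TestLoggerFactory;

public class OrderServiceLoggingTest {

    // Hypothetical production class that logs through the plain SLF4J API.
    static class OrderService {
        private static final Logger log = LoggerFactory.getLogger(OrderService.class);

        void place(String id) {
            log.info("order placed: {}", id);
        }
    }

    private final TestLogger logger = TestLoggerFactory.getTestLogger(OrderService.class);

    @Test
    public void logsWhenAnOrderIsPlaced() {
        new OrderService().place("42");
        // Events are captured in memory only, so no log file ever touches the workspace.
        assertEquals(asList(info("order placed: {}", "42")), logger.getLoggingEvents());
    }

    @After
    public void clearCapturedEvents() {
        TestLoggerFactory.clear();
    }
}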
I have a set of legacy unit tests, most of which are Spring AbstractTransactionalJUnit4SpringContextTests tests, but some manage transactions on their own. Unfortunately, this seems to have introduced side-effects causing completely unrelated tests to fail when modifying the test data set, i.e., the failing test works when running it on its own (with the same initial data set), but fails when being run as part of the complete set of tests.
The tests are typically run through Maven's surefire plugin during the regular Maven build.
What I am looking for is an automated way to permute the number and order of the executed tests to figure out the culprit. A naive but pretty expensive approach would take the power set of all tests and run all possible combinations. A more optimized approach would use the existing test execution order (which is mostly random, but stable) and test all potential ordered sub-sets. I am aware that the runtime of this process may be lengthy.
Are there any tools / Maven plugins that can do this out of the box?
I don't know of a tool which does specifically what you want, but you could play about with the runOrder parameter in maven surefire. From that page:
Defines the order the tests will be run in. Supported values are "alphabetical", "reversealphabetical", "random", "hourly" (alphabetical on even hours, reverse alphabetical on odd hours), "failedfirst", "balanced" and "filesystem". Odd/even for hourly is determined at the time of scanning the classpath, meaning it could change during a multi-module build.
So you could use a simple alphabetical runOrder and take the first failure, and start from there. At least you would have a predictable run order. Then, one by one (using -Dincludes), you run each test that comes before the failing one together with the failing one, to detect which one is making it fail.
Then repeat the entire process for all of the failing tests. You could run this in a loop overnight or something.
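A rough sketch of the commands such a loop would be built from (the test class names are placeholders; surefire.runOrder and test are the surefire user properties for recent 2.x versions):
mvn test -Dsurefire.runOrder=alphabetical          (predictable full run; note the first failure)
mvn test -Dtest=SuspectedCulpritTest,FailingTest   (run one candidate pair in isolation)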
Can you simply amend the tests to use a clean database copy each time? DBUnit is an excellent tool for doing this.
http://www.dbunit.org/
I've recently been asked to, effectively, sell my department on unit testing. I can't tell you how excited this makes me, but I do have one concern. We're using JUnit with Spring and Maven, and this means that each time mvn test is called, it rebuilds the database. Obviously, we can't integrate that with our production server -- it would kill valuable data.
How do I prevent the rebuilding without telling maven to skip testing?
The best I could figure was to assign the script to operate in a test database (line breaks added for readability):
mvn test
-Ddbunit.schema=<database>test
-Djdbc.url=jdbc:mysql://localhost/<database>test?
createDatabaseIfNotExist=true&
useUnicode=true&characterEncoding=utf-8
I can't help but think there must be a better way.
I'm especially interested in learning if there is an easy way to tell Maven to only run tests on particular classes without building anything else? mvn -Dtest=<test-name> test still rebuilds the database.
======= update =======
Bit of egg on my face here. I didn't realize that I was using the same variable in two places, meaning that the POM was using a "skip.test" variable for both rebuilding the database and for running the tests...
Update: I guess that DBUnit does the rebuilding of the DB because it is told to do so in the test setup method. If you change your setup method, you can eliminate the DB rebuild. Of course, you should do it so that you get the DB reset when you need it, and omit it when you don't. My first bet would be to use a system property to control this. You can set the property on the command line the same way you already do with jdbc.url et al. Then in the setup method you add an if to test for that property and do the DB reset if it is set.
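A minimal sketch of that idea (the property name skip.db.reset and the resetDatabase() hook are made up for illustration; wire the hook to whatever your DBUnit setup currently does):

import org.junit.Before;

public abstract class AbstractDbTest {

    @Before
    public void setUpDatabase() throws Exception {
        // Pass -Dskip.db.reset=true on the mvn command line to leave the schema untouched.
        if (!Boolean.getBoolean("skip.db.reset")) {
            resetDatabase();   // hypothetical hook, e.g. a DBUnit CLEAN_INSERT against the test schema
        }
    }

    // Implemented by concrete test classes with their existing DBUnit logic.
    protected abstract void resetDatabase() throws Exception;
}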
A test database, completely separated from your production DB, is definitely the best choice if you can have it. You can even use e.g. Derby, an embeddable DB that can run entirely in memory within the JVM. But if you absolutely can't have a separate DB, at least use a separate test schema inside that DB.
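As a tiny sketch of the in-memory option (derby.jar on the test classpath is assumed, and the database name is made up):

import java.sql.Connection;
import java.sql.DriverManager;

public class InMemoryDbSmokeTest {
    public static void main(String[] args) throws Exception {
        // "memory:" keeps the whole database in the JVM heap; it vanishes on shutdown.
        try (Connection connection =
                DriverManager.getConnection("jdbc:derby:memory:testdb;create=true")) {
            System.out.println("connected: " + !connection.isClosed());
        }
    }
}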
In this scenario I would recommend you put your DB connection parameters into profiles within your pom, the default being the test DB, and a separate profile to contain the production settings. This way it can never happen that you accidentally run your tests against the production DB.
In general, however, it is also important to understand that tests run against a DB are not really unit tests in the strict sense, rather integration tests. If you have an existing set of such tests, fine, use them as much as you can. However, you should try to move towards adding more real unit tests, which test only a small, isolated portion of your code at once (a method or class at most), ideally self contained (need no DB, net, config files etc.) so they can run fast - this is a very important point. If you have 5000 unit tests and each takes only 5 seconds to run, that totals up to almost 7 hours, so you obviously won't run them very often. If a test takes only 5 milliseconds, you get the results in less than half a minute, so you can afford to run all your tests before you commit your latest change - many times a day. That makes a huge difference in the speed of feedback you get from the tests.
Hope this helps.
We're using JUnit with Spring and Maven, and this means that each time mvn test is called, it rebuilds the database.
Maven doesn't do anything with databases by itself, your code does. In any case, it's very unusual to run tests (which are not unit tests) against a production database.
How do I prevent the rebuilding without telling maven to skip testing?
Hard to say without more details (you're not showing anything) but profiles might be a way to go.
Unit tests, by definition, only operate on a single component in the system. You should not be attempting to write unit tests which integrate with any external services (web, DB, etc.). The solution I use for this is a good mocking framework to stub out the behaviour of any dependencies your components have. This encourages good interface APIs, since most mocking frameworks work best with simple interfaces. It is best to create a Repository pattern interface for any interaction with your DB and then mock that interface whenever you test a class that uses it. You can then functionally test your Repository implementation separately. This also has the added benefit of keeping your unit tests fast enough to remain part of your CI, so that your feedback cycle is as fast as possible.
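A minimal sketch of that approach, using EasyMock since it is already in play in this thread (the UserRepository/UserService names and methods are invented for illustration):

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class UserServiceTest {

    // The only DB-facing seam: production code wires in a JDBC/ORM-backed
    // implementation, unit tests wire in a mock, so mvn test never touches a database.
    interface UserRepository {
        String findNameById(long id);
    }

    static class UserService {
        private final UserRepository repository;

        UserService(UserRepository repository) {
            this.repository = repository;
        }

        String greet(long id) {
            return "Hello, " + repository.findNameById(id);
        }
    }

    @Test
    public void greetsUserWithoutHittingTheDatabase() {
        UserRepository repository = createMock(UserRepository.class);
        expect(repository.findNameById(42L)).andReturn("Alice");
        replay(repository);

        assertEquals("Hello, Alice", new UserService(repository).greet(42L));
        verify(repository);
    }
}

In production code you would wire in a JDBC- or ORM-backed UserRepository implementation and cover it with separate integration tests.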
I want to run my unit tests automatically when I save my Eclipse project. The project is built automatically whenever I save a file, so I think this should be possible in some way.
How do I do it? Is the only option really to get an ant script and change the project build to use the ant script with targets build and compile?
Update: I will try 2 different approaches now:
Running an additional builder for my project that executes the ant target test (I have an ant script anyway)
ct-eclipse, recommended by Thorbjørn
For sure it is unwise to run all tests, because we can have, for example, 20,000 tests whereas our change could affect only, let's say, 50 of them, among which are tests for the class we have changed and tests for classes that collaborate with our class.
There is a useful plugin called Infinitest (http://improvingworks.com/products/infinitest/) which runs only some tests (those related to the class we've just changed) right after we save changes. It also integrates quite nicely with the editor (using annotations) and with the problem view, displaying failing tests like errors.
Right click on your project > Properties > Builders > New, and there add your Ant builder.
But, in my opinion, it is unwise to run the unit tests on each save.
See if Eclipse has a plugin for Infinitest.
I'd also consider TestNG as an alternative to JUnit. It has a lot of features that might be helpful in partitioning your unit test classes into shorter and longer running groups.
I believe you are looking for http://ct-eclipse.tigris.org/
I've experimented with the concept earlier, and my personal conclusion was that in order for this to be useful you need a lot of tests which take time. Personally I save very frequently so this would happen frequently, and I didn't find it to be an advantage. It might be different for you.
Instead we bit the bullet and set up a "build server" which watches our CVS repository and builds projects as they change. If the compilation fails or the tests fail we are notified quickly so we can remedy it.
It is as always a matter of taste what works for you. This is what I've found.
I would recommend Infinitest for the described situation. Infinitest is nowadays a GPL v3 licensed product. Eclipse update site: http://infinitest.github.com
Then you should use Infinitest. Infinitest helps you do continuous testing.
Whenever you make a change, Infinitest runs tests for you.
It selects tests intelligently, and only runs the ones you need. It reports unit test failures like compiler errors, and provides additional information that helps you write better tests.