I need to set up a database in my tests (schema and some test data). This takes quite a bit of time, so I'd prefer to do it once for all tests being run, and to reset the database so that any changes are rolled back between tests.
I'm not sure though which JUnit facilities should be used for this.
It seems like I can put @BeforeClass/@AfterClass on a test suite, but then I can't run individual tests anymore.
Is there some way to add a setup/teardown for all tests that will run even when only executing a subset of the tests and not a specific suite? (For example, NUnit has SetUpFixture.)
I guess the transactions/truncation of the DB can be done using JUnit Rules...
You can use an in-memory database like HSQL or H2 to speed up the tests.
To roll back, you can run each test inside a transaction and roll it back afterwards.
Is there some way to add a setup/teardown for all tests that will run even when only executing a subset of the tests and not a specific suite?
For this, you can create a superclass that the other test classes extend, and put the setup/teardown logic in that superclass.
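A minimal sketch of that superclass, combining the H2 and transaction suggestions (JUnit 4; the schema helper is a placeholder you would fill with your own DDL and test data):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.junit.After;
    import org.junit.Before;

    public abstract class AbstractDatabaseTest {

        // Shared across all test classes; the schema is created once per JVM.
        private static boolean schemaCreated = false;

        protected Connection connection;

        @Before
        public void openTransaction() throws Exception {
            // H2 in-memory DB; DB_CLOSE_DELAY=-1 keeps it alive between connections.
            connection = DriverManager.getConnection("jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1");
            if (!schemaCreated) {
                createSchemaAndTestData(connection); // committed once, before any test
                schemaCreated = true;
            }
            connection.setAutoCommit(false); // every test runs inside a transaction
        }

        @After
        public void rollbackTransaction() throws Exception {
            connection.rollback(); // undo whatever the test changed
            connection.close();
        }

        protected void createSchemaAndTestData(Connection c) throws Exception {
            // placeholder: run your schema and test-data scripts here
        }
    }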
Preface
I'm deliberately talking about system tests. We do have a rather exhaustive suite of unit tests, some of which use mocking, and those aren't going anywhere. The system tests are supposed to complement the unit tests, so mocking is not an option.
The Problem
I have a rather complex system that only communicates via REST and websocket events.
My team has a rather large collection of (historically grown) system tests based on JUnit.
I'm currently migrating this codebase to JUnit 5.
The tests usually consist of a @BeforeAll in which the system is started in a configuration specific to the test class, which takes around a minute. Then there are a number of independent tests against this system.
The problem we routinely run into is that booting the system takes a considerable amount of time and may even fail. One could say that the booting itself can be considered a test case. JUnit handles lifecycle methods kind of weirdly: the time they take isn't shown in the report; if they fail, it messes with the count of tests; the failure isn't descriptive; etc.
I'm currently looking for a workaround, but what my team has done over the last few years is somewhat orthogonal to the core idea of JUnit (because it's a unit testing framework).
Those problems would go away if I replaced the @BeforeAll with a test method (let's call it @Test public void boot() {...}) and introduced an order dependency (which is pretty easy in JUnit 5) that enforces boot running before any other test.
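For reference, that order dependency is straightforward to express in JUnit 5 (class and method names invented); note that, as described next, it does not survive test filtering:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.MethodOrderer;
    import org.junit.jupiter.api.Order;
    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.TestMethodOrder;

    @TestMethodOrder(MethodOrderer.OrderAnnotation.class)
    class SystemBootTest {

        @Test
        @Order(1)
        void boot() {
            // start the system here; a failure now shows up as a named,
            // timed test failure instead of an anonymous @BeforeAll error
        }

        @Test
        @Order(2)
        void behavesCorrectlyAfterBoot() {
            assertTrue(true); // placeholder for a real system test
        }
    }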
So far so good! This looks and works great. The actual problem starts when the tests aren't executed by the CI server but by developers trying to troubleshoot: when I start a single test, boot is filtered out of the test execution and the test fails.
Is there any solution to this in JUnit 5? Or is there a completely different approach I should take?
I suspect there may be a solution using @TestTemplate, but I'm really not sure how to proceed. Also, as far as I know, that would only let me generate new named tests, which would be filtered as well. Do I have to write a custom test engine? That doesn't seem compelling.
This is a more general testing problem rather than one specific to JUnit 5. To skip the very long boot-up you can mock some components, if that is possible. Having the booting system as a test does not make sense, because other tests depend on it; it is better to use @BeforeAll in this case, as before. For testing the boot-up itself, you can create a separate test class that runs completely independently of the other tests.
Another option is to group these tests, separate them from the plain unit tests, and run them only when needed (for example, before deployment on the CI server). This really depends on your specific use case and on whether those tests should be part of the regular build on your local machine.
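A sketch of the grouping idea with JUnit 5 tags (the tag name is invented; recent Maven Surefire versions can filter on tags via the groups/excludedGroups properties):

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    @Tag("system") // excluded from the regular build, run explicitly on the CI server
    class SlowBootSystemTest {

        @Test
        void bootsWithDefaultConfiguration() {
            // long-running system test here
        }
    }

The CI job can then select these tests with, for example, mvn test -Dgroups=system, while the local build runs with -DexcludedGroups=system.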
The third option is to try to reduce the boot time, if possible. This is the way to go if you can't use mocks/stubs or exclude those tests from the regular build.
I am running a suite of integration tests using Maven, and about 10% of the tests fail or throw an error. However, when I start the server and run the individual failed tests manually from my IDE (IntelliJ IDEA), they all pass with no problem. What could be the cause of this issue?
This is almost always caused by the tests running in an inconsistent order, or by a race condition between two tests running in parallel in forked test JVMs. If Test #1 finishes first, it passes. But if Test #2 finishes first, it leaves a test resource, such as a test database, in an alternate state, causing Test #1 to fail. It is very common with database tests, especially when one or more tests alter the database. Even in IDEA, you may find that all the tests in the com.example.FooTest class always pass when you run that class, but if you run all the tests in the com.example package or all tests in the project, sometimes (or even always) a test in FooTest fails.
The fix is to ensure your tests are always guaranteed a consistent state when run. (That is a guiding principle for good unit tests.) You need to pay attention to test setup and teardown via the @Before, @BeforeClass, @After, and @AfterClass annotations (or the TestNG equivalents). I recommend Googling database unit testing best practices. For database tests, running each test in a transaction can prevent this type of issue: the database is rolled back to its starting state whether the test passes or fails. Spring has some great support for JDBC database tests. (Even if your project is not a Spring project, the classes can be very useful.) Read section 11.2.2, Unit Testing Support Classes, and take a look at the AbstractTransactionalJUnit4SpringContextTests / AbstractTransactionalTestNGSpringContextTests classes and the @TransactionConfiguration annotation (the latter requiring a running Spring context). There are also other database testing tools out there, such as DbUnit.
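For illustration, a minimal sketch of such a Spring transactional test (the context file and table name are assumptions):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;

    @ContextConfiguration("classpath:test-context.xml") // assumed Spring config
    public class CustomerDaoTest extends AbstractTransactionalJUnit4SpringContextTests {

        @Test
        public void insertIsRolledBackAfterTheTest() {
            int before = countRowsInTable("customer"); // helper inherited from the base class
            jdbcTemplate.update("insert into customer (name) values (?)", "Alice");
            assertEquals(before + 1, countRowsInTable("customer"));
            // the surrounding transaction is rolled back automatically,
            // so the inserted row never leaks into other tests
        }
    }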
I have a set of JUnit test files and also a test suite (a Suite class) that references all the individual JUnit test files.
All of them are database-oriented. The database used is MySQL, and I am using the Eclipse IDE to run the tests.
When running each file individually I get the correct value and the assertion is correct but when running from the test suite it shows a different value.
I have made each JUnit test file access the database independently, with a different database name for each (even though the table structure is the same).
Do we need to prevent the JUnit test cases from running in parallel, or do the database-related statements need to be verified?
I would suggest you look into:
any static data (a singleton, static methods, etc.); see the sketch after this list
the order of the tests (is it possible that some tests depend on data that another test creates, erases, or modifies?)
if you're using a framework that allows suite-wide setUp or tearDown methods, could they somehow break individual tests?
since you're using a database, is it possible that your code is transactional, but that when running the tests in a suite some transactions are not committed at the right time (for example, not until the whole suite is finished)?
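A contrived sketch of the first point: shared mutable static state makes a test's outcome depend on what ran before it (all names invented):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class StaticStateExample {

        // Shared static state: a classic cause of order-dependent tests.
        static class PriceCalculator {
            static int discount = 0;
            static int price(int base) { return base - discount; }
        }

        @Test
        public void discountedPrice() {
            PriceCalculator.discount = 20; // mutates global state...
            assertEquals(80, PriceCalculator.price(100));
        }

        @Test
        public void fullPrice() {
            // ...so this passes when run alone, but fails if discountedPrice()
            // ran first and nothing reset the discount in between.
            assertEquals(100, PriceCalculator.price(100));
        }
    }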
There are two reasons this may happen.
One is that you have not constructed the suite properly and some tests share resources with others.
The other is that when a test finishes, you don't roll back the database, so the next test finds the database in an erroneous state.
I have a JUnit 4 test suite with @BeforeClass and @AfterClass methods that perform setup/teardown for the test classes it contains.
What I need is to also run the test classes by themselves, but for that I need a setup/teardown scenario (@BeforeClass and @AfterClass, or something like that) for each test class. The thing is, when I run the suite I do not want to execute the setup/teardown before and after each test class; I only want to execute the setup/teardown from the test suite (once).
Is this possible? Thanks in advance.
I don't know of any standard way to do this with JUnit. The reason for it, as you probably already know, is that your test cases should run independently of each other. This concerns the "normal" setup/teardown methods which run before and after each test method. Class setup and teardown is a bit different though - although I would still prefer running my tests independently and staying out of the trouble zone.
However, if you really are convinced of what you are doing, you could use a global flag to signal whether or not the class setup/teardown is to run, and to check for its state in the class setup/teardown methods. In your test suite, you could include a special class as the very first one, which does nothing more than execute the setup and set the global flag to indicate to the real test cases that their class setup/teardown methods must not be run. Similarly, a special last class in the suite can execute the teardown code. The caveat is that I am afraid JUnit does not guarantee the order of execution of test classes inside a suite, although most probably it does execute them in the specified order - but this is just an implementation detail. Try it out, it may work for you - but there is no guarantee it will always do what you expect.
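A rough sketch of that flag idea (JUnit 4, all names invented):

    import org.junit.AfterClass;
    import org.junit.BeforeClass;

    // Base class for the real test classes.
    public abstract class SuiteAwareTest {

        // Raised by the suite's special first class to suppress per-class setup.
        static boolean suiteManagesSetup = false;

        @BeforeClass
        public static void maybeSetUpClass() {
            if (!suiteManagesSetup) {
                expensiveSetup(); // only runs when the class executes outside the suite
            }
        }

        @AfterClass
        public static void maybeTearDownClass() {
            if (!suiteManagesSetup) {
                expensiveTeardown();
            }
        }

        static void expensiveSetup() { /* ... */ }
        static void expensiveTeardown() { /* ... */ }
    }

The suite would then list a special first class whose @BeforeClass calls expensiveSetup() and sets suiteManagesSetup = true, and a special last class whose @AfterClass runs the teardown; as noted above, this relies on the suite executing classes in the declared order.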
If you have JUnit 4.7+, I recommend looking into the new feature called Rules (which are explained in this blog post). They might not be exactly what you want, but they are probably the best you get with JUnit.
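For example, a once-per-class resource could look like this with ExternalResource (note that the class-level @ClassRule variant arrived slightly later, in JUnit 4.9):

    import org.junit.ClassRule;
    import org.junit.Test;
    import org.junit.rules.ExternalResource;

    public class RuleBasedDbTest {

        @ClassRule // use @Rule instead to run before/after every test method
        public static final ExternalResource DB = new ExternalResource() {
            @Override
            protected void before() throws Throwable {
                // expensive setup, e.g. create the schema
            }

            @Override
            protected void after() {
                // teardown
            }
        };

        @Test
        public void usesTheSharedResource() {
            // ...
        }
    }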
Supposedly TestNG has better test grouping possibilities, but I haven't really looked into it myself yet.
No, there's no standard way to do this in JUnit, though you could hack something up as Péter Török suggested.
Note however that you are more or less abusing JUnit in doing this. The whole point of unit tests is that they are independent of each other, because dependencies between tests create a total maintenance nightmare (tests failing because they run in the wrong order).
So I'd advise you to strongly consider if it's not better to just always run the setup...
I've recently been asked to, effectively, sell my department on unit testing. I can't tell you how excited this makes me, but I do have one concern. We're using JUnit with Spring and Maven, and this means that each time mvn test is called, it rebuilds the database. Obviously, we can't integrate that with our production server -- it would kill valuable data.
How do I prevent the rebuilding without telling maven to skip testing?
The best I could figure was to assign the script to operate in a test database (line breaks added for readability):
mvn test
-Ddbunit.schema=<database>test
-Djdbc.url=jdbc:mysql://localhost/<database>test?
createDatabaseIfNotExist=true&
useUnicode=true&characterEncoding=utf-8
I can't help but think there must be a better way.
I'm especially interested in learning if there is an easy way to tell Maven to only run tests on particular classes without building anything else? mvn -Dtest=<test-name> test still rebuilds the database.
======= update =======
Bit of egg on my face here. I didn't realize that I was using the same variable in two places, meaning that the POM was using a "skip.test" variable both for rebuilding the database and for running the tests...
Update: I guess DBUnit rebuilds the DB because it is told to do so in the test setup method. If you change your setup method, you can eliminate the DB rebuild. Of course, you should do it so that the DB is reset when you need it and skipped when you don't. My first bet would be to use a system property to control this. You can set the property on the command line the same way you already do with jdbc.url et al. Then in the setup method you add an if that tests for the property and resets the DB only when it is set.
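The property check could look roughly like this (the property name db.rebuild is invented):

    import org.junit.Before;

    public class SomeDbTest {

        @Before
        public void setUp() {
            // Rebuild only when -Ddb.rebuild=true is passed on the command line,
            // e.g. mvn test -Ddb.rebuild=true
            if (Boolean.getBoolean("db.rebuild")) {
                rebuildDatabase();
            }
            // ... rest of the existing setup
        }

        private void rebuildDatabase() {
            // the existing DBUnit reset logic goes here
        }
    }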
A test database, completely separated from your production DB, is definitely the best choice if you can have it. You can even use e.g. Derby, an in-memory DB which can run embedded within the JVM. But if you absolutely can't have a separate DB, at least use a separate test schema inside that DB.
In this scenario I would recommend you put your DB connection parameters into profiles within your POM, the default being the test DB and a separate profile containing the production settings. This way it can never happen that you accidentally run your tests against the production DB.
In general, however, it is also important to understand that tests run against a DB are not really unit tests in the strict sense, but integration tests. If you have an existing set of such tests, fine, use them as much as you can. However, you should try to move towards adding more real unit tests, which test only a small, isolated portion of your code at once (a method or a class at most) and are ideally self-contained (needing no DB, network, config files, etc.) so they can run fast. This is a very important point: if you have 5000 unit tests and each takes 5 seconds to run, that totals almost 7 hours, so you obviously won't run them very often. If a test takes only 5 milliseconds, you get the results in less than half a minute, so you can afford to run all your tests before you commit your latest change, many times a day. That makes a huge difference in the speed of feedback you get from the tests.
Hope this helps.
We're using JUnit with Spring and Maven, and this means that each time mvn test is called, it rebuilds the database.
Maven doesn't do anything with databases by itself; your code does. In any case, it's very unusual to run tests (which are not unit tests) against a production database.
How do I prevent the rebuilding without telling maven to skip testing?
Hard to say without more details (you're not showing anything) but profiles might be a way to go.
Unit tests, by definition, operate on a single component in the system. You should not be attempting to write unit tests that integrate with any external services (web, DB, etc.). The solution is to use a good mocking framework to stub out the behaviour of any dependencies your components have. This encourages good interface APIs, since most mocking frameworks work best with simple interfaces. It is best to create a Repository-pattern interface for any interactions with your DB and then mock out the implementation any time you test a class that interacts with it. You can then functionally test your Repository implementation separately. This also has the added benefit of keeping your unit tests fast enough to remain part of your CI, so that your feedback cycle is as fast as possible.
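A small sketch of that Repository-plus-mock approach using Mockito (all types invented):

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class GreetingServiceTest {

        // Hypothetical repository interface hiding all DB access
        interface UserRepository {
            String findNameById(long id);
        }

        // Class under test depends only on the interface, not on the DB
        static class GreetingService {
            private final UserRepository users;
            GreetingService(UserRepository users) { this.users = users; }
            String greet(long id) { return "Hello, " + users.findNameById(id); }
        }

        @Test
        public void greetsByName() {
            UserRepository repo = mock(UserRepository.class);
            when(repo.findNameById(42L)).thenReturn("Ada");

            assertEquals("Hello, Ada", new GreetingService(repo).greet(42L));
        }
    }

The real UserRepository implementation would then get its own integration tests against an actual database, run separately from the fast unit test suite.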