I've recently been asked to, effectively, sell my department on unit testing. I can't tell you how excited this makes me, but I do have one concern. We're using JUnit with Spring and Maven, and this means that each time mvn test is called, it rebuilds the database. Obviously, we can't integrate that with our production server -- it would kill valuable data.
How do I prevent the rebuilding without telling maven to skip testing?
The best I could figure out was to point the script at a test database (line breaks added for readability):
mvn test
-Ddbunit.schema=<database>test
-Djdbc.url=jdbc:mysql://localhost/<database>test?
createDatabaseIfNotExist=true&
useUnicode=true&characterEncoding=utf-8
I can't help but think there must be a better way.
I'm especially interested in learning whether there is an easy way to tell Maven to run tests only on particular classes without building anything else. mvn -Dtest=<test-name> test still rebuilds the database.
======= update =======
Bit of egg on my face here. I didn't realize that I was using the same variable in two places, meaning that the POM was using a "skip.test" variable for both rebuilding the database and for running the tests...
Update: I guess that DBUnit does the rebuilding of the DB because it is told to do so in the test setup method. If you change your setup method, you can eliminate the DB rebuild. Of course, you should do it so that you get the DB reset when you need it, and omit it when you don't. My first bet would be to use a system property to control this. You can set the property on the command line the same way you already do with jdbc.url et al. Then in the setup method you add an if to test for that property and do the DB reset if it is set.
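A minimal sketch of that idea, assuming JUnit 4; the property name db.reset is invented for illustration:

import org.junit.Before;

public class CustomerDaoTest {

    @Before
    public void setUp() throws Exception {
        // Rebuild the database only when -Ddb.reset=true is passed,
        // e.g. mvn test -Ddb.reset=true
        if (Boolean.getBoolean("db.reset")) {
            resetDatabase();
        }
    }

    private void resetDatabase() throws Exception {
        // the existing DbUnit clean-insert logic goes here
    }
}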
A test database, completely separate from your production DB, is definitely the best choice if you can have it. You can even use e.g. Derby, an in-memory DB which can run embedded within the JVM. But in case you absolutely can't have a separate DB, use at least a separate test schema inside that DB.
In this scenario I would recommend you put your DB connection parameters into profiles within your pom, the default being the test DB, and a separate profile to contain the production settings. This way it can never happen that you accidentally run your tests against the production DB.
In general, however, it is also important to understand that tests run against a DB are not really unit tests in the strict sense, but rather integration tests. If you have an existing set of such tests, fine, use them as much as you can. However, you should try to move towards adding more real unit tests, which test only a small, isolated portion of your code at once (a method or class at most), ideally self-contained (needing no DB, network, config files etc.) so they can run fast - this is a very important point.

If you have 5000 unit tests and each takes 5 seconds to run, that totals up to almost 7 hours, so you obviously won't run them very often. If a test takes only 5 milliseconds, you get the results in less than half a minute, so you can afford to run all your tests before you commit your latest change - many times a day. That makes a huge difference in the speed of feedback you get from the tests.
Hope this helps.
We're using JUnit with Spring and Maven, and this means that each time mvn test is called, it rebuilds the database.
Maven doesn't do anything with databases by itself; your code does. In any case, it's very unusual to run tests (which are not unit tests) against a production database.
How do I prevent the rebuilding without telling maven to skip testing?
Hard to say without more details (you're not showing any configuration), but Maven profiles might be a way to go.
Unit tests, by definition, only operate on a single component in the system. You should not be attempting to write unit tests which integrate with any external services (web, DB, etc.). The solution I have to this is to use a good mocking framework to stub out the behaviour of any dependencies your components have. This encourages good interface APIs since most mocking frameworks work best with simple interfaces. It would be best to create a Repository pattern interface for any interactions with your DB and then mock out the impl any time you are testing a class that interacts with it. You can then functionally test your Repository impl separately. This also has the added benefit of keeping your unit tests fast enough to remain part of your CI so that your feedback cycle is as fast as possible.
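As a minimal sketch of that approach using Mockito (UserRepository and UserService are hypothetical names):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class UserServiceTest {

    interface UserRepository {          // the Repository interface the service depends on
        String findNameById(long id);
    }

    static class UserService {          // class under test; takes the dependency via constructor
        private final UserRepository repo;
        UserService(UserRepository repo) { this.repo = repo; }
        String greet(long id) { return "Hello, " + repo.findNameById(id); }
    }

    @Test
    public void greetsUserWithoutTouchingTheDatabase() {
        UserRepository repo = mock(UserRepository.class);   // stubbed dependency, no DB involved
        when(repo.findNameById(42L)).thenReturn("Alice");

        assertEquals("Hello, Alice", new UserService(repo).greet(42L));
    }
}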
Preface
I'm deliberately talking about system tests. We do have a rather exhaustive suite of unit tests, some of which use mocking, and those aren't going anywhere. The system tests are supposed to complement the unit tests, and as such mocking is not an option.
The Problem
I have a rather complex system that only communicates via REST and websocket events.
My team has a rather large collection of (historically grown) system tests based on JUnit.
I'm currently migrating this codebase to JUnit5.
The tests usually consist of an @BeforeAll in which the system is started in a configuration specific to the test class, which takes around a minute. Then there are a number of independent tests on this system.
The problem we routinely run into is that booting the system takes a considerable amount of time and may even fail. One could say that the booting itself can be considered a test case. JUnit handles lifecycle methods kind of weirdly: the time they take isn't shown in the report; if they fail, it messes with the count of tests; it's not descriptive; etc.
I'm currently looking for a workaround, but what my team has done over the last few years is kind of orthogonal to the core idea of JUnit (because it's a unit testing framework).
Those problems would go away if I replaced the @BeforeAll with a test method (let's call it @Test public void boot(){...}) and introduced an order dependency (which is pretty easy using JUnit 5) that enforces boot to run before any other test.
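Roughly what that looks like (a minimal sketch assuming JUnit 5.4+; class and method names are illustrative):

import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInstance;
import org.junit.jupiter.api.TestMethodOrder;

@TestInstance(TestInstance.Lifecycle.PER_CLASS)   // share the booted system across tests
@TestMethodOrder(OrderAnnotation.class)
class SystemBootTest {

    @Test
    @Order(1)
    void boot() {
        // start the system here; a failure now shows up as a regular test failure
        // and the boot time is reported like any other test
    }

    @Test
    @Order(2)
    void someScenario() {
        // runs against the system booted above
    }
}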
So far so good! This looks and works great. The actual problem starts when the tests aren't executed by the CI server but by developers trying to troubleshoot. When I try to start a single test, boot is filtered out of the test execution and the test fails.
Is there any solution to this in JUnit5? Or is there a completely different approach I should take?
I suspect there may be a solution in using @TestTemplate, but I'm really not sure how to proceed. Also, as far as I know, that would only allow me to generate new named tests that would be filtered as well. Do I have to write a custom test engine? That doesn't seem compelling.
This is a more general testing problem, not one specific to JUnit 5. To skip the very long boot-up, you can mock some components if that is possible. Having the system boot as a test does not make sense, because other tests depend on it. Better to use @BeforeAll in this case, as it was before. For testing the boot-up itself, you can make a separate test class that runs completely independently of the other tests.
Another option is to group these tests, separate them from the plain unit tests, and run them only when needed (for example, before deployment on the CI server). This really depends on the specific use case and on whether those tests should be part of the regular build on your local machine.
The third option is to try to reduce the boot time if possible. This is the way to go if you can't use mocks/stubs or exclude those tests from the regular build.
We are trying to use Feign + Ribbon in one of our projects. In production code we do not have a problem, but we have a few problems in JUnit tests.
We are trying to simulate a number of situations (failing services, normal runs, exceptions etc.), hence we need to configure Ribbon in our integration tests many times. Unfortunately, we noticed that even when we destroy the Spring context, part of the state still survives, probably somewhere in static variables (for example: new tests still connect to the balancer from the previous suite).
Is there any recommended way to purge the static state of both these tools (something like Hystrix.reset())?
Thanks in advance!
We tried restarting the JVM after each suite - it works perfectly, but it's not very practical (we must set it up in both Gradle and IDEA, as the IDEA test runner does not honor this out of the box). We also tried renaming the service between tests - this works, let's say, 99% of the time (it sometimes fails for some reason...).
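For reference, the Hystrix reset mentioned above is just a static call that can go into a teardown method; as far as I know, Ribbon offers no documented equivalent, which is exactly the gap here:

import com.netflix.hystrix.Hystrix;
import org.junit.After;

public class BalancerIntegrationTest {

    @After
    public void tearDown() {
        // Hystrix offers an explicit reset of its static state between tests;
        // Ribbon has no comparable documented hook.
        Hystrix.reset();
    }
}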
You should submit a bug to Ribbon if it is the case that there is some static state somewhere. Figure out the minimal code that reproduces the issue; if you are not able to provide that, though, they won't do anything. In your own code base, you should search for any use of static that is not also final and refactor those as well, if any exist.
Furthermore, you may find it useful to make strong distinctions between the various types of tests. It doesn't sound like you are doing a unit test to me. Even though you are just simulating these services and simulating failures, this sort of test is really an integration test, because you are testing whether Ribbon is configured correctly with your own components. It would be a unit test if you tested only that your component configures Ribbon correctly. It's a subtle distinction, but it has large implications for your tests.
On another note, don't dismiss what you have now as necessarily a bad idea. It may be very useful to have some heavyweight integration tests checking the behaviour of Feign if this is a mission-critical function; IMO it's a great idea in that case. But it's a heavyweight integration test and should be treated as such. You might even want to use some container magic etc. to achieve this sort of test, with services that fail in your various failure scenarios. This would live in CI, and usually developers wouldn't run those with each commit unless they were working directly on a piece of functionality concerning the integration.
We are currently improving the test coverage of a set of database-backed applications (or 'services') we are running by introducing functional tests. For me, functional tests treat the system under test (SUT) as a black box and test it through its public interface (be it a Web interface, REST, or our potential adventure into the messaging realm using AMQP).
For that, the test cases either A) bootstrap an instance of the application or B) use an instance that is already running.
The A version allows for test cases to easily test the current version of the system through the test phase of a build tool or inside a CI job. That is what e.g. the Grails functional test phase is for. Or Maven could be set up to do this.
The B version requires the system to already run but the system could be inside (or at least closer to) a production environment. Grails can do this through the -baseUrl option when executing functional tests.
What now puzzles me is how to achieve a required state of the service prior to the execution of every test case?
If I e.g. want to test a REST interface that does basic CRUD, how do I create an entity in the database so that I can test the HTTP GET for it?
I see different possibilities:
Using the same API (e.g. HTTP POST) to create the entity. Downside: Changing the creation method breaks two test cases. Furthermore, there might not be a creation method for all APIs.
Adding an additional CRUD API for testing and only activating that in non-production environments. That API is then used for testing. Downside: adds additional code to the production system, API logic might not be trivial, e.g. creation of complex entity graphs (through aggregation/composition), and we need to make sure the API is not activated for production.
Basically the same approach is followed by the Grails Remote Control plugin. It allows you to "grab into your application" and invoke arbitrary code through serialisation. Downside: Feels "brittle". There might be similar mechanisms for different languages/frameworks (this question is not Grails specific).
Directly accessing the relational database and creating/deleting content, e.g. using DbUnit or just manually creating entities through JDBC (see the sketch below). Downside: you duplicate creation/deletion logic and/or ORM inside the test case. Refactoring the DB breaks the test case even though the SUT still works.
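For illustration, a rough sketch of that last option with plain JDBC and java.net.HttpURLConnection; the table, URL, and credentials are made up:

import java.net.HttpURLConnection;
import java.net.URL;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.junit.Assert;
import org.junit.Test;

public class CustomerGetFunctionalTest {

    @Test
    public void getReturnsSeededCustomer() throws Exception {
        // seed the entity directly in the database, bypassing the SUT's API
        try (Connection c = DriverManager.getConnection(
                     "jdbc:mysql://localhost/apptest", "test", "test");
             PreparedStatement ps = c.prepareStatement(
                     "insert into customer (id, name) values (?, ?)")) {
            ps.setLong(1, 1L);
            ps.setString(2, "Alice");
            ps.executeUpdate();
        }

        // exercise the system under test through its public interface
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:8080/customers/1").openConnection();
        Assert.assertEquals(200, conn.getResponseCode());
    }
}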
Besides these possibilities, Grails, when using the -inline option for functional tests, allows accessing Spring services (since the application instance runs inside the same JVM as the test case). The same applies to Spring Boot "integration tests". But I cannot run the tests against an already running application version (as described as option B above).
So how do you do that? Did I miss any option for that?
Also, how do you guarantee that each test case cleans up after itself properly so that the next test case sees the SUT in the same state?
As with unit testing, you want to have a "clean" database before you run a functional test. You will need some setup/teardown functionality to bring the database into a defined state.
The easiest/fastest solution to clean the database is to delete all content with an SQL script (a sketch of running such a script from code follows this answer). (For debugging it is also useful to run this in the test setup, to keep the state of the database after a test failure.) The script can be maintained manually (it just contains delete <table> statements). If your database changes often, you could try to generate the clean script instead (disable foreign keys to avoid ordering problems, then delete from all tables).
To generate test data you can use an SQL script too, but that will be hard to maintain; better to create it in code. The code can be placed in ordinary services. If you don't need real production data, the build-test-data plugin is a great help in simplifying test data creation. If you are on the code side, it also makes sense to re-use the production code to create test data, to avoid duplication.
To call the test data setup, simply use remote-control. I don't think it is more brittle than all the HTTP & Ajax stuff ;-). Since we now have all the creation code in a service, the only thing you need to call with remote control is the service that creates the data. It does not have to get more complicated than remote { ctx.testDataService.setupDataForXyz() }. If it is that simple, you can even drop remote-control and use a controller/action to run it.
Do not test too much detail with functional tests, to keep them from getting more complicated than they already are. :)
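For what it's worth, a minimal sketch of running such a clean script from test code, assuming MySQL (where foreign key checks can be disabled per session); the table names are invented:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DbCleaner {

    public static void cleanAll() throws Exception {
        try (Connection c = DriverManager.getConnection(
                     "jdbc:mysql://localhost/apptest", "test", "test");
             Statement s = c.createStatement()) {
            // disable FK checks so the delete order doesn't matter (MySQL-specific)
            s.execute("SET FOREIGN_KEY_CHECKS = 0");
            s.executeUpdate("DELETE FROM order_item");
            s.executeUpdate("DELETE FROM orders");
            s.executeUpdate("DELETE FROM customer");
            s.execute("SET FOREIGN_KEY_CHECKS = 1");
        }
    }
}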
I am using Maven and TestNG.
How do I distinguish at runtime whether a particular method is being called by a TestNG/JUnit test case or by the main Java code?
Several comments are alluding to this; however, it's generally extremely poor practice to build in statements that work one way under test and another way when the app is running standalone. This increases the probability that the app will pass tests but fail in production.
Instead, you should look at why you want to make this distinction. In general, it will be for the sake of some dependent object, or due to input of one variety or another. In these cases, it's better to engineer the class to accept dependent objects injected via configuration; under test, the only thing that changes is the configuration. The class under test should not distinguish the dependent classes from one another. Instead, just work with their interfaces, so you can create mock classes for testing.
When accepting input, redirect the input source to take scripted input.
For databases, redirect to an in-memory DB which is configured for the test, etc.
You will find this approach will VASTLY improve the quality of the code you write, and decrease the probability of bugs sneaking past your unit tests.
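A small sketch of that shape, assuming H2 as the in-memory database; ReportDao is a made-up name, and only the wiring differs between production and test:

import javax.sql.DataSource;

import org.h2.jdbcx.JdbcDataSource;

public class ReportDao {

    private final DataSource dataSource;      // injected, never looked up internally

    public ReportDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // ... queries go through dataSource.getConnection() ...
}

// In the test, only the wiring changes -- the class itself is unaware of it:
class ReportDaoTestWiring {
    static ReportDao inMemoryDao() {
        JdbcDataSource ds = new JdbcDataSource();
        ds.setURL("jdbc:h2:mem:testdb");      // in-memory database, gone when the JVM exits
        return new ReportDao(ds);
    }
}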
At runtime, your code is never running unit tests. Unless you invoke them explicitly from your code, which you should never do.
Unit tests are only run manually, or during the test phase of the maven lifecycle.
Here's the scenario. I have VO (Value Object) or DTO objects that are just containers for data. When I take those and split them apart for saving into a DB that (for lots of reasons) doesn't map to the VOs elegantly, I want to test whether each field is successfully created in the database and successfully read back in to rebuild the VO.
Is there a way I can test that my tests cover every field in the VO? I had an idea about using reflection to iterate through the fields of the VOs as part of the solution, but maybe you guys have solved the problem before?
I want this test to fail when I add fields in the VO, and don't remember to add checks for it in my tests.
dev environment:
Using JUnit, Hibernate/Spring, and Eclipse
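For concreteness, the reflection idea mentioned above might look roughly like this; CustomerVo and the covered-fields set are purely illustrative:

import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.junit.Assert;
import org.junit.Test;

public class VoFieldCoverageTest {

    static class CustomerVo { long id; String name; String email; }

    // hypothetical: the fields the persistence round-trip tests actually check
    private final Set<String> coveredFields =
            new HashSet<>(Arrays.asList("id", "name", "email"));

    @Test
    public void everyVoFieldHasAPersistenceCheck() {
        // fails as soon as a field is added to the VO without a matching check
        for (Field f : CustomerVo.class.getDeclaredFields()) {
            Assert.assertTrue("No persistence check for new field: " + f.getName(),
                    coveredFields.contains(f.getName()));
        }
    }
}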
Keep it simple: write one test per VO/DTO (a sketch follows this list):
fill the VO/DTO with test data
save it
(optional: check everything has been correctly saved at the database level, using pure JDBC)
load it
check that the loaded VO/DTO and the original one match
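A sketch of such a test; CustomerDao is a hypothetical interface, and the in-memory stand-in below only exists so the sketch is self-contained -- the real test would use the Hibernate-backed DAO from the Spring context:

import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

public class CustomerVoRoundTripTest {

    static class CustomerVo {
        String name;
        String email;
    }

    // hypothetical DAO interface; in the real test this would be the
    // Hibernate-backed implementation obtained from the Spring context
    interface CustomerDao {
        long save(CustomerVo vo);
        CustomerVo load(long id);
    }

    static class InMemoryCustomerDao implements CustomerDao {   // stand-in so the sketch runs
        private final Map<Long, CustomerVo> rows = new HashMap<>();
        private long nextId = 1;
        public long save(CustomerVo vo) { rows.put(nextId, vo); return nextId++; }
        public CustomerVo load(long id) { return rows.get(id); }
    }

    @Test
    public void savedAndLoadedVoMatch() {
        CustomerVo original = new CustomerVo();       // 1. fill with test data
        original.name = "Alice";
        original.email = "alice@example.org";

        CustomerDao dao = new InMemoryCustomerDao();
        long id = dao.save(original);                 // 2. save it
        CustomerVo loaded = dao.load(id);             // 3. load it

        assertEquals(original.name, loaded.name);     // 4. compare field by field
        assertEquals(original.email, loaded.email);
    }
}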
Production code will evolve and tests will need to be maintained as well. Keeping tests as simple as possible, even if they are repetitive, is IMHO the best approach. Over-engineering the tests or the testing framework itself to make tests generic (e.g. by reading fields with reflection and filling VOs/DTOs automatically) leads to several problems:
the time spent writing the tests is higher
bugs might be introduced in the tests themselves
maintenance of the tests is harder because they are more sophisticated
tests are harder to evolve, e.g. the generic code may not work for new kinds of VO/DTO, introduced later, that differ slightly from the others (it's just an example)
tests cannot easily be used as examples of how the production code works
Test code and production code are very different in nature. In production code, you try to avoid duplication and maximize reuse. Production code can afford to be complicated, because it is covered by tests. On the other hand, you should try to keep tests as simple as possible, and duplication is OK there. If a duplicated portion is broken, the test will fail anyway.
When production code changes, this may require several tests to be trivially changed, with the result that tests are seen as boring pieces of code. But I think that's the way they should be.
If I however got your question wrong, just let me know.
I would recommend Cobertura for this task.
You will get a complete code coverage report after you run your tests, and if you use the cobertura-check Ant task you can add checks for the coverage and stop the Ant build with the haltonfailure property.
You could make it part of the validation of the VO: if a field isn't set when you call its getter, the getter can throw an exception.
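A tiny sketch of that idea, purely illustrative:

public class CustomerVo {

    private String name;   // not set until the DB row has been mapped back

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        if (name == null) {
            // fails loudly in a test if the field was never populated from the DB
            throw new IllegalStateException("name was never set");
        }
        return name;
    }
}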