I am working on spwrap to simplify calling stored procedures.
So far I have written a couple of automated integration tests against HSQLDB, which provides an in-memory database mode (and they run on Travis CI).
I still need to write more integration tests against other DBMSes, for example MySQL, SQL Server, Oracle, etc.
According to this answer, I can use MariaDB4j for MySQL in-memory testing.
But what about the other DBMSes, in particular SQL Server and Oracle?
UPDATE: Are HSQLDB's database compatibility modes sufficient?
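For context, HSQLDB's compatibility modes are enabled per connection. A minimal sketch of what I mean, assuming HSQLDB 2.x and its documented sql.syntax_* connection properties:

import java.sql.Connection;
import java.sql.DriverManager;

public class HsqldbCompatibilityDemo {
    public static void main(String[] args) throws Exception {
        // In-memory HSQLDB with Oracle syntax compatibility switched on;
        // similar flags exist for MySQL (sql.syntax_mys), PostgreSQL
        // (sql.syntax_pgs) and SQL Server (sql.syntax_mss).
        String url = "jdbc:hsqldb:mem:testdb;sql.syntax_ora=true";
        try (Connection conn = DriverManager.getConnection(url, "SA", "")) {
            // Oracle-style queries against DUAL now parse:
            conn.createStatement().executeQuery("SELECT 1 FROM DUAL");
        }
    }
}

Note that these modes only approximate each dialect; in particular they don't emulate the vendors' stored-procedure languages (PL/SQL, T-SQL), which matters for a stored-procedure library.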
One possible solution is to run the databases on Docker / docker-compose:
An important part of any Continuous Deployment or Continuous Integration process is the automated test suite. Automated end-to-end testing requires an environment in which to run tests. Compose provides a convenient way to create and destroy isolated testing environments for your test suite. By defining the full environment in a Compose file, you can create and destroy these environments in just a few commands.
https://docs.docker.com/compose/overview/#automated-testing-environments
BTW, Travis CI supports Docker.
Related
I have a Spring + Hibernate application that uses a Postgres database. I need to write unit tests for the controllers. For the tests I wanted to use an H2 database, but unfortunately the tests crash during create-drop, telling me that the bpchar data type is invalid. I wonder how to solve this issue so I can run the tests.
I can't change my bpchar columns to varchar; they need to stay as they are. I also tried setting PostgreSQL mode, but it didn't help.
Am I right that the only solution I have is to use embedded postgres database in order to perform tests or is there any other approach that I could use?
You are trying to use a Postgres-specific data type with H2, which does not have it. Of course it does not work.
If you cannot change the type of this field, use embedded Postgres in your tests.
Actually, you can declare the type in your application.properties to let H2 know about it:
spring.datasource.url=jdbc:h2:mem:testdb;INIT=CREATE TYPE BPCHAR AS CHARACTER NOT NULL
Also make sure auto-configuration of the database is turned off for your test. You can do this by adding:
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
public class MyTestForTableWithBpcharColumn {
One interesting approach to this issue is Testcontainers.
Postgres doesn't have an embedded mode, but you can use the aforementioned framework to start a Docker container before the test, create a schema, and apply migrations if you're using something like Flyway or Liquibase, or integrate your custom solution.
The idea is that the container will be prepared and available to the test when it runs.
After the test run finishes (regardless of the actual result, success or failure) you can stop the container.
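A minimal sketch of that lifecycle, assuming the Testcontainers PostgreSQL module is on the classpath (the image tag is an arbitrary choice):

import org.testcontainers.containers.PostgreSQLContainer;

import java.sql.Connection;
import java.sql.DriverManager;

public class PostgresContainerSketch {
    public static void main(String[] args) throws Exception {
        // Start a throwaway Postgres in Docker; pick the image tag that
        // matches your production version.
        try (PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15-alpine")) {
            postgres.start();
            try (Connection conn = DriverManager.getConnection(
                    postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword())) {
                // Create the schema / apply Flyway or Liquibase migrations
                // here, then exercise the code under test.
                conn.createStatement().execute("SELECT 1");
            }
        } // the container is stopped and removed when closed
    }
}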
Firing up the container can be quite expensive (a matter of seconds). However, you can take advantage of Spring's test-context caching: when the first test in the module starts, the container is started, and it then gets reused between tests and test cases, since the application context doesn't get restarted.
Keeping the database clean between tests also becomes a trivial task thanks to Spring's @Transactional annotation, which you put on a test case so that Spring artificially rolls back the transaction after each test. Since in Postgres even DDL commands can be transactional, this should be good enough.
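For instance, a sketch of such a rollback-per-test case (JUnit 5 syntax; it assumes a users table already created by your migrations):

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.annotation.Transactional;

import static org.junit.jupiter.api.Assertions.assertEquals;

@SpringBootTest
@Transactional // Spring rolls the transaction back after each test method
class CleanDatabaseIT {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Test
    void insertIsVisibleOnlyInsideTheTest() {
        jdbcTemplate.update("INSERT INTO users(name) VALUES (?)", "alice");
        Integer count = jdbcTemplate.queryForObject("SELECT COUNT(*) FROM users", Integer.class);
        assertEquals(1, count.intValue());
        // After the method returns, the insert is rolled back, so the next
        // test starts with a clean database.
    }
}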
The only limitation of this approach is that Docker must be available on the build machine, and on your local development machine if you plan to run these tests locally. (On Linux and macOS that's not a problem anyway, but on Windows you need at least Windows 10 Professional to be able to install the Docker environment.)
I've used this approach in real projects and found it very effective for integration testing.
I'm building a basic HTTP API and some actions like POST /users create a new user record in the database.
I understand that I could mock these calls, but at some level I'm wondering if it's easier to let my JUnit tests run against a real (test) database. Is this a bad practice? Should only integration tests run against a real DB?
I'm using Flyway to maintain my test schema and Maven for my build, so I can have it recreate the test DB with the proper schema on each build. But I'm also worried that I'd need some additional overhead to maintain/clean the state of the database between each test, and I'm not sure there's a good way to do that.
Unit tests are used to test a single unit of code. This means you write a unit test to test one method only. If there are external dependencies, you mock them instead of actually calling and using those dependencies.
So, if your code interacts with the real database, then it is not a unit test. Say your DB call fails for some reason: the unit test will then fail too, even if the code under test is correct. The success or failure of a unit test should not depend on external dependencies like the DB in your case. You should assume the DB call succeeds, hard-code its data using some mocking framework (e.g. Mockito), and then test your method against that data.
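A minimal sketch of that style with Mockito (the DAO and service here are made up just to show the shape of the test):

import org.junit.Test;

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class UserServiceTest {

    // Hypothetical collaborator that would normally hit the database.
    interface UserDao { String findNameById(long id); }

    // Hypothetical unit under test.
    static class UserService {
        private final UserDao dao;
        UserService(UserDao dao) { this.dao = dao; }
        String greet(long id) { return "Hello, " + dao.findNameById(id); }
    }

    @Test
    public void greetsUserByName() {
        UserDao dao = mock(UserDao.class);               // no real DB involved
        when(dao.findNameById(42L)).thenReturn("Alice"); // hard-coded data
        assertEquals("Hello, Alice", new UserService(dao).greet(42L));
    }
}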
As often, it depends.
On big projects with lots of JUnit tests, the performance overhead can be a concern. The effort of setting up test data in the database, plus the need for a concept that keeps tests from interfering with each other's test data when JUnit tests run in parallel, is another strong argument for testing against a database only where needed and mocking it away otherwise.
On small projects these problems may be easier to handle, so you could always use a database there, but personally I wouldn't do that even on small projects.
As several other answers suggest, you should create unit tests for testing small pieces of code, mocking all external dependencies.
However, sometimes (in fact, quite often) it is worth testing whole features, especially when you use some kind of framework like Spring, or use a lot of annotations. When your classes or methods have annotations on them, the effects of those annotations usually cannot be tested via unit tests; you need the whole framework running during the test to make sure they work as expected.
In our current project we have almost as many integration tests as unit tests. We use the H2 in-memory DB for these tests, so we avoid failures caused by connectivity problems, and Spring's test package can collect and run multiple integration tests in the same test context. That way it has to build the context only once for many tests, and running them is not too expensive.
You can also create separate test contexts for different parts of the project (with different settings and DB content), so tests running under different contexts won't interfere with each other.
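As a sketch, assuming Spring Boot with JUnit 5 (the profile names are placeholders for your own configuration): test classes with identical annotations share one cached context, while a different profile gets its own context and DB content.

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ActiveProfiles;

// Every test class annotated exactly like this reuses one cached
// application context, so the expensive startup happens only once.
@SpringBootTest
@ActiveProfiles("it-h2")
class OrderFlowIT {
    @Test
    void placeOrder() { /* feature-level assertions */ }
}

// A different profile is a different cache key: a separate context with
// its own settings and database content, isolated from the one above.
@SpringBootTest
@ActiveProfiles("it-h2-reporting")
class ReportingIT {
    @Test
    void generateReport() { /* ... */ }
}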
Do not be afraid of using a lot of integration tests. You need some anyway, and if you already have a test context, adding some more tests to the same context is not a big deal.
There are also a lot of cases that would take a LOT of effort to cover with unit tests (or cannot be covered fully at all) but can be covered simply by an integration test.
A personal experience:
Our numerous integration tests were extremely useful when we switched from Spring Boot 1 to Spring Boot 2.
Back to the original question:
Unit tests should not connect to a real DB, but feel free to use more integration tests (with an in-memory DB).
Modern development practices recommend that every developer run the full suite of unit tests often. Unit tests should be reliable, i.e. they should not fail if the code is OK. Using an external database can interfere with those desiderata.
If the database is shared, simultaneous runs of the test suite by different developers could interfere with each other.
Setting up and tearing down the database for each test is typically expensive, and thus can make the tests too slow for frequent execution.
However, using a real database for integration tests is OK. If you use an in-memory database instead of a fully real database, even set up and tear down of the database for each integration test can be acceptably fast.
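When you do need a pristine schema per test against such a database, one common pattern is to clean and re-migrate with Flyway in the test setup. A rough sketch using Flyway's fluent (6+) API (URL and credentials are placeholders; note that clean() is destructive, so never point this at a shared database):

import org.flywaydb.core.Flyway;
import org.junit.Before;

public abstract class DatabaseTestBase {

    private final Flyway flyway = Flyway.configure()
            .dataSource("jdbc:postgresql://localhost:5432/testdb", "test", "test")
            .load();

    @Before
    public void resetDatabase() {
        flyway.clean();   // drop all objects in the configured schemas
        flyway.migrate(); // re-apply every migration from scratch
    }
}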
A popular choice is the use of an in-memory database to run tests. This makes it easy to test, for example, repository methods and business logic involving database calls.
When opting for a "real" database, make sure every developer has his/her own test database to avoid conflicts. The advantage of using a real database is that it prevents issues that could arise from slight differences in behavior between the in-memory and the real database. However, test execution performance can be an issue when running a large test suite against a real database.
Some databases can be embedded in a way that the database doesn't even need to be installed for test execution. For example, there is an SO thread about firing up an embedded Postgres in Spring Boot tests.
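For illustration, a sketch using the otj-pg-embedded library (assuming its EmbeddedPostgres API; other embedded-Postgres wrappers look similar):

import com.opentable.db.postgres.embedded.EmbeddedPostgres;

import javax.sql.DataSource;
import java.sql.Connection;

public class EmbeddedPostgresSketch {
    public static void main(String[] args) throws Exception {
        // Starts a real Postgres binary on a free port; nothing needs to be
        // installed beforehand, and close() tears it down again.
        try (EmbeddedPostgres pg = EmbeddedPostgres.start()) {
            DataSource ds = pg.getPostgresDatabase();
            try (Connection conn = ds.getConnection()) {
                // bpchar works because this is genuine Postgres, not an emulation.
                conn.createStatement().execute("CREATE TABLE t (c bpchar(3))");
                conn.createStatement().execute("INSERT INTO t VALUES ('abc')");
            }
        }
    }
}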
I have a postgres database that has users and roles already defined. There are multiple schemas in this database that are all controlled via different projects/flyway scripts. I am working on adding Flyway integration into a new project where we will use an embedded Postgres instance for testing.
Since none of these users/roles will exist on this instance, they need to be created in a migration script. However, since these users/roles will already exist in my operational databases, the migrations will fail when they attempt to create the roles.
I already considered writing a function for this, but then the function would have to be included in every project that uses the embedded Postgres and maintained across multiple code bases. This seems very sloppy. Can anyone recommend a way to handle these DCL operations using Flyway that works with the embedded approach as well as with my operational databases?
In a previous project we used a set of additional Flyway migration scripts for this. We added those scripts to the test environment's classpath. (We used this with a Flyway version from before the callback and repeatable-migration features were added.)
A second option: add a callback configuration for your test environment and create your users and roles in the before- or after-migration phase.
A third option: use repeatable migration scripts for your user and role setup, see https://flywaydb.org/documentation/migration/repeatable. Use these scripts in both production and test. In this case, however, your SQL must be written to be correctly repeatable (idempotent), otherwise you will break your production environment.
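As a sketch of the first approach with Flyway's fluent (6+) API: production configuration uses only the regular location, while the test configuration adds a second classpath location holding the DCL scripts (all names and paths below are placeholders):

import org.flywaydb.core.Flyway;

public class TestMigrationRunner {
    public static void main(String[] args) {
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://localhost:5432/testdb", "test", "test")
                // db/migration: the shared schema scripts used everywhere.
                // db/testmigration: extra CREATE ROLE / GRANT scripts that only
                // exist on the test classpath, mirroring what already exists
                // in the operational databases.
                .locations("classpath:db/migration", "classpath:db/testmigration")
                .load();
        flyway.migrate();
    }
}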
I am currently wondering what options I have for configuring Cassandra for unit testing.
Currently I just use an SSD drive, point the Cassandra data directory elsewhere, and start the tests, loading test scenarios. It is dead slow, even though I reuse the server and heal the scenarios (restore instead of delete and start over); but besides that, what else can I do?
I have also pondered creating a RAM drive and mounting it just for those tests.
What options are useful in conjunction with tests without introducing functional differences that make acceptance tests worthless?
Is there an in-memory replacement like one replaces MySQL/PostgreSQL with H2 for unit testing?
I'd avoid using a real Cassandra instance for your unit testing; it'll make your tests brittle and mean they won't run everywhere. There are a couple of options for unit testing your DAOs without a real Cassandra needing to be available.
One option is Cassandra Unit. This works by starting up an embedded Cassandra for you to connect to; you can create keyspaces/tables and insert data to prime it just like a real Cassandra.
Another option is Scassandra. This starts up a stubbed Cassandra that needs to be primed with what to return. The great thing about Scassandra is that you can test all of your error scenarios, such as timeouts, NoHostAvailable, etc.
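A minimal sketch with Cassandra Unit's embedded server helper (assuming the cassandra-unit artifact and the DataStax 3.x driver; 9142 is its default native-transport port):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import org.cassandraunit.utils.EmbeddedCassandraServerHelper;

public class EmbeddedCassandraSketch {
    public static void main(String[] args) throws Exception {
        // Boots an in-JVM Cassandra; subsequent calls reuse the same instance.
        EmbeddedCassandraServerHelper.startEmbeddedCassandra();
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPort(9142)
                .build();
             Session session = cluster.connect()) {
            session.execute("CREATE KEYSPACE ks WITH replication = "
                    + "{'class':'SimpleStrategy','replication_factor':1}");
        } finally {
            // Wipe the data so the next test starts from a clean state.
            EmbeddedCassandraServerHelper.cleanEmbeddedCassandra();
        }
    }
}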
I am developing a Spring/Hibernate/MySQL application. The application is not yet in production, and I currently use Hibernate's hbm2ddl feature, which is very convenient for managing changes to the domain. I also intend to use Flyway for database migrations.
At some point in the future, the application will be put in production for the first time which leads to my first set of questions:
What is the best practice for schema creation (the first time the app is released into production)? Specifically, should I let Hibernate's hbm2ddl create the schema on the production database, or let Flyway create the first schema using an SQL script? If the second option (i.e. Flyway) is preferable, should I generate that SQL script from an hbm2ddl-created database?
Let's then assume I have my application's first version running in production and I intend to resume development on the second version of the application using Hibernate's hbm2ddl.
How am I going to manage changes to the domain, and especially compute the difference between version one and version two of the database schema, in order to migrate the database when releasing version two into production?
The best trade-off is to use hbm2ddl for integration testing only and Flyway for run time, be it QA testing or the production environment.
You can use the hbm2ddl output as the base of your first Flyway script too, but then every time you make a change to the JPA model you need to manually create a new update script, which is not that difficult anyway. This will also enable using DB-specific features.
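One way to bootstrap that first script is JPA 2.1's standard schema-generation API, which writes the DDL to a file without touching a live database. A sketch (the persistence-unit name and output path are placeholders; review the generated SQL before committing it as your V1 migration):

import javax.persistence.Persistence;
import java.util.HashMap;
import java.util.Map;

public class GenerateInitialSchema {
    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        // Write a CREATE script only; do not execute anything against a DB.
        props.put("javax.persistence.schema-generation.scripts.action", "create");
        props.put("javax.persistence.schema-generation.scripts.create-target",
                "src/main/resources/db/migration/V1__init.sql");
        props.put("javax.persistence.schema-generation.database.action", "none");
        Persistence.generateSchema("my-unit", props);
    }
}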
Because integration testing and runtime use different strategies, it's mandatory to write a system integration test that compares the schemas created by hbm2ddl and by Flyway. Again, this is not difficult either; you just need to make sure you compare against the actual production DB (not the in-memory integration-testing one).