How to handle DCL in Flyway migration scripts?

I have a postgres database that has users and roles already defined. There are multiple schemas in this database that are all controlled via different projects/flyway scripts. I am working on adding Flyway integration into a new project where we will use an embedded Postgres instance for testing.
Since none of these users/roles will exist on this instance, they need to be created in a migration script. However, since these users/roles will already exist in my operational databases, the migrations will fail when they attempt to create the roles.
I already considered writing a function for this, but then the function would have to be included in any project that uses the embedded Postgres and would have to be maintained across multiple code bases. This seems very sloppy. Can anyone recommend a way for me to handle these DCL operations using Flyway that will work with the embedded approach as well as my operational databases?

In a previous project we used an additional set of Flyway migration scripts for this, and added those scripts to the test environment's classpath only. We used this approach with a Flyway version that predated callbacks and repeatable migrations.
A second option: add a callback configuration for your test environment and create your users and roles in the before- or after-migration phase.
A third option: use repeatable migration scripts for your user and role setup (see https://flywaydb.org/documentation/migration/repeatable) and run them in both production and test. In this case your SQL must be written to be correctly repeatable (idempotent), otherwise you will break your production environment.
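With the repeatable-migration (or callback) variant, the role-creation SQL must tolerate roles that already exist, because CREATE ROLE fails on a duplicate. A common PostgreSQL idiom is a guarded DO block; the role name app_user below is just a placeholder, and the script could live in a file such as R__users_and_roles.sql:

```sql
-- Idempotent role creation: skip the CREATE if the role already exists.
DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_catalog.pg_roles WHERE rolname = 'app_user') THEN
        CREATE ROLE app_user LOGIN;
    END IF;
END
$$;

-- GRANT statements are already idempotent, so they can simply be repeated.
GRANT USAGE ON SCHEMA public TO app_user;
```

The same script then runs safely against both the empty embedded instance (where it creates the role) and the operational databases (where it is a no-op).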

Related

Recreate flyway migration table

Is there a way to tell Flyway to recreate the flyway table without applying the migrations? E.g., look into the migration folder for scripts, assume they have all been applied, and simply make sure that the flyway table contains all of them.
Our scenario is that we are not allowed to run Flyway in production for compliance reasons (ISO, banking certifications). The rules say that we need to remove the tables completely. So when we reset our test environments from a copy of production, we need to recreate the flyway table. Currently we copy and paste it from an existing test environment, but sometimes that isn't in sync with production and all kinds of problems occur.
So we would like to set up our production copy at the same version as production and then recreate the tables from that, making sure that everything is in sync. But to my understanding, the repair option in Flyway doesn't recreate the non-applied scripts...
It looks like what you're describing is called a baseline:
You tell Flyway that the database you're working on is already at a given version, so all scripts up to and including that version will be ignored during migrations.
https://flywaydb.org/documentation/commandline/baseline
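As a sketch, baselining a fresh production copy with the Flyway command-line client might look like this (the version number is illustrative and must match the version actually deployed in production):

```
flyway -baselineVersion=42 -baselineDescription="prod copy" baseline
```

After this, the flyway table exists and records the baseline, and only migrations above version 42 will be considered for future runs.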

How to perform unit tests with h2 database where bpchar is used?

I have a Spring + Hibernate application that uses a Postgres database. I need to write unit tests for the controllers. For the tests I wanted to use an H2 database, but unfortunately the test crashes during create-drop with an error saying the bpchar data type is invalid. I wonder how to solve this issue so I can run the tests.
I can't change my bpchar columns to varchar; they need to stay as they are. I also tried setting PostgreSQL compatibility mode, but it didn't help.
Am I right that the only solution I have is to use an embedded Postgres database in order to perform the tests, or is there another approach I could use?
You are trying to use a Postgres-specific data type with H2, which does not have it, so of course it does not work.
If you cannot change the type of this field, use embedded Postgres in tests.
Actually you can do this in your application.properties to let H2 know:
spring.datasource.url=jdbc:h2:mem:testdb;INIT=CREATE TYPE BPCHAR AS CHARACTER NOT NULL
Also make sure auto configuration of the database is turned off for your test. You can do this by adding:
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
public class MyTestForTableWithBpcharColumn {
One interesting approach to this issue is Testcontainers.
Postgres doesn't have an embedded mode, but you can use the aforementioned framework to start a Docker container before the test, create a schema, and apply migrations if you're using something like Flyway or Liquibase, or integrate your custom solution.
The idea is that the container will be prepared and available to the test when it runs.
After the test run finishes (regardless of the actual result, success or failure) you can stop the container.
Firing up the container can be quite expensive (a matter of seconds), but you can take advantage of Spring's context caching during tests: when the first test in the module starts, the container is actually started, and it then gets reused between tests and test cases, since the application context doesn't get restarted.
Keeping the database clean between tests also becomes a trivial task thanks to Spring's @Transactional annotation on a test case, which makes Spring roll back the transaction after each test. Since in Postgres even DDL commands are transactional, it should be good enough.
The only limitation of this approach is that you need Docker available on the build machine, or on the local development machine if you plan to run these tests locally (on Linux and macOS this isn't a problem; on Windows you need at least Windows 10 Professional to be able to install Docker).
I've used this approach in real projects and found it very effective for integration testing.
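One low-ceremony way to wire this into a Spring Boot test (assuming the org.testcontainers JDBC artifact is on the test classpath) is Testcontainers' special JDBC URL, which starts the container on first connection; the image tag and database name below are illustrative:

```
# src/test/resources/application.properties
spring.datasource.url=jdbc:tc:postgresql:15:///testdb
spring.datasource.driver-class-name=org.testcontainers.jdbc.ContainerDatabaseDriver
```

With this in place, no container-lifecycle code is needed in the tests themselves; the driver manages startup and reuse.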

Can Liquibase be used only to validate that ChangeSets have been applied but not actually execute them?

I have a constraint for my production system that all SQL changes must be executed manually by a DBA for security purposes. Consequently, I want to use Liquibase to generate the SQL, and have the DBA execute it.
However, on application startup in production, I would like to configure Liquibase to verify that all changesets have been executed and have the proper checksums. If any changeset has an invalid checksum or has not been executed, I would like Liquibase to throw an exception (which I can then handle in my startup sequence). Under no circumstances do I want Liquibase to update the DB when run in this environment.
In other environments, I would like to leave it to Liquibase to run in default configuration - that is validate that existing changesets have not been modified and execute any missing changesets.
Does Liquibase support this kind of configuration? I've looked through the liquibase.configuration.GlobalConfiguration class, but do not see any config parameters that would provide this config.
You don't specify how you run Liquibase from your application, so it is a bit hard to say exactly. I think you will want to use two different commands - one for production, and one for all other environments. In most environments, you use the update command. In production, you would instead use the status command, which returns either a count or a list of undeployed changesets.
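With the classic (pre-4.x) Liquibase CLI, a production startup check could fail fast along these lines; the redirection target is illustrative:

```
# Verify that no applied changeset has been modified (checksum check).
liquibase validate

# List changesets that have not yet been applied to the target database.
liquibase status --verbose

# Generate the SQL for the DBA to run manually, without touching the database.
liquibase updateSQL > changes-for-dba.sql
```

Your startup sequence would treat a non-empty status result (or a validate failure) as a fatal error instead of calling update.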
I have created a Liquibase-CDI addon that supports this functionality. It is based on the liquibase-cdi extension, but uses the CDI observer pattern instead. It can be found on github at https://github.com/benze/liquibase-cdi

Managing database schemas during the lifecycle of an application using Flyway together with Hibernate's hbm2ddl

I am developing a Spring/Hibernate/MySql application. The application is not yet in production and I currently use the Hibernate's hbm2ddl feature which is very convenient for managing changes on the domain. I also intend to use Flyway for database migrations.
At some point in the future, the application will be put in production for the first time which leads to my first set of questions:
What is the best practice to use for schema creation (first time the app is released into production)? Specifically, should I let Hibernate's hbm2ddl create the schema on the production database or let Flyway create the first schema using a SQL script? If the second option (i.e. Flyway) is preferable, then should I generate a SQL script from a hbm2ddl-created database?
Let's then assume I have my application's first version running in production and I intend to resume development on the second version of the application using Hibernate's hbm2ddl.
How am I going to manage changes to the domain and, in particular, compute the difference between version one and version two of the database schema when migrating the production database to version two?
The best trade-off is to use hbm2ddl for integration testing only and Flyway at run time, be it QA testing or the production environment.
You can use the hbm2ddl output as the base of your first Flyway script too, but from then on, every time you change the JPA model you need to write a new update script manually, which is not that difficult anyway. This also lets you use DB-specific features.
Because integration testing and run time use different strategies, it's mandatory to write a system integration test that compares the schemas created by hbm2ddl and Flyway. Again, this is not difficult either; just make sure you compare against the actual production DB (not the in-memory integration-testing one).
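Assuming Spring Boot-style configuration, the split might look like this (the property names are the stock Spring Boot ones; the file locations are illustrative):

```
# src/test/resources/application.properties -- integration tests only
spring.jpa.hibernate.ddl-auto=create-drop
spring.flyway.enabled=false

# src/main/resources/application.properties -- QA and production
spring.jpa.hibernate.ddl-auto=validate
spring.flyway.enabled=true
```

The validate setting in production gives you the schema-comparison safety net for free: Hibernate refuses to start if the Flyway-managed schema has drifted from the JPA model.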

Database migration pattern for Java?

I'm working on some database-migration code in Java. I'm also using a factory pattern so I can support different kinds of databases, and each database type implements a common interface.
What I would like to do is have a migration check that is internal to the class and runs some database schema-update code automatically. The actual update is pretty straightforward (I check the schema version in a table and compare it against a constant in my app to decide whether to migrate, and between which schema versions).
To make this automatic I was thinking the check should live inside (or be called from) the constructor. OK, fair enough, that's simple. My problem is that I don't want the check to run every single time I instantiate a database object (it runs a query, so running it on every construction is inefficient). So maybe this should be a static class method? I guess my question is: what is a good design pattern for this type of problem? There ought to be a clean way to ensure the migration check runs only once OR is super-efficient.
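One stdlib-only sketch of the "run once" guard (all names here are made up for illustration, not from any framework): keep the check in the constructor, but gate it with a static AtomicBoolean so only the first instantiation pays for the version query.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: the schema check lives in the constructor, but compareAndSet
// guarantees it runs at most once per JVM, even under concurrent construction.
public class Database {
    private static final AtomicBoolean MIGRATION_CHECKED = new AtomicBoolean(false);
    private static volatile int checkCount = 0; // counts runs, for demonstration

    public Database() {
        // Only the first constructor call wins the CAS and pays the cost.
        if (MIGRATION_CHECKED.compareAndSet(false, true)) {
            runMigrationCheck();
        }
    }

    private void runMigrationCheck() {
        // Real code would query the schema-version table here and migrate
        // if it is behind the application's expected-version constant.
        checkCount++;
    }

    public static int checkCount() {
        return checkCount;
    }

    public static void main(String[] args) {
        new Database();
        new Database();
        new Database();
        System.out.println("migration checks run: " + checkCount());
    }
}
```

Running main prints "migration checks run: 1" no matter how many Database objects are created. An initialization-on-demand holder class would also work, but the CAS guard keeps the lazy, thread-safe, one-time semantics without an extra class.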
Have a look at Liquibase.
Here's an IBM developerWorks article with a nice walk-through: http://www.ibm.com/developerworks/java/library/j-ap08058/index.html
Flyway fits your needs perfectly. It supports multiple databases, compares the schema version with the available migrations on the classpath and upgrades the database accordingly.
You can embed it in your application and have it run once on startup as described in the Flyway docs.
Note: Flyway also comes with a Maven plugin and the ability to clean an existing schema in case you messed things up in development.
[Disclaimer: I'm one of Flyway's developers]
I've been using the iBATIS SQL Mapper and really like it. The next version, iBATIS 3.0, has schema-migration support. It is still in beta, but I'm planning to use it when it gets closer to a release candidate.
