Is there a way to tell Flyway to recreate its schema history table without applying the migrations? E.g., look in the migration folder for scripts, assume that they have all been applied, and simply make sure that the Flyway table contains all of them.
Our scenario is that we are not allowed to run Flyway in production for (ISO; banking; certification) reasons. The rules say that we need to remove the table completely. So when we reset our test environments from a copy of production, we need to recreate the Flyway table. Right now we copy and paste it from an existing test environment, but sometimes that isn't in sync with production and all kinds of problems occur.
So we would like to set our production copy to the same version as production and then recreate the table from that, making sure that everything is in sync. But to my understanding, the repair option in Flyway doesn't recreate the non-applied scripts...
It looks like what you're describing is called a baseline:
You tell Flyway that the database you're working on is already at a given version number, so all scripts at or below that version will be ignored during migrations.
https://flywaydb.org/documentation/commandline/baseline
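For example, with the Flyway Java API (assuming Flyway 5.1+; the connection details and version number below are placeholders), a baseline run looks roughly like this:

import org.flywaydb.core.Flyway;

public class BaselineExample {
    public static void main(String[] args) {
        // Placeholder connection details -- adjust for your environment.
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://prod-copy:5432/mydb", "user", "password")
                // Mark the production copy as already being at this version;
                // migrations with this version or lower are then skipped.
                .baselineVersion("42")
                .baselineDescription("copy of production")
                .load();

        // Creates the schema history table with a single baseline row,
        // without running any migrations.
        flyway.baseline();
    }
}

Note that baseline records a single baseline row rather than one row per already-applied script; everything at or below the baseline version is simply ignored from then on. The same can be done from the command line with the baseline command and the baselineVersion setting.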
We're using Liquibase 3.5.1 to help maintain MySQL/MariaDB installations across dozens of client computers. Our stand-alone app creates a local DB and prepopulates it with 'seed' data. With updates to MySQL and some other tools we're using, we've been forced to alter some legacy Liquibase changesets, which obviously changes the checksums for those changesets.
We'd like to have liquibase completely ignore the changes to the checksums.
If it were just a couple of changes, or even a lot of changes where a developer could intervene, we would just update the databasechangelog table directly. However, the affected databases are on the computers of clients who would have no idea how to make the needed changes.
I know about 'validCheckSum' and thought I could use
--validCheckSum: ANY
in the formatted sql files but that doesn't appear to do anything.
--liquibase formatted sql
--changeset db-scripter:1
--comment: fixing issues with this after upgrading
--validCheckSum: ANY
INSERT INTO ...
'runOnChange' is also not an option, as we don't want to rerun any of these old changesets (and insert the 'seed' data twice).
Are we missing any options here? Or perhaps we're not using the validCheckSum correctly?
I had a similar situation, and I solved it using the clearCheckSums command.
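For reference, clearCheckSums can also be invoked from the Liquibase Java API, which may suit a stand-alone app that manages its own database. A minimal sketch, assuming Liquibase 3.x; the connection details and changelog path are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;

import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class ClearCheckSumsExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        Connection connection = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/appdb", "user", "password");

        Database database = DatabaseFactory.getInstance()
                .findCorrectDatabaseImplementation(new JdbcConnection(connection));

        Liquibase liquibase = new Liquibase(
                "db/changelog.xml", new ClassLoaderResourceAccessor(), database);

        // Sets all checksums in DATABASECHANGELOG to null.
        liquibase.clearCheckSums();

        // The next update recomputes and stores checksums for already-run
        // changesets without re-executing them (and applies any new ones).
        liquibase.update("");
    }
}

Running something like this once at startup would let the app fix the checksums on each client's machine without anyone touching the database by hand.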
I have a small project written in Spring. For database migrations and seeds I use Liquibase.
After some time I got a request to downgrade my database to a previous version. Is this possible to do with Liquibase, and what workflow do you recommend? I could do it by packaging a new war file and running some plain SQL scripts, but that is not a good way for me. I just want to do it with Liquibase - maybe some rollback to a previous version.
Thank you in advance.
Liquibase cannot simply (automatically) roll back updates that have already been applied. The only thing you can do is write additional changesets in which you manually describe the changes needed to return the DB structure to the desired state.
You can also describe rollback actions in advance to make this process nicer; this can be done via the rollback section in each changeset.
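If your changesets do carry rollback sections, the rollback itself can be driven from the command line or from the Java API. A minimal sketch of the Java side, assuming Liquibase 3.x, a changelog at db/changelog.xml, and a tag created before the release you want to undo (all names are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;

import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class RollbackExample {
    public static void main(String[] args) throws Exception {
        Connection connection = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/appdb", "user", "password");

        Database database = DatabaseFactory.getInstance()
                .findCorrectDatabaseImplementation(new JdbcConnection(connection));

        Liquibase liquibase = new Liquibase(
                "db/changelog.xml", new ClassLoaderResourceAccessor(), database);

        // Tag the current state before deploying a new version...
        // liquibase.tag("v1");

        // ...and later undo everything applied after that tag, using the
        // rollback sections defined in the affected changesets.
        liquibase.rollback("v1", "");
    }
}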
Remember that a DB rollback is not, in general, feasible. For example, in v1 you have a column A full of data (to make the discussion easier: with a not-null constraint and without a default value). In v2 you delete column A. How would you automatically perform a rollback/downgrade?
I suggest adding another migration that brings your DB to a state v3 that looks exactly like (or similar to) v1. Inside this migration you can handle all the missing data, etc.
I have a Postgres database that has users and roles already defined. There are multiple schemas in this database that are all controlled via different projects/Flyway scripts. I am working on adding Flyway integration into a new project where we will use an embedded Postgres instance for testing.
Since none of these users/roles will exist on this instance, they need to be created in a migration script. However, since these users/roles will already exist in my operational databases, the migrations will fail when they attempt to create the roles.
I already considered writing a function for this, but then the function would have to be included in any project that uses the embedded Postgres and would have to be maintained across multiple code bases. This seems very sloppy. Can anyone recommend a way for me to handle these DCL operations using Flyway that will work with the embedded approach as well as my operational databases?
In a previous project we used a set of additional Flyway migration scripts for this; we added those scripts to the test environment classpath (a sketch follows below). We used this with a Flyway version that predates the callback and repeatable-migration features.
A second option: add a callback configuration for your test environment and create your users and roles in the before- or after-migration phase.
A third option: use repeatable migration scripts for your user and role setup; see https://flywaydb.org/documentation/migration/repeatable. Use these scripts in both production and test. But in this case your SQL must be written correctly and be repeatable, otherwise you will break your production environment.
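A minimal sketch of the first approach, assuming the Flyway 5.1+ Java API: production only reads the shared location, while the test setup adds a second classpath location containing the user/role scripts (the location names are illustrative):

import javax.sql.DataSource;

import org.flywaydb.core.Flyway;

public class TestMigrations {
    // In production, only the shared migration location is used.
    public static Flyway productionFlyway(DataSource ds) {
        return Flyway.configure()
                .dataSource(ds)
                .locations("classpath:db/migration")
                .load();
    }

    // In tests (e.g. against embedded Postgres), an extra location on the
    // test classpath contributes the CREATE ROLE / CREATE USER scripts.
    public static Flyway testFlyway(DataSource ds) {
        return Flyway.configure()
                .dataSource(ds)
                .locations("classpath:db/migration", "classpath:db/testmigration")
                .load();
    }
}

Flyway merges migrations from all configured locations and sorts them by version, so the test-only scripts need version numbers that slot in correctly among the shared ones.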
I am at the almost-ready stage of my JEE development. Given the many recommendations NOT to use Hibernate's hbm2ddl.auto in production, I decided to remove it.
So now I have found out about Flyway, which seems great for future DB changes and migrations, but I am stuck at the first step: I have many entities, and some entities inherit from base entities. This makes the CREATE statements very complex.
What is the best practice to create the first migration file?
Thanks!
If you've taken an "entities first" approach during development you'll need to generate the initial schema in the same way for the first live deployment: This will produce the first creation script used by Flyway and there may also need to be a second associated script for populating reference data.
In a nutshell, the reasons for no longer being able to use hbm2ddl.auto after the first deployment are that create will destroy existing data and update isn't reliable enough to cover all types of schema changes (as it sounds like you may already know from this SO question).
Flyway is a very useful tool but it does require a level of discipline that may not have existed during development. When going forward from the initial release, database update scripts need to be produced for Flyway that are equivalent to the changes made to the entities since the last release. There are tools (e.g. various commercial products from Redgate) that may help here: These attempt to "diff" two schemas and generate schema and/or data update scripts for getting from database A to database B. But in my experience, none of them are perfect and they don't quite reach the holy grail of enabling a completely automated approach.
Arguably, the best way is an "as you go" manual approach to ensure that non-destructive update scripts are committed to source control whenever an entity change is made that affects the schema or reference data - but as already mentioned, this will require some discipline and/or documented processes for all team members to follow.
For the first migration file, you just need the current DDL of your database. There are many tools that can get this for you (such as the "copy DDL" option in the IntelliJ IDEA Database tool, or a GUI client from your database vendor).
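Alternatively, the DDL can be generated from the entities themselves with Hibernate's SchemaExport tool and written straight into the first versioned script. A sketch, assuming Hibernate 5.2+, a hibernate.cfg.xml on the classpath, and annotated entities (class and path names are illustrative):

import java.util.EnumSet;

import org.hibernate.boot.Metadata;
import org.hibernate.boot.MetadataSources;
import org.hibernate.boot.registry.StandardServiceRegistry;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.tool.hbm2ddl.SchemaExport;
import org.hibernate.tool.schema.TargetType;

public class GenerateInitialSchema {
    public static void main(String[] args) {
        StandardServiceRegistry registry = new StandardServiceRegistryBuilder()
                .configure() // reads hibernate.cfg.xml from the classpath
                .build();

        Metadata metadata = new MetadataSources(registry)
                .addAnnotatedClass(com.example.SomeEntity.class) // illustrative
                .buildMetadata();

        SchemaExport export = new SchemaExport();
        export.setOutputFile("src/main/resources/db/migration/V1__init.sql");
        export.setDelimiter(";");
        export.setFormat(true);

        // Write the CREATE statements to the file only; don't touch a database.
        export.createOnly(EnumSet.of(TargetType.SCRIPT), metadata);
    }
}

The inheritance between entities is resolved for you here, which sidesteps the complexity of writing those CREATE statements by hand.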
I am not sure about Flyway, but there is an alternative: you can use Hibernate's Ant tasks to generate or update the schema.
Hope it helps.
If you build your project with Maven, you could use the Hibernate Maven plugin.
I'm working on some database migration code in Java. I'm also using a factory pattern so I can use different kinds of databases, and each kind of database I use implements a common interface.
What I would like to do is have a migration check that is internal to the class and runs some database schema update code automatically. The actual update is pretty straightforward (I check the schema version in a table and compare it against a constant in my app to decide whether to migrate, and between which versions of the schema).
To make this automatic, I was thinking the test should live inside (or be called from) the constructor. OK, fair enough, that's simple enough. My problem is that I don't want the test to run every single time I instantiate a database object (it runs a query, so having it run on every construction is not efficient). So maybe this should be a class static method? I guess my question is: what is a good design pattern for this type of problem? There ought to be a clean way to ensure the migration test runs only once OR is super-efficient.
Have a look at Liquibase.
Here's an IBM developerWorks article with a nice walk-through: http://www.ibm.com/developerworks/java/library/j-ap08058/index.html
Flyway fits your needs perfectly. It supports multiple databases, compares the schema version with the available migrations on the classpath and upgrades the database accordingly.
You can embed it in your application and have it run once on startup as described in the Flyway docs.
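To keep the check out of every constructor, one common shape is a static run-once guard executed at startup (or from your database factory). A minimal sketch, assuming the Flyway 5.1+ Java API; the class and method names are illustrative:

import java.util.concurrent.atomic.AtomicBoolean;

import org.flywaydb.core.Flyway;

public final class SchemaMigrator {
    private static final AtomicBoolean MIGRATED = new AtomicBoolean(false);

    private SchemaMigrator() {
    }

    // Runs the migration at most once per JVM; repeated calls are a cheap
    // atomic check, so constructors never pay for a version query.
    public static void migrateOnce(String url, String user, String password) {
        if (MIGRATED.compareAndSet(false, true)) {
            Flyway flyway = Flyway.configure()
                    .dataSource(url, user, password)
                    .load();
            flyway.migrate(); // compares schema version to classpath migrations
        }
    }
}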
Note: Flyway also comes with a Maven plugin and the ability to clean an existing schema in case you messed things up in development.
[Disclaimer: I'm one of Flyway's developers]
I've been using the iBatis SQL Mapper and really like it. The next version, iBatis 3.0, has schema migration support. This is still in beta, but I'm planning on using it when it gets closer to a release candidate.