I have encountered a problem with Flyway and I don't know how to resolve it.
I have an SQL template for the database, which creates all tables and inserts basic data. This script has already been migrated on a few production servers, and when I want to make small changes to the template (add more data, for example), there is of course a checksum mismatch. So the idea is to store no checksum for this base template migration, similar to how Flyway handles its own schema-creation migration.
Should I create a custom MigrationExecutor? If that is the correct approach, could you provide a basic example of how to use such a custom executor, and, within it, how to get information about the currently running migration? I couldn't find anything about creating custom callbacks, resolvers or executors except Flyway's docs, which didn't give me any idea how to achieve what I want.
Flyway's version is 4.0.3
Related
I am trying to understand "changing the database without changing code". Currently I am working with microservices using Spring Boot, Java, Thymeleaf and Cloud Foundry.
I have a Spring Boot application with a database attached as a service through Cloud Foundry.
My problem: my understanding is that one purpose of microservices is to make it easy to change services without changing code.
Here is where I get stuck.
In Java I have a SQL query: "select * from ORDER where Status = 'ACCEPTED';"
My database would be attached as a service on Cloud Foundry using CUPS (user-provided services):
"jdbc:oracle:thin:username/password//host:port/servicename"
So let's say I want to switch this database to one with a CUSTOMER table (treat it as a different database). The query will now throw an error, because the CUSTOMER database has no ORDER table for "select * from ORDER where Status = 'ACCEPTED';" to run against.
I've changed the database, but wouldn't I still have to go back into my code and change the SQL?
My Attempt to resolve this issue
So instead of hard-coding the SQL in Java: "select * from ORDER where Status = 'ACCEPTED';"
I created a system environment variable named sqlScript with the value select * from ORDER where Status = 'ACCEPTED'.
Then in Java I read the environment variable: String sqlScript = System.getenv("sqlScript");
So now, instead of going back into the Java code to change the SQL, a user can change it through environment variables.
This is a very dirty workaround for my issue; what would be a better alternative?
I know my understanding is off somewhere. Please guide me to the right path.
I think the phrase 'changing database without changing code' doesn't mean that you can add or remove fields in the DB without modifying your codebase; that interpretation just doesn't make sense.
What it really means is that you should use good database abstractions, so that if you need to change your database vendor from, say, MySQL to Oracle, your Java code stays the same. The only thing that may differ is some configuration.
A good example of this is an ORM like Hibernate. You write your Java code once, regardless of which SQL database you are using underneath. To switch databases, the only thing you need to change is the dialect configuration property (in reality it's not quite that easy, but it is certainly easier than if the code were coupled to one specific DB).
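For instance, in a Spring Boot application the vendor-specific part is typically confined to a few properties; the values below are illustrative:

```properties
# Hypothetical settings: switching from MySQL to Oracle is mostly a matter
# of changing the JDBC URL, the driver class and the Hibernate dialect.
spring.datasource.url=jdbc:mysql://localhost:3306/shop
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
```

The Java entities and repositories stay untouched when these values change.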
Hibernate gives you a good abstraction over SQL databases. Nowadays there is a newer trend: abstracting over different DB families, such as SQL and NoSQL. So in an ideal world, your codebase stays unchanged even if you want to swap MySQL for MongoDB or even Neo4j. Spring Data is probably the most popular framework that tries to solve this problem. Another framework I found recently is Kundera, but I haven't used it so far.
So, answering your question: you do not need to keep your SQL queries in system variables. All you need to do is use proper abstractions in your language of choice.
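A minimal sketch of such an abstraction in plain Java (the interface name OrderRepository and the in-memory implementation are invented for illustration; with Spring Data JPA you would instead declare a finder such as findByStatus on a repository interface):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical abstraction: the service layer depends on this interface,
// never on vendor-specific SQL strings.
interface OrderRepository {
    List<String> findAcceptedOrders();
}

// One implementation per datastore; swapping vendors means swapping this
// class (or a configuration property), not the service code that calls it.
class InMemoryOrderRepository implements OrderRepository {
    private final List<String> accepted = new ArrayList<>();

    void save(String orderId, String status) {
        // Only ACCEPTED orders are indexed in this toy implementation.
        if ("ACCEPTED".equals(status)) {
            accepted.add(orderId);
        }
    }

    @Override
    public List<String> findAcceptedOrders() {
        return new ArrayList<>(accepted);
    }
}
```

A JDBC- or JPA-backed implementation of the same interface can be substituted without the calling code noticing.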
In my opinion, it would be better to use something like Flyway or Liquibase, both of which integrate really well with Spring Boot. You can find more information in the Spring Boot reference documentation.
I prefer Liquibase, since it uses a higher-level format to describe your database migrations, allowing you to switch databases quite easily. This way you can also use a different database per environment, for example:
HSQLDB during local development
MySQL in DEV and TEST
Oracle in Production
It's also possible to export the schema of an existing database to create an initial version in Flyway or Liquibase; this will give you a good baseline for your scripts.
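To illustrate the higher-level format, here is a minimal, hypothetical Liquibase changelog (table and column names invented): the change is described abstractly, and Liquibase generates the vendor-specific DDL for HSQLDB, MySQL or Oracle.

```xml
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.5.xsd">

    <!-- changeSet id + author identify the migration; Liquibase records
         which changeSets have already run, per database. -->
    <changeSet id="1" author="dev">
        <createTable tableName="orders">
            <column name="id" type="bigint" autoIncrement="true">
                <constraints primaryKey="true"/>
            </column>
            <column name="status" type="varchar(20)"/>
        </createTable>
    </changeSet>
</databaseChangeLog>
```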
I'm trying to apply my own schema changes (such as changing varchar to text, or creating an index) just before the entities get bound, the same way Hibernate does its schema update. Ideally I could include my own custom SQL in Hibernate's schema update itself, for instance by extending an existing Hibernate class and having the application use mine instead of the built-in one.
org.hibernate.internal.SessionFactoryImpl is final.
Maybe by implementing my own org.hibernate.tool.schema.spi.SchemaMigrator? Has anyone tried this before?
BTW, I'm using Spring Boot v1.4.0.M3, in case that has any specific dependencies; I don't think so.
Cheers!
Spring Boot has a JDBC initializer that runs .sql scripts at startup.
I would recommend using Flyway; it keeps track of your database's version and which scripts have already been executed.
http://docs.spring.io/spring-boot/docs/current/reference/html/howto-database-initialization.html#howto-initialize-a-database-using-spring-jdbc
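As a sketch of the JDBC initializer: by default (for embedded datasources, or when explicitly enabled) Spring Boot picks up schema.sql and data.sql from the classpath at startup. The table and contents below are made up for illustration:

```sql
-- src/main/resources/schema.sql (executed first, creates the schema)
CREATE TABLE orders (
    id     BIGINT PRIMARY KEY,
    status VARCHAR(20) NOT NULL
);

-- src/main/resources/data.sql (executed after schema.sql, seeds data)
INSERT INTO orders (id, status) VALUES (1, 'ACCEPTED');
```

Note that, unlike Flyway, this mechanism reruns the scripts on every startup and keeps no version history, which is why Flyway is the better fit once the schema evolves.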
I have created some databases in MySQL.
Now I am trying to use them in my web application built with the Play Framework.
I added the MySQL config to application.conf, added the dependency for the MySQL driver to build.sbt, created my first model, and registered the models package as the Ebean default in application.conf.
Now when I open the app in my browser I get this error:
I'm a little confused right now, because I do not want to create a new table but use the one I already created.
Any idea what I am doing wrong?
Play's default behaviour during development is to manage your database via the Evolutions plugin. You define your initial schema in conf/evolutions/default/1.sql and then apply subsequent modifications in 2.sql, 3.sql, etc. Whenever these evolution files change, the plugin will attempt to apply them to the database, which is what you're seeing here (although it looks like an error, it's really just trying to be helpful).
If you want to manage the schema yourself (and you probably should on a production DB, for example) add evolutionplugin=disabled to the application.conf file.
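For reference, a minimal first evolution file might look like the following (table and columns invented for illustration). The plugin applies the !Ups section and records it, and uses !Downs to revert:

```sql
# conf/evolutions/default/1.sql -- hypothetical example schema

# --- !Ups
CREATE TABLE users (
    id   BIGINT NOT NULL PRIMARY KEY,
    name VARCHAR(255)
);

# --- !Downs
DROP TABLE users;
```

Since your table already exists, you can also write a 1.sql whose !Ups matches the existing schema and mark it as already applied, rather than letting Play drop and recreate anything.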
Currently I'm working on a small web service using Dropwizard, connecting to a PostgreSQL DB using Hibernate (the bundle built into Dropwizard) and a bit of Migrations (also from Dropwizard).
Coming from a .NET environment, I'm used to a code-first/code-centric approach.
Currently I'm looking into generating the migrations.xml from the current state of my entity classes, based on the JPA annotations on them.
I feel this is a problem somebody must already have solved.
Is there a way to automatically update migrations.xml based on the classes I'm writing?
It is possible. See the liquibase-hibernate plugin at https://github.com/liquibase/liquibase-hibernate/wiki.
Make sure you review the generated migrations.xml changes before applying them because, like any diff-based process, the generated schema transformation may not be what you intended, and with data that matters. For example, if you rename a class, the diff will produce a drop-and-create rather than a rename. The result is a valid schema, but you lose the data.
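As a sketch of the setup with the liquibase-maven-plugin (the package name com.example.entities, the URLs, and the file names are assumptions; check the plugin wiki for the exact configuration for your versions):

```xml
<!-- Hypothetical Maven configuration: the diff goal compares the JPA
     entities (referenceUrl, via liquibase-hibernate) against the live
     database (url) and writes the differences as new changeSets. -->
<plugin>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-maven-plugin</artifactId>
    <configuration>
        <changeLogFile>migrations.xml</changeLogFile>
        <url>jdbc:postgresql://localhost:5432/app</url>
        <referenceUrl>hibernate:spring:com.example.entities?dialect=org.hibernate.dialect.PostgreSQLDialect</referenceUrl>
        <diffChangeLogFile>migrations-diff.xml</diffChangeLogFile>
    </configuration>
</plugin>
```

Running the diffChangeLog goal then produces a changelog fragment you can inspect, fix up (e.g. turn drop+create into a rename), and merge into migrations.xml.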
I'm working on some database migration code in Java. I'm also using a factory pattern so I can support different kinds of databases, and each database type implements a common interface.
What I would like is a migration check that is internal to the class and runs the schema-update code automatically. The actual update is pretty straightforward: I check the schema version stored in a table and compare it against a constant in my app to decide whether to migrate, and between which schema versions.
To make this automatic, I was thinking the check should live inside (or be called from) the constructor. OK, fair enough, that's simple. My problem is that I don't want the check to run every single time I instantiate a database object (it runs a query, so running it on every construction is inefficient). So maybe it should be triggered from a static member? I guess my question is: what is a good design pattern for this type of problem? There ought to be a clean way to ensure the migration check runs only once, or is at least very cheap.
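One clean way to guarantee the check runs exactly once per JVM, no matter how many database objects are constructed, is a static once-only guard in the class. This is a sketch with invented names (Database, migrateIfNeeded), not code from any particular library:

```java
import java.util.concurrent.atomic.AtomicBoolean;

class Database {
    // Shared by all instances: flips false -> true exactly once, even
    // under concurrent construction from multiple threads.
    private static final AtomicBoolean migrationChecked = new AtomicBoolean(false);
    static int migrationRuns = 0; // exposed only to demonstrate the guard

    Database() {
        if (migrationChecked.compareAndSet(false, true)) {
            migrateIfNeeded();
        }
    }

    private void migrateIfNeeded() {
        // Here you would query the schema-version table and compare it
        // against the version constant compiled into the application,
        // running the migration scripts only if they differ.
        migrationRuns++;
    }
}
```

Constructing Database any number of times triggers the (expensive) version query only on the first construction; a static initializer block would achieve the same thing if the check needs no constructor arguments.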
Have a look at liquibase.
Here's an IBM developerWorks article with a nice walk-through: http://www.ibm.com/developerworks/java/library/j-ap08058/index.html
Flyway fits your needs perfectly. It supports multiple databases, compares the schema version with the available migrations on the classpath and upgrades the database accordingly.
You can embed it in your application and have it run once on startup as described in the Flyway docs.
Note: Flyway also comes with a Maven plugin and the ability to clean an existing schema in case you messed things up in development.
[Disclaimer: I'm one of Flyway's developers]
I've been using the iBatis SQL Mapper and really like it. The next version, iBatis 3.0, has schema migration support. It is still in beta, but I'm planning to use it when it gets closer to a release candidate.