Database migration pattern for Java?

I'm working on some database migration code in Java. I'm also using a factory pattern so I can support different kinds of databases, and each kind of database implements a common interface.
What I would like to do is have a migration check that is internal to the class and runs some database schema update code automatically. The actual update is pretty straightforward (I check the schema version in a table and compare it against a constant in my app to decide whether to migrate, and between which schema versions).
To make this automatic, I was thinking the check should live inside (or be called from) the constructor. OK, fair enough, that's simple enough. My problem is that I don't want the check to run every single time I instantiate a database object (it runs a query, so having it run on every construction is inefficient). So maybe this should be a class static method? I guess my question is: what is a good design pattern for this type of problem? There ought to be a clean way to ensure the migration check runs only once OR is super-efficient.
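One common way to guarantee a once-per-JVM check without paying the query cost on every construction is the class-initialization (lazy holder) idiom: the JVM runs a static initializer exactly once, thread-safely, on first use. A minimal sketch, where runMigrationIfNeeded() is a hypothetical method standing in for your version check and upgrade:

public class DatabaseClient {

    // The JVM guarantees this nested class is initialized exactly once,
    // on first access, and that concurrent threads see the final result.
    private static final class MigrationCheck {
        static final boolean DONE = runMigrationIfNeeded();

        private static boolean runMigrationIfNeeded() {
            // Hypothetical: read the version from the schema_version table,
            // compare it with the constant compiled into the app, and run
            // the appropriate upgrade steps if they differ.
            return true;
        }
    }

    public DatabaseClient() {
        // Touching the holder triggers the one-time check; every later
        // construction reduces to a cheap static field read.
        if (!MigrationCheck.DONE) {
            throw new IllegalStateException("Schema migration failed");
        }
    }
}

Note the check runs once per JVM, not once per database; if your factory hands out objects for several different databases, you'd key the check per data source instead.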

Have a look at Liquibase.
Here's an IBM developerWorks article with a nice walk-through: http://www.ibm.com/developerworks/java/library/j-ap08058/index.html
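If you do embed Liquibase, the classic Java API boils down to a few calls. A rough sketch, assuming a changelog at db/changelog.xml on the classpath (the exact classes have moved around between Liquibase versions, so treat this as illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class LiquibaseRunner {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:app", "sa", "")) {
            Database db = DatabaseFactory.getInstance()
                    .findCorrectDatabaseImplementation(new JdbcConnection(conn));
            // Applies every changeset in the changelog that has not yet been
            // recorded in Liquibase's DATABASECHANGELOG tracking table.
            new Liquibase("db/changelog.xml", new ClassLoaderResourceAccessor(), db)
                    .update("");
        }
    }
}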

Flyway fits your needs perfectly. It supports multiple databases, compares the schema version with the available migrations on the classpath and upgrades the database accordingly.
You can embed it in your application and have it run once on startup as described in the Flyway docs.
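For reference, embedding Flyway at startup is only a few lines. A minimal sketch using the older setter-based API (newer Flyway versions use a Flyway.configure() builder instead, so adjust to your version; the JDBC URL here is made up):

import org.flywaydb.core.Flyway;

public class Startup {
    public static void main(String[] args) {
        Flyway flyway = new Flyway();
        flyway.setDataSource("jdbc:h2:mem:app", "sa", "");
        // Scans the classpath location db/migration (the default) for
        // V1__*, V2__*, ... scripts, compares them against the metadata
        // table, and applies whatever is still pending.
        flyway.migrate();
    }
}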
Note: Flyway also comes with a Maven plugin and the ability to clean an existing schema in case you messed things up in development.
[Disclaimer: I'm one of Flyway's developers]

I've been using the iBatis SQL Mapper and really like it. The next version, iBatis 3.0, has schema migrations support. This is still in beta, but I'm planning on using it when it gets closer to a release candidate.

Related

Configuring database development environment along with Hibernate and Spring

We have a web-based application in the development phase, built with Spring 5, JPA (Hibernate) and PostgreSQL 9.4.
Until now we have been using a single instance of the PostgreSQL db for our work. Basically, we don't have any schema generation script; we simply updated the db whenever we needed a new table, column, etc. For Hibernate, we generated the classes from the db.
Now that we have some amount of test data, each change to the db brings a lot of trouble and confusion. We realized that we need to create and maintain a schema generation file, along with some scripts that generate test data.
After some research, we see two options:
Create two *.sql files: the first containing the schema generation script, the second the SQL that creates test data. Then add a small module with a class which executes the *.sql files using plain JDBC. Basically, we would continue developing, and whenever we make a change we quickly wipe -> create -> populate the db. This approach looks the most appealing to us at this point: it's quick, simple, and robust (a sketch of such a runner follows right after this question).
The second is to set up a tool which can help with that, e.g. Liquibase.
This approach also looks good in terms of versioning support and other capabilities. However, we are not in production yet; we are in an active development phase. We don't have many devs making db changes, and we are not sure how frequently we will update the db schema in production; it could be rare.
The question is the following: would the first approach be bad practice? Would applying the second one give the most benefits and be worth adopting?
Would appreciate any comments or any other suggestions!
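For what it's worth, here is a minimal sketch of the plain-JDBC runner from the first option, assuming schema.sql and data.sql sit in the working directory and statements are separated by semicolons (naive splitting; it will not cope with semicolons inside function bodies or string literals):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SqlScriptRunner {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/app", "app", "secret");
             Statement stmt = conn.createStatement()) {
            for (String file : new String[] {"schema.sql", "data.sql"}) {
                String script = new String(
                        Files.readAllBytes(Paths.get(file)), StandardCharsets.UTF_8);
                for (String sql : script.split(";")) {
                    if (!sql.trim().isEmpty()) {
                        stmt.execute(sql); // naive: one statement per chunk
                    }
                }
            }
        }
    }
}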
The first approach is NOT bad practice as of today, but it is becoming one given the growth of tools like Liquibase.
If you are in the early or middle stages of the development phase, go ahead with Liquibase, along with Spring Data. If you are in the closing stages of development, think about whether you really need it.
I would suggest the second approach, as it automatically finds new scripts as you add them and executes them on startup. Moreover, when tools like Liquibase and Flyway are available, why reinvent the wheel?
The second approach also spares you the unnecessary code for manually executing the *.sql files. That code needs testing too, and is error-prone whenever it is updated.
With the first approach, the hand-written runner also has to work out which scripts need executing: if you already have an existing database and you add some new scripts, only the new ones should run. These things are taken care of automatically by the second approach, and you don't need to worry about an already-executed script running again.
Hope this answers your concern. Happy coding!

New Flyway migrations break existing jOOQ generated code

I currently use jOOQ to generate Java code from my database, and Flyway to manage my binary (Java) migrations as well as my SQL migrations.
However, I run into problems when I modify existing tables. For example, if I drop a column in one migration and a past binary migration depended on that column, the old migration no longer compiles because the field no longer exists in the jOOQ generated code.
I know I could just comment out the body of the old migration, but that defeats the whole purpose of Flyway (or any database version manager) if I can't rerun my migrations, or at least makes it very tedious (run one migration, uncomment, run the next, regenerate jOOQ, etc.).
Is there a better way to approach this problem?
I'd argue this is a workflow problem.
You are effectively upgrading an API with each migration; expecting legacy consumers of that API to continue to work would be nothing short of miraculous.
jOOQ is a great tool, but using it in this context (to assist migrations) is certainly going to lead to trouble.
My suggestion would be to rethink your schema evolution strategy: use raw SQL, which comes naturally to Flyway, and leave jOOQ to assist your application exclusively.
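In concrete terms, each schema change then becomes a plain SQL file following Flyway's versioned naming convention, with no dependency on generated code. A hypothetical example (version number, table and column names invented):

-- src/main/resources/db/migration/V7__drop_legacy_column.sql
ALTER TABLE customer DROP COLUMN legacy_code;

Dropping the column this way never breaks compilation; you simply regenerate the jOOQ classes afterwards so the application code catches up.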

Without using hibernate.hbm2ddl.auto, how do I export all the initial schema into Flyway?

I am at the almost-ready stage of my Java EE development. Following the many recommendations NOT to use Hibernate's hbm2ddl.auto in production, I decided to remove it.
So now I have found Flyway, which seems great for future db changes and migrations, but I am stuck at the first step: I have many entities, and some entities inherit from base entities, which makes the CREATE statements very complex.
What is the best practice to create the first migration file?
Thanks!
If you've taken an "entities first" approach during development, you'll need to generate the initial schema in the same way for the first live deployment: this will produce the first creation script used by Flyway, and there may also need to be a second, associated script for populating reference data.
In a nutshell, the reasons for no longer being able to use hbm2ddl.auto after the first deployment are that create will destroy existing data and update isn't reliable enough to cover all types of schema changes (as it sounds like you may already know from this SO question).
Flyway is a very useful tool but it does require a level of discipline that may not have existed during development. When going forward from the initial release, database update scripts need to be produced for Flyway that are equivalent to the changes made to the entities since the last release. There are tools (e.g. various commercial products from Redgate) that may help here: These attempt to "diff" two schemas and generate schema and/or data update scripts for getting from database A to database B. But in my experience, none of them are perfect and they don't quite reach the holy grail of enabling a completely automated approach.
Arguably, the best way is an "as you go" manual approach to ensure that non-destructive update scripts are committed to source control whenever an entity change is made that affects the schema or reference data - but as already mentioned, this will require some discipline and/or documented processes for all team members to follow.
For the first migration file, you just need the current DDL of your database. There are many tools which can get this for you (such as the "copy DDL" option in the IntelliJ IDEA Database tool, or a GUI client from your database vendor).
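If you would rather derive that DDL from the entities themselves, the standard JPA 2.1 schema-generation properties can write the script to a file without executing anything against a live schema; the output can then be reviewed and saved as Flyway's first migration. A sketch, assuming a persistence unit named "app-unit" (the property names are standard JPA 2.1, but providers differ in the details, and the generated SQL usually needs manual cleanup):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.Persistence;

public class SchemaDump {
    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put("javax.persistence.schema-generation.scripts.action", "create");
        props.put("javax.persistence.schema-generation.scripts.create-target", "V1__init.sql");
        // Writes the CREATE statements for all mapped entities to the
        // target file; nothing is executed against the database itself.
        Persistence.generateSchema("app-unit", props);
    }
}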
I am not sure about Flyway, but there is an alternative: you can use the Hibernate Ant tasks to generate or update the schema.
Hope it helps.
If you build your project with Maven, you could use the Hibernate Maven plugin.

Play framework with JPA initial insertion

I use Play 2.3 with Hibernate.
On starting the application for the first time, I want some data inserted into the database as default values.
In my case I have an entity class "Studycourse". All tables are created through JPA on first run.
I use DB evolutions (1.sql) to insert the default data, e.g.:
INSERT INTO studycourse (id, title) VALUES (1, 'Computer Science');
This works when using the normal "activator run" command. But if I do "activator test" and start a simple integration test with inMemoryDatabase(), I get the following error:
[error] play - Table "STUDYCOURSE" not found; SQL statement: INSERT INTO studycourse (id, title) VALUES (1, 'Computer Science')
I guess that the initial JPA setup is not done in the in-memory DB.
Question: Is there a best practice on how to do this?
The integration test looks like:
import org.junit.Test;
import play.libs.F.Callback;
import play.test.TestBrowser;

import static org.fest.assertions.Assertions.assertThat;
import static play.test.Helpers.*;

public class IntegrationTest {

    @Test
    public void test() {
        running(testServer(3333, fakeApplication(inMemoryDatabase())), HTMLUNIT,
                new Callback<TestBrowser>() {
                    public void invoke(TestBrowser browser) {
                        browser.goTo("http://localhost:3333");
                        assertThat(browser.pageSource()).contains("Your new application is ready.");
                    }
                });
    }
}
Thanks in advance.
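One hedged possibility, since the error suggests the evolution runs before any JPA DDL exists in the in-memory database: make the evolution self-contained by letting 1.sql create the table it populates (column types guessed from the entity here):

# --- !Ups
CREATE TABLE studycourse (id BIGINT PRIMARY KEY, title VARCHAR(255));
INSERT INTO studycourse (id, title) VALUES (1, 'Computer Science');

# --- !Downs
DROP TABLE studycourse;

That removes the dependency on Hibernate's schema creation having run first, which is also closer to how a real migration tool would treat the schema.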
Your original question was essentially asking "How can I execute my JPA initialization steps in a test environment so that my in-memory database is populated when I run integration tests involving my database?".
My answer will not directly address that but it will summarize how we solve the same underlying issues you're trying to solve.
My interpretation of your objectives is that you want to:
Establish a good practice for database schema migration
Establish a common practice for database integration testing
Database Schema Migration
As I mentioned in my comment above, we use http://flywaydb.org for database schema migrations and it has been an outstanding tool. Flyway has an SBT plugin so you can run flywayClean and flywayMigrate right from activator to delete and re-initialize your database instantly.
Flyway supports sophisticated file name versions so that you can execute SQL scripts like v1.1.0.sql, v1.1.1.sql, and v1.2.0.sql. Flyway will also complain if you try to execute migration scripts that are not a pure improvement on the existing state of the database. This means we're using flyway to push database schema migrations to production, resting confident that this will fail if we've done something silly. Of course, we always take a DB backup right before the migration just to be safe.
Finally, Flyway will even let you execute java programs to populate the database in case you want to use service methods instead of just raw SQL.
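To give a flavor of that Java-based route: Flyway's Java migrations are plain classes picked up by naming convention. A sketch against the Flyway 3.x/4.x JdbcMigration interface (newer versions use BaseJavaMigration instead; the class name and table are invented):

package db.migration;

import java.sql.Connection;
import java.sql.PreparedStatement;
import org.flywaydb.core.api.migration.jdbc.JdbcMigration;

// Runs as version 2 because of the V2__ prefix, just like a SQL script.
public class V2__Insert_reference_data implements JdbcMigration {
    @Override
    public void migrate(Connection connection) throws Exception {
        try (PreparedStatement ps = connection.prepareStatement(
                "INSERT INTO studycourse (id, title) VALUES (?, ?)")) {
            ps.setLong(1, 1L);
            ps.setString(2, "Computer Science");
            ps.executeUpdate();
        }
    }
}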
Anyway, your choices here are basically Play evolutions, Flyway, or Liquibase.
In-Memory Database vs. Dev-Database
On this issue, I've seen two primary positions:
1. Never test on an in-memory database, because then your tests won't reveal the subtle differences between your in-memory database and your production database; or
2. Use an in-memory database for local testing, but at least have your build server use a dev database.
You can see, for example, the comments at the end of http://blog.jooq.org/2014/06/26/stop-unit-testing-database-code/.
Option #1 gives you higher speed overall, but delays the feedback time between writing bad code and getting a failing integration test.
Option #2 gives you slightly lower speed overall, but gives you immediate feedback on the real database.
As with most things in engineering, there is no "best" solution, just a set of tradeoffs which make the most sense for your team.
Choosing an ORM Layer
We initially began with Hibernate but eventually switched to http://jooq.org. See http://www.vertabelo.com/blog/technical-articles/jooq-a-really-nice-alternative-to-hibernate for a jOOQ-positive overview, and http://blog.jooq.org/2012/04/21/jooq-and-hibernate-a-discussion/ for a good discussion on the two.
Hibernate seemed attractive to us because it was so mature and so popular, but when we began running into classic SQL vs. Object-Oriented impedance mismatches like how to handle inheritance, Hibernate required a learning curve and some setup overhead.
We reasoned that, if we're going to incur that overhead at all, why not just use SQL directly to do the mappings? So we switched to jOOQ and have been able to write some very clean, elegant, and testable code. If you're not too far down the Hibernate path, I would encourage you to take a look at jOOQ.
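For a flavor of what jOOQ code looks like, a minimal sketch using the plain string-based API; with generated code the table and field references below become compile-checked classes instead (connection details and names invented):

import java.sql.Connection;
import java.sql.DriverManager;
import org.jooq.DSLContext;
import org.jooq.Record;
import org.jooq.Result;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

public class JooqExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/app", "app", "secret")) {
            DSLContext ctx = DSL.using(conn, SQLDialect.POSTGRES);
            // Composable, close-to-SQL query building; the result is a
            // regular iterable of records.
            Result<Record> courses = ctx.select()
                    .from(DSL.table("studycourse"))
                    .fetch();
            courses.forEach(System.out::println);
        }
    }
}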
If you're already deep into Hibernate, and it's working well for you, there's probably little value to switching.
Best Practices for Database Integration Testing
I wondered about this exact question and posted it at https://groups.google.com/forum/#!topic/jooq-user/GkBW5ZGdgwQ. Lukas, the author of jOOQ, responded with some general remarks.
At this point, we integration test most of our DAOs and service classes. Our tests run after flywayClean and flywayMigrate have been run, and each test is written to clean up after itself. The biggest issue is performance, which so far is not a problem, but may become one later.
I also posted on that topic and received a helpful answer. See https://groups.google.com/d/msg/play-framework/BgOCIgz_9q0/jBy8zxejPEkJ.
Disclaimer: we are close to launching our app but not yet running it in production, so others may have additional best practices to add.

How to upgrade database schema built with an ORM tool?

I'm looking for a general solution for upgrading database schema with ORM tools, like JPOX or Hibernate. How do you do it in your projects?
The first solution that comes to my mind is to create my own mechanism for upgrading databases, with SQL scripts doing all the work. But in this case I'll have to remember to create new scripts every time the object mappings are updated. And I'll still have to deal with low-level SQL queries, instead of just defining mappings and letting the ORM tools do all the work...
So the question is how to do it properly. Maybe some tools simplify this task (for example, I heard that Rails has such a mechanism built in); if so, please help me decide which ORM tool to choose for my next Java project.
LiquiBase is an interesting open source library for handling database refactorings (upgrades). I have not used it, but will definitely give it a try on my next project where I need to upgrade a db schema.
I don't see why ORM-generated schemas are any different from other DB schemas; the problem is the same. Assuming your ORM will spit out a generation script, you can use an external tool to do the diff.
I've not tried it, but Google came back with SQLCompare as one option; I'm sure there are others.
We hand-code SQL update scripts, and we tear down the schema and rebuild it, applying the update scripts, as part of our continuous build process. If any Hibernate mappings do not match the schema, the build fails.
You can check this feature comparison of some database schema upgrade tools.
A comparison of the number of questions tagged on Stack Overflow for some of those tools:
mybatis (1049 questions tagged)
Liquibase (663 questions tagged)
Flyway (400 questions tagged)
DBDeploy (24 questions tagged).
DbMaintain can also help here.
I think your best bet is to use an ORM tool that includes database migration, like SubSonic:
http://subsonicproject.com/2-1-pakala/subsonic-using-migrations/
We ended up making update scripts each time we changed the database. So there's a script from version 10 to 11, from 11 to 12, etc. We can then run any consecutive set of scripts to get from some existing version to the new version. We store the current version in the database so we can detect it upon startup.
Yes, this involves database-specific code; that is one of the main problems with Hibernate!
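A minimal sketch of that stepping logic, assuming a single-row schema_version table and script names like migrate-10-to-11.sql (all names hypothetical):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class SchemaUpgrader {
    static final int APP_SCHEMA_VERSION = 12; // target version compiled into the app

    public static void upgrade(Connection conn) throws Exception {
        try (Statement stmt = conn.createStatement()) {
            int current;
            try (ResultSet rs = stmt.executeQuery("SELECT version FROM schema_version")) {
                rs.next();
                current = rs.getInt(1);
            }
            // Apply each consecutive script, recording progress after every
            // step so an interrupted upgrade can resume where it stopped.
            for (int v = current; v < APP_SCHEMA_VERSION; v++) {
                runScript(conn, "migrate-" + v + "-to-" + (v + 1) + ".sql");
                stmt.executeUpdate("UPDATE schema_version SET version = " + (v + 1));
            }
        }
    }

    private static void runScript(Connection conn, String file) throws Exception {
        // Hypothetical helper: read the file and execute its statements,
        // e.g. with a simple script runner like the one sketched earlier.
    }
}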
When working with Hibernate, I use an installer class that runs from the command line and has options for creating the database schema, inserting base data, and dynamically updating the database schema using SchemaUpdate. I find it to be extremely useful. It also gives me a place to put one-off scripts that will be run when a new version is launched to, for example, populate a new field in an existing DB table.
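For reference, that SchemaUpdate step is only a couple of lines against the classic Hibernate 3.x/4.x API (later Hibernate versions changed this class considerably, so treat it as illustrative):

import org.hibernate.cfg.Configuration;
import org.hibernate.tool.hbm2ddl.SchemaUpdate;

public class SchemaUpdateRunner {
    public static void main(String[] args) {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
        // First flag: print the generated DDL; second flag: execute it
        // against the configured database.
        new SchemaUpdate(cfg).execute(true, true);
    }
}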
