I'm currently using Java Play and persisting models through Ebean to MySQL. This is a fairly generic question: what I see is that whenever I make changes to a model – sometimes just adding a property – the existing data in the corresponding table gets wiped after I apply the evolution script.
Since I love Play and I'm thinking about deploying my next project with it, this is an important question for me – is there a way to make model changes without losing the existing data? Or does this behaviour only occur when running the application in development mode?
I can't find much about this subject elsewhere.
That's the standard behaviour of Ebean – it doesn't truncate your tables, it drops the whole DB and recreates it with the new DDL; see the answer to the other question for an explanation.
Note: in the meantime I found that a standalone approach, MyBatis Migrations, is a little more comfortable than Play's evolutions; either way, you still need to write the migrations manually (just as with evolutions).
Related
We have a web-based application in the dev phase where we use Spring 5, JPA (Hibernate) and PostgreSQL 9.4.
Until now we have been using a single instance of the PostgreSQL db for our work. Basically, we don't have any schema generation script, and we simply update the db whenever we need a new table, column etc. For Hibernate, we generate the classes from the db.
Now that we have some amount of test data, each change to the db brings a lot of trouble and confusion. We realized that we need to create and start maintaining a schema generation file along with some scripts that generate test data.
After some research, we see two options:
Create two *.sql files. The first will contain the schema generation script, the second the SQL that creates the test data. Then add a small module with a class which executes the *.sql files using plain JDBC (see the sketch after this list). Basically, we continue developing, and whenever we make some changes we quickly wipe -> create -> populate the db. This approach looks the most appealing to us at this point: it's quick, simple and robust.
The second is to set up a tool which can help with that, e.g. Liquibase.
This approach also looks good in terms of versioning support and other capabilities. However, we are not in production yet; we are in an active development phase. We don't have many devs making db changes, and we are not sure how frequently we will update the db schema in production – it could be rare.
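For the first option, a minimal sketch of the kind of helper class we have in mind – the JDBC URL, credentials and file names are hypothetical, and the naive split on ';' only works for simple DDL/DML (it breaks on functions or procedures):

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DevDbReset {

    public static void main(String[] args) throws Exception {
        // Hypothetical connection details for the local dev database.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/devdb", "dev", "dev")) {
            runScript(conn, "schema.sql");     // wipe + create: drops and recreates all tables
            runScript(conn, "test-data.sql");  // populate: inserts the test fixtures
        }
    }

    // Naively splits the file on ';' and executes each statement in order.
    private static void runScript(Connection conn, String path) throws Exception {
        String sql = new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8);
        try (Statement st = conn.createStatement()) {
            for (String statement : sql.split(";")) {
                if (!statement.trim().isEmpty()) {
                    st.execute(statement);
                }
            }
        }
    }
}
```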
The question is: would the first approach be bad practice, and would applying the second one give the most benefits and be worth the effort?
Would appreciate any comments or any other suggestions!
The first approach is NOT bad practice in itself, but it is becoming one considering the growth of tools like Liquibase.
If you are in the early or middle stages of development, go ahead with Liquibase, along with Spring Data. If you are in the closing stages of development, think about whether you really need it.
I would suggest the second approach, as it will automatically pick up new scripts as you add them and execute them on startup. Moreover, when tools like Liquibase and Flyway are available, why reinvent the wheel?
The second approach also removes the unnecessary code for manually executing the *.sql files – code that needs testing of its own and can be error-prone when updated.
Moreover, with the first approach the manual code also has to work out which scripts need to be executed: if you already have an existing database and you add some new scripts, only those new scripts should run. The second approach takes care of this automatically, so you don't need to worry about an already-executed script being executed again.
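As a concrete illustration for a plain Spring (non-Boot) setup, here is a minimal sketch of wiring Liquibase to run pending changelogs at startup – the changelog path and configuration class name are hypothetical:

```java
import javax.sql.DataSource;

import liquibase.integration.spring.SpringLiquibase;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LiquibaseConfig {

    // SpringLiquibase runs every not-yet-applied changeset when the context starts
    // and records what it applied in the DATABASECHANGELOG table.
    @Bean
    public SpringLiquibase liquibase(DataSource dataSource) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        // Hypothetical path to the master changelog on the classpath.
        liquibase.setChangeLog("classpath:db/changelog/db.changelog-master.xml");
        return liquibase;
    }
}
```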
Hope this answers your concern. Happy coding
I am at the almost-ready stage of my JEE development. Following the many recommendations NOT to use Hibernate's hbm2ddl.auto in production, I decided to remove it.
So now I have found Flyway, which seems great for future db changes and migrations, but I am stuck at the first step: I have many entities, and some entities inherit from base entities. This makes the CREATE statements very complex.
What is the best practice to create the first migration file?
Thanks!
If you've taken an "entities first" approach during development you'll need to generate the initial schema in the same way for the first live deployment: This will produce the first creation script used by Flyway and there may also need to be a second associated script for populating reference data.
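One way to do that generation is with Hibernate's schema export tooling. A minimal sketch, assuming Hibernate 5.2+ (the API differs between versions) and a hypothetical entity class – it writes the DDL to a file that can become Flyway's first versioned migration without touching any database:

```java
import java.util.EnumSet;

import org.hibernate.boot.Metadata;
import org.hibernate.boot.MetadataSources;
import org.hibernate.boot.registry.StandardServiceRegistry;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.tool.hbm2ddl.SchemaExport;
import org.hibernate.tool.schema.TargetType;

public class InitialSchemaGenerator {

    public static void main(String[] args) {
        // Reads hibernate.cfg.xml / hibernate.properties from the classpath.
        StandardServiceRegistry registry = new StandardServiceRegistryBuilder()
                .configure()
                .build();
        try {
            Metadata metadata = new MetadataSources(registry)
                    .addAnnotatedClass(com.example.MyEntity.class) // hypothetical entity; add all of yours
                    .buildMetadata();

            SchemaExport export = new SchemaExport();
            export.setFormat(true);
            export.setDelimiter(";");
            export.setOutputFile("V1__initial_schema.sql"); // becomes Flyway's first migration
            // TargetType.SCRIPT only: write the DDL to the file, don't execute it anywhere.
            export.createOnly(EnumSet.of(TargetType.SCRIPT), metadata);
        } finally {
            StandardServiceRegistryBuilder.destroy(registry);
        }
    }
}
```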
In a nutshell, the reasons for no longer being able to use hbm2ddl.auto after the first deployment are that create will destroy existing data and update isn't reliable enough to cover all types of schema changes (as it sounds like you may already know from this SO question).
Flyway is a very useful tool but it does require a level of discipline that may not have existed during development. When going forward from the initial release, database update scripts need to be produced for Flyway that are equivalent to the changes made to the entities since the last release. There are tools (e.g. various commercial products from Redgate) that may help here: These attempt to "diff" two schemas and generate schema and/or data update scripts for getting from database A to database B. But in my experience, none of them are perfect and they don't quite reach the holy grail of enabling a completely automated approach.
Arguably, the best way is an "as you go" manual approach to ensure that non-destructive update scripts are committed to source control whenever an entity change is made that affects the schema or reference data - but as already mentioned, this will require some discipline and/or documented processes for all team members to follow.
For the first migration file, you just need the current ddl of your database. There are many tools which can get this for you (such as the "copy ddl" option in the IntelliJ IDEA Database tool or a GUI client from your database vendor).
I am not sure about Flyway, but there is an alternative: you can use Ant tasks for Hibernate to generate or update the schema.
Hope it helps.
If you build your project with Maven, you could use the Hibernate Maven plugin.
Currently I'm working on a small web service using Dropwizard, connecting to a PostgreSQL DB using Hibernate (the built-in package in Dropwizard) and with a bit of Migrations (also from Dropwizard).
Coming from a .NET environment, I'm used to a code-first/centric approach.
Currently I'm looking into generating the migrations.xml from the current state of my entity classes, based on the JPA annotations on them.
I feel this is a case somebody might have already resolved.
Is there a way to automatically update the migrations.xml based on the classes I'm writing?
It is possible. See the liquibase-hibernate plugin at https://github.com/liquibase/liquibase-hibernate/wiki.
Make sure you look at the generated migrations.xml changes before applying them because, like any diff-based process, the schema transformation may not be what you intended – and that matters once you have data. For example, if you rename a class it will generate a drop + create rather than a rename, so the result is a valid schema, but you lose data.
I'm using Hibernate EntityManager and Hibernate Annotations for ORM in a very early-stage project. The project needs to launch soon, but the specs are changing constantly, and I am concerned that after the system is launched and live data has been collected, the specs will change again and I will be in a situation where I need to change the database schema.
How can I set things up in order to minimize the impact of this? Are there any open source projects that deal with this kind of migration? Can Hibernate do this automatically (without wiping the database)?
Your advice is much appreciated.
It's more a functional or organizational problem than a technical one. No tool can automatically guess how to migrate data from one schema to another. You'd better learn how to write stored procedures in order to migrate your data.
You'll probably need to disable constraints, create temporary tables and columns, copy lots of data, and then drop the temporary tables and columns and re-enable the constraints to migrate your data.
Once in maintenance mode, every new feature that modifies the schema should also come with the script that migrates the current production schema and data to the new one.
No system can possibly create data migration scripts automatically from just the original and the final schema. There just isn't enough information.
Consider, for example, a new column. Should it just contain the default value? Or a value calculated from other fields/tables?
There is a good book about refactoring databases: http://www.amazon.com/Refactoring-Databases-Evolutionary-Addison-Wesley-Signature/dp/0321774515/ref=sr_1_1?ie=UTF8&qid=1300140045&sr=8-1
But there is little to no tool support for this kind of stuff.
I think the best things you can do in advance are:
Don't let anybody access the database but your application
If something else absolutely must access the db directly, give it a separate set of views specifically for that purpose. This allows you to change your table structure while keeping the structure that other systems see stable.
Have tons of tests. I just posted an article which (with the upcoming 2nd and 3rd parts) might help a little with this: http://blog.schauderhaft.de/2011/03/13/testing-databases-with-junit-and-hibernate-part-1-one-to-rule-them/
Hibernate can update the database schema to match the entity model without touching the data that is already in the database. So do that, and write migration code in Java which sets or removes the data relationships that the schema update can't fill in.
This works, and we have done it multiple times. But of course, try to follow a flexible development process: build what you know for sure first, then re-evaluate the requirements – Scrum etc.
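To make the second part concrete, here is a minimal sketch of such a one-off migration, assuming hibernate.hbm2ddl.auto=update has already added the new column and using hypothetical Invoice/Customer entities and a hypothetical persistence unit name:

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Run once after the schema update: backfills a relationship that the
// schema "update" created as a nullable column but could not populate.
public class BackfillInvoiceCustomers {

    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("myUnit"); // hypothetical unit
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            Customer fallback = em.find(Customer.class, 1L); // hypothetical default customer
            int updated = em.createQuery(
                    "UPDATE Invoice i SET i.customer = :c WHERE i.customer IS NULL")
                    .setParameter("c", fallback)
                    .executeUpdate();
            em.getTransaction().commit();
            System.out.println("Backfilled " + updated + " invoices");
        } finally {
            em.close();
            emf.close();
        }
    }
}
```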
In your case I would also consider a NoSQL database. I don't have much experience with that kind of database, so I can't recommend a specific implementation, but you may want to check this option too.
I'm working on some database migration code in Java. I'm also using a factory pattern so I can use different kinds of databases, and each kind of database I'm using implements a common interface.
What I would like to do is have a migration check that is internal to the class and runs some database schema update code automatically. The actual update is pretty straightforward (I check the schema version in a table and compare it against a constant in my app to decide whether to migrate, and between which schema versions).
To make this automatic, I was thinking the check should live inside (or be called from) the constructor. OK, fair enough, that's simple. My problem is that I don't want the check to run every single time I instantiate a database object (it runs a query, so running it on every construction is not efficient). So maybe this should be a static class method? I guess my question is: what is a good design pattern for this type of problem? There ought to be a clean way to ensure the migration check runs only once OR is super-efficient.
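The best I've come up with so far is a small once-per-JVM guard that the factory (or each constructor) calls; the names here are hypothetical and this is just a sketch of the idea:

```java
// Ensures the schema-version query and migration run at most once per JVM,
// no matter how many database objects are constructed.
public final class MigrationGuard {

    private static boolean done = false;

    private MigrationGuard() { }

    // Synchronized so concurrent constructors block until the first migration
    // finishes; afterwards the call is just a cheap flag check.
    public static synchronized void ensureMigrated(Runnable migration) {
        if (!done) {
            migration.run();  // e.g. read schema_version, compare, apply update scripts
            done = true;
        }
    }
}
```

The factory would call something like MigrationGuard.ensureMigrated(() -> db.checkAndUpgradeSchema()) before handing out instances, so only the first construction pays for the version query.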
Have a look at Liquibase.
Here's an IBM developerWorks article with a nice walk-through: http://www.ibm.com/developerworks/java/library/j-ap08058/index.html
Flyway fits your needs perfectly. It supports multiple databases, compares the schema version with the available migrations on the classpath and upgrades the database accordingly.
You can embed it in your application and have it run once on startup as described in the Flyway docs.
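A minimal sketch of that startup hook – the package and fluent API here are from recent Flyway versions (older releases use com.googlecode.flyway and plain setters), and the connection details are placeholders:

```java
import org.flywaydb.core.Flyway;

public class StartupMigration {

    public static void main(String[] args) {
        // Configure against the application's database (placeholder credentials).
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:mysql://localhost:3306/app", "app", "secret")
                .load();

        // Applies any pending migrations found on the classpath (db/migration by default),
        // comparing them against the schema history table.
        flyway.migrate();
    }
}
```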
Note: Flyway also comes with a Maven plugin and the ability to clean an existing schema in case you messed things up in development.
[Disclaimer: I'm one of Flyway's developers]
I've been using the iBatis SQL Mapper and really like it. The next version, iBatis 3.0, has schema migration support. This is still in beta, but I'm planning on using it when it gets closer to a release candidate.