What happens when changing ORMLite database structure within an app - java

I'm using an ORMLite database in my Android application, and now I want to change the whole structure of the database (renaming tables, adding/removing columns, changing relations, etc.).
The question is: could any conflicts arise on devices where a previous version of my app is installed? In other words, when the app is updated, does ORMLite leave any traces of the previous install that would conflict with the new one? So if I have a table named parent and I change its name to guardian, will I have two tables in the new release?
If the answer is no, why is there such a thing as a database version?
And if the answer is yes, how would I drop a table that no longer exists in my application? And can I just use the same class with a different table name annotation to override the previous table?

I have not used ORMLite specifically, but it's just an ORM, which means it won't decide on its own whether a table should be dropped based on some condition; that is something the client has to do explicitly based on their business rules. Android does provide a specific way to upgrade the current database schema without dropping existing tables: https://developer.android.com/reference/android/database/sqlite/SQLiteOpenHelper.html
But upgrading a database schema on SQLite has a lot of limitations, i.e. many operations are not supported, unlike in a full-blown DBMS. That's part of why SQLite is so light. Generally, during your development cycle, try to settle on a stable database schema as early as possible, one that only needs minor additions later (SQLite specifically does not support removing columns, among other things). Once you are in production and don't want to play with users' data, implementing upgrade logic is the best bet you have.
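Since SQLite can't drop a column directly (before version 3.35), the usual workaround during an upgrade is to recreate the table without the column and copy the rows over. A minimal sketch of building that statement sequence; the table and column names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class SqliteDropColumn {
    /**
     * Builds the classic SQLite workaround for DROP COLUMN:
     * create a new table without the unwanted column, copy the rows,
     * drop the old table, and rename the new one into place.
     */
    static List<String> dropColumnStatements(String table, List<String> keptColumns,
                                             String keptColumnDefs) {
        String cols = String.join(", ", keptColumns);
        List<String> stmts = new ArrayList<>();
        stmts.add("CREATE TABLE " + table + "_new (" + keptColumnDefs + ")");
        stmts.add("INSERT INTO " + table + "_new (" + cols + ") SELECT " + cols + " FROM " + table);
        stmts.add("DROP TABLE " + table);
        stmts.add("ALTER TABLE " + table + "_new RENAME TO " + table);
        return stmts;
    }

    public static void main(String[] args) {
        // Hypothetical table keeping only id and name after the upgrade.
        dropColumnStatements("parent", List.of("id", "name"),
                "id INTEGER PRIMARY KEY, name TEXT")
                .forEach(System.out::println);
    }
}
```

You would execute the resulting statements inside `onUpgrade()` in a single transaction, so a failed upgrade can't leave the table half-migrated.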
But if you still want to drop a table explicitly, I see there are APIs for that in ORMLite:
http://ormlite.com/javadoc/ormlite-core/com/j256/ormlite/table/TableUtils.html

Related

database independent data migration

My goal is to enable schema and data migration for an existing application.
This kind of question seems to have been asked many times, but with different requirements and circumstances than mine, I think.
Since I am inexperienced in this domain, allow me to lay out the architecture of the app and my assumptions first.
Architecture
The app is a multi-user, enterprise desktop application with a backend server that can persist to any major DB (MySQL, PostgreSQL, SQL Server, Oracle DB, etc.). It is assumed the DB is on-premise and maintained by our clients.
The tech stack used is a fairly common Hibernate+Spring+RMI/JMS-Combo.
Currently, migrations are done by the server in the following way:
On server start it checks for the latest expected schema version
If larger than the current version, start migration to next version until current==latest:
Create new database
Load (whole) latest schema (SQL script with a lot of CREATE TABLE ...)
Migrate data (in Java classes using 2 JDBC-Connections to old and new schema)
Load (all) latest constraints (SQL script with a lot of ALTER TABLE ...)
This migration is slow and forward-only, but it is simple. The problem is that, until now, the schema scripts and the queries in the data migrations have used MySQL syntax and features.
Note that by migrate data I mean: the backend server copies the data from the old schema to the new one, transforming it if necessary.
Also, the migration process starts automatically on our clients' premises. That means we only have control over the JDBC connection, but have no direct access to the database and no knowledge of the specific database being used (MySQL, SQL Server, ...).
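The forward-only loop described above can be sketched independently of any particular database: each step knows only how to take the schema from version n to n + 1. The `MigrationStep` interface and the version numbers here are illustrative, not the app's actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class MigrationRunner {
    /** A single forward migration from version n to n + 1. */
    interface MigrationStep {
        void apply(); // create new schema, copy/transform data, load constraints
    }

    /** Applies steps in order until current == latest; fails fast if one is missing. */
    static int migrate(int current, int latest, Map<Integer, MigrationStep> steps) {
        while (current < latest) {
            MigrationStep step = steps.get(current);
            if (step == null) {
                throw new IllegalStateException("No migration from version " + current);
            }
            step.apply();
            current++; // forward-only: there is no way back
        }
        return current;
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        Map<Integer, MigrationStep> steps = Map.of(
                1, () -> log.add("1->2"),
                2, () -> log.add("2->3"));
        int reached = migrate(1, 3, steps);
        System.out.println("reached version " + reached + " via " + log);
    }
}
```

In the real application each `apply()` would hold the two JDBC connections (old and new schema) mentioned above.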
The goal is to either replace or augment this migration scheme with a database independent one.
Assumptions and research
StackOverflow 1 2 3 4 5 6 7: Answers suggest using Hibernate's built-in feature. However, the docs state that this is not production-ready. Also, AFAICT, all answers are concerned with schema migration only.
Liquibase: Uses a custom DSL (in XML/JSON/YAML/etc) to allow for database independent schema migration only.
DBUnit: Uses custom XML-DSL to capture snapshots of states of databases. Can not recreate a snapshot of schema version 1 to version 2.
Flyway: In principle the same as Liquibase, but not database-independent, because plain SQL scripts are used for migrations.
jOOQ: A database-independent query DSL in Java on top of JDBC, comparable to the Criteria API but without the drawbacks of JPA. Should in principle allow for database-independent data migration; however, it does not help with schema migration.
JPA-Query languages like HQL, JPQL, Criteria API are not sufficient because
One cannot reference tables not mapped by the entity manager. E.g. join tables, metadata and audit tables.
A copy of all versions of the Entity classes needs to be kept around for the mapping.
Question
I realize, that as this question stands now, it will be dismissed as opinion-based.
However, I am not necessarily looking for specific solutions to this problem (I doubt a clear solution exists for such a complex problem space) but rather to validate my assumptions.
Namely, is it true, that
Liquibase and Flyway are mainly concerned with schema migration and data migration is left as an exercise for the reader?
in order for Flyway to support multiple, different databases, one needs to duplicate the migration scripts per database?
by and large, the problem of database independent data migration remains unresolved in enterprise Java?
Even if I were to combine Liquibase/Flyway with jOOQ, I do not see how to perform a data migration, because Liquibase/Flyway migrate databases in place. The old database gets destroyed, and with it the opportunity to transform the old data to the new schema.
Thanks for your attention!
Let's break it down a little bit. You're right that this is largely opinion-based, but here's what I've noticed in my experience.
Liquibase and Flyway are mainly concerned with schema migration and data migration is left as an exercise for the reader?
You can do data migration with Liquibase and Flyway; it's something I've done pretty often. Take the example where I want to split a User table into User and Address tables. I'd write a migration script, which is basically just an SQL file, to create the new Address table and then copy all the relevant data into it.
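To make the idea concrete, here is roughly what such a split migration's statements could look like, collected in Java only so they can be inspected; all table and column names (user, address, street, city) are invented for this example:

```java
import java.util.List;

public class SplitUserTableMigration {
    /**
     * Statements for a migration that moves address data out of a
     * user table into its own table (names are illustrative only).
     */
    static List<String> statements() {
        return List.of(
                "CREATE TABLE address ("
                        + "id INT PRIMARY KEY, "
                        + "user_id INT NOT NULL REFERENCES user(id), "
                        + "street VARCHAR(255), city VARCHAR(255))",
                "INSERT INTO address (id, user_id, street, city) "
                        + "SELECT id, id, street, city FROM user",
                "ALTER TABLE user DROP COLUMN street",
                "ALTER TABLE user DROP COLUMN city");
    }

    public static void main(String[] args) {
        statements().forEach(System.out::println);
    }
}
```

In Flyway this would simply live in one versioned .sql file; the point is that the data copy (the INSERT ... SELECT) is part of the same migration as the schema change.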
in order for Flyway to support multiple, different databases, one needs to duplicate the migrations scripts per database?
Possibly. Flyway and Liquibase are better thought of as database versioning tools: if my app needs version 10 of the database, these tools help me get to that point. Again, the migration scripts are just basic .sql files. If you're using some MySQL-specific functions, those will just go in the migration script, and they won't work on SQL Server.
by and large, the problem of database independent data migration remains unresolved in enterprise Java?
Eh, I'm not sure about this one. I agree it's a problem, but in practice it's not a huge one. For the past 8+ years I've only written ANSI SQL, which should be portable everywhere, so in theory we can lift those applications onto a different database. JPA and its various implementations help with those differences. Depending on how your project was built (say, an application that has all of its business logic in implementation-specific SQL functions), it's going to be a headache. If you're using the database for CRUD, and I'd argue that's all you should be using it for, then it's not a huge deal.
So all that said, I think you might have the wrong idea about Flyway and Liquibase. Like I said earlier, they aren't really 'migration tools' so much as database versioning tools. With an ordered list of specific SQL migration scripts, I can guarantee the state of my database at any version. I'm not sure these are tools I'd use to 'migrate' a legacy SQL Server based application into a Postgres based application.
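That "state of my database at any version" guarantee comes from nothing more than a total order over the scripts. A small sketch of ordering V-prefixed script names (Flyway's default `V<version>__<description>.sql` convention) and finding what still needs to run; the file names are made up:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class ScriptOrdering {
    /** Extracts the numeric version from names like "V10__add_audit.sql". */
    static int version(String scriptName) {
        int end = scriptName.indexOf("__");
        return Integer.parseInt(scriptName.substring(1, end));
    }

    /** Scripts newer than currentVersion, in the order they must run. */
    static List<String> pending(List<String> scripts, int currentVersion) {
        return scripts.stream()
                .filter(s -> version(s) > currentVersion)
                // numeric comparison, so V10 correctly sorts after V2
                .sorted(Comparator.comparingInt(ScriptOrdering::version))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> scripts = List.of("V10__audit.sql", "V1__init.sql", "V2__users.sql");
        System.out.println(pending(scripts, 1)); // everything after version 1, in order
    }
}
```

The real tools additionally record applied versions in a schema-history table, so the pending set is computed against the database itself rather than a passed-in number.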

Play + Ebean: Changes to the model + database

I'm currently using Java Play and persisting models through Ebean to MySQL. This is going to be a generic question: what I see is that whenever I make changes to a model (sometimes just adding a property), after applying the evolution script the existing data in the corresponding table gets truncated.
Since I love Play and I'm thinking about deploying my next project using it, this is an important question for me: is there a workaround to make model changes safely? Or does the behaviour I'm seeing only occur when running the application in development mode?
I can't find much about this subject elsewhere.
That's the common approach of Ebean: it doesn't truncate your tables, it just drops the whole DB and recreates it with the new DDL; see the answer to the other question for an explanation.
Note: In the meantime I found that a standalone approach, MyBatis Migrations, is a little more comfortable than Play's evolutions; in any case you still need to create the migrations manually (as with evolutions).

Best practices in database changes in web applications already deployed

I am trying to find a standard approach on the following problem I have.
I have a web application deployed in a container (specifically Tomcat) and it uses a database for its functionality (in my case it is an SQL database in file mode, so there is no back-end SQL server).
What I am interested in is the best way to handle the various changes to my database across newer versions of my web application as the database schema changes (new tables, new columns, removal of columns, etc.).
That is, how can I handle the case of someone upgrading to a newer version of my web application while retaining the old data from the old database, in the best (automatic? seamless? least manual?) manner.
I think this is not a rare case, so I believe there is some best practice I can follow here.
Can anyone help me on this?
Recently we discovered Flyway - it works pretty well and embraces versioning of database schema changes (plain SQL scripts).
Obviously this topic is much broader. For instance, you need to be extra careful when both the old and the new version of the application should run flawlessly on the updated schema. You should also consider a rollback strategy (for when the upgrade doesn't work well or you want to downgrade your application); sometimes it is as simple as removing added objects (tables, columns), but when your script removes something, the rollback should restore it.
First of all, you'd want to keep changes to the database and especially to existing columns as low as possible.
Second, if you need to rename a column or change some constraints (be careful not to get more restrictive because there might be some data that would not match), use ALTER TABLE statements. This way the data in the columns is preserved unless you drop columns. :)
Additionally, provide default values for new columns that have constraints (like not null) because there might already be datasets in that table that need to be updated in order not to violate those constraints. (Alternatively add the column, run some code to fill the column and then add the constraint.)
Third, since there seem to be multiple users of your application and they might have different versions, the easiest way for providing updates is to provide for sequential updates to the next higher version. Thus if someone wants to update from version 2 to 5, you'd first do the 2->3 update, then 3->4 and finally 4->5.
This might take longer to run but should reduce complexity, since you'd not have to worry about all possible combinations (e.g. 2->4, 2->5, 3->5, etc.)
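The sequential scheme above boils down to computing the chain of single-step hops first and then running each in turn. A tiny sketch, with the version numbers taken from the example:

```java
import java.util.ArrayList;
import java.util.List;

public class UpgradePath {
    /** The individual upgrades needed to get from one version to another, e.g. 2 -> 5. */
    static List<String> path(int from, int to) {
        List<String> hops = new ArrayList<>();
        for (int v = from; v < to; v++) {
            hops.add(v + "->" + (v + 1)); // each hop is one tested, self-contained upgrade
        }
        return hops;
    }

    public static void main(String[] args) {
        System.out.println(path(2, 5)); // [2->3, 3->4, 4->5]
    }
}
```

Only n single-step upgrades ever need to be written and tested, instead of one script per possible version pair.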

Strategies for dealing with constantly changing requirements for MySQL schemas?

I'm using Hibernate EntityManager and Hibernate Annotations for ORM in a very early stage project. The project needs to launch soon, but the specs are changing constantly and I am concerned that the system will be launched and live data will be collected, and then the specs will change again and I will be in a situation where I need to change the database schema.
How can I set things up in order to minimize the impact of this? Are there any open source projects that deal with this kind of migration? Can Hibernate do this automatically (without wiping the database)?
Your advice is much appreciated.
It's more a functional or organizational problem than a technical one. No tool will automatically guess how to migrate data from one schema to another. You'd better learn how to write stored procedures in order to migrate your data.
You'll probably need to disable constraints, create temporary tables and columns, copy lots of data, and then delete the temporary tables and columns and re-enable constraints to migrate your data.
Once in maintenance mode, every new feature that modifies the schema should also come with the script allowing to migrate from the current schema and data in production to the new one.
No system can possibly create data migration scripts automatically from just the original and the final schema; there just isn't enough information.
Consider, for example, a new column. Should it just contain the default value? Or a value calculated from other fields/tables?
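When the new column is derived from existing fields, the migration author has to spell the rule out; no tool can infer it. A hypothetical backfill, with the rows modeled as plain maps and the column names (full_name, first_name, last_name) invented for illustration:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BackfillNewColumn {
    /**
     * Fills the new full_name column from existing fields.
     * This rule exists only in the developer's head; a schema diff alone
     * could never tell a tool to concatenate these two columns.
     */
    static void backfill(List<Map<String, String>> rows) {
        for (Map<String, String> row : rows) {
            row.put("full_name", row.get("first_name") + " " + row.get("last_name"));
        }
    }

    public static void main(String[] args) {
        Map<String, String> row = new LinkedHashMap<>();
        row.put("first_name", "Ada");
        row.put("last_name", "Lovelace");
        backfill(List.of(row));
        System.out.println(row.get("full_name")); // Ada Lovelace
    }
}
```

In a real migration the same logic would be a single UPDATE statement (or a loop over a JDBC ResultSet), but the decision of what to put in the column is the same.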
There is a good book about refactoring databases: http://www.amazon.com/Refactoring-Databases-Evolutionary-Addison-Wesley-Signature/dp/0321774515/ref=sr_1_1?ie=UTF8&qid=1300140045&sr=8-1
But there is little to no tool support for this kind of stuff.
I think the best thing you can do in advance:
Don't let anybody access the database but your application
If something else absolutely must access the DB directly, give it a separate set of views specifically for that purpose. This allows you to change your table structure while keeping at least the structure that other systems see stable.
Have tons of tests. I just posted an article which (with the upcoming 2nd and 3rd parts) might help a little with this: http://blog.schauderhaft.de/2011/03/13/testing-databases-with-junit-and-hibernate-part-1-one-to-rule-them/
Hibernate can update the database schema to match the entity model while keeping the data in the database. So do that, and write migration code in Java which sets or removes data relationships.
This works, and we have done it multiple times. But of course, try to follow a flexible development process: build what you know for sure first, then re-evaluate the requirements (Scrum, etc.).
In your case, I would recommend a NoSQL database. I don't have much experience with that kind of database, so I can't recommend a specific implementation; you may want to check this too.

Dynamic table name in Hibernate

I am developing an application in Java that uses Hibernate to connect to MySQL database.
My application manages students of different batches. If a student joined in 2010, they are in the 2010 batch, so whenever the administrators of the application create a new batch, my application has to create new tables for that batch. While the schema is much like that of the old tables already in the database, the table name changes. How do I accomplish this using Hibernate?
How do I create the XML files and the classes required dynamically?
If I understood your problem correctly, I think you want to check out Hibernate Shards. Note that this is an advanced feature, unsupported and not really tested (nor maintained), so use it at your own risk. You may want to pay special attention to the "Shard Selection Strategy" section:
http://docs.jboss.org/hibernate/stable/shards/reference/en/html_single/#shards-strategy-shardselection
From the documentation:
We expect many applications will want to implement attribute-based sharding, so for our example application that stores weather reports let's shard reports by the continents on which the reports originate
But as the others said: think twice before splitting your data. Do it only if you expect really large volumes of data; a couple of million records is not really that much.
