I use the auto-generate DDL feature of Hibernate to create my tables and columns.
However, I end up deleting many columns from my entities, but those columns remain in the database.
Is there a script that could identify those unmapped columns and tables so I could manually delete them?
Right now, I do it manually but it's becoming problematic as the database grows.
I am afraid there is no automated way to do that.
Alternatively, you can drop the whole table before building and deploying the project and let Hibernate re-create it with only the columns you have defined.
The recommended approach is:
We should use the hibernate.hbm2ddl.auto property only in the development environment, not in production.
If hibernate.hbm2ddl.auto=create-drop, Hibernate will drop and create the schema on every deployment, which keeps the schema consistent with the entities but discards all existing data each time.
If hibernate.hbm2ddl.auto=update, Hibernate will only update the database with the changes made to the model/entity classes,
but it should not be trusted for the production environment/database.
hibernate.hbm2ddl.auto should be left at its default (or set to validate) for the production environment, so that no database changes are applied by Hibernate in production.
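For illustration, here is a minimal sketch of setting the property programmatically with the plain Hibernate bootstrap API (the class name is made up, and validate is shown only as one safe production choice; in Spring the same key simply goes into the JPA properties):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateBootstrap {
    public static SessionFactory build(boolean production) {
        // reads hibernate.cfg.xml from the classpath
        Configuration cfg = new Configuration().configure();
        // development: let Hibernate alter the schema to match the entities;
        // production: only validate the existing schema, never change it
        cfg.setProperty("hibernate.hbm2ddl.auto", production ? "validate" : "update");
        return cfg.buildSessionFactory();
    }
}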
Related
I am using a PostgreSQL database with JPA/Hibernate. When I add a constraint to a column, e.g. "nullable=false", the database column is not altered to reflect this. Deleting the table and rerunning the application does the job.
Can this be achieved with JPA/Hibernate mechanisms only, WITHOUT deleting entries or the table? Something like "try to alter the table and refuse to do so on inconsistent data"? In my application.properties, I've set
hibernate.hbm2ddl.auto=update
Any other setting seems to be deleting data and/or tables.
A working solution would be to run an ALTER TABLE script and add the constraint annotation accordingly, but I'm not really fond of this.
Hibernate's schema update will not add constraints to an existing table. Think about it: what if the data already in the table does not satisfy the constraint? The whole application would crash. So Hibernate does not even try to apply the potentially harmful operation.
In general, this is the wrong way to manage a database schema with Hibernate. Automatic schema generation is appropriate for learning or MVP purposes, but not for production. Managing the schema with hand-applied SQL scripts is also a bad idea for many reasons. You should use a dedicated schema management tool: Liquibase (my preference) or Flyway.
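If you go the migration route, here is a hedged sketch of the "clean up the data first, then add the constraint" approach as a Flyway Java-based migration; the table and column names are invented for the example, and Liquibase can do the equivalent with an addNotNullConstraint change:

package db.migration;

import java.sql.Statement;

import org.flywaydb.core.api.migration.BaseJavaMigration;
import org.flywaydb.core.api.migration.Context;

// runs once, is versioned, and fails fast if the data cannot satisfy the constraint
public class V2__Make_email_not_null extends BaseJavaMigration {
    @Override
    public void migrate(Context context) throws Exception {
        try (Statement stmt = context.getConnection().createStatement()) {
            // back-fill the rows that would violate the new constraint first
            stmt.executeUpdate("UPDATE customer SET email = '' WHERE email IS NULL");
            // now the column can safely be made NOT NULL (PostgreSQL syntax)
            stmt.executeUpdate("ALTER TABLE customer ALTER COLUMN email SET NOT NULL");
        }
    }
}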
I am a little confused about why most programmers use annotation-based setup for database table constraints. For example:
@Column(unique=true, nullable=false)
Why do we need that if (as I have heard) real projects mostly use SQL migrations, so you can create these constraints at table creation, like CREATE TABLE ... name varchar UNIQUE NOT NULL.
Do I need to set it up in both ways, or is it enough to do it in SQL?
And how often are SQL migrations (Flyway, Liquibase) used in projects?
Additionally, Hibernate generates constraints with unreadable names in the database, whereas in SQL you can give the constraints understandable names.
You can choose whether or not to let Hibernate manage your schema. If yes, the database schema will be created or updated based on changes to the annotation mappings.
I personally would not let Hibernate manage my database schema, as I want to know exactly what is going on with schema changes. The Hibernate documentation suggests the same:
Although the automatic schema generation is very useful for testing
and prototyping purposes, in a production environment, it’s much more
flexible to manage the schema using incremental migration scripts.
Flyway and Liquibase are exactly this kind of incremental migration script tool.
Do I need to set it up in both ways, or is it enough to do it in SQL? And how often are SQL migrations (Flyway, Liquibase) used in projects?
If you do not use the automatic schema generation feature, you don't need to specify unique in @Column, which only has meaning for automatic schema generation.
For nullable, it depends on the hibernate.check_nullability setting. If it is turned on and you set @Column(nullable=false), Hibernate will check at the application level that this column cannot be null, without asking the DB to check it. But even if you do not set it, the database constraint (assuming you created a non-null constraint for it in the DB) will eventually check it and not allow you to save a null value.
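As a small illustration of the annotations in question (entity and field names are made up; import from javax.persistence on older stacks, jakarta.persistence on newer ones):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Account {

    @Id
    @GeneratedValue
    private Long id;

    // unique only influences the DDL Hibernate generates during schema export;
    // nullable = false is additionally checked in the application layer
    // when hibernate.check_nullability is enabled
    @Column(unique = true, nullable = false)
    private String email;
}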
I have some columns which have to be loaded based on some configuration. One production database doesn't have those columns and my other production box does have them. The same piece of code should work on both boxes. What configuration is needed to load the columns dynamically?
In general,
for production we should not rely on Hibernate to create the DDL. We need to prepare a patch so that we can apply that patch to production,
or we can use a database migration tool such as Liquibase.
For development and deployment of my WAR application I use the drop-and-create functionality: basically erasing everything from the database and then automatically recreating all the necessary tables and fields according to my @Entity classes.
Obviously, for production the drop-and-create functionality is out of the question. How should I create the database tables and fields?
The nice thing about @Entity classes is that, thanks to JPQL and the use of EntityManager, all the database queries are generated, so the WAR application stays database-independent. If I now had to create the queries by hand in SQL and let the application execute them, I would have to decide which SQL dialect they are in (e.g. MySQL, Oracle, SQL Server, ...). Is there a way to create the tables database-independently? Is there a way to run structural database updates database-independently as well (e.g. from database version 1 to database version 2)? Like altering field or table names, adding tables, dropping tables, etc.?
Thank you @Qwerky for mentioning Liquibase. This absolutely is a solution and perfect for my case, as I won't have to worry about versioning anymore. Liquibase is very easy to understand and can be picked up in minutes.
For anyone looking for database versioning / schema management:
Liquibase
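To give a rough idea of how it can be wired in, here is a sketch of running a Liquibase changelog programmatically at startup (the changelog path and class name are illustrative; in many projects Liquibase is instead run from a Maven/Gradle plugin or by Spring Boot):

import java.sql.Connection;

import javax.sql.DataSource;

import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class SchemaMigrator {
    public void migrate(DataSource dataSource) throws Exception {
        try (Connection conn = dataSource.getConnection()) {
            Database database = DatabaseFactory.getInstance()
                    .findCorrectDatabaseImplementation(new JdbcConnection(conn));
            // the changelog describes tables and columns in a database-independent way;
            // Liquibase translates it into the dialect of the connected database
            Liquibase liquibase = new Liquibase("db/changelog-master.xml",
                    new ClassLoaderResourceAccessor(), database);
            liquibase.update("");   // empty string means "no contexts"
        }
    }
}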
I use JPA annotations (Hibernate implementation) to initialize my DB schema, and I followed the article DYNAMIC DATASOURCE ROUTING to implement a dynamic datasource routing class.
However, I have two databases (mapped to 2 data sources). I set the first data source as the defaultTargetDataSource, then start my application. When my application tries to access the 2nd data source, it tells me the table doesn't exist. It seems only the tables for the default data source get created, but not those for the other data sources.
Is there any way to create the schema in all the databases?
PS: I'm using AbstractRoutingDataSource to implement my own DB shards.
I guess that you are using the Hibernate configuration:
spring:
  jpa:
    hibernate:
      ddl-auto: update
to reflect the entity changes to the database schema. This works fine as long as we use a single data source that is configured to be connected at startup.
However, if you have multiple data sources it is not possible to use this feature for all of them. The general approach with AbstractRoutingDataSource is not to bind to a single data source at startup but to select one at runtime.
If you configure a primary data source, the DDL will only be applied to that one, since Hibernate applies this feature at startup; the remaining databases will not be migrated.
To reflect the changes to all of your databases you can use a database migration tool such as Flyway or Liquibase.
Flyway uses plain SQL and is pretty easy to configure and use.
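As a rough sketch of what that could look like for the routing setup (the map is assumed to be the same target-data-source map you register on the AbstractRoutingDataSource; class and location names are illustrative):

import java.util.Map;

import javax.sql.DataSource;

import org.flywaydb.core.Flyway;

public class ShardMigrator {

    // run the same migrations against every shard registered as a routing target
    public void migrateAll(Map<Object, DataSource> targetDataSources) {
        for (DataSource ds : targetDataSources.values()) {
            Flyway.configure()
                  .dataSource(ds)
                  .locations("classpath:db/migration")
                  .load()
                  .migrate();
        }
    }
}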