I have the following Liquibase configuration for the database in a Spring Boot application.
Initially, these YAML scripts were executed when the application started and the database was created. Now I want to update the datatype of one column. Do I need to update the existing create-tables.YAML with the new column configuration, or do I need to create another file with a different name and add an entry for it in the db.changelog-master.yaml file?
Please suggest. Thanks
You have to create a different changelog, and it needs to be added to the master file.
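For the concrete case in the question (changing one column's datatype), the new changelog could look something like the sketch below. It uses the YAML format the question already uses; the file name, changeset id, author, table, column, and new datatype are all placeholders, not values from your project:

# db.changelog-002-alter-column.yaml (hypothetical file name)
databaseChangeLog:
  - changeSet:
      id: alter-column-datatype-001
      author: your_name
      changes:
        - modifyDataType:
            tableName: your_table
            columnName: your_column
            newDataType: varchar(255)

Then you add one more entry under the existing databaseChangeLog key in db.changelog-master.yaml:

  - include:
      file: db.changelog-002-alter-column.yaml
      relativeToChangelogFile: true

Be aware that on some databases modifyDataType does not preserve things like defaults or constraints on the column, so it is worth checking the SQL Liquibase generates before running it against production.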
Best Practices Using Liquibase
Organizing Changelogs: Create a master changelog file that does not contain actual changeSets but includes other changelogs (only YAML, JSON, and XML changelogs support include; SQL does not). Doing so allows us to organize our changeSets in different changelog files. Every time we add a new feature to the application that requires a database change, we can create a new changelog file, add it to version control, and include it in the master changelog.
One Change per ChangeSet: Have only one change per changeSet, as this allows easier rollback in case of a failure in applying the changeSet.
Don’t Modify a ChangeSet: Never modify a changeSet once it has been executed. Instead, add a new changeSet if modifications are needed for the change that has been applied by an existing changeSet. Liquibase keeps track of the checksums of the changeSets that it already executed. If an already run changeSet is modified, Liquibase by default will fail to run that changeSet again, and it will not proceed with the execution of other changeSets.
Sample Setup
Currently, I am using the SQL format for the Liquibase changelog, but you can use XML, YAML, or any other supported format.
test_schema.sql
--liquibase formatted sql
--changeset test_user:test_schema splitStatements:true endDelimiter:; dbms:postgresql runOnChange:true runInTransaction:true context:dev, qa, prod logicalFilePath:schema/test_schema.sql
--preconditions onFail:HALT onError:HALT
--comment test_schema is added
CREATE TABLE test_data
(
    test_id            bigint PRIMARY KEY,
    test_data          character varying(100),
    created_time       timestamp with time zone,
    last_modified_time timestamp with time zone
);
changelog-master.xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.3.xsd">

    <include file="schemas/test_schema.sql" relativeToChangelogFile="true"/>

</databaseChangeLog>
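If your master changelog is YAML rather than XML, as in the question (db.changelog-master.yaml), the equivalent include would look roughly like this, assuming the same file layout:

databaseChangeLog:
  - include:
      file: schemas/test_schema.sql
      relativeToChangelogFile: true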
Directory setup (screenshot): changelog-master.xml sits alongside a schemas/ directory that contains test_schema.sql.
Source: https://reflectoring.io/database-migration-spring-boot-liquibase/
As you can read here,
Liquibase provides version control for your database
The main reason Liquibase became so popular is that it allows a form of version control to be applied to databases. For this to happen, you need to always record your database changes in changelogs.
So an example Liquibase project will have:
create-initial-tables-changelog.xml
update-schema-10-5-22.changelog.xml
update-data-15-5-22.changelog.xml
update-schema-10-6-22.changelog.xml
update-data-15-6-22.changelog.xml
Each changelog file may be named after a representation of the main changes it contains (dates are used here only for simplicity).
The user can then use a version control system (like Git), and when, for example, they check out a past commit (say commit A from 15/05/2022), they are able to view the database in the version that existed on 15/05/2022, since Liquibase will execute only the scripts that existed in that commit, namely create-initial-tables-changelog.xml, update-schema-10-5-22.changelog.xml, and update-data-15-5-22.changelog.xml.
Also, all changelog files need to be referenced in the master changelog file, because that is the file Liquibase uses when asked to execute scripts against the database.
The master changelog file works as a configuration file that will hold all the references to all your other changelogs. Your master changelog file must be in an XML, YAML, or JSON format.
From the docs.
Having said all the above, I can now answer your question:
so do I need to update the existing create-tables.YAML with column configuration or need to create another file with a different name
Probably the main reason you use Liquibase is to have version control of your database. If you wish to respect that, then you must create a different changelog file, which will represent a new version of the database, distinct from the initial one.
Also, if for example you had the initial create-tables-changelog.xml and then three more changelog files that applied further changes, you could not afford to make changes only in create-tables-changelog.xml, since that would risk breaking the execution of the three subsequent files changelog1.xml, changelog2.xml, and changelog3.xml.
Related
Is it possible to use Liquibase just to check the consistency of the database?
We have several Java application modules using the same database. We decided that only one of the modules is responsible for executing database migrations, while the other modules (several batch jobs) include the scripts as a dependency. For the batch-job modules we want to prevent migration of the database schema, but we need to be sure that the code base uses the same version as the database.
Is it possible to configure Liquibase in a way that performs the validation but not the migration?
We want to try this approach because the migration of two modules starting at the same time caused conflicts that prevented the application from starting.
It's possible to use Liquibase for DB schema validation; however, it wasn't really designed for this.
So, for instance, if you want to always check whether a certain table exists, you can do the following:
<changeSet author="author_name" id="changeset_id" runAlways="true">
    <preConditions onFail="HALT">
        <tableExists tableName="foo_bar"/>
    </preConditions>
    <comment>Table "foo_bar" exists.</comment>
</changeSet>
This changeSet doesn't do anything except check that the foo_bar table exists.
The runAlways="true" attribute tells Liquibase to execute the changeSet every time the application starts.
onFail="HALT" will throw an error if the foo_bar table doesn't exist, i.e. if the preconditions weren't met.
You can use the liquibase status command to check that all changesets listed in the changelog have been applied, and add the --verbose flag to that to see which changesets have not been applied.
This does not ensure that no drift has happened though - it is certainly possible for someone to manually alter the database and make changes that cause the status command to be inaccurate. As long as you are generally confident that schema changes are always made through liquibase, the status command should be sufficient.
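For example, from the command line (connection details here are placeholders, and the exact flag spelling differs slightly between Liquibase 3.x and 4.x):

liquibase --changeLogFile=changelog-master.xml \
          --url=jdbc:postgresql://localhost:5432/mydb \
          --username=test_user \
          --password=secret \
          status --verbose

This lists every changeset in the changelog that has not yet been recorded in the DATABASECHANGELOG table of the target database.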
Hello,
I am working on changing my Java application from using Postgres to an embedded database. I would like the application to deploy with an initial set of data in the database. In the past, during installation, I executed an SQL script to fully generate the schema and insert the data into my tables.
Ideally (because I don't really want to work out how to connect to the embedded database to generate it), I want to let JPA create my schema the first time, and when it does, I then want to be able to run my SQL to insert the data.
My search has turned up the obvious Hibernate and JPA properties that allow running an SQL script.
Firstly, I found that when using "hibernate.hbm2ddl.auto" you can define an import.sql file. This made me very happy for a day, until I realised it only works with create and not with update. My application, when using Postgres, had this set to update; what I would really like is for it to know whether it had to create the schema, and if it did, then run import.sql. No joy, though.
I then moved on to using "javax.persistence.schema-generation.database.action" set to "create". I figured using the JPA specification was probably wiser anyway, so I defined "javax.persistence.sql-load-script-source". The spec says for "create":
The provider will create the database artifacts on application deployment. The artifacts will remain unchanged after application redeployment.
This led me to believe it would do exactly what I wanted and only create the tables "on application deployment". However, when I ran my tests using this, each test (creating a new Spring context) tried to create all the tables again and obviously failed, which made me realise that "application deployment" didn't mean what I thought it meant (wishful thinking). Now I realise that JPA doesn't even seem to have an equivalent of Hibernate's "update" setting, so it's always going to generate the tables?
What I want is to have my tables and data generated when you first spin up the app, and for subsequent executions to know the data is there and use it. I am assuming it's too much to hope that this exists out of the box, but I'm sure this must be a common requirement. So my question is: what is the generally recommended way to let JPA create my schema while being able to insert some data into a DB that persists between executions?
The answer is Flyway. It is a database migration library, and if you are using Spring Boot it is seamlessly integrated. With regular Spring you have to create a bean which gets a reference to the connection pool, creates a connection, and performs the migration.
Flyway creates a table to keep track of which scripts have already been applied to the database, and the scripts are simply part of the resources.
We normally use JPA to generate the initial script. This script becomes V1__initial.sql; if we need to add some data we can add V2__addUsers.sql, V3__addCustomers.sql, etc.
Later, when we need to rename columns or add additional tables, we simply add new scripts as part of the WAR file. When the application is loaded, Flyway looks at its internal table to see the current version and then applies any new scripts to bring the database up to the desired version.
In Spring, the code would look something like this (this uses Flyway's pre-5.x setter API; newer versions configure the instance through the fluent Flyway.configure() builder):

import javax.sql.DataSource;
import org.flywaydb.core.Flyway;
import org.flywaydb.core.api.MigrationInfo;
import org.flywaydb.core.api.MigrationState;

private void performFlywayMigration(DataSource dataSource) {
    Flyway flyway = new Flyway();
    // where the versioned SQL scripts live on the classpath (src/main/resources/db/migration)
    flyway.setLocations("db/migration");
    flyway.setDataSource(dataSource);

    log.debug("Starting database migration.");
    flyway.migrate();
    log.debug("Database migration completed.");

    // warn if the database schema is ahead of the scripts shipped with this build
    MigrationInfo current = flyway.info().current();
    if (current.getState() == MigrationState.FUTURE_SUCCESS) {
        log.warn("The database schema is version " + current.getVersion()
                + ", this application expects version " + flyway.getBaselineVersion().getVersion());
    }
}
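If you are on Spring Boot 2.x, you typically do not need code like the above at all: with flyway-core on the classpath, Boot runs the migration automatically at startup against the auto-configured DataSource. The relevant standard property keys look like this (values are illustrative):

# application.properties
spring.flyway.enabled=true
spring.flyway.locations=classpath:db/migration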
In general, you should not use JPA to create tables directly, because you sometimes need to modify the scripts. For instance, on Sybase varchar(255) means 255 bytes, so if you are storing 2- or 3-byte Unicode chars you need more space; the JPA implementation does not account for that (last time I checked).
I have a GlassFish application which creates its DB schema with Liquibase. I have migrated the same application to Spring Boot. I did not drop the DB schema. When I deploy the Spring application and the Liquibase scripts run, I get
java.sql.SQLSyntaxErrorException: ORA-00955: name is already used by an existing object
when executing the changeset for creating one of the tables.
I should specify that there is no change in the Liquibase scripts, and the database changelog lock is acquired successfully.
Shouldn't it skip all the table-creation steps? I am plugging the same application into the same DB. Have you encountered this situation before?
UPDATE: is it possible that this is related to the MD5 sum stored in the DATABASECHANGELOG table? That is, the MD5 computed by the new application doesn't match the one stored by the old one, so the scripts are triggered again, causing the obvious exception?
Many thanks
I don't think you have a checksum difference; that would cause a different error message. What I think is likely is that the DATABASECHANGELOG table has a different changelog path for the changes than what is being reported by Liquibase.
Changesets are identified by three things: the changeset id, the author, and the path. When Liquibase is deciding whether a changeset from a changelog should be deployed to a particular database, it looks at the DATABASECHANGELOG table, retrieves that information, compares it with the information in the changelog file, and doesn't try to deploy anything that matches up. In this case, I think it detects differences in the path and tries to re-deploy the change.
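You can check this directly by looking at what is recorded for the failing changeset. A query along these lines against the standard DATABASECHANGELOG columns shows the stored path, which you can compare with the path Liquibase logs when it tries to run the changeset:

SELECT id, author, filename, md5sum
FROM databasechangelog
ORDER BY orderexecuted;

If the paths differ between the GlassFish and Spring Boot deployments, setting a logicalFilePath on the changesets (as in the formatted SQL example earlier on this page) is the usual way to keep the identifier stable across packaging changes.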
We have a system made in Java using a Postgres database.
This database changes often; once a week or so we update it. These changes are to the structure of the DB (DDL), usually functions and fields, to add new functionality.
For the changes in the DB we usually use Navicat, as follows:
1- We make the change to the structure of the DB using Navicat, and we copy the SQL it gives us into an XML file, one entry for each change we make.
2- When we have to update the DB in production, we check the files, identified by a version number, and update the DB.
3- Then we repeat this for each DB installed (30 in total).
The problem we are having is that since the whole process is manual, it is very easy to forget to copy a change into the XML, so the script does not work, or even worse, the system fails when it needs that change.
Therefore we are looking for a way to automate this task, and we came up with the following idea:
1- We make the changes in Navicat.
2- We configure Postgres to LOG the DDL changes into a CSV file.
3- Later, we read the CSV file and copy the changes into the XML to update the production DB.
The problem we are having is that the LOG saves all attempts to change the structure, including failed ones, so if we use that script to update, it will fail too.
Is there some way to save only successful DDL changes in the Postgres log?
Is there a script or application to extract the DDL changes and turn them into a script automatically?
Is there a better way to automate this process?
There are many answers to the questions above :-) I have managed rapidly changing databases using a number of schemes. One way to do it is to maintain a master database (like you have). Use dbtoyaml to create a YAML description of the database, then use yamltodb on all of the (30) targets, which will do everything necessary to make the target databases look exactly like the master. I have used this software for about 6 months and it is fantastic: Pyrseas. -g
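A rough sketch of that workflow, assuming the Pyrseas tools are installed (the database names here are placeholders, and exact options may vary by version, so check the Pyrseas docs):

# capture the master database's structure as a YAML spec
dbtoyaml master_db > db-spec.yaml
# generate the SQL needed to make a target database match the spec
yamltodb target_db db-spec.yaml > changes.sql

You would then review changes.sql and apply it to each of the 30 targets.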
I'm new to Liquibase (I'm using Spring Liquibase)... Can someone please explain whether it's possible to have a manual DB change reflected in the Liquibase changeset file? I mean, if we have a table A at first and then I remove a column from it, how do I get that automatically updated in the changeset file?
Thanks
In other words, I'm looking for a Liquibase diff operation in Spring Liquibase or from Java.
Thanks again
The key concept of Liquibase is that you don't make manual DB changes outside of the Liquibase changelog (or at least only when testing). You should add a new changeset that reflects the change you want made. If you're confused about what the changeset should look like, read the docs.
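That said, on the diff part of the question: Liquibase does ship a diff command, but it compares two live databases (or a database against a snapshot) rather than watching for manual edits. A rough CLI sketch, with placeholder URLs and credentials (flag spelling varies between Liquibase 3.x and 4.x):

liquibase --url=jdbc:postgresql://localhost/changed_db \
          --username=user --password=secret \
          diff \
          --referenceUrl=jdbc:postgresql://localhost/reference_db \
          --referenceUsername=user --referencePassword=secret

If you want the differences emitted as changesets rather than as a report, the related diffChangeLog command does that.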