I have a GlassFish application which creates its DB schema with Liquibase. I have migrated the same application to Spring Boot. I did not drop the DB schema. When I deploy the Spring application and the Liquibase scripts run, I get
java.sql.SQLSyntaxErrorException: ORA-00955: name is already used by an existing object
when executing the changeset for creating one of the tables.
I should mention that there is no change in the Liquibase scripts, and that the database changelog lock is acquired successfully.
Shouldn't it skip all the table creation steps? I am plugging the same application into the same DB. Have you encountered this situation before?
UPDATE: is it possible that this is related to the MD5 sum stored in the changelog table? That is, the MD5 computed by the new application doesn't match the one computed by the old one, so the scripts are triggered, causing the obvious exception?
Many thanks
I don't think you have a checksum difference - that would cause a different error message. What I think is likely is that the DATABASECHANGELOG table has a different changelog path recorded for the changes than the one Liquibase is now reporting.
Changesets are identified by three things: the changeset id, the author, and the path. When Liquibase is deciding whether a changeset from a changelog should be deployed to a particular database, it looks at the DATABASECHANGELOG table, retrieves that information, compares it with the information in the changelog file, and doesn't try to deploy anything that matches up. In this case, I think it detects differences in the path and tries to re-deploy the change.
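To confirm this, you can compare the paths recorded in the tracking table with the changelog path your Spring Boot application reports at startup. A diagnostic query along these lines shows what is stored (column names are those of Liquibase's standard DATABASECHANGELOG table; adjust the table name if you customized it):

```sql
-- Inspect the identifying triple Liquibase stored for each deployed changeset
SELECT id, author, filename, md5sum
FROM databasechangelog
ORDER BY dateexecuted;
```

If the FILENAME values differ from the path the new application uses (for example a classpath-relative path versus the old absolute one), Liquibase treats those changesets as new. Setting a logicalFilePath in the changelog is a common way to pin the path so both applications record the same value.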
Related
I have the following liquibase configuration for the database in the spring boot application.
Initially, these YAML scripts were executed when the application started and the database was created. Now I want to update the datatype of one column. Do I need to update the existing create-tables.YAML with the new column configuration, or should I create another file with a different name and add an entry for it in the "db.changelog-master.yaml" file?
Please suggest, Thanks
You have to create a different changelog file and add it to the master file.
Best Practices Using Liquibase
Organizing Changelogs: Create a master changelog file that does not contain actual changeSets but includes other changelogs (only YAML, JSON, and XML changelogs support include; SQL changelogs do not). Doing so allows us to organize our changeSets in different changelog files. Every time we add a new feature to the application that requires a database change, we can create a new changelog file, add it to version control, and include it in the master changelog.
One Change per ChangeSet: Have only one change per changeSet, as this allows easier rollback in case of a failure in applying the changeSet.
Don’t Modify a ChangeSet: Never modify a changeSet once it has been executed. Instead, add a new changeSet if modifications are needed for the change that has been applied by an existing changeSet. Liquibase keeps track of the checksums of the changeSets that it already executed. If an already run changeSet is modified, Liquibase by default will fail to run that changeSet again, and it will not proceed with the execution of other changeSets.
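Following these practices, the datatype change from the question would go into a new changelog file rather than into the already-executed create-tables.YAML. A minimal sketch in YAML (the table, column, and type names here are placeholders, not from the question):

```yaml
databaseChangeLog:
  - changeSet:
      # new changeset instead of editing one that has already run
      id: alter-some-column-type
      author: your_name
      changes:
        - modifyDataType:
            tableName: some_table
            columnName: some_column
            newDataType: varchar(500)
```

Save it under a new name and add a corresponding include entry for it in db.changelog-master.yaml, after the existing includes.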
Sample Setup
Currently I am using the SQL-format Liquibase changelog, but you can use XML, YAML, or any other supported format.
test_schema.sql
--liquibase formatted sql
--changeset test_user:test_schema splitStatements:true endDelimiter:; dbms:postgresql runOnChange:true runInTransaction:true context:dev, qa, prod logicalFilePath:schema/test_schema.sql
--preconditions onFail:HALT onError:HALT
--comment test_schema is added
CREATE TABLE test_data
(
test_id bigint PRIMARY KEY,
test_data character varying(100),
created_time timestamp with time zone,
last_modified_time timestamp with time zone
);
changelog-master.xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.3.xsd">
<include file="schemas/test_schema.sql" relativeToChangelogFile="true"/>
</databaseChangeLog>
directory setup
Source: https://reflectoring.io/database-migration-spring-boot-liquibase/
As you can read here,
Liquibase provides version control for your database
The main reason Liquibase became so popular is that it allowed a form of version control to be applied to databases.
For this to happen, you always need to record your database changes in changelogs.
So an example Liquibase project might contain:
create-initial-tables-changelog.xml
update-schema-10-5-22.changelog.xml
update-data-15-5-22.changelog.xml
update-schema-10-6-22.changelog.xml
update-data-15-6-22.changelog.xml
Each changelog file may be named after a representation of the main changes it contains (dates are used here only for simplicity).
Then the user can use a version control system (like Git). When, for example, the user checks out a past commit (say commit A from 15/05/2022), he is able to view the database as it existed on 15/05/2022, since Liquibase will execute only the scripts that existed in that commit, namely create-initial-tables-changelog.xml, update-schema-10-5-22.changelog.xml and update-data-15-5-22.changelog.xml.
Also, all changelog files need to be referenced in the master changelog file, because the master file is what Liquibase uses when asked to execute scripts against the database.
The master changelog file works as a configuration file that will hold
all the references to all your other changelogs. Your master changelog
file must be in an XML, YAML, or JSON format.
From the docs.
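For the example files listed above, such a master changelog could look like the following XML sketch (file names are the illustrative ones from the list; the schema version is an assumption):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.3.xsd">
    <!-- Included in order; Liquibase runs only the changesets not yet recorded -->
    <include file="create-initial-tables-changelog.xml" relativeToChangelogFile="true"/>
    <include file="update-schema-10-5-22.changelog.xml" relativeToChangelogFile="true"/>
    <include file="update-data-15-5-22.changelog.xml" relativeToChangelogFile="true"/>
    <include file="update-schema-10-6-22.changelog.xml" relativeToChangelogFile="true"/>
    <include file="update-data-15-6-22.changelog.xml" relativeToChangelogFile="true"/>
</databaseChangeLog>
```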
Having said all the above, I can now answer your question:
So do I need to update the existing create-tables.YAML with column
configuration or need to create another file with a different name
Probably the main reason you use Liquibase is to have version control of your database. If you want to preserve that, you must create a different changelog file, which will be a snapshot of a new database version, distinct from the initial one.
Also, if for example you had the initial create-tables-changelog.xml and then three more changelog files that applied changes to the database, you could not afford to make changes only in create-tables-changelog.xml, since this would risk breaking the execution of the three subsequent files changelog1.xml, changelog2.xml, changelog3.xml.
We use Liquibase to specify the database layout and its changes. In development we use Liquibase integrated in our Java application to perform the update if necessary.
Because our application does not have the rights to alter the schema in production, liquibase does not run in production. Instead we use liquibase to generate the SQL scripts. These scripts are then executed manually before our application is deployed.
We would like to make sure that the database layout matches to the changelog that corresponds to the application. The SQL scripts create the DATABASECHANGELOG table and insert the rows like the update command of liquibase would do, so the information about the applied changesets is stored in the database.
However, I could not find a suitable Liquibase method that only checks whether the application's changesets and the database layout/DATABASECHANGELOG are equal. This method must not attempt to fix anything; it should only return true/false. Is there such a method available in the Liquibase Java API?
After digging through the liquibase source code, I found the method that I was looking for:
liquibase.listUnrunChangeSets(null, null);
This is exactly what I wanted, and it seems to be working fine.
Is it possible to use Liquibase just to check the consistency of the database?
We have several java application modules using the same database. We decided that only one of the modules is responsible for the execution of database migrations, while the other modules (several batch jobs) include the scripts as a dependency. For the batch job modules we want to prevent the migration of the database schema, but we need to be sure that the code base uses the same version as the database.
Is it possible to configure liquibase in a way to perform the validation but not the migration?
We want to try this approach because the migration of two modules starting at the same time caused conflicts that prevented the application from starting.
It's possible to use Liquibase for DB schema validation, although it wasn't quite designed for this.
So, for instance, if you want to always check whether a certain table exists, you can do the following:
<changeSet author="author_name" id="changeset_id" runAlways="true">
<preConditions onFail="HALT">
<tableExists tableName="foo_bar"/>
</preConditions>
<comment>Table "foo_bar" exists.</comment>
</changeSet>
This changeSet doesn't do anything except check that the foo_bar table exists.
The runAlways="true" attribute tells Liquibase to execute the changeSet every time the application starts.
onFail="HALT" throws an error if the foo_bar table doesn't exist, i.e. if the preconditions weren't met.
You can use the liquibase status command to check that all changesets listed in the changelog have been applied, and add the --verbose flag to that to see which changesets have not been applied.
This does not ensure that no drift has happened though - it is certainly possible for someone to manually alter the database and make changes that cause the status command to be inaccurate. As long as you are generally confident that schema changes are always made through liquibase, the status command should be sufficient.
I have a constraint for my production system that all SQL changes must be executed manually by a DBA for security purposes. Consequently, I want to use Liquibase to generate the SQL, and have the DBA execute it.
However, on application startup in production, I would like to configure Liquibase to ensure that all changesets have been executed and have the proper signatures. If any of the changesets have an invalid signature or have not been executed, I would like Liquibase to throw an exception (which I can then handle in my startup sequence). Under no circumstances would I want Liquibase to update the DB when run in this environment.
In other environments, I would like to leave it to Liquibase to run in default configuration - that is validate that existing changesets have not been modified and execute any missing changesets.
Does Liquibase support this kind of configuration? I've looked through the liquibase.configuration.GlobalConfiguration class, but do not see any config parameters that would provide this config.
You don't specify how you run Liquibase from your application, so it is a bit hard to say exactly. I think you will want to use two different commands - one for production, and one for all other environments. In most environments, you use the update command. In production, you would need to use the status command which returns either a count of the number of undeployed changesets or a list of undeployed changesets.
I have created a Liquibase-CDI addon that supports this functionality. It is based on the liquibase-cdi extension, but uses the CDI observer pattern instead. It can be found on github at https://github.com/benze/liquibase-cdi
Hello,
I am working on changing my Java application from using Postgres to an embedded database. I would like the application to deploy with an initial set of data in the database. In the past, during installation, I have executed an SQL script to fully generate the schema and insert the data into my tables.
Ideally (because I don't really want to work out how to connect to the embedded database to generate it), I want to let JPA create my schema for the first time, and when it does, I then want to be able to run my SQL to insert the data.
My search has turned up the obvious Hibernate and JPA properties that allow running an SQL script.
Firstly, I found that when using "hibernate.hbm2ddl.auto" you can define an import.sql file. This made me very happy for a day, until I realised it only works with create and not with update. My application, when using Postgres, had this set to update. What I would really like is for it to know whether it had to create the schema and, if it did, then run the import.sql. No joy though.
I then moved on to "javax.persistence.schema-generation.database.action" set to "create". I figured using the JPA specification was probably wiser anyway, so I also defined "javax.persistence.sql-load-script-source". The spec says for "create":
The provider will create the database artifacts on application
deployment. The artifacts will remain unchanged after application
redeployment.
This led me to believe it would do exactly what I wanted: create the tables only "on application deployment". However, when I ran my tests using this, each test (creating a new Spring context) tried to create all the tables again and obviously failed. This made me realise that "application deployment" didn't mean what I thought it meant (wishful thinking). I now realise that JPA doesn't even seem to have an equivalent of Hibernate's "update" setting, so it's always going to generate the tables?
What I want is for my tables and data to be generated when the app first spins up, and for subsequent executions to know the data is there and use it. I assume it's too much to hope that this exists, but I'm sure this must be a common requirement. So my question is: what is the generally recommended way to let JPA create my schema while still being able to insert some data into a DB that persists between executions?
The answer is Flyway. It is a database migration library, and if you are using Spring Boot it is seamlessly integrated; with regular Spring you have to create a bean which gets a reference to the connection pool, creates a connection, and performs the migration.
Flyway creates a table so it can keep track of which scripts have already been applied to the database, and the scripts are simply part of the resources.
We normally use JPA to generate the initial script. This script becomes V1__initial.sql; if we need to add some data we can add V2__addUsers.sql, V3__addCustomers.sql, etc.
Later, when we need to rename columns or add additional tables, we simply add new scripts as part of the WAR file. When the application is loaded, Flyway looks at its internal table to see the current version, and then applies any new scripts to bring the database up to the desired version.
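For example, a data script like the hypothetical V2__addUsers.sql mentioned above could be as simple as the following sketch (the users table and its columns are placeholders, not part of the original answer):

```sql
-- V2__addUsers.sql: seed data, applied exactly once and recorded by Flyway
INSERT INTO users (id, username) VALUES (1, 'admin');
INSERT INTO users (id, username) VALUES (2, 'guest');
```

Flyway picks the script up from the configured location by its version prefix (V2), so the file name itself carries the ordering.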
In Spring the code would look like this
private void performFlywayMigration(DataSource dataSource) {
    // Pre-Flyway-5 API; newer versions configure via Flyway.configure()...load()
    Flyway flyway = new Flyway();
    flyway.setLocations("db/migration"); // classpath folder holding the V*__*.sql scripts
    flyway.setDataSource(dataSource);
    log.debug("Starting database migration.");
    flyway.migrate();
    log.debug("Database migration completed.");
    // Warn if the database already contains migrations newer than those packaged here
    MigrationInfo current = flyway.info().current();
    if (current != null && current.getState() == MigrationState.FUTURE_SUCCESS) {
        log.warn("The database schema is at version " + current.getVersion()
                + ", which is newer than the scripts shipped with this application.");
    }
}
In general you should not use JPA to create tables directly, because you sometimes need to modify the scripts. For instance, on Sybase varchar(255) means 255 bytes, so if you are storing 2- or 3-byte Unicode characters you need more space; the JPA implementation does not account for that (last time I checked).
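As an illustration of that last point, a hand-written migration script can size such a column explicitly for multi-byte data in a way generated DDL would not (the table name and sizes below are illustrative only):

```sql
-- Reserve 3 bytes per character on databases where varchar(n) counts bytes
CREATE TABLE customers (
    customer_name varchar(765) -- room for 255 characters at up to 3 bytes each
);
```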