My requirement is as follows:
I run a demo.sql file using Liquibase; it consists of 10 SQL statements.
Currently, if an error is encountered at the 5th statement, Liquibase completely rolls back all the changes.
My requirement is to continue until the script finishes and commit the remaining statements that are valid.
Any solution for that?
You could also separate it out into 10 changesets, e.g.
<changeSet author="liquibase-docs" id="sql-example" failOnError="false">
<sql>SELECT * from one_table;</sql>
</changeSet>
<changeSet author="liquibase-docs" id="sql-example2" failOnError="false">
<sql>SELECT * from another_table;</sql>
</changeSet>
You should add preConditions and rollback to your Liquibase script.
From https://www.liquibase.org/documentation/changes/sql_file.html:
<changeSet author="liquibase-docs" id="sqlFile-example">
<preConditions onFail="MARK_RAN">
<!-- your preconditions here -->
</preConditions>
<sqlFile dbms="h2, oracle"
encoding="utf8"
endDelimiter="\nGO"
path="my/path/file.sql"
relativeToChangelogFile="true"
splitStatements="true"
stripComments="true"/>
<rollback>
<!-- your rollback here -->
</rollback>
</changeSet>
If you want to roll back specifically on the 5th statement in demo.sql, then perhaps you can split demo.sql into two files and execute them in separate changeSets: the first will include statements 1-5, and the second will include statements 6-10.
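For example, a minimal sketch of that split (the file names demo_part1.sql and demo_part2.sql are hypothetical):
<changeSet author="liquibase-docs" id="demo-statements-1-5">
    <sqlFile path="demo_part1.sql" relativeToChangelogFile="true" splitStatements="true"/>
    <rollback>
        <!-- statements that undo 1-5 -->
    </rollback>
</changeSet>
<changeSet author="liquibase-docs" id="demo-statements-6-10" failOnError="false">
    <sqlFile path="demo_part2.sql" relativeToChangelogFile="true" splitStatements="true"/>
</changeSet>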
I am developing a Spring Boot app with Hibernate and Liquibase on a PostgreSQL DB. When I start the Spring Boot app, the changelog is updated in the DB and the log says that the script ran successfully, but the table is still in the schema.
This is my db.changelog.xml:
<changeSet author="stimpson" id="deletetablehans">
<sqlFile dbms="postgre" encoding="UTF-8" endDelimiter=";"
path="sql/00003_deleteTableHans.sql" relativeToChangelogFile="true"
splitStatements="true" stripComments="true" />
</changeSet>
This is my script (deletetablehans.sql in the sql folder):
--liquibase formatted sql
--changeset stimpson:deleteTableHans
DROP TABLE HANS;
commit;
;
This is part of my logfile:
2021-01-25 15:43:50,438 INFO liquibase.lockservice : Changelog-Protokoll erfolgreich gesperrt. (German for "Successfully acquired change log lock.")
2021-01-25 15:43:50,686 INFO liquibase.changelog : Reading from public.databasechangelog
2021-01-25 15:43:50,704 INFO liquibase.changelog : ChangeSet db/dbchangelog.xml::deletetablehans::stimpson ran successfully in 0ms
2021-01-25 15:43:50,709 INFO liquibase.lockservice : Successfully released change log lock
I do not care why the log language changes, but when I look at the database, I see that the table named Hans is still there. Why? I tried with an explicit commit and with the delimiter on its own line, but I do not understand the outcome.
This is my liquibase.properties file:
changeLogFile=src/main/resources/db/dbchangelog.xml
url=jdbc:postgresql://localhost:5432/padsyhw3
username=dbuser
password=dbpass
driver=org.postgresql.Driver
This is the only data row in the databasechangelog table in DB:
deletetablehans stimpson db/dbchangelog.xml 2021-01-25 15:49:05 1 EXECUTED 8:90e2fa99c6beeace580e429bd2bf9ae3 sqlFile 4.2.2
I have a working solution to your problem.
I wonder if the case difference between deletetablehans and deleteTableHans could have an impact on the execution of the changeset.
According to the Liquibase list of supported databases, the dbms value for PostgreSQL should be postgresql, while you used postgre.
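If that is the problem, a corrected version of your changeset would only change the dbms value (everything else as in the question):
<changeSet author="stimpson" id="deletetablehans">
    <sqlFile dbms="postgresql" encoding="UTF-8" endDelimiter=";"
        path="sql/00003_deleteTableHans.sql" relativeToChangelogFile="true"
        splitStatements="true" stripComments="true" />
</changeSet>
Note that the original changeset is already recorded as EXECUTED in databasechangelog, so you may need to delete that row (or use a new changeset id) before the corrected version will run.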
Do you have any specific reason to use a changeset inside an SQL file to do the drop? I can see at least two alternatives if no one can offer a valid solution.
Alternative 1: Use sql instead of sqlFile
<changeSet author="stimpson" id="deletetablehans">
<sql dbms="postgresql">
DROP TABLE HANS;
</sql>
</changeSet>
Alternative 2: Use liquibase dropTable
<changeSet author="stimpson" id="deletetablehans">
<dropTable tableName="HANS" />
</changeSet>
Using the following changeset in Liquibase to create a table with a foreign key is possible and works:
<changeSet author="cibn" context="initialSchema" id="initialSchema-edited-1.0.4">
<createTable tableName="prices">
<column name="articleId" type="String">
<constraints nullable="false" foreignKeyName="fk_articles_articleId" references="articles(articleId)"/>
</column>
...
</changeSet>
However, the addForeignKeyConstraint change after creation of the initial schema is not supported.
https://www.liquibase.org/documentation/changes/add_foreign_key_constraint.html
Why? And could this be changed?
I believe that's because ADD CONSTRAINT is not supported by SQLite's ALTER TABLE, and that is exactly what Liquibase runs during the addForeignKeyConstraint change.
Here's the documentation, SQL Features That SQLite Does Not Implement:
Only the RENAME TABLE, ADD COLUMN, and RENAME COLUMN variants of the ALTER TABLE command are supported. Other kinds of ALTER TABLE operations such as DROP COLUMN, ALTER COLUMN, ADD CONSTRAINT, and so forth are omitted.
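On SQLite, then, the only route is the one the question already uses: declare the constraint inline when the table is created. A complete, minimal version of that changeset (the varchar(36) type is an assumption; the question used the non-standard type="String"):
<changeSet author="cibn" context="initialSchema" id="initialSchema-fk-sketch">
    <createTable tableName="prices">
        <column name="articleId" type="varchar(36)">
            <constraints nullable="false" foreignKeyName="fk_articles_articleId" references="articles(articleId)"/>
        </column>
    </createTable>
</changeSet>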
I'm working with a PostgreSQL database, and I want to overwrite an existing comment on a column of a table using Liquibase. So I have:
mytable (column1 int) --This is a comment
I know that I can do it in a SQL native way, like this:
<changeSet author="myuser" id="123456">
<sql dbms="postgresql">
COMMENT ON COLUMN mytable.column1 IS 'This is my new comment';
</sql>
</changeSet>
Is there any way to make this change without relying on a native mechanism?
There is a special change type, setColumnRemarks, for adding a remark to an existing column:
<changeSet author="myuser" id="123456">
<setColumnRemarks
columnName="column1"
remarks="This is my new comment"
tableName="mytable"/>
</changeSet>
You can use the renameColumn change type, providing a newColumnName identical to the oldColumnName, even though I am not sure whether it will work with PostgreSQL:
<changeSet author="myuser" id="123456">
<renameColumn
newColumnName="column1"
oldColumnName="column1"
remarks="This is my new comment"
tableName="mytable"/>
</changeSet>
I have a situation like this:
I have an existing table
I need to add a new column using Liquibase
the type of the column is TIMESTAMP
Code:
<changeSet author="name" id="bla-bla-bla">
<addColumn tableName="table_name">
<column name="new_col_name" type="TIMESTAMP"/>
</addColumn>
</changeSet>
This code creates the column, and that's cool! But it also sets the default value 0000-00-00 00:00:00 on all existing rows.
But I need to leave all existing rows unchanged; timestamps should be set only for new rows.
A TIMESTAMP column in MySQL defaults to NOT NULL, with that dreaded zero date as the default value (see the manual for details).
The only way I can see to avoid this is to modify the generated SQL to include a DEFAULT NULL clause in the changeset:
<addColumn tableName="foo">
<column name="new_date" type="TIMESTAMP"/>
</addColumn>
<modifySql>
<replace replace="TIMESTAMP" with="TIMESTAMP NULL DEFAULT NULL"/>
</modifySql>
Specifying defaultValueDate="NULL" does not seem to work. I guess that's because Liquibase does not know about the TIMESTAMP quirks of MySQL and thinks it's not necessary to state the obvious: that the column should be filled with NULL.
Edit
I forgot that this will not work for new rows, of course. There are two ways to re-apply the default value using Liquibase:
Adding a second changeSet that changes the default value to CURRENT_TIMESTAMP:
<sql>
alter table foo modify new_date timestamp null default current_timestamp
</sql>
Or by not using DEFAULT NULL when adding the column, and then running a statement that sets all (existing) rows back to NULL: a sql tag with update foo set new_date = null.
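A sketch of that second option, assuming MySQL and a hypothetical changeset id (the column is made nullable via modifySql so the UPDATE can succeed):
<changeSet author="name" id="add-new-date-nullable">
    <addColumn tableName="foo">
        <column name="new_date" type="TIMESTAMP"/>
    </addColumn>
    <sql>update foo set new_date = null</sql>
    <modifySql>
        <replace replace="TIMESTAMP" with="TIMESTAMP NULL"/>
    </modifySql>
</changeSet>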
One more time, I need to modify this code.
The idea is the same: how do I create a new TIMESTAMP column such that all existing rows are set to the current timestamp, exactly once during the update, while all new rows get their creation timestamp?
Code:
<changeSet author="name" id="bla-bla-bla">
<addColumn tableName="table_name">
<column name="new_col_name" type="TIMESTAMP"/>
</addColumn>
</changeSet>
This code creates the column with 0000-00-00 timestamps for all existing rows.
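One way to get that behavior on MySQL (an untested sketch, not from the original answer) is to add the column with DEFAULT CURRENT_TIMESTAMP: MySQL then backfills existing rows with the time of the ALTER TABLE, exactly once, and rows inserted later receive their own insertion time:
<changeSet author="name" id="add-creation-timestamp-sketch">
    <addColumn tableName="table_name">
        <column name="new_col_name" type="TIMESTAMP"/>
    </addColumn>
    <modifySql>
        <replace replace="TIMESTAMP" with="TIMESTAMP DEFAULT CURRENT_TIMESTAMP"/>
    </modifySql>
</changeSet>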
I'm trying to add a lot of records (currently located in an Excel file) into my DB using Liquibase (so that I know how to do it for future DB changes).
My idea was to read the Excel file using Java and then fill the ChangeLogParameters from my Spring initialization class like this:
SpringLiquibase liqui = new SpringLiquibase();
liqui.setBeanName("liquibaseBean");
liqui.setDataSource(dataSource());
liqui.setChangeLog("classpath:changelog.xml");
HashMap<String, String> values = new HashMap<String, String>();
values.put("line1col1", ExcelValue1);
values.put("line1col2", ExcelValue2);
values.put("line1col3", ExcelValue3);
values.put("line2col1", ExcelValue4);
values.put("line2col2", ExcelValue5);
values.put("line2col3", ExcelValue6);
...
liqui.setChangeLogParameters(values);
The problem with this approach is that my changelog.xml would be very strange (and unproductive):
<changeSet author="gcardoso" id="2012082707">
<insert tableName="t_user">
<column name="login" value="${ExcelValue1}"/>
<column name="name" value="${ExcelValue2}}"/>
<column name="password" value="${ExcelValue3}"/>
</insert>
<insert tableName="t_user">
<column name="login" value="${ExcelValue4}"/>
<column name="name" value="${ExcelValue5}}"/>
<column name="password" value="${ExcelValue6}"/>
</insert>
...
</changeSet>
Is there any way that I could do something like this:
HashMap<String, ArrayList<String>> values = new HashMap<String, ArrayList<String>>();
values.put("col1", Column1);
values.put("col2", Column2);
values.put("col3", Column3);
liqui.setChangeLogParameters(values);
<changeSet author="gcardoso" id="2012082707">
<insert tableName="t_user">
<column name="login" value="${Column1}"/>
<column name="name" value="${Column2}}"/>
<column name="password" value="${Column3}"/>
</insert>
</changeSet>
Or is there any other way?
EDIT:
My current option is to convert the Excel file into a CSV file and import the data using:
<changeSet author="gcardoso" id="InitialImport2" runOnChange="true">
<loadData tableName="T_ENTITY" file="com/exictos/dbUpdate/entity.csv">
<column header="SHORTNAME" name="SHORTNAME" />
<column header="DESCRIPTION" name="DESCRIPTION" />
</loadData>
<loadData tableName="T_CLIENT" file="com/exictos/dbUpdate/client.csv">
<column header="fdbhdf" name="ENTITYID" defaultValueComputed="(SELECT ID FROM T_ENTITY WHERE SHORTNAME = ENTITY_REFERENCE"/>
<column header="DESCRIPTION" name="DESCRIPTION" />
</loadData>
</changeSet>
with these CSV files:
entity.csv
SHORTNAME,DESCRIPTION
nome1,descricao1
nome2,descricao2
client.csv
DESCRIPTION,ENTITY_REFERENCE
descricaoCliente1,nome1
descricaoCliente2,nome2
But I get this error:
liquibase.exception.DatabaseException: Error executing SQL INSERT INTO `T_CLIENT` (`DESCRIPTION`, `ENTITY_REFERENCE`) VALUES ('descricaoCliente1', 'nome1'): Unknown column 'ENTITY_REFERENCE' in 'field list'
If I change the header of my client.csv to DESCRIPTION,ENTITYID I get this error:
liquibase.exception.DatabaseException: Error executing SQL INSERT INTO `T_CLIENT` (`DESCRIPTION`, `ENTITYID`) VALUES ('descricaoCliente1', 'nome1'): Incorrect integer value: 'nome1' for column 'entityid' at row 1
In any of these cases, it looks like defaultValueComputed doesn't work in the same way as valueComputed does in the following example:
<changeSet author="gcardoso" id="InitialImport1">
<insert tableName="T_ENTITY">
<column name="SHORTNAME">nome1</column>
<column name="DESCRIPTION">descricao1</column>
</insert>
<insert tableName="T_CLIENT">
<column name="ENTITYID" valueComputed="(SELECT ID FROM T_ENTITY WHERE SHORTNAME = 'nome1')"/>
<column name="DESCRIPTION">descricaoCliente</column>
</insert>
</changeSet>
Is this the expected behavior? A bug in Liquibase? Or just me doing something wrong (the most likely)?
Or is there any other way to import a massive amount of data, still using Liquibase and/or Spring?
EDIT 2: My problem is that I'm unable to insert the data into the second table with the correct foreign key.
I would say that Liquibase is not the ideal tool for what you want to achieve. Liquibase is well-suited to managing the database structure, not the database's data.
If you still want to use Liquibase to manage the data, you have a couple of options (see here):
Record your insert statements as SQL, and refer to them from changelog.xml like this:
<sqlFile path="/path/to/file.sql"/>
Use a Custom Refactoring Class which you refer to from the changelog.xml like this:
<customChange class="com.example.YourJavaClass"
csvFile="/path/to/file.csv"/>
YourJavaClass would read the records from the CSV file, and apply them to the database, implementing this method:
void execute(Database database) throws CustomChangeException;
Bear in mind that once you have loaded this data via Liquibase, you shouldn't modify it in the file, because those changes won't be re-applied. If you want to make changes, you have to do it in subsequent changesets. So after a while you might end up with a lot of different CSV files/Liquibase changesets, all operating on the same or similar data (this depends on how you are going to use this data - will it ever change once inserted?).
I would recommend looking at DBUnit for managing your reference data. It's a tool primarily used in unit testing, but it is very mature and, I would say, suitable for use in production. You can store the information in CSV or XML. I would suggest using a Spring InitializingBean to load the dataset from the classpath and perform a DBUnit 'refresh' operation, which will, from the docs:
This operation literally refreshes dataset contents into the database. This means that data of existing rows is updated and non-existing rows get inserted. Any rows which exist in the database but not in the dataset stay unaffected.
This way, you can keep your reference data in one place and add to it over time, so there is only one source of the information and it isn't split across multiple Liquibase changesets. Keeping your DBUnit datasets in version control provides traceability, and as a bonus, DBUnit datasets are portable across databases and can manage things like insert order to prevent foreign key violations for you.
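For illustration, a minimal DBUnit flat XML dataset might look like this (a hypothetical file, reusing the table and column names from the question):
<dataset>
    <T_ENTITY ID="1" SHORTNAME="nome1" DESCRIPTION="descricao1"/>
    <T_ENTITY ID="2" SHORTNAME="nome2" DESCRIPTION="descricao2"/>
    <T_CLIENT ID="1" ENTITYID="1" DESCRIPTION="descricaoCliente1"/>
</dataset>
A refresh run against this dataset updates those rows if they exist and inserts them if they don't, leaving all other rows alone.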
It depends on your target database. If you are using Sybase or MSSQL Server, you can use the BCP tool that comes along with your installed client and driver. It is the fastest way of moving large amounts of data in and out of these databases.
Googling around I also found these links...
Oracle has the SQL*LOADER tool
MySQL has the LOAD DATA INFILE command
I would expect each database vendor to supply a tool of some description for bulk loading of data.