Using the following changeset in Liquibase to create a table with a foreign key is possible and works:
<changeSet author="cibn" context="initialSchema" id="initialSchema-edited-1.0.4">
<createTable tableName="prices">
<column name="articleId" type="String">
<constraints nullable="false" foreignKeyName="fk_articles_articleId" references="articles(articleId)"/>
</column>
...
</createTable>
</changeSet>
However, the addForeignKeyConstraint change is not supported after creation of the initial schema:
https://www.liquibase.org/documentation/changes/add_foreign_key_constraint.html
Why? And could this be changed?
I believe that's because ADD CONSTRAINT is not supported by SQLite's ALTER TABLE statement, and that is exactly what Liquibase issues for the addForeignKeyConstraint change.
Here's the relevant documentation, "SQL Features That SQLite Does Not Implement":
Only the RENAME TABLE, ADD COLUMN, and RENAME COLUMN variants of the ALTER TABLE command are supported. Other kinds of ALTER TABLE operations such as DROP COLUMN, ALTER COLUMN, ADD CONSTRAINT, and so forth are omitted.
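If you do need a foreign key on an existing SQLite table, the usual workaround is to rebuild the table: create a replacement table that includes the constraint, copy the data across, drop the old table, and rename the new one. A minimal sketch of that as a raw SQL changeset (the column list is abbreviated and would need to match the real schema):
<changeSet author="cibn" id="initialSchema-fk-workaround">
<sql>
CREATE TABLE prices_new (
articleId TEXT NOT NULL REFERENCES articles(articleId)
-- remaining columns of prices go here
);
INSERT INTO prices_new SELECT * FROM prices;
DROP TABLE prices;
ALTER TABLE prices_new RENAME TO prices;
</sql>
</changeSet>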
Related
I'm working with a PostgreSQL database, and I want to overwrite an existing comment on a column of a table using Liquibase, so I have:
mytable (column1 int) --This is a comment
I know that I can do it in a SQL native way, like this:
<changeSet author="myuser" id="123456">
<sql dbms="postgresql">
COMMENT ON COLUMN mytable.column1 IS 'This is my new comment';
</sql>
</changeSet>
Is there any way to make this change without relying on a native mechanism?
There is a dedicated change type, setColumnRemarks, for setting a remark on an existing column:
<changeSet author="myuser" id="123456">
<setColumnRemarks
columnName="column1"
remarks="This is my new comment"
tableName="mytable"/>
</changeSet>
You can use the renameColumn change type, providing a newColumnName identical to the oldColumnName - even though I am not sure whether this works with PostgreSQL:
<changeSet author="myuser" id="123456">
<renameColumn
newColumnName="column1"
oldColumnName="column1"
remarks="This is my new comment"
tableName="mytable"/>
</changeSet>
How do I set the autoincrement property using 'startWith' on a column in PostgreSQL using Liquibase?
For some reason it always starts from 1. I tried using a custom sequence, but that didn't help either.
<column autoIncrement="true" startWith="100" name="id" type="bigint">
That's my current column definition, which does not work.
EDIT:
I want to import data from CSV using Liquibase. I tried the following:
<changeSet author="author" id="createSequence">
<createSequence
incrementBy="1"
sequenceName="mytable_id_seq"
startValue="1000"/>
</changeSet>
<changeSet author="author" id="1-mytable">
<createTable tableName="mytable">
<column name="id" type="BIGSERIAL" defaultValueComputed="nextval('mytable_id_seq')">
<constraints primaryKey="true" primaryKeyName="mytable_pkey"/>
</column>
</createTable>
<loadData encoding="UTF-8"
file="liquibase/data/mytable.csv"
separator=","
tableName="mytable">
</loadData>
</changeSet>
If I try this, I receive the following error: 'currval of sequence "table_id_seq" is not yet defined in this session'. I think it uses the sequence from the public schema instead of the schema I have configured for Liquibase.
Another thing I tried was to update the sequence manually:
ALTER SEQUENCE mytable_id_seq restart with 100;
In this case the sequence used was again the one from the public schema, but I want to use the schema configured for Liquibase.
Instead of using bigserial, which is an auto-incrementing bigint specific to PostgreSQL, use bigint if you are going to set up your own sequence and increment.
"The data types smallserial, serial and bigserial are not true types, but merely a notational convenience for creating unique identifier columns (similar to the AUTO_INCREMENT property supported by some other databases). In the current implementation, specifying:"
CREATE TABLE tablename (
colname SERIAL
);
is the same as
CREATE SEQUENCE tablename_colname_seq AS integer;
CREATE TABLE tablename (
colname integer NOT NULL DEFAULT nextval('tablename_colname_seq')
);
ALTER SEQUENCE tablename_colname_seq OWNED BY tablename.colname;
From here
https://www.postgresql.org/docs/12/datatype-numeric.html
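Putting the pieces together for the question above: keep the sequence, but declare the column as plain bigint and point its default at the sequence, qualifying the sequence name with the target schema so PostgreSQL does not resolve it against public. A sketch under those assumptions (myschema is a placeholder for the schema you configured for Liquibase):
<changeSet author="author" id="1-mytable-bigint">
<createSequence schemaName="myschema" sequenceName="mytable_id_seq" startValue="1000" incrementBy="1"/>
<createTable tableName="mytable">
<column name="id" type="bigint" defaultValueComputed="nextval('myschema.mytable_id_seq')">
<constraints primaryKey="true" primaryKeyName="mytable_pkey"/>
</column>
</createTable>
</changeSet>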
I have a situation like this: there is an existing table, and I need to add a new column of type TIMESTAMP.
Code:
<changeSet author="name" id="bla-bla-bla">
<addColumn tableName="table_name">
<column name="new_col_name" type="TIMESTAMP"/>
</addColumn>
</changeSet>
This code creates the column, and that's cool! But it also sets the default value 0000-00-00 00:00:00 on all existing rows.
I need to leave all existing rows unchanged; timestamps should be set only for new rows.
A TIMESTAMP column in MySQL defaults to NOT NULL and that dreaded zero date as the default value (See the manual for details).
The only way I can see to avoid this is to modify the generated SQL to include a DEFAULT NULL clause in the changeset:
<addColumn tableName="foo">
<column name="new_date" type="TIMESTAMP"/>
</addColumn>
<modifySql>
<replace replace="TIMESTAMP" with="TIMESTAMP NULL DEFAULT NULL"/>
</modifySql>
Specifying defaultValueDate="NULL" does not seem to work. I guess that's because Liquibase does not know about the timestamp quirks of MySQL and thinks it's not necessary to state the obvious - that a column should be filled with NULL.
Edit
I forgot that, of course, this will not work for new rows. There are two ways to re-apply the default value using Liquibase:
Adding a second changeSet that changes the default value to CURRENT_TIMESTAMP:
<sql>
alter table foo modify new_date timestamp null default current_timestamp
</sql>
Or by not using DEFAULT NULL when adding the column, and instead running a statement afterwards that sets all existing rows back to NULL: a sql tag with update foo set new_date = null.
One more time, I need to modify this code.
The idea is the same: create a new TIMESTAMP column, but this time all existing rows should be stamped with the current timestamp, exactly once, during the update, while all new rows should get their creation timestamps.
Code:
<changeSet author="name" id="bla-bla-bla">
<addColumn tableName="table_name">
<column name="new_col_name" type="TIMESTAMP"/>
</addColumn>
</changeSet>
This code creates the column with 0000-00-00 timestamps for all existing rows.
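One approach that should cover both requirements is MySQL's DEFAULT CURRENT_TIMESTAMP: when a column is added with that default, MySQL fills the existing rows with the time of the ALTER, and rows inserted later get their own insert time. A sketch reusing the modifySql trick from the answer above (untested, and specific to MySQL):
<changeSet author="name" id="bla-bla-bla-2">
<addColumn tableName="table_name">
<column name="new_col_name" type="TIMESTAMP"/>
</addColumn>
<modifySql dbms="mysql">
<replace replace="TIMESTAMP" with="TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP"/>
</modifySql>
</changeSet>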
I'm trying to add a lot of records (currently located in an Excel file) into my DB using Liquibase (so that I know how to do it for future DB changes).
My idea was to read the excel file using Java, and then fill the ChangeLogParameters from my Spring initialization class like this:
SpringLiquibase liqui = new SpringLiquibase();
liqui.setBeanName("liquibaseBean");
liqui.setDataSource(dataSource());
liqui.setChangeLog("classpath:changelog.xml");
HashMap<String, String> values = new HashMap<String, String>();
values.put("line1col1", ExcelValue1);
values.put("line1col2", ExcelValue2);
values.put("line1col3", ExcelValue3);
values.put("line2col1", ExcelValue4);
values.put("line2col2", ExcelValue5);
values.put("line2col3", ExcelValue6);
...
liqui.setChangeLogParameters(values);
The problem with this approach is that my changelog.xml would be very strange (and unproductive):
<changeSet author="gcardoso" id="2012082707">
<insert tableName="t_user">
<column name="login" value="${ExcelValue1}"/>
<column name="name" value="${ExcelValue2}}"/>
<column name="password" value="${ExcelValue3}"/>
</insert>
<insert tableName="t_user">
<column name="login" value="${ExcelValue4}"/>
<column name="name" value="${ExcelValue5}}"/>
<column name="password" value="${ExcelValue6}"/>
</insert>
...
</changeSet>
Is there any way that I could do something like this:
HashMap<String, ArrayList<String>> values = new HashMap<String, ArrayList<String>>();
values.put("col1", Column1);
values.put("col2", Column2);
values.put("col3", Column3);
liqui.setChangeLogParameters(values);
<changeSet author="gcardoso" id="2012082707">
<insert tableName="t_user">
<column name="login" value="${Column1}"/>
<column name="name" value="${Column2}}"/>
<column name="password" value="${Column3}"/>
</insert>
</changeSet>
Or is there any other way?
EDIT:
My current option is to convert the Excel file into CSV and import the data using:
<changeSet author="gcardoso" id="InitialImport2" runOnChange="true">
<loadData tableName="T_ENTITY" file="com/exictos/dbUpdate/entity.csv">
<column header="SHORTNAME" name="SHORTNAME" />
<column header="DESCRIPTION" name="DESCRIPTION" />
</loadData>
<loadData tableName="T_CLIENT" file="com/exictos/dbUpdate/client.csv">
<column header="fdbhdf" name="ENTITYID" defaultValueComputed="(SELECT ID FROM T_ENTITY WHERE SHORTNAME = ENTITY_REFERENCE"/>
<column header="DESCRIPTION" name="DESCRIPTION" />
</loadData>
</changeSet>
with these CSV files:
entity.csv
SHORTNAME,DESCRIPTION
nome1,descricao1
nome2,descricao2
client.csv
DESCRIPTION,ENTITY_REFERENCE
descricaoCliente1,nome1
descricaoCliente2,nome2
But I get this error:
liquibase.exception.DatabaseException: Error executing SQL INSERT INTO `T_CLIENT` (`DESCRIPTION`, `ENTITY_REFERENCE`) VALUES ('descricaoCliente1', 'nome1'): Unknown column 'ENTITY_REFERENCE' in 'field list'
If I change the header of my client.csv to DESCRIPTION,ENTITYID I get this error:
liquibase.exception.DatabaseException: Error executing SQL INSERT INTO `T_CLIENT` (`DESCRIPTION`, `ENTITYID`) VALUES ('descricaoCliente1', 'nome1'): Incorrect integer value: 'nome1' for column 'entityid' at row 1
In any of these cases, it looks like defaultValueComputed doesn't work the same way as valueComputed does in the following example:
<changeSet author="gcardoso" id="InitialImport1">
<insert tableName="T_ENTITY">
<column name="SHORTNAME">nome1</column>
<column name="DESCRIPTION">descricao1</column>
</insert>
<insert tableName="T_CLIENT">
<column name="ENTITYID" valueComputed="(SELECT ID FROM T_ENTITY WHERE SHORTNAME = 'nome1')"/>
<column name="DESCRIPTION">descricaoCliente</column>
</insert>
</changeSet>
Is this the expected behavior? A bug in Liquibase? Or just me doing something wrong (the most likely)?
Or is there any other way to import massive amounts of data, while still using Liquibase and/or Spring?
EDIT2: My problem is that I'm unable to insert the data into the second table with the correct foreign key.
I would say that Liquibase is not the ideal tool for what you want to achieve. Liquibase is well-suited to managing the database structure, not the database's data.
If you still want to use Liquibase to manage the data, you have a couple of options:
Record your insert statements as SQL, and refer to them from changelog.xml like this:
<sqlFile path="/path/to/file.sql"/>
Use a Custom Refactoring Class which you refer to from the changelog.xml like this:
<customChange class="com.example.YourJavaClass"
csvFile="/path/to/file.csv"/>
YourJavaClass would read the records from the CSV file, and apply them to the database, implementing this method:
void execute(Database database) throws CustomChangeException;
Bear in mind that once you have loaded this data via Liquibase, you shouldn't modify it in the file, because those changes won't be re-applied. If you want to make changes, you have to do it in subsequent changesets. So after a while you might end up with a lot of different CSV files/Liquibase changesets, all operating on the same or similar data (this depends on how you are going to use this data - will it ever change once inserted?).
I would recommend looking at DBUnit for managing your reference data. It's a tool primarily used in unit testing, but it is very mature and, I would say, suitable for use in production. You can store the information in CSV or XML. I would suggest using a Spring 'InitializingBean' to load the dataset from the classpath and perform a DBUnit 'refresh' operation, which will, from the docs:
This operation literally refreshes dataset contents into the database. This
means that data of existing rows is updated and non-existing row get
inserted. Any rows which exist in the database but not in dataset stay
unaffected.
This way, you can keep your reference data in one place and add to it over time, so that there is only one source of the information and it isn't split across multiple Liquibase changesets. Keeping your DBUnit datasets in version control provides traceability, and as a bonus, DBUnit datasets are portable across databases and can manage things like insert order to prevent foreign key violations for you.
It depends on your target database. If you are using Sybase or MSSQL Server, you can use the BCP tool that comes with the installed client and driver. It is the fastest way of moving large amounts of data in and out of these databases.
Googling around, I also found the following:
Oracle has the SQL*LOADER tool
MySQL has the LOAD DATA INFILE command (see the sketch after this list)
I would expect each database vendor to supply a tool of some description for bulk loading of data.
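For what it's worth, the MySQL route can also be driven from Liquibase itself by wrapping the statement in a sql tag. A rough sketch, assuming the t_user table from the question, a CSV file with a header row, and a server/driver combination that permits LOCAL (the JDBC driver needs allowLoadLocalInfile=true; the file path is a placeholder):
<changeSet author="gcardoso" id="bulk-load-t_user">
<sql dbms="mysql">
LOAD DATA LOCAL INFILE '/path/to/t_user.csv'
INTO TABLE t_user
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(login, name, password)
</sql>
</changeSet>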
I have a SQL Server 2008 R2 table with a column autoId int identity(1,1), but it's not the primary key; another varchar(20) column is.
The question is: how do I configure the hbm file?
Below is my mapping, but it throws an error when I try to save an instance:
"Cannot insert explicit value for identity column in table 'acct_info' when IDENTITY_INSERT is set to OFF."
<property name="autoId" type="int">
<column name="auto_id" not-null="true" unique="true" />
</property>
There can be two reasons: either you don't have sufficient privileges in the DB for IDENTITY_INSERT, or there is a mismatch between the mechanism by which you set the identifier in Hibernate and the one in the DB layer.
You can have a look at your id generation strategy in the Hibernate mapping file.
In the DB, you can set IDENTITY_INSERT to ON.
Or pick a different generator class.
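Since autoId is an identity column but not the Hibernate id, one more option worth trying is to stop Hibernate from writing the column at all and let it read the database-generated value back after the insert. A sketch of that mapping, assuming Hibernate's generated-property support (generated="insert" requires insert="false"):
<!-- autoId is never written by Hibernate; SQL Server generates it,
     and Hibernate re-reads it after each INSERT -->
<property name="autoId" type="int" insert="false" update="false" generated="insert">
<column name="auto_id" not-null="true" unique="true"/>
</property>
With this mapping Hibernate omits auto_id from its INSERT statements, so SQL Server is free to generate the identity value and IDENTITY_INSERT never needs to be ON.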