I have two examples of Liquibase changesets in which the second statement fails (it tries to insert a record with an existing primary key) while the first one succeeds:
Failing/no record in databasechangelog:
--changeset yura:2
insert into test2 values (4, 'test4');
insert into test2 values (2, 'test2');
Partially written, no record in databasechangelog:
--changeset yura:2
insert into test2 values (4, 'test4');
ALTER TABLE test2 ADD name varchar(50);
When I try to run those statements on MySQL directly, the behaviour is consistent for both, because MySQL (InnoDB) wraps every statement in its own transaction.
Why is Liquibase not consistent?
I think I will be able to answer it myself after some further investigation.
Some statements, for several reasons (one of which is that they are very hard to roll back), perform an implicit commit before they execute.
You can check the list of such statements in the MySQL documentation on implicit commits.
At the same time, Liquibase has an interesting configuration by default:
runInTransaction: Should the changeset be run as a single transaction (if possible)? Defaults to true.
If you put those two facts together, the answer becomes obvious:
When there is an ALTER inside a changeset, it will be implicitly demarcated from the previous statements, courtesy of the DB itself. Liquibase cannot influence this low-level DB feature. However, Liquibase is able to group statements into one transaction when those statements do not require an implicit commit.
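A minimal JDBC sketch of the same behaviour outside Liquibase (connection details are placeholders; the table is the test2 table from the changesets above): even with auto-commit disabled, MySQL commits the pending INSERT the moment the ALTER statement arrives, so the later rollback has nothing left to undo.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ImplicitCommitDemo {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders for a local MySQL instance.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password");
             Statement st = con.createStatement()) {
            con.setAutoCommit(false);   // open an explicit transaction
            st.executeUpdate("insert into test2 values (4, 'test4')");
            // DDL: MySQL performs an implicit commit before executing it,
            // so the insert above is already durable at this point.
            st.executeUpdate("ALTER TABLE test2 ADD name varchar(50)");
            con.rollback();             // can no longer undo the insert
        }
    }
}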
I'm observing a strange situation with how an "insert into" command works.
I'll try to explain the situation from my point of view.
There is a TEMP_LINKS table in my database, and the application inserts data into it.
Say the query lives in insert1.sql:
insert into TEMP_LINK (ID, SIDE)
select ID, SIDE
from (/* inner query */)
group by ID, SIDE;
commit;
and there is a java1 class which executes it:
...
executeSqlScript(getResource("path-to-query1"));
...
After that, another java2 class makes another insert into the same TEMP_LINK table:
...
executeSqlScript(getResource("path-to-query2"));
...
where query2 looks like
insert into TEMP_LINK (ID, SIDE)
select ID, 'B'
from (
  select ID
  from ...tables
  where ..conditions
  minus (
    select ID
    from ..tables
    union
    select ID
    from TEMP_LINKS
  )
);
commit;
Both java1 and java2 are executed in different threads, and java1 finishes earlier than java2.
But from time to time the second insert (from query2) doesn't insert any data at all. I see "Update count 0" in the log, and TEMP_LINKS contains only the data from query1.
If I run the application again the issue disappears and both queries insert their data properly.
Earlier I tried putting both queries into one SQL file, but the issue appeared there too.
So maybe someone has ideas about what I should do, because I'm out of them. One interesting fact: the SQL "minus" operation is used only once, in that query2.
A big difference between Oracle and SQL Server is that Oracle NEVER blocks a read. This is true even when records are locked. The following is a simplified explanation. Oracle uses the System Change Number (SCN) at the time a transaction starts to determine the state of the database for that transaction. All sorts of things can happen (inserts, updates, and deletes), but the transaction still sees the database as it was at the start of that transaction. Changes only matter at the point where the commit/rollback is executed.
In your situation, if the second query starts before the first has committed, the second won't see any changes the first has made, even after the first commits. You need to synchronize those transactions. The easiest way is to combine them into a single sequential execution. Oracle has many more complex synchronization mechanisms, but I would not go that route in this situation.
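A sketch of that single sequential execution, reusing the executeSqlScript/getResource helpers from the question (the method name and the surrounding class are assumptions): run query1, let it commit, and only then run query2 in the same thread.

// Assumed to live in the class that currently runs the two scripts in
// separate threads; loadTempLinks is a hypothetical method name.
public void loadTempLinks() {
    // query1 inserts into TEMP_LINK and commits ...
    executeSqlScript(getResource("path-to-query1"));
    // ... so by the time query2 runs, its MINUS subquery against TEMP_LINKS
    // is guaranteed to see those rows.
    executeSqlScript(getResource("path-to-query2"));
}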
I am working with MySQL in Java.
Basically, I have multiple queries that each create a table in the database, along with a single ALTER statement that adjusts the auto-increment initial value for one of my attributes. I am executing those queries as a transaction, namely either all are committed to the database or none are. But to do so I have to create a separate Statement for each query (8 in total) and execute each. After that, I commit all the results. And then I have to close each Statement.
But this seems inefficient. Too many Statements. So I wonder whether batch methods would work. My concern is that batch methods execute all the queries simultaneously, and since I have referential integrity constraints and the ALTER query there is a dependency between the tables, and thus the order in which they are created matters. Is this not correct? Am I misunderstanding how batch statements work?
If my logic above is correct, should I group a few queries together (that are not related) and use batch methods to execute them? That would reduce the number of Statements I have.
I don't think you can batch DDL (i.e. create, drop, alter). Also, it's not a great idea, performance-wise, to require dynamic DDL.
You can batch DML statements using Statement.addBatch(String) (i.e. insert, update and delete statements) and then call Statement.executeBatch().
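A minimal sketch of that pattern (connection details and table/column names are placeholders): the DML statements are added to one batch, executed in the order they were added, and committed as a single transaction.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BatchDmlExample {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password")) {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                // Only DML goes into the batch; it runs in insertion order.
                st.addBatch("insert into parent (id, name) values (1, 'a')");
                st.addBatch("insert into child (id, parent_id) values (10, 1)");
                st.addBatch("update parent set name = 'b' where id = 1");
                int[] updateCounts = st.executeBatch(); // one count per statement
                con.commit();
            } catch (Exception e) {
                con.rollback();
                throw e;
            }
        }
    }
}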
I am trying to create multiple tables (up to 20) via java.sql prepared statement batch execution. Most of the tables are related to each other. But there is some confusion in my mind:
1) Should I set the connection's auto-commit to true or false?
2) Is there any special ordering pattern for batch execution? The create query for a parent table must execute first.
3) If an error occurs, is the whole batch rolled back?
The behavior of batch execution with auto-commit on is implementation-defined; some drivers may not even support it. So if you want to use batch execution, set auto-commit to false.
That said, some databases implicitly commit each DDL statement; this might interfere with the correct working of batched execution. I would advise taking the safe route: do not use batched execution for DDL, but use a normal Statement and execute(String) for executing DDL.
Actually, using batch execution in this case does not make much sense. Batch execution gives you a (big) performance improvement when inserting or updating thousands of rows at once.
You just need to have all your statements within a transaction:
call Connection.setAutoCommit(false)
execute your create-table statements with Statement.executeUpdate
call Connection.commit()
You need to order the create-table statements yourself based on the foreign keys between them.
As Mark pointed out, the DB you are using might commit each create-table right away and ignore the transaction. Not all DBs support transactional creation of tables. You will need to test this or do some more research regarding this aspect.
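A sketch of those steps (connection details and the DDL itself are placeholders): plain Statements executed in dependency order inside one transaction, with the caveat from the previous paragraph that some databases commit each CREATE TABLE immediately anyway.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateTablesInOrder {
    public static void main(String[] args) throws Exception {
        // Placeholder DDL, ordered so parent tables come before the tables
        // that reference them; the ALTER runs last.
        String[] ddl = {
            "create table parent (id int primary key)",
            "create table child (id int auto_increment primary key, parent_id int, "
                + "foreign key (parent_id) references parent(id))",
            "alter table child auto_increment = 1000"
        };
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password")) {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                for (String sql : ddl) {
                    st.executeUpdate(sql);   // one Statement reused for every query
                }
                con.commit();                // no-op on DBs that auto-commit DDL
            } catch (Exception e) {
                con.rollback();              // same caveat applies to the rollback
                throw e;
            }
        }
    }
}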
I am doing a bulk insert using the Sybase temporary table approach (# table name). This happens in a transaction. However, this operation is committing the transaction (I am not doing a connection.commit myself). I don't want this commit to happen, since I might have to roll back the entire transaction later on. Any idea why the insert using a temp table is committing the transaction without being asked? How do I fix this issue?
The SQL is something like:
select * into #MY_TABLE_BUFFER from MY_TABLE where 0=1;
load table #MY_TABLE_BUFFER from 'C:\temp\123.tmp' WITH CHECKPOINT ON;
insert into MY_TABLE on existing update select * from #MY_TABLE_BUFFER;
drop table #MY_TABLE_BUFFER;
And I am using statement.executeUpdate() to execute it
Figured out that it's due to the temp table not participating in the transaction and causing a commit.
Is there any workaround for this?
Sybase is funny about using user-specified (aka explicit) transactions in conjunction with #temp tables (where the temp table is created while in the transaction). For better or worse, Sybase considers the creation of a #temp table (including via a 'select into' statement) to be a DDL statement in the context of tempdb. In the editor, with default server/db settings, you'll get an error when you do this.
As a test, you could try setting the 'ddl in tran' setting (in the context of the tempdb database) to true. Then, see if the behavior changes.
Note, however, that permanently leaving that setting in place is a bad idea (per Sybase documentation). I'm proposing it for investigative purposes only.
The real solution (if my assumption of the problem is correct) likely lies in creating the #temp table first, then beginning the transaction, to avoid any DDL statements in the scope of the transaction.
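A hedged JDBC sketch of that ordering (the connection URL is a placeholder; the SQL is taken from the question): the 'select into' that creates the #temp table runs while auto-commit is still on, and the explicit transaction only starts afterwards, so no DDL lands inside it. Whether the load table statement itself still commits in your environment would need testing.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TempTableBeforeTransaction {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:sybase:Tds:localhost:5000/mydb", "user", "password");
             Statement st = con.createStatement()) {
            // 1. Create the #temp table before the transaction begins.
            st.executeUpdate("select * into #MY_TABLE_BUFFER from MY_TABLE where 0=1");
            // 2. Only now start the explicit transaction for the DML work.
            con.setAutoCommit(false);
            st.executeUpdate("load table #MY_TABLE_BUFFER from 'C:\\temp\\123.tmp' WITH CHECKPOINT ON");
            st.executeUpdate("insert into MY_TABLE on existing update select * from #MY_TABLE_BUFFER");
            // Later: commit or roll back the whole unit of work, then clean up.
            con.commit();
            st.executeUpdate("drop table #MY_TABLE_BUFFER");
        }
    }
}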
sp_dboption tempdb, 'ddl in tran', true
The above should work; I am also not able to create/update #temp tables when the proc is created in any mode without it.
There is a UNIQUE database constraint on an index which doesn't allow more than one record with identical column values.
There is a piece of code, managed by Hibernate (v2.1.8), doing two DAO
getHibernateTemplate().save( theObject )
calls, which result in two records being entered into the table mentioned above.
If this code is executed without transactions, it results in an INSERT, an UPDATE, then another INSERT and another UPDATE, and works fine. Apparently, the sequence is to insert the record containing a DB NULL first, and then update it with the proper data.
If this code is executed under Spring (v2.0.5), wrapped in a single Spring transaction, it results in two INSERTs, followed by an immediate exception due to the UNIQUE constraint mentioned above.
This problem only manifests itself on MS SQL due to its non-ANSI handling of NULLs in unique indexes (at most one NULL is allowed). It works fine on MySQL and Oracle. Unfortunately, our solution is cross-platform and must support all databases.
Having this stack of technologies, what would be your preferred workaround for given problem?
You could try flushing the hibernate session in between the two saves. This may force Hibernate to perform the first update before the second insert.
Also, when you say that Hibernate is inserting NULL with the insert, do you mean every column is NULL, or just the ID column?
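A sketch of the flush-between-saves idea, assuming the DAO extends Spring's HibernateDaoSupport for Hibernate 2.x (which the getHibernateTemplate() call in the question suggests); the class, method, and parameter names are made up for illustration.

import org.springframework.orm.hibernate.support.HibernateDaoSupport;

public class TwoSaveDao extends HibernateDaoSupport {
    public void saveBoth(Object firstObject, Object secondObject) {
        getHibernateTemplate().save(firstObject);
        // Push the pending INSERT/UPDATE to the database before the second save,
        // so the unique column no longer holds the transient NULL that would
        // collide with the second INSERT.
        getHibernateTemplate().flush();
        getHibernateTemplate().save(secondObject);
    }
}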
I have no experience in Hibernate, so I don't know if you are free to change the DB at your will or if Hibernate requires a specific DB structure you cannot change.
If you can make changes, then you can use this workaround in MSSQL to emulate the ANSI behaviour:
drop the unique index/constraint
define a calculated (computed) field like this:
alter table MyTable add MyCalcField as
  case when MyUniqueField is null
    then cast(Myprimarykey as MyUniqueFieldType)
    else MyUniqueField
  end
add the unique constraint on this new field you created.
Naturally this applies if MyUniqueField is not the primary key! :)
You can find more details in this article at databasejournal.com