I am doing a bulk insert using the Sybase temporary table approach (# table name). This happens inside a transaction, yet the operation is committing the transaction even though I am not calling connection.commit() myself. I don't want this commit to happen, since I might have to roll back the entire transaction later on. Any idea why an insert using a temp table commits the transaction without being asked? How do I fix this issue?
The SQL is something like:
select * into #MY_TABLE_BUFFER from MY_TABLE where 0=1;
load table #MY_TABLE_BUFFER from 'C:\temp\123.tmp' WITH CHECKPOINT ON;
insert into MY_TABLE on existing update select * from #MY_TABLE_BUFFER;
drop table #MY_TABLE_BUFFER;
And I am using statement.executeUpdate() to execute it
I figured out that it's due to the temp table not participating in the transaction, which forces a commit.
Is there any workaround for this?
Sybase is funny about using user-specified (aka explicit) transactions in conjunction with #temp tables, where the temp table is created while inside the transaction. For better or worse, Sybase considers the creation of a #temp table (including via a 'select into' statement) to be a DDL statement in the context of tempdb. In an interactive editor, with default server/db settings, you'll get an error when you do this.
As a test, you could try setting the 'ddl in tran' setting (in the context of the tempdb database) to true. Then, see if the behavior changes.
Note, however, that permanently leaving that setting in place is a bad idea (per Sybase documentation). I'm proposing it for investigative purposes only.
The real solution (if my assumption of the problem is correct) likely lies in creating the #temp table first, then beginning the transaction, to avoid any DDL statements in the scope of the transaction.
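If it helps, here is a minimal JDBC sketch of that ordering, assuming a plain java.sql.Connection that starts in autocommit mode; the SQL strings are the ones from the question, and the method name is made up:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class TempTableLoad {
    // Sketch only: run the DDL (temp-table creation) before the explicit
    // transaction begins, so no DDL statement falls inside its scope.
    static void loadBuffer(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            // DDL first, while the connection is still in autocommit mode
            stmt.executeUpdate("select * into #MY_TABLE_BUFFER from MY_TABLE where 0=1");

            conn.setAutoCommit(false); // now begin the explicit transaction
            stmt.executeUpdate("load table #MY_TABLE_BUFFER from 'C:\\temp\\123.tmp' WITH CHECKPOINT ON");
            stmt.executeUpdate("insert into MY_TABLE on existing update select * from #MY_TABLE_BUFFER");
            // commit or roll back later, when the caller decides; drop the
            // #temp table after that decision, not inside the transaction
        }
    }
}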
sp_dboption tempdb, 'ddl in tran', true
The above should work; I was likewise unable to create/update #tables when the proc was created in any mode.
I'm observing a strange situation with how an "insert into" command behaves.
I'll try to explain the situation from my point of view.
There is a TEMP_LINKS table in my database, and the application inserts data into it.
Say the query lives in insert1.sql:
insert into TEMP_LINKS (ID, SIDE)
select ID, SIDE
from //inner query//
group by ID, SIDE;
commit;
and there is a java1 class which executes it
...
executeSqlScript(getResource("path-to-query1"));
...
After that, another java2 class makes another insert into the same TEMP_LINKS table
...
executeSqlScript(getResource("path-to-query2"));
...
where query2 looks like
insert into TEMP_LINKS (ID, SIDE)
select
ID, 'B'
from (
select ID
from ...tables
where ..conditions
minus (
select ID
from ..tables
union
select ID
from TEMP_LINKS
));
commit;
Both java1 and java2 are executed in different threads, and java1 finishes earlier than java2.
But from time to time the second insert (from query2) doesn't insert any data at all. I see Update count 0 in the log, and TEMP_LINKS contains only the data from query1.
If I run the application again, the issue disappears and both queries insert their data properly.
Earlier I tried putting both queries into one sql file, but the issue appeared there too.
So maybe someone has ideas about what I should do, because I'm out of them. One interesting fact: the sql "minus" operation is used only once, in query2.
A big difference between Oracle and SQL Server: Oracle NEVER blocks a read. This is true even when records are locked. The following is a simplified explanation. Oracle uses the System Change Number (SCN) at the time a transaction starts to determine the state of the database for that transaction. All sorts of things can happen (inserts, updates, and deletes), but the transaction sees the database as it was at the start of that transaction. Changes only matter at the point where the commit/rollback is executed.
In your situation, if the second query starts before the first has committed, the second won't see any changes the first has made, even after the first commits. You need to synchronize those transactions. The easiest way is to combine them into a single sequential execution, as sketched below. Oracle has many more complex synchronization methods; I would not go that route in this situation.
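The easiest concrete form of that, reusing the executeSqlScript helper already shown in the question: call both scripts from one thread, strictly in order.

// Sketch: single thread, strict ordering; query2 starts only after
// query1's script (including its commit) has completed.
executeSqlScript(getResource("path-to-query1"));
executeSqlScript(getResource("path-to-query2"));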
I have 2 examples of Liquibase changesets where the second statements fail (trying to insert a record with an existing primary key) but the first ones succeed:
Failing/no record in databasechangelog:
--changeset yura:2
insert into test2 values (4, 'test4');
insert into test2 values (2, 'test2');
Partially written, no record in databasechangelog:
--changeset yura:2
insert into test2 values (4, 'test4');
ALTER TABLE test2 ADD name varchar(50);
When I try to run those statements on MySql directly, the behaviour is consistent for both, because MySql (InnoDB) wraps every statement in a separate transaction.
Why is Liquibase not consistent?
I think I will be able to answer it myself after some further investigation.
Some statements, for multiple reasons (one of which is that they are very hard to roll back), will perform an implicit commit before they execute.
You can check the list of such statements in the MySQL documentation.
At the same time, Liquibase has an interesting configuration by default:
runInTransaction: Should the changeSet be run as a single transaction (if possible)? Defaults to true.
If you put those two facts together, the answer becomes obvious:
When there is an ALTER inside a changeset, it will be implicitly demarcated from the previous statements, courtesy of the DB itself; Liquibase cannot influence this low-level DB feature. However, Liquibase will be able to group statements into one transaction when those statements do not require an implicit commit.
I have a bunch of MySQL queries that use temporary tables to split complex/expensive queries into small pieces.
create temporary table product_stats (
product_id int
,count_vendors int
,count_categories int
,...
);
-- Populate initial values.
insert into product_stats(product_id) select product_id from product;
-- Incrementally collect stats info.
update product_stats ... join vendor ... set count_vendors = count(vendor_id);
update product_stats ... join category... set count_categories = count(category_id);
....
-- Consume the resulting temporary table.
select * from product_stats;
The problem is that, as I use a connection pool, these tables are not cleared even when I close the java.sql.Connection.
I can manually remove them (drop temporary table x;) one by one before executing the needed queries, but that leaves room for mistakes.
Is there a way (JDBC/MySQL, API/configuration) to reset all the temporary tables created within the current session without closing the database connection (as you know, I'm not referring to java.sql.Connection.close()), so that I can still keep the advantages a connection pool provides?
Edited:
It seems that MySQL only started implementing the "reset connection" feature in version 5.7.3. (Release note: https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-3.html) However, I will not use it for the moment because 5.7 is still a development release.
Q: Is there a way (JDBC/MySQL, API/configuration) to reset all the temporary tables created within the current session without closing the database connection?
A: No. There's no "reset" available. You can issue DROP TEMPORARY TABLE foo statements within the session, but you have to provide the name of the temporary table you want to drop.
The normative pattern is for the process that created the temporary table to drop it, before the connection is returned to the pool. (I typically handle this in the finally block.)
If we are expecting other processes may leave temporary tables in the session (and to be defensive, that's what we expect), we typically do a DROP TEMPORARY TABLE IF EXISTS foo before we attempt to create a temporary table.
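As a sketch of that defensive pattern with the product_stats table from the earlier question (plain JDBC; the column list is shortened):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ProductStats {
    // Sketch: drop defensively before creating, and drop again in the
    // finally block so the pooled connection goes back with a clean session.
    static void collectStats(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("drop temporary table if exists product_stats");
            stmt.execute("create temporary table product_stats (product_id int)");
            try {
                stmt.execute("insert into product_stats(product_id) select product_id from product");
                // ... incremental updates and the final select go here ...
            } finally {
                stmt.execute("drop temporary table if exists product_stats");
            }
        }
    }
}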
EDIT
The answer above is correct for MySQL up through version 5.6.
@mr.Kame (OP) points out the new mysql_reset_connection function (introduced in MySQL 5.7.3).
Reference: 22.8.7.60 mysql_reset_connection() http://dev.mysql.com/doc/refman/5.7/en/mysql-reset-connection.html
Looks like this new function achieves nearly the same result as we'd get by disconnecting from and reconnecting to MySQL, but with less overhead.
(Now I'm wondering if MariaDB has introduced a similar feature.)
I use Hibernate version 4. We have a problem in a batch process. Our system works as below:
Select set of records which are in 'PENDING' state
Update immediately to 'IN PROGRESS' state
Process it and update to 'COMPLETED' state
The problem is that when we have two servers executing at the same time, we fear concurrency issues. So we would like to implement a DB lock for the first two steps. We used query.setLockOptions(), but it seems not to be working. Is there any other way to take a table-level or row-level lock until the select and update complete? Both are in the same session.
We have the option in JDBC of LOCK TABLE <TABLE_NAME> WRITE. But how do we implement that in Hibernate, or is it possible to implement select ... for update in Hibernate?
"Select ... for update" is supported in Hibernate via LockMode.UPGRADE which you can set in, for example, a NamedQuery.
But using application/manual table/row locking has several drawbacks (especially when a database connection gets broken halfway through a transaction), and your update procedure can do without it:
Start transaction.
update table set state='IN PROGRESS', server_id=1 where state='PENDING';
Commit transaction
select * from table where state='IN PROGRESS' and server_id=1;
[process records]
Each server must have a unique number for this to work, but it will be less error-prone and you let the DBMS do what it is supposed to be good at: isolation (see ACID).
There is a UNIQUE database constraint on an index which doesn't allow more than one record having identical columns.
There is a piece of code, managed by Hibernate (v2.1.8), doing two DAO
getHibernateTemplate().save( theObject )
calls, which result in two records entered into the table mentioned above.
If this code is executed without transactions, it results in an INSERT, an UPDATE, then another INSERT and another UPDATE SQL statement, and it works fine. Apparently, the sequence is to insert the record containing a DB NULL first, and then update it with the proper data.
If this code is executed under Spring (v2.0.5), wrapped in a single Spring transaction, it results in two INSERTs, followed by an immediate exception due to the UNIQUE constraint mentioned above.
This problem only manifests itself on MS SQL due to its incompatibility with ANSI SQL. It works fine on MySQL and Oracle. Unfortunately, our solution is cross-platform and must support all databases.
Having this stack of technologies, what would be your preferred workaround for given problem?
You could try flushing the hibernate session in between the two saves. This may force Hibernate to perform the first update before the second insert.
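For instance, a sketch using the same HibernateTemplate calls shown above; firstObject and secondObject are placeholders for the two DAO objects:

// Sketch: flush pushes the first INSERT (and its UPDATE) to the database
// before the second save runs.
getHibernateTemplate().save(firstObject);
getHibernateTemplate().flush();
getHibernateTemplate().save(secondObject);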
Also, when you say that hibernate is inserting NULL with the insert, do you mean every column is NULL, or just the ID column?
I have no experience in Hibernate, so I don't know if you are free to change the DB at your will or if Hibernate requires a specific DB structure you cannot change.
If you can make changes, then you can use this workaround in MSSQL to emulate the ANSI behaviour:
drop the unique index/constraint
define a calculated field like this:
alter table MyTable add MyCalcField as
    case when MyUniqueField is null
         then cast(MyPrimaryKey as MyUniqueFieldType)
         else MyUniqueField
    end
add the unique constraint on this new field you created.
Naturally this applies if MyUniqueField is not the primary key! :)
You can find more details in this article at databasejournal.com