When issuing a bulk insert statement, how can we ignore (suppress) the exception thrown by the DB and/or the JDBC driver?
Let's say I want to bulk insert some users, and I have id as a unique key:
INSERT INTO users
(id,name,age,email,pass_code)
VALUES
(1,'Mark',18,'mail@mail.com',123),
(2,'Zak',18,'mail@mail.com',123),
(3,'Djigi',18,'mail@mail.com',123),
(1,'James Petkov',18,'mail@mail.com',123), -- duplicate id!
(4,'Kinkinikin',18,'mail@mail.com',123),
(5,'A bula bula ',18,'mail@mail.com',123),
(6,'Shakazulo',18,'mail@mail.com',123);
How can I tell the engine (MySQL/PostgreSQL) to continue inserting the remaining records?
Is this supported in SQL at all ?
In PostgreSQL, you can ignore the failing rows with
INSERT ... ON CONFLICT (id) DO NOTHING;
In MySQL, INSERT IGNORE has the same effect. A more general solution, which works on any engine, is to run each INSERT separately and ignore errors, as sketched below.
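A minimal JDBC sketch of that per-row approach (the User class and the open Connection conn are assumptions; catching only the integrity-violation subclass keeps other errors visible):

String sql = "INSERT INTO users (id, name, age, email, pass_code) VALUES (?, ?, ?, ?, ?)";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    for (User u : users) {
        ps.setLong(1, u.getId());
        ps.setString(2, u.getName());
        ps.setInt(3, u.getAge());
        ps.setString(4, u.getEmail());
        ps.setInt(5, u.getPassCode());
        try {
            ps.executeUpdate();                                // one row at a time
        } catch (SQLIntegrityConstraintViolationException e) {
            // duplicate key: skip this row, keep inserting the rest
        }
    }
}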
I need some sort of persistence component to store an id (long) and a value (object) for my Java application.
All the caching systems I looked at were either not persistent enough (if the process died, the cache would erase itself) or too slow.
I tried embedded databases like Derby and HSQLDB, but they were not as fast as H2 at SELECT and INSERT.
For some reason, the UPDATE query takes 1-2 seconds for one row if I update a row with a BLOB.
Does anyone know why it is this slow?
Queries:
CREATE TABLE ENTITIES(ID BIGINT PRIMARY KEY, DATA BLOB)
INSERT INTO ENTITIES(DATA, ID) VALUES(?, ?)
UPDATE ENTITIES SET DATA = ? WHERE ID = ?
I am using JDBC with PreparedStatement.
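For reference, a minimal sketch of how such an update is typically issued; the byte-array binding is an assumption, since the question does not show the parameter code:

try (PreparedStatement ps = conn.prepareStatement(
        "UPDATE ENTITIES SET DATA = ? WHERE ID = ?")) {
    ps.setBytes(1, serializedValue);   // the BLOB payload
    ps.setLong(2, id);
    ps.executeUpdate();                // the call that takes 1-2 seconds
}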
Edit:
The connection string is:
jdbc:h2:C:\temp\h2db;FILE_LOCK=NO;
I tried adding CACHE_SIZE=102400 and PAGE_SIZE=209715200, but it didn't help.
I have two tables in MS Access: Order1 (OrderNO is the PK in Order1) and Order2 (OrderNO is an FK in Order2). I have to insert data into these two tables using JDBC. Can anyone tell me the solution? When I try it, data is inserted only into the first table, and I get an "INSERT INTO is wrong" error.
You can use the batch update facility of JDBC 2.0 to insert into multiple tables as a single batch. Your application then hits the underlying database (MS Access in your case) only once, so performance improves compared to inserting rows one by one.
You can adapt the code below, or simply use it to get an idea of how to implement this.
con.setAutoCommit(false);  // let the application decide when to commit
Statement stmt = con.createStatement();
// insert the parent row first, then the child row referencing the same OrderNO
stmt.addBatch("INSERT INTO Order1 VALUES (OrderNO, ..., ...)");
stmt.addBatch("INSERT INTO Order2 VALUES (OrderNO, ...)");
int[] updateCounts = stmt.executeBatch();
Here auto-commit is set to false, which leaves your application free to decide whether to commit or roll back when any command in the batch fails to execute, or in case of any other error.
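For example, a sketch of that commit/rollback decision (rolling back on any batch failure is an assumption; your application may instead want to keep the rows that succeeded):

try {
    int[] updateCounts = stmt.executeBatch();
    con.commit();    // both inserts succeeded
} catch (BatchUpdateException e) {
    con.rollback();  // undo any partial work from the failed batch
}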
I have a table named CUSTOMERS with the following columns:
CUSTOMER_ID (NUMBER), DAY (DATE), REGISTERED_TO (NUMBER)
There are more columns in the table, but they are irrelevant to my question, as only the columns above together form the primary key.
In our application we do a large number of inserts into this table, so we do not use MERGE but use the following statement:
INSERT INTO CUSTOMERS (CUSTOMER_ID , DAY, REGISTERED_TO)
SELECT ?, ?, ?
FROM DUAL WHERE NOT EXISTS
(SELECT NULL
FROM CUSTOMERS
WHERE CUSTOMER_ID = ?
AND DAY = ?
AND REGISTERED_TO = ?
)";
We use a PreparedStatement object using the batch feature to insert a large number of records collected through the flow of the application per customer.
The problem is that sometimes I get the following error:
ORA-00001: unique constraint (CUSTOMERS_PK) violated
The strange thing is that when I do NOT use batch inserts and insert each record one by one (by simply calling pstmt.execute()), there are no errors.
Is something wrong with the insert statement? With the JDBC driver? Am I not using the batch mechanism correctly?
Here is semi-pseudo-code of my insertion loop:
pstmt = conn.prepareStatement(statement);
pstmt.setQueryTimeout(90);
for (Customer customer : customers) {
    // parameters 1-3 feed the SELECT list, 4-6 the NOT EXISTS subquery
    pstmt.setObject(1, customer.getId());
    pstmt.setObject(2, currentDay);
    pstmt.setObject(3, customer.getRegisteredTo());
    pstmt.setObject(4, customer.getId());
    pstmt.setObject(5, currentDay);
    pstmt.setObject(6, customer.getRegisteredTo());
    pstmt.addBatch();
}
pstmt.executeBatch();
It is all enclosed in a try/catch/finally block making sure the statement and connection are closed at the end of this process.
I guess you are using several threads or processes in parallel, each doing inserts. In this case, Oracle's transaction isolation feature defeats your attempt to do the merge, because sometimes the following is bound to happen:
session A runs your statement, inserts a row (x,y,z)
session B runs the same statement, tries to insert row (x,y,z), gets a lock and waits
session A commits
session B receives the "unique constraint violated" error
That's because until session A commits, session B doesn't see the new row, so it tries to insert the same one.
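If that is your situation, one pragmatic option is to catch the violation and treat it as "another session already inserted this row". A sketch (Oracle reports ORA-00001 as vendor error code 1):

try {
    pstmt.executeBatch();
    conn.commit();
} catch (BatchUpdateException e) {
    if (e.getErrorCode() == 1) {  // ORA-00001: the other session won the race
        conn.rollback();          // or retry just the failed rows
    } else {
        throw e;
    }
}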
This query works when I input it through phpmyadmin.
INSERT INTO conversation (user_id) VALUES (?);
INSERT INTO conversation (conversation_id, user_id)
VALUES ((SELECT LAST_INSERT_ID()), ?)
However, when I send that query using JDBC and Java, I get an error:
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'INSERT INTO conversation (conversation_id, user_id) VALUES ((SELECT LAST_INSERT_' at line 1
I am using the exact same query. I checked by calling toString on the PreparedStatement, pasting the result into phpMyAdmin, and executing it, and it worked fine. It just doesn't work through Java. Any ideas what's wrong?
By default, you cannot execute multiple statements in one query through JDBC. Splitting it into two calls will work (see the sketch below), as will setting the allowMultiQueries configuration property to true.
JDBC Configuration Properties — allowMultiQueries:
Allow the use of ';' to delimit multiple queries during one statement (true/false). Defaults to 'false', and does not affect the addBatch() and executeBatch() methods, which instead rely on rewriteBatchedStatements.
Default value: false
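A sketch of the two-call approach, fetching the generated key via getGeneratedKeys() instead of a separate LAST_INSERT_ID() query (the userId variable and the auto-increment key on conversation are assumptions):

try (PreparedStatement first = conn.prepareStatement(
        "INSERT INTO conversation (user_id) VALUES (?)",
        Statement.RETURN_GENERATED_KEYS)) {
    first.setLong(1, userId);
    first.executeUpdate();
    try (ResultSet keys = first.getGeneratedKeys()) {
        keys.next();
        long conversationId = keys.getLong(1);   // id of the row just inserted
        try (PreparedStatement second = conn.prepareStatement(
                "INSERT INTO conversation (conversation_id, user_id) VALUES (?, ?)")) {
            second.setLong(1, conversationId);
            second.setLong(2, userId);
            second.executeUpdate();
        }
    }
}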
In order to migrate a DB from Oracle to MySQL, I am using DdlUtils. Migrating the schema works for my purposes, but inserting the data fails due to missing rows. The following excerpt from the log file explains it:
[ddlToDatabase] About to execute SQL: INSERT INTO `RECORDSTATUS` (`NAME_ID`, RECORDSTATUS_ID`, `NAME`, `SORTVALUE`) VALUES (?, ?, ?, ?)
[ddlToDatabase] Inserted bean RECORDSTATUS:RECORDSTATUS_ID=0
...
[ddlToDatabase] Defering insertion of row NAME:LANGUAGE_ID=0;NAME_ID=5941 because it is waiting for:
[ddlToDatabase] RECORDSTATUS:RECORDSTATUS_ID=0
In the database, there is a row with RECORDSTATUS_ID=0. Did anybody face the same issue? Does somebody have an idea what the problem is?
I had a similar problem when migrating from MySQL to Derby. In my case, the actual problem was that DdlUtils handles only those foreign keys that target primary keys.
So, if you have a MASTER table that contains some unique non-primary-key field, and you have a DETAILS table that references (by foreign key) that unique non-primary-key field, DdlUtils cannot link the DETAILS records to MASTER and therefore cannot insert the DETAILS records at all; the sketch below shows such a schema.
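A hypothetical schema of the kind described (all names made up for illustration):

// MASTER.CODE is UNIQUE but not the primary key
String createMaster =
    "CREATE TABLE MASTER (ID INT PRIMARY KEY, CODE VARCHAR(10) UNIQUE)";
// DETAILS references the non-PK unique column; DdlUtils 1.0 cannot resolve
// this link when ordering row inserts, so DETAILS rows are deferred forever
String createDetails =
    "CREATE TABLE DETAILS (ID INT PRIMARY KEY, "
    + "MASTER_CODE VARCHAR(10) REFERENCES MASTER(CODE))";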
This was the situation in DdlUtils version 1.0.
I made some quick (and maybe dirty) modifications to the code, and they seem to solve the problem. The modified version (source included) can be downloaded here: DllUtils-1.0_mod_with_src.jar. Use it at your own risk.
INSERT INTO `RECORDSTATUS` (`NAME_ID`, RECORDSTATUS_ID`, `NAME`, `SORTVALUE`) VALUES
should be (note the missing opening backtick before RECORDSTATUS_ID):
INSERT INTO `RECORDSTATUS` (`NAME_ID`, `RECORDSTATUS_ID`, `NAME`, `SORTVALUE`) VALUES