I have Java code to bulk-insert a tab-delimited file into SQL Server.
I want to get the count of how many records were inserted. I tried using @@ROWCOUNT but I'm getting an error that "Statement did not return a result set".
If I run the bulk insert statement in management studio, I can get the count.
Statement stmt = sqlConnection.createStatement();
ResultSet rs = stmt.executeQuery("BULK INSERT schema1.table1 FROM 'd:\\temp1\\file1.tab' SELECT @@ROWCOUNT");
Is there any way to get the inserted count?
I'm not familiar with SQL Server, but it seems like you'll want to issue an executeUpdate instead of an executeQuery:
Statement stmt = sqlConnection.createStatement();
int insertedRowCount = stmt.executeUpdate("BULK INSERT schema1.table1 FROM 'd:\\temp1\\file1.tab'");
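If the driver still complains that the statement did not produce a plain update count, a more defensive variant (just a sketch, using only standard JDBC calls) is to run the statement with execute() and walk through the results with getUpdateCount():

Statement stmt = sqlConnection.createStatement();
// execute() copes with statements that may return update counts and/or result sets
boolean hasResultSet = stmt.execute("BULK INSERT schema1.table1 FROM 'd:\\temp1\\file1.tab'");
int insertedRowCount = -1;
while (true) {
    if (!hasResultSet) {
        int count = stmt.getUpdateCount();
        if (count == -1) {
            break; // no more results
        }
        insertedRowCount = count; // remember the last update count seen
    }
    hasResultSet = stmt.getMoreResults();
}
System.out.println("inserted " + insertedRowCount);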
I need to insert a couple hundred million records into the MySQL DB. I'm batch inserting them 1 million at a time. Please see my code below. It seems to be slow. Is there any way to optimize it?
try {
    // Disable auto-commit
    connection.setAutoCommit(false);
    // Create a prepared statement
    String sql = "INSERT INTO mytable (xxx) VALUES (?)";
    PreparedStatement pstmt = connection.prepareStatement(sql);
    Object[] vals = set.toArray();
    for (int i = 0; i < vals.length; i++) {
        pstmt.setString(1, vals[i].toString());
        pstmt.addBatch();
    }
    // Execute the batch
    int[] updateCounts = pstmt.executeBatch();
    System.out.println("inserted " + updateCounts.length);
I had a similar performance issue with MySQL and solved it by setting the useServerPrepStmts and rewriteBatchedStatements properties in the connection URL.
Connection c = DriverManager.getConnection("jdbc:mysql://host:3306/db?useServerPrepStmts=false&rewriteBatchedStatements=true", "username", "password");
I'd like to expand on Bertil's answer, as I've been experimenting with the connection URL parameters.
rewriteBatchedStatements=true is the important parameter. useServerPrepStmts is already false by default, and even changing it to true doesn't make much difference in terms of batch insert performance.
Now is a good time to explain how rewriteBatchedStatements=true improves performance so dramatically: it rewrites batched prepared INSERT statements into multi-value inserts when executeBatch() is called (Source). That means that instead of sending the following n INSERT statements to the MySQL server each time executeBatch() is called:
INSERT INTO X VALUES (A1,B1,C1)
INSERT INTO X VALUES (A2,B2,C2)
...
INSERT INTO X VALUES (An,Bn,Cn)
It would send a single INSERT statement:
INSERT INTO X VALUES (A1,B1,C1),(A2,B2,C2),...,(An,Bn,Cn)
You can observe this by turning on the MySQL general log (SET GLOBAL general_log = 1), which logs each statement sent to the MySQL server to a file.
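To make the effect concrete, here is a minimal sketch of a batch insert over such a connection; the table X(a,b,c), the host and the credentials are just placeholders taken from the examples above, not anything from the original code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class RewriteBatchDemo {
    public static void main(String[] args) throws Exception {
        // rewriteBatchedStatements=true lets the driver collapse the batch into one multi-value INSERT
        Connection c = DriverManager.getConnection(
                "jdbc:mysql://host:3306/db?rewriteBatchedStatements=true", "username", "password");
        c.setAutoCommit(false);
        try (PreparedStatement ps = c.prepareStatement("INSERT INTO X VALUES (?, ?, ?)")) {
            for (int i = 0; i < 1000; i++) {
                ps.setInt(1, i);
                ps.setInt(2, i);
                ps.setInt(3, i);
                ps.addBatch();
            }
            // With rewriting on, this reaches the server as one INSERT ... VALUES (...),(...),... statement
            ps.executeBatch();
            c.commit();
        } finally {
            c.close();
        }
    }
}

With general_log enabled you should see a single multi-row INSERT in the log instead of 1000 separate ones.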
You can insert multiple rows with one INSERT statement; doing a few thousand at a time can greatly speed things up. That is, instead of doing e.g. 3 inserts of the form INSERT INTO tbl_name (a,b,c) VALUES(1,2,3);, you do INSERT INTO tbl_name (a,b,c) VALUES(1,2,3),(1,2,3),(1,2,3); (It may be that JDBC's .addBatch() does a similar optimization now, though the MySQL addBatch used to be entirely un-optimized and just issued individual queries anyway; I don't know if that's still the case with recent drivers.)
If you really need speed, load your data from a comma-separated file with LOAD DATA INFILE; we get around a 7-8x speedup doing that vs. doing tens of millions of inserts.
If:
It's a new table, or the amount to be inserted is greater than the already inserted data
There are indexes on the table
You do not need other access to the table during the insert
Then ALTER TABLE tbl_name DISABLE KEYS can greatly improve the speed of your inserts. When you're done, run ALTER TABLE tbl_name ENABLE KEYS to start building the indexes, which can take a while, but not nearly as long as doing it for every insert.
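As a rough sketch of how the LOAD DATA INFILE and DISABLE KEYS tips could be driven from JDBC (the table name and CSV path are made up for illustration, and LOAD DATA LOCAL INFILE has to be allowed by both the server and the connection settings):

import java.sql.Connection;
import java.sql.Statement;

public class LoadDataSketch {
    // 'mytable' and the file path are placeholders, not from the original question
    static void bulkLoad(Connection connection) throws Exception {
        try (Statement st = connection.createStatement()) {
            // Skip non-unique index maintenance while loading
            st.execute("ALTER TABLE mytable DISABLE KEYS");
            st.execute("LOAD DATA LOCAL INFILE '/tmp/data.csv' INTO TABLE mytable "
                     + "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'");
            // Rebuild the indexes in one pass
            st.execute("ALTER TABLE mytable ENABLE KEYS");
        }
    }
}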
You may try using the DDBulkLoad object.
// Get a DDBulkLoad object
DDBulkLoad bulkLoad = DDBulkLoadFactory.getInstance(connection);
bulkLoad.setTableName("mytable");
bulkLoad.load("data.csv");
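As far as I know, DDBulkLoad and DDBulkLoadFactory come from the Progress DataDirect JDBC drivers, so this approach only applies if you are using those drivers.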
try {
    // Disable auto-commit
    connection.setAutoCommit(false);
    int maxInsertBatch = 10000;
    // Create a prepared statement
    String sql = "INSERT INTO mytable (xxx) VALUES (?)";
    PreparedStatement pstmt = connection.prepareStatement(sql);
    Object[] vals = set.toArray();
    int count = 0;
    for (int i = 0; i < vals.length; i++) {
        pstmt.setString(1, vals[i].toString());
        pstmt.addBatch();
        count++;
        // Flush the batch every maxInsertBatch rows instead of building one huge batch
        if (count % maxInsertBatch == 0) {
            pstmt.executeBatch();
        }
    }
    // Execute the remaining batch and commit
    pstmt.executeBatch();
    connection.commit();
    System.out.println("inserted " + count);
} finally {
    connection.setAutoCommit(true);
}
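Flushing the batch every maxInsertBatch rows keeps the driver from buffering a million statements in memory before anything is sent to the server, and it combines well with the rewriteBatchedStatements=true setting mentioned above.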
I have a weird problem that I need to solve. I have a ResultSet in Java with data from an Oracle DB, and I need to insert this data into a DB2 table. Both the query and the DB2 table have the same structure, but there are too many records (more than 200k), so doing it with a row-by-row iteration is too slow.
I want to do something like:
Connection DB2Connection = DriverManager.getConnection(Url,Usr,Pwd);
ResultSet rs_oracle = statement.executeQuery("Select * from ORACLE.table1");
ResultSet rs_db2 = statement2.executeQuery("Select * from DB2.table2");
/*PSEUDO*/
rs_db2 += rs_oracle;
DB2Connection.commit();
And insert all the records from the rs_oracle into the DB2 Table.
Is there any way to do it without an iteration?
You could go for a prepared statement and do a batch insert on that.
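A minimal sketch of that idea, continuing from the variables in the question and assuming DB2.table2 has the same three columns as the Oracle query (the column count, types and batch size are made up for illustration):

// Copy rows from the Oracle result set into DB2 in batches instead of row-by-row commits
PreparedStatement insert = DB2Connection.prepareStatement(
        "INSERT INTO DB2.table2 VALUES (?, ?, ?)");
int batchSize = 1000;
int count = 0;
while (rs_oracle.next()) {
    // Copy each column by position; adjust the number and types of columns to the real table
    insert.setObject(1, rs_oracle.getObject(1));
    insert.setObject(2, rs_oracle.getObject(2));
    insert.setObject(3, rs_oracle.getObject(3));
    insert.addBatch();
    if (++count % batchSize == 0) {
        insert.executeBatch();
    }
}
insert.executeBatch(); // flush the last partial batch
DB2Connection.commit();

This still iterates over the ResultSet, but sending the inserts to DB2 in batches and committing once is usually far faster than issuing and committing one INSERT per row.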
How can I dump only the data for an instance of an H2 in-memory DB?
What I currently have
PreparedStatement preparedStatement = connection
        .prepareStatement("SCRIPT SIMPLE NOSETTINGS");
ResultSet resultSet = preparedStatement.executeQuery();
response.setContentType("text/plain");
ServletOutputStream out = response.getOutputStream();
while (resultSet.next()) {
    String columnValue = resultSet.getString(1);
    out.print(columnValue);
    out.println();
}
This dumps the entire DB structure, however, not just the INSERT data. Basically what I want to do is back up the data I insert during development so that the next time the database is started I can script the data back in.
The table structure isn't a problem as it is done by JPA.
To filter out just inserts, you could use:
if (columnValue.startsWith("INSERT")) {
    out.println(columnValue);
}
I'm trying to get some data from the database. The connection method works for sure, but I have a problem getting any data from the DB:
SQLConnect s = new SQLConnect();
Connection c = s.getConnection();
Statement st = c.createStatement();
ResultSet rs = st.executeQuery("select * from produkty");
System.out.println(rs.getString(2));
The problem is with the last line (when I comment it out, no error appears).
Error message:
Connected to database
Exception in thread "main" java.sql.SQLException: Before start of result set
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1073)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:987)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:982)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:927)
at com.mysql.jdbc.ResultSetImpl.checkRowPos(ResultSetImpl.java:841)
at com.mysql.jdbc.ResultSetImpl.getStringInternal(ResultSetImpl.java:5656)
at com.mysql.jdbc.ResultSetImpl.getString(ResultSetImpl.java:5576)
at antmedic.Main.main(Main.java:85)
Java Result: 1
BUILD SUCCESSFUL (total time: 1 second)
Thanks for any help
You need to call ResultSet#next() to shift the ResultSet cursor to the next row. Usually, when multiple rows are expected, you do this in a while loop.
while (rs.next()) {
System.out.println(rs.getString(2));
}
Or when you expect zero or one row, use an if statement.
if (rs.next()) {
System.out.println(rs.getString(2));
}
See also:
JDBC tutorial
Examples of how to traverse the ResultSet correctly
When you get the ResultSet object, the cursor points to the position before the first row. So after calling
while (rs.next()) {
    // your code
}
the cursor points to the next row, i.e. the first row.
Remember, whenever a SELECT query retrieves data from the database into a ResultSet, the structure of the ResultSet is:
-> Zero Record Area
-> Database Record Area
-> No Record Area
That's why we must always call next() on the ResultSet object, so the cursor can move from the Zero Record Area into the Database Record Area.
while (rs.next()) {
    System.out.println(rs.getString(1));
    System.out.println(rs.getString(2));
}
In place of 1, 2, ... we can also use the database column names. But we typically use indexes like 1, 2, 3, ...; the reason is that if the database changes in the future, for example a column is renamed, it can't cause any problem for us, because we haven't used the column names.
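For illustration, both forms side by side; the column name "nazwa" is just a hypothetical example, not taken from the question's produkty table:

// Access by position: survives a column rename, but breaks if columns are reordered
String byIndex = rs.getString(2);

// Access by name: survives reordering, but breaks if the column is renamed
String byName = rs.getString("nazwa"); // hypothetical column name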