I want to insert multiple rows into a MySQL table at once using Java. The number of rows is dynamic. In the past I was doing...
for (String element : array) {
    myStatement.setString(1, element[0]);
    myStatement.setString(2, element[1]);
    myStatement.executeUpdate();
}
I'd like to optimize this to use the MySQL-supported syntax:
INSERT INTO table (col1, col2) VALUES ('val1', 'val2'), ('val1', 'val2')[, ...]
but with a PreparedStatement I don't know of any way to do this since I don't know beforehand how many elements array will contain. If it's not possible with a PreparedStatement, how else can I do it (and still escape the values in the array)?
You can create a batch by PreparedStatement#addBatch() and execute it by PreparedStatement#executeBatch().
Here's a kickoff example:
public void save(List<Entity> entities) throws SQLException {
    try (
        Connection connection = database.getConnection();
        PreparedStatement statement = connection.prepareStatement(SQL_INSERT);
    ) {
        int i = 0;
        for (Entity entity : entities) {
            statement.setString(1, entity.getSomeProperty());
            // ...
            statement.addBatch();
            i++;
            if (i % 1000 == 0 || i == entities.size()) {
                statement.executeBatch(); // Execute every 1000 items.
            }
        }
    }
}
It's executed every 1000 items because some JDBC drivers and/or DBs may have a limitation on batch length.
See also:
JDBC tutorial - Using PreparedStatement
JDBC tutorial - Using Statement Objects for Batch Updates
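The flush logic in the loop above can be checked without a database. This sketch (class and method names are mine, not from the answer) computes the item counts at which executeBatch() would fire for n items:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchFlushDemo {
    // Returns the item counts at which executeBatch() fires when flushing
    // every `batchSize` items, plus once more for the final partial chunk.
    public static List<Integer> flushPoints(int n, int batchSize) {
        List<Integer> points = new ArrayList<>();
        for (int i = 1; i <= n; i++) {
            if (i % batchSize == 0 || i == n) {
                points.add(i);
            }
        }
        return points;
    }
}
```

For 2500 items and a batch size of 1000 this yields flushes at 1000, 2000, and 2500, so no rows are left unsent.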
When the MySQL driver is used, you have to set the connection parameter rewriteBatchedStatements to true (jdbc:mysql://localhost:3306/TestDB?rewriteBatchedStatements=true).
With this parameter the statement is rewritten to a bulk insert, so the table is locked only once and the indexes are updated only once. That makes it much faster.
Without this parameter the only advantage is cleaner source code.
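A minimal sketch of wiring up the parameter (host, port, and database name are placeholders; only the rewriteBatchedStatements key itself comes from the answer above):

```java
import java.util.Properties;

public class RewriteBatchConfig {
    // Appends the parameter to a JDBC URL, respecting any existing query string.
    public static String urlWithRewrite(String baseUrl) {
        String sep = baseUrl.contains("?") ? "&" : "?";
        return baseUrl + sep + "rewriteBatchedStatements=true";
    }

    // Alternatively, pass it via the Properties handed to
    // DriverManager.getConnection(url, props).
    public static Properties propsWithRewrite() {
        Properties props = new Properties();
        props.setProperty("rewriteBatchedStatements", "true");
        return props;
    }
}
```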
If you can build your SQL statement dynamically, you can use the following workaround:
String myArray[][] = { { "1-1", "1-2" }, { "2-1", "2-2" }, { "3-1", "3-2" } };

StringBuffer mySql = new StringBuffer("insert into MyTable (col1, col2) values (?, ?)");
for (int i = 0; i < myArray.length - 1; i++) {
    mySql.append(", (?, ?)");
}
myStatement = myConnection.prepareStatement(mySql.toString());
for (int i = 0; i < myArray.length; i++) {
    myStatement.setString(i, myArray[i][1]);
    myStatement.setString(i, myArray[i][2]);
}
myStatement.executeUpdate();
In case you have an auto-increment column in the table and need to access the generated values, you can use the following approach. Do test it before using, because getGeneratedKeys() behaviour depends on the driver used. The code below was tested on MariaDB 10.0.12 with MariaDB JDBC driver 1.2.
Remember that increasing the batch size improves performance only up to a point; for my setup, increasing the batch size above 500 actually degraded performance.
public Connection getConnection(boolean autoCommit) throws SQLException {
    Connection conn = dataSource.getConnection();
    conn.setAutoCommit(autoCommit);
    return conn;
}

private void testBatchInsert(int count, int maxBatchSize) {
    String querySql = "insert into batch_test(keyword) values(?)";
    try {
        Connection connection = getConnection(false);
        PreparedStatement pstmt = null;
        ResultSet rs = null;
        boolean success = true;
        int[] executeResult = null;
        try {
            pstmt = connection.prepareStatement(querySql, Statement.RETURN_GENERATED_KEYS);
            for (int i = 0; i < count; i++) {
                pstmt.setString(1, UUID.randomUUID().toString());
                pstmt.addBatch();
                if ((i + 1) % maxBatchSize == 0 || (i + 1) == count) {
                    executeResult = pstmt.executeBatch();
                }
            }
            // Note: this only returns the keys generated by the most recent
            // executeBatch() call, so it covers all rows only when
            // count <= maxBatchSize.
            rs = pstmt.getGeneratedKeys();
            for (int i = 0; i < executeResult.length; i++) {
                rs.next();
                if (executeResult[i] == 1) {
                    System.out.println("Execute Result: " + i + ", Update Count: " + executeResult[i] + ", id: "
                            + rs.getLong(1));
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
            success = false;
        } finally {
            if (rs != null) {
                rs.close();
            }
            if (pstmt != null) {
                pstmt.close();
            }
            if (connection != null) {
                if (success) {
                    connection.commit();
                } else {
                    connection.rollback();
                }
                connection.close();
            }
        }
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
@Ali Shakiba, your code needs some modification. Error part:
for (int i = 0; i < myArray.length; i++) {
    myStatement.setString(i, myArray[i][1]);
    myStatement.setString(i, myArray[i][2]);
}
Updated code:
String myArray[][] = {
    {"1-1", "1-2"},
    {"2-1", "2-2"},
    {"3-1", "3-2"}
};

StringBuffer mySql = new StringBuffer("insert into MyTable (col1, col2) values (?, ?)");
for (int i = 0; i < myArray.length - 1; i++) {
    mySql.append(", (?, ?)");
}
mySql.append(";"); // also add the terminator at the end of the sql statement
myStatement = myConnection.prepareStatement(mySql.toString());
for (int i = 0; i < myArray.length; i++) {
    myStatement.setString((2 * i) + 1, myArray[i][0]);
    myStatement.setString((2 * i) + 2, myArray[i][1]);
}
myStatement.executeUpdate();
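The SQL building and the parameter-index arithmetic from the corrected version can be factored into a small helper and verified without a connection; the helper names here are mine:

```java
public class MultiRowInsertSql {
    // Builds "insert into MyTable (col1, col2) values (?, ?), (?, ?), ..."
    // for `rows` rows (table and column names are from the example above).
    public static String buildSql(int rows) {
        StringBuilder sql = new StringBuilder("insert into MyTable (col1, col2) values (?, ?)");
        for (int i = 1; i < rows; i++) {
            sql.append(", (?, ?)");
        }
        return sql.toString();
    }

    // 1-based JDBC parameter index for column `col` (0 or 1) of 0-based row `row`.
    public static int paramIndex(int row, int col) {
        return (2 * row) + col + 1;
    }
}
```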
This might be helpful in your case of passing an array to PreparedStatement.
Store the required values in an array and pass it to a function that inserts them.
String sql = "INSERT INTO table (col1,col2) VALUES (?,?)";
String array[][] = new String[10][2];
for (int i = 0; i < array.length; i++) {
    // Assigning the values in individual rows.
    array[i][0] = "sampleData1";
    array[i][1] = "sampleData2";
}
try {
    DBConnectionPrepared dbcp = new DBConnectionPrepared();
    if (dbcp.putBatchData(sql, array) != null) {
        System.out.println("Success");
    } else {
        System.out.println("Failed");
    }
} catch (Exception e) {
    e.printStackTrace();
}
putBatchData(sql,2D_Array)
public int[] putBatchData(String sql, String args[][]) {
    int status[] = null;
    try {
        PreparedStatement stmt = con.prepareStatement(sql);
        for (int i = 0; i < args.length; i++) {
            for (int j = 0; j < args[i].length; j++) {
                stmt.setString(j + 1, args[i][j]);
            }
            stmt.addBatch();
        }
        // Execute the whole batch once, after all rows have been added.
        status = stmt.executeBatch();
        stmt.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return status;
}
It is possible to submit multiple updates in JDBC.
We can use Statement, PreparedStatement, and CallableStatement objects for batch updates, with auto-commit disabled.
The addBatch() and executeBatch() methods are available on all three statement types for batch updates.
Here, addBatch() adds a statement or a set of parameters to the current batch.
Related
How does this source code create a new database record? I am reverse engineering a Java application, and I don't have much Java experience myself. I would expect to see something like "INSERT INTO ShipBargePipe (columns) VALUES (values)", etc., but all I see is a dbTransfers object and nothing else. Does anyone have any idea how this works?
Thanks in advance.
int insertSBP(String direction, int selectedTransferTypeNumber, int selectedSBPNumber, Timestamp shouldStart, int barrels, String notes)
    throws NumberFormatException, SQLException
{
    int newTransferNumber = -1;
    Connection conn = null;
    try
    {
        int userNumber = GV.user.getUserNumber();
        conn = GV.getConnection(false);
        ResultSetCA selectedProducts = (ResultSetCA)this.jListProducts.getSelectedVectors();
        ResultSetCA selectedSources = null;
        ResultSetCA selectedDestinations = null;
        if (direction.equals("Outbound")) {
            selectedSources = (ResultSetCA)this.jListSourceTanks.getSelectedVectors();
        } else {
            selectedDestinations = (ResultSetCA)this.jListDestinationTanks.getSelectedVectors();
        }
        this.dbTransfers.resetChanged();
        this.dbTransfers.setInt(selectedTransferTypeNumber, "transferTypeNumber");
        this.dbTransfers.setInt(userNumber, "userNumber");
        this.dbTransfers.setTimeStamp(shouldStart, "ShouldStartStamp");
        this.dbTransfers.setInt(selectedSBPNumber, "SBPNumber");
        this.dbTransfers.setString(notes, "Notes");
        this.dbTransfers.setInt(barrels, "BarrelsRequested");
        this.dbTransfers.insert(conn);
        newTransferNumber = DbObject.lastId(conn);
        if (direction.equals("Outbound")) {
            for (int counter = 0; counter < selectedSources.size(); counter++)
            {
                int tankNumber = selectedSources.getInt(counter, 0);
                int customerNumber = selectedSources.getInt(counter, 2);
                this.dbTransferTank.resetChanged();
                this.dbTransferTank.setInt(newTransferNumber, "transferNumber");
                this.dbTransferTank.setInt(tankNumber, "tankNumber");
                this.dbTransferTank.setString("s", "sourceDest");
                this.dbTransferTank.setInt(customerNumber, "customerNumber");
                this.dbTransferTank.setNull("startBarrels");
                this.dbTransferTank.insert(conn);
            }
        } else {
            for (int counter = 0; counter < selectedDestinations.size(); counter++)
            {
                int tankNumber = selectedDestinations.getInt(counter, 0);
                int customerNumber = selectedDestinations.getInt(counter, 2);
                this.dbTransferTank.resetChanged();
                this.dbTransferTank.setInt(newTransferNumber, "transferNumber");
                this.dbTransferTank.setInt(tankNumber, "tankNumber");
                this.dbTransferTank.setString("d", "sourceDest");
                this.dbTransferTank.setInt(customerNumber, "customerNumber");
                this.dbTransferTank.setNull("startBarrels");
                this.dbTransferTank.insert(conn);
            }
        }
        for (int counter = 0; counter < selectedProducts.size(); counter++)
        {
            int productNumber = selectedProducts.getInt(counter, 0);
            this.dbProductTransfer.resetChanged();
            this.dbProductTransfer.setInt(newTransferNumber, "transferNumber");
            this.dbProductTransfer.setInt(productNumber, "productNumber");
            this.dbProductTransfer.insert(conn);
        }
        conn.commit();
    }
    catch (SQLException e)
    {
        if (conn != null) {
            conn.rollback();
        }
        throw e;
    }
    return newTransferNumber;
}
Edit: Here is the DbObject insert method and its dependent createInsert method.
It looks like it dynamically loops through the columns to build the SQL query string. Maybe there is a way to enable logging on the MySQL database so I can see exactly what query strings are run against it?
String createInsert()
    throws SQLException
{
    String fieldNames = "";
    String values = "";
    for (int i = 0; i < this.fields.length; i++) {
        if (this.fields[i].isChanged()) {
            if (fieldNames.equals(""))
            {
                fieldNames = this.fields[i].getName();
                values = "?";
            }
            else
            {
                fieldNames = fieldNames + ", " + this.fields[i].getName();
                values = values + ",?";
            }
        }
    }
    if (this.fields.equals("")) {
        throw new SQLException("The table " + this.tableName +
            " does not have anything to insert");
    }
    return createInsert(this.schema + "." + this.tableName, fieldNames, values);
}
public void insert(Connection conn)
    throws SQLException
{
    Statement stmt = null;
    PreparedStatement ps = null;
    int i = 0;
    int j = 1;
    try
    {
        String dml = createInsert();
        printDebug(dml);
        ps = conn.prepareStatement(dml);
        for (i = 0; i < this.fields.length; i++) {
            if (this.fields[i].isChanged())
            {
                this.fields[i].setInPreparedStatement(ps, j);
                if (this.fields[i].getObject() == null) {
                    printDebug(j + ": " + this.fields[i].getObject());
                } else {
                    printDebug(j + ": '" + this.fields[i].getObject() + "'");
                }
                j++;
            }
        }
        int v = ps.executeUpdate();
        if (v != 1) {
            throw new SQLException("I can not insert the table " + this.tableName);
        }
    }
    finally
    {
        resetChanged();
        if (ps != null) {
            ps.close();
        }
        if (stmt != null) {
            stmt.close();
        }
    }
}
Edit: After further digging, I enabled logging on MySQL and it shows me exactly what is going on behind the scenes when new records are added, deleted, etc.
170426 15:39:16 4 Query SET autocommit=0
4 Prepare [5] SELECT transfertypenumber, transfertypename, abbreviation FROM rc.transfertypes WHERE transfertypename= ?
4 Execute [5] SELECT transfertypenumber, transfertypename, abbreviation FROM rc.transfertypes WHERE transfertypename= 'Tank'
4 Prepare [6] INSERT INTO rc.transfers(transfertypenumber, usernumber, shouldstartstamp, notes, barrelsrequested) VALUES (?,?,?,?,?)
4 Execute [6] INSERT INTO rc.transfers(transfertypenumber, usernumber, shouldstartstamp, notes, barrelsrequested) VALUES (5,49,'2017-04-26 15:39:05','ZACKSCRIVEN',999)
4 Prepare [7] SELECT last_insert_id()
4 Execute [7] SELECT last_insert_id()
4 Prepare [8] INSERT INTO rc.transfertank(transfernumber, tanknumber, sourcedest, customernumber) VALUES (?,?,?,?)
4 Execute [8] INSERT INTO rc.transfertank(transfernumber, tanknumber, sourcedest, customernumber) VALUES (76265,1,'s',18)
4 Prepare [9] INSERT INTO rc.transfertank(transfernumber, tanknumber, sourcedest, customernumber) VALUES (?,?,?,?)
4 Execute [9] INSERT INTO rc.transfertank(transfernumber, tanknumber, sourcedest, customernumber) VALUES (76265,9,'d',18)
4 Prepare [10] INSERT INTO rc.producttransfer(transfernumber, productnumber) VALUES (?,?)
4 Execute [10] INSERT INTO rc.producttransfer(transfernumber, productnumber) VALUES (76265,21)
4 Query commit
dbTransfers and dbTransferTank seem to represent some sort of ORM (object-relational mapping) object. The insert statement isn't called directly by this code, but is embedded in the lines that call these objects' respective .insert method.
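The column-looping idea in createInsert() can be reproduced as a standalone sketch (the class name and the field list are mine; the real class reads them from its changed-field tracking):

```java
import java.util.List;

public class DynamicInsertSketch {
    // Builds an INSERT statement covering only the fields that changed,
    // mirroring what createInsert() does above.
    public static String createInsert(String table, List<String> changedFields) {
        if (changedFields.isEmpty()) {
            throw new IllegalStateException("The table " + table + " does not have anything to insert");
        }
        String names = String.join(", ", changedFields);
        String values = "?" + ",?".repeat(changedFields.size() - 1);
        return "INSERT INTO " + table + " (" + names + ") VALUES (" + values + ")";
    }
}
```

This matches the shape of the prepared statements visible in the MySQL log below.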
So I'm trying to insert all the keys from a HashMap into a database. My first approach was to insert the keys one by one. Note that the HashMap is some million keys long, so this process took a lot of time.
I did some research and stumbled upon the PreparedStatement interface, so I came up with this piece of code to create a batch of 10000 elements and then send them all together to the database.
final int batchSize = 10000;
int count = 0;
Connection dbConnection = null;
try {
    dbConnection = getDBConnection();
    String SQL = "INSERT INTO masterdict (window) VALUES(?)";
    PreparedStatement ps = dbConnection.prepareStatement(SQL);
    for (String k : masterDict.keySet()) {
        ps.setString(1, k);
        ps.addBatch();
        if (++count % batchSize == 0) {
            System.out.println(count);
            ps.executeBatch();
        }
    }
    ps.executeBatch(); // insert the remaining records
    ps.close();
    dbConnection.close();
} catch (SQLException e) {
    e.printStackTrace();
}
For some reason, though, this approach takes exactly the same time to complete as the first one. Can anyone explain why that is the case?
After reading through the comments I ended up with this new version of the code, which works just fine. The key change is disabling auto-commit, so rows are committed once per batch instead of once per statement.
final int batchSize = 10000;
int count = 0;
Connection dbConnection = null;
try {
    dbConnection = getDBConnection();
    String SQL = "INSERT INTO masterdict (window) VALUES(?)";
    PreparedStatement ps = dbConnection.prepareStatement(SQL);
    dbConnection.setAutoCommit(false);
    for (String k : masterDict.keySet()) {
        ps.setString(1, k);
        ps.addBatch();
        if (++count % batchSize == 0) {
            System.out.println(count);
            ps.executeBatch();
            dbConnection.commit();
        }
    }
    ps.executeBatch(); // insert the remaining records
    dbConnection.commit();
    ps.close();
} catch (SQLException e) {
    if (dbConnection != null) {
        dbConnection.rollback();
    }
    System.out.println(e.getMessage());
} finally {
    if (dbConnection != null) {
        dbConnection.close();
    }
}
I have a method which retrieves values from a ResultSet where some column values are 1.
Now I want to apply a condition which states that when beam_current = 101.20, beam_energy = 2500.063 and st1_prmt_status_p45 = 1, then the values should be printed.
My code for displaying the values is:
public LinkedHashMap<String, Integer> beam_CurrentStatus() throws SQLException {
    try {
        con = getConnection();
        stmt = con.createStatement();
        String sql = "SELECT TOP 1 c.logtime, a.BL1_data_SS_ST,a.BL2_data_SS_ST,a.BL3_data_SS_ST,a.BL4_data_SS_ST,a.BL5_data_SS_ST,a.BL6_data_SS_ST,a.BL7_data_SS_ST,a.BL8_data_SS_ST,a.BL9_data_SS_ST,a.BL10_data_SS_ST,a.BL11_data_SS_ST, a.BL12_data_SS_ST,a.BL13_data_SS_ST,a.BL14_data_SS_ST,a.BL15_data_SS_ST,a.BL16_data_SS_ST,a.BL17_data_SS_ST,a.BL18_data_SS_ST,a.BL19_data_SS_ST,a.BL20_data_SS_ST,a.BL21_data_SS_ST,a.BL22_data_SS_ST,a.BL23_data_SS_ST,a.BL24_data_SS_ST,a.BL25_data_SS_ST,a.BL26_data_SS_ST,a.BL27_data_SS_ST,b.st1_prmt_status_p45,c.beam_current,c.beam_energy from INDUS2_BLFE.dbo.main_BLFE_status a inner join INDUS2_MSIS.dbo.main_MSIS_status b on a.logtime=b.logtime inner join INDUS2_BDS.dbo.DCCT c on b.logtime=c.logtime ORDER BY c.logtime DESC ";
        stmt.executeQuery(sql);
        rs = stmt.getResultSet();
        ResultSetMetaData rsmd = rs.getMetaData();
        while (rs.next()) {
            for (int j = 2; j < 29; j++) {
                if (rs.getInt(j) == 1) {
                    String name = rsmd.getColumnLabel(j);
                    map.put(name, rs.getInt(j));
                }
            }
        }
    } catch (Exception e) {
        System.out.println("\nException in Bean " + e.getMessage());
    } finally {
        closeConnection(stmt, rs, con);
    }
    return map;
}
I want to apply a condition like
if (rs.getInt(29) == 1 || rs.getDouble(30) == 101.20 || rs.getDouble(30) == 2500.063)
{
    for (int j = 2; j < 29; j++)
    { .......
But this if condition has no effect on the for loop. How do I apply this if condition inside the while loop?
I think the condition below is wrong:
if (rs.getInt(29) == 1 || rs.getDouble(30) == 101.20 || rs.getDouble(30) == 2500.063)
These are your conditions:
st1_prmt_status_p45 = 1
beam_current = 101.20
beam_energy = 2500.063
This should be the condition (beam_energy is column 31, not 30):
if (rs.getInt(29) == 1 || rs.getDouble(30) == 101.20 || rs.getDouble(31) == 2500.063)
rs.getDouble(30) ===> rs.getDouble(31)
Also, since you want all three conditions to hold together ("and"), you probably want && instead of ||.
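A related caveat: comparing double values read from a ResultSet with == is fragile, because values like 101.20 and 2500.063 have no exact binary representation. A tolerance check is safer; this sketch is mine, not from the question:

```java
public class DoubleCompare {
    // True when a and b differ by less than eps.
    public static boolean approxEquals(double a, double b, double eps) {
        return Math.abs(a - b) < eps;
    }
}
```

For example, approxEquals(rs.getDouble(30), 101.20, 1e-6) instead of rs.getDouble(30) == 101.20.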
I need to update a table with data from a CSV. All data is validated before the update takes place: a validation method (which is not presented below) checks whether some assumptions are true and "flags" the object as valid or invalid. I've already tested it a lot and it's working exactly as I want.
Even so, I would like to guarantee that all statements will be executed even if there's a failure in one batch, something I was not able to work out. If this happens, I want the batch in which the failed statement sits to be skipped and the next one to be executed.
public void updateTable(List<PersonBean> personList) {
    Connection connection = null;
    PreparedStatement ps = null;
    String updateDBPersonSQL = "UPDATE Person set merge_parent_id = ? WHERE id = ?";
    try {
        logger.info("DATA UPDATING STARTED");
        input = new FileInputStream("resources/propertiesFiles/applications.properties");
        properties.load(input);
        final int batchSize = Integer.parseInt(properties.getProperty("batchSize"));
        connection = DBConnection.getConnection();
        connection.setAutoCommit(false);
        int validObj = 0;
        ps = connection.prepareStatement(updateDBPersonSQL);
        for (int i = 0; i < personList.size(); i++) {
            PersonBean person = personList.get(i);
            if (person.getValidationStatus().equals("valid")) {
                ps.setInt(1, person.getMerge_parent_id());
                ps.setInt(2, person.getId());
                ps.addBatch();
                validObj++;
                if (validObj % batchSize == 0 && validObj != 0) {
                    ps.executeBatch();
                    connection.commit();
                    logger.info(batchSize + " rows updated");
                }
            }
        }
        int[] batchCount = ps.executeBatch();
        connection.commit();
        logger.info(batchCount.length + " rows updated");
        writeValidationStatusToCSV(personList);
    } catch (BatchUpdateException e) {
        int[] updateCount = e.getUpdateCounts();
        for (int i = 0; i < updateCount.length; i++) {
            if (updateCount[i] >= 0) {
                logger.info(updateCount.length + " objects updated.");
            } else if (updateCount[i] == Statement.EXECUTE_FAILED) {
                ?????
            }
        }
        logger.error(updateCount.length);
        logger.error("BatchUpdateException: " + e);
        logger.error("getNextException: " + e.getNextException());
        try {
            connection.rollback();
        } catch (SQLException e1) {
            logger.error("Rollback error: " + e1, e1);
        }
    } finally {
        if (ps != null) {
            try {
                ps.close();
            } catch (SQLException e) {
                logger.info(e);
            }
        }
    }
    logger.info("DATA UPDATING FINISHED");
}
I saw a lot of material about how to handle the exception, but none of it explained or pointed me in the direction of how to retry the next statements, that is, how to execute the next batch.
How do I manage to do this?
EDIT: I'm using Postgresql
I managed to retry the next batches by surrounding the batch execution with try and catch statements. This way I'm able to catch the BatchUpdateException and call a continue statement.
try {
    ps.executeBatch();
    connection.commit();
    /* Some more code */
} catch (BatchUpdateException e) {
    connection.rollback();
    /* Some more code */
    continue;
}
I also used some control logic to "flag" the statements and batches that were already executed and logged them, making it easier to troubleshoot if some statement fails.
Here is the full code:
public void updateTable(List<PersonBean> personList) throws Exception {
    logger.info("TABLE UPDATE STARTED");
    List<PersonBean> personListValidated = createValidStmtList(personList);
    Connection connection = null;
    PreparedStatement ps = null;
    String updatePersonSQL = "UPDATE Person SET merge_parent_id = ? WHERE id = ?";
    input = new FileInputStream("resources/propertiesFiles/applications.properties");
    properties.load(input);
    final int batchSize = Integer.parseInt(properties.getProperty("batchSize"));
    /* A list is used to "flag" the batches that were already executed. BatchStatus objects have only
     * two parameters: number (incremented as the batches are executed) and status (success or fail). */
    List<BatchStatus> batchStatusList = new ArrayList<BatchStatus>();
    /* These variables are used to help flag the batches and statements that were already executed. */
    int batchCount = 0;
    int stmtAddedToBatchCount = 0;
    try {
        connection = DBConnection.getConnection();
        connection.setAutoCommit(false);
        ps = connection.prepareStatement(updatePersonSQL);
        /* personListValidated contains the objects that will be updated in the table. Instead of doing
         * the validation in the update method, I decomposed this part into two other methods, making it
         * easier to control the statements added to the batch. */
        for (int i = 0; i < personListValidated.size(); i++) {
            PersonBean personValid = personListValidated.get(i);
            ps.setInt(1, personValid.getMerge_parent_id());
            ps.setInt(2, personValid.getId());
            ps.addBatch();
            personValid.setToBatch("true");
            stmtAddedToBatchCount++;
            logger.info("Row added to batch (count: " + stmtAddedToBatchCount + ")");
            if (stmtAddedToBatchCount % batchSize == 0) {
                batchCount++;
                try {
                    ps.executeBatch();
                    connection.commit();
                    for (int j = stmtAddedToBatchCount - batchSize; j < stmtAddedToBatchCount; j++) {
                        personValid = personListValidated.get(j);
                        personValid.setValidationStatus("success");
                    }
                    BatchStatus batchStatusObj = new BatchStatus(batchCount, "success");
                    batchStatusList.add(batchStatusObj);
                    logger.info(batchStatusList.get(batchCount - 1));
                } catch (BatchUpdateException e) {
                    connection.rollback();
                    for (int j = stmtAddedToBatchCount - batchSize; j < stmtAddedToBatchCount; j++) {
                        personValid = personListValidated.get(j);
                        personValid.setValidationStatus("fail");
                    }
                    BatchStatus batchStatusObj = new BatchStatus(batchCount, "fail");
                    batchStatusList.add(batchStatusObj);
                    logger.info(batchStatusList.get(batchCount - 1));
                    logger.error("Batch execution fail: " + e, e);
                    continue;
                }
            }
        }
    } catch (SQLException e) {
        logger.error(e, e);
    }
    int[] lastBatchCount = null;
    /* Try and catch to handle the statements executed in the last batch */
    try {
        lastBatchCount = ps.executeBatch();
        connection.commit();
        for (int j = batchStatusList.size() * batchSize; j < stmtAddedToBatchCount; j++) {
            PersonBean personValid = personListValidated.get(j);
            personValid.setValidationStatus("success");
        }
        logger.info(lastBatchCount.length + " rows updated on the last batch");
        logger.info("Last batch executed");
    } catch (BatchUpdateException e) {
        connection.rollback();
        for (int j = batchStatusList.size() * batchSize; j < stmtAddedToBatchCount; j++) {
            PersonBean personValid = personListValidated.get(j);
            personValid.setValidationStatus("fail");
        }
        logger.error("Last batch failed to execute: " + e, e);
    }
    writeValidationStatusToCSV(personList);
    logger.info("TABLE UPDATE FINISHED");
}
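Which statements inside a failed batch succeeded can be read from BatchUpdateException.getUpdateCounts(): entries >= 0 are per-statement row counts, and Statement.EXECUTE_FAILED marks failures (drivers differ in whether they continue past the first failure). The interpretation itself can be tested without a database; the class and method names here are mine:

```java
import java.sql.Statement;

public class UpdateCountReport {
    // Counts the entries that the driver marked as failed.
    public static int countFailed(int[] updateCounts) {
        int failed = 0;
        for (int c : updateCounts) {
            if (c == Statement.EXECUTE_FAILED) {
                failed++;
            }
        }
        return failed;
    }
}
```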