I am using the following code to insert data into a table.
test_conn.setAutoCommit(false);
stmt = test_conn.prepareStatement("INSERT INTO ...");
while (RSet.next()) {
    for (int i = 1; i <= columnCount; i++) {
        stmt.setString(i, RSet.getString(i));
    }
    stmt.addBatch();
}
stmt.executeBatch();
test_conn.commit();
Other processing methods should run only after all of the above rows have been inserted successfully.
When I insert into the table using executeBatch(), if an SQLException or error occurs during the insert, is it possible to find out which record's insertion threw the exception?
You have to wrap the stmt.executeBatch() call in a try-catch and check the exception for details; the driver throws a java.sql.BatchUpdateException, and batch execution will stop on the first error that occurs.
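A minimal sketch of that, reusing the variable names from the question's snippet and assuming the driver reports the failure through java.sql.BatchUpdateException:

try {
    stmt.executeBatch();
    test_conn.commit();
} catch (BatchUpdateException e) {
    // getUpdateCounts() holds one entry per statement the driver processed;
    // depending on the driver it either stops at the failing statement or
    // marks failed entries with Statement.EXECUTE_FAILED.
    int[] counts = e.getUpdateCounts();
    System.err.println("Batch failed after " + counts.length
            + " processed statements: " + e.getMessage());
    test_conn.rollback();
}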
I have a program which is connected to a DB2 z/OS database. I get the following exception:
com.ibm.db2.jcc.am.SqlException: DB2 SQL Error: SQLCODE=-805, SQLSTATE=51002, SQLERRMC=NULLID.SYSLH21E.5359534C564C3031, DRIVER=3.66.46
This error says my program is running out of statement handles. So I checked everything and summarised all SQL activity:
Connection_1
Connection_2
Resultset set1 = Connection_1.PreparedStatement(Select Table).open
try {
    while (set1.next()) {
        Resultset set2 = Connection_1.PreparedStatement(find Dataset).open
        try {
            if (set2.next()) {
                Connection_2.PreparedStatement(Insert in Table).open
                Connection_2.PreparedStatement(Insert in Table).close
                Connection_2.PreparedStatement(update in Table).open
                Connection_2.PreparedStatement(update in Table).close
            }
        } finally {
            set2.close
            PreparedStatement(find Dataset).close
        }
        if (something is true) {
            Connection_2.commit()
        }
    }
} finally {
    set1.close
    PreparedStatement(Select Table).close
    Connection_2.close
}
Then, at the moment just before my program crashes, I create a lot of PreparedStatements:
for (int i = 0; i < 10000; i++) {
    PreparedStatement statement = CONNECTION.prepareStatement("SELECT * FROM TABLE");
}
Then I get the following Error:
Exception stack trace:
com.ibm.db2.jcc.am.SqlException: DB2 SQL Error: SQLCODE=-805, SQLSTATE=51002, SQLERRMC=NULLID.SYSLH219 0X5359534C564C3031, DRIVER=3.66.46
Concurrently open statements:
1. SQL string: SELECT * FROM TABEL
Number of statements: 10000
2. SQL string: SELECT * FROM OTHER_TABLE
Number of statements: 1
********************
So it looks like there is no problem with open statements. Are there other possibilities for an exception like this? Maybe I select too many datasets?
Resultset set1 = Connection_1.PreparedStatement(Select Table).open
This table has roughly 4,000,000 rows.
I hope someone can help me. If you need more information just tell me.
Kind regards!
I am trying to insert a large amount of data from one table into another table. The two tables are in different regions. When I insert data, the ID I am using to create the connection can insert data for a small number of rows, but if it inserts more than 500 rows it throws this exception:
com.ibm.db2.jcc.b.SqlException: DB2 SQL error: SQLCODE: -551, SQLSTATE: 42501, SQLERRMC: DB2GCS;EXECUTE PACKAGE;NULLID.SYSLH203.
I am not able to find out why it reports an authorization exception for the same ID when there is more data.
My code snippet:
while (RSet.next()) {
    stmt = test_conn.prepareStatement("Insert Query");
    for (int i = 1; i <= columnCount; i++) {
        stmt.setString(i, RSet.getString(i));
    }
    stmt.addBatch();
}
stmt.executeBatch();
Thanks in advance for your help.
Your code is actually not batching correctly and this could be the reason why it's breaking. The problem is that you're preparing your insert query over and over again needlessly.
You need to prepare it just once, outside the loop, like this:
test_conn.setAutoCommit(false);
stmt = test_conn.prepareStatement("INSERT INTO ...");
while (RSet.next()) {
    for (int i = 1; i <= columnCount; i++) {
        stmt.setString(i, RSet.getString(i));
    }
    stmt.addBatch();
}
stmt.executeBatch();
test_conn.commit();
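If the source table is large, it can also help to flush the batch in smaller chunks instead of accumulating everything before a single executeBatch(); a minimal sketch, using a hypothetical chunk size of 500:

test_conn.setAutoCommit(false);
stmt = test_conn.prepareStatement("INSERT INTO ...");
int batchSize = 500;   // hypothetical chunk size
int pending = 0;
while (RSet.next()) {
    for (int i = 1; i <= columnCount; i++) {
        stmt.setString(i, RSet.getString(i));
    }
    stmt.addBatch();
    if (++pending % batchSize == 0) {
        stmt.executeBatch();   // send the current chunk to the server
        stmt.clearBatch();
    }
}
stmt.executeBatch();           // flush any remaining rows
test_conn.commit();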
I am trying to retrieve the generated keys from an executeBatch() transaction, but I only get the last key that was added.
This is my code:
PreparedStatement ps_insert = conn.prepareStatement(insertQuery, PreparedStatement.RETURN_GENERATED_KEYS);
for (int i = 0; i < adding_dates.length; i++) {
    ps_insert.setInt(1, Integer.parseInt(consultant_id));
    ps_insert.setDate(2, adding_dates[i]);
    ps_insert.setInt(3, Integer.parseInt(room_id));
    ps_insert.addBatch();
}
ps_insert.executeBatch();
ResultSet rs = ps_insert.getGeneratedKeys(); //<-- Only the last key retrieved
conn.commit();
What am I doing wrong?
EDIT: Apologies for not mentioning that I use H2 (http://www.h2database.com/html/main.html) database in embedded mode.
According to the H2 JDBC driver javadocs, this is the normal behaviour:
Return a result set that contains the last generated auto-increment
key for this connection, if there was one. If no key was generated by
the last modification statement, then an empty result set is returned.
The returned result set only contains the data for the very last row.
You must iterate the ResultSet to retrieve the keys.
PreparedStatement ps_insert = conn.prepareStatement(insertQuery, PreparedStatement.RETURN_GENERATED_KEYS);
for (int i = 0; i < adding_dates.length; i++) {
    ps_insert.setInt(1, Integer.parseInt(consultant_id));
    ps_insert.setDate(2, adding_dates[i]);
    ps_insert.setInt(3, Integer.parseInt(room_id));
    ps_insert.addBatch();
}
ps_insert.executeBatch();
ResultSet rs = ps_insert.getGeneratedKeys();
if (rs.next()) {
    ResultSetMetaData rsmd = rs.getMetaData();
    int colCount = rsmd.getColumnCount();
    do {
        for (int i = 1; i <= colCount; i++) {
            String key = rs.getString(i);
            System.out.println("key " + i + " is " + key);
        }
    } while (rs.next());
}
conn.commit();
This is a limitation of the H2 implementation, and it is a known issue.
For now, use inserts/updates without batching, or query the generated keys afterwards through a select.
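A minimal sketch of the non-batched workaround, reusing the identifiers from the question (java.util.ArrayList is assumed for collecting the keys):

List<Long> generatedKeys = new ArrayList<>();
PreparedStatement ps_insert = conn.prepareStatement(insertQuery, PreparedStatement.RETURN_GENERATED_KEYS);
for (int i = 0; i < adding_dates.length; i++) {
    ps_insert.setInt(1, Integer.parseInt(consultant_id));
    ps_insert.setDate(2, adding_dates[i]);
    ps_insert.setInt(3, Integer.parseInt(room_id));
    ps_insert.executeUpdate();                      // one row at a time instead of addBatch()
    ResultSet keys = ps_insert.getGeneratedKeys();  // now holds the key of this row only
    if (keys.next()) {
        generatedKeys.add(keys.getLong(1));
    }
    keys.close();
}
conn.commit();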
If you are sharing a session/connection between 2 threads, and two of those threads try to execute statements at the same time, then you might see this kind of problem.
You probably need to either (a) use a connection pool or (b) synchronise your entire access to the DB.
For instance, for option (b), put the synchronized keyword in front of your method to make it thread-safe.
Just a thought, as I don't know your complete execution context.
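A minimal sketch of option (b), assuming a hypothetical DAO method that wraps every use of the shared connection (sharedConnection, findDataset, and the query are illustrative names, not from the original code):

// All threads go through this synchronized method, so only one statement
// is ever in flight on the shared connection at a time.
public synchronized int findDataset(String key) throws SQLException {
    try (PreparedStatement ps = sharedConnection.prepareStatement(
            "SELECT id FROM mytable WHERE x = ?")) {
        ps.setString(1, key);
        try (ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : -1;   // -1 if no matching row
        }
    }
}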
I am extracting data from an Excel sheet and inserting it into my Oracle table. The database is set up in such a way that when a batch statement is executed, if any insert statement in the batch fails, the remaining statements in the batch are not executed. So my problem is: how can I find out which row of data is actually causing the failure, so I can send a message to the user with the row number of the offending data?
Connection con = null;
PreparedStatement pstmt = null;
Iterator iterator = list.iterator();
int rowCount = list.size();
int currentRow = 0;
String sqlStatement = "INSERT INTO DMD_VOL_UPLOAD (ORIGIN, DESTINATION, DAY_OF_WEEK, VOLUME)";
sqlStatement += " VALUES(?, ?, ?, ?)";
int batchSize = 1000;
int[] updateCounts;
pstmt = con.prepareStatement(sqlStatement); // con is assumed to be opened elsewhere
for (currentRow = 1; currentRow <= rowCount; currentRow++) {
    ForecastBatch forecastBatch = (ForecastBatch) iterator.next();
    pstmt.setString(1, forecastBatch.getOrigin());
    pstmt.setString(2, forecastBatch.getDestination());
    pstmt.setInt(3, forecastBatch.getDayOfWeek());
    pstmt.setInt(4, forecastBatch.getVolumeSum());
    pstmt.addBatch();
    if (currentRow % batchSize == 0) {
        updateCounts = pstmt.executeBatch();
        con.commit();
        pstmt.clearBatch();
        session.flush();
        session.clear();
    }
}
executeBatch returns an integer array containing the update count for each statement in the batch. Negative values indicate problems: Statement.SUCCESS_NO_INFO (-2) means the statement succeeded but the row count is unknown, and Statement.EXECUTE_FAILED (-3) means it failed. You should be able to figure out which ones failed using this return value.
http://docs.oracle.com/javase/1.3/docs/guide/jdbc/spec2/jdbc2.1.frame6.html
java.sql.Statement's javadoc says that executeBatch() throws BatchUpdateException (a subclass of SQLException) if one of the commands sent to the database fails to execute properly or attempts to return a result set.
The method getUpdateCount() of java.sql.BatchUpdateException "..Retrieves the update count for each update statement in the batch update that executed successfully before this exception occurred..."
If none of this works, you will probably have to fall back to executing and committing each statement (within this particular batch) individually until you hit an error.
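A minimal sketch of locating the failing row from that exception, reusing the question's pstmt, con, currentRow, and batchSize (this would wrap the executeBatch() call inside the loop; the mapping back to a sheet row is illustrative and assumes the batch is flushed every batchSize rows):

try {
    updateCounts = pstmt.executeBatch();
    con.commit();
} catch (BatchUpdateException bue) {
    int[] counts = bue.getUpdateCounts();
    int failedIndex = -1;
    // Some drivers return a count for every statement and mark failures explicitly...
    for (int j = 0; j < counts.length; j++) {
        if (counts[j] == Statement.EXECUTE_FAILED) {
            failedIndex = j;
            break;
        }
    }
    // ...others stop at the first failure, so the failing statement is the next one.
    if (failedIndex == -1) {
        failedIndex = counts.length;
    }
    int failedSheetRow = currentRow - batchSize + 1 + failedIndex;
    System.err.println("Insert failed at row " + failedSheetRow + ": " + bue.getMessage());
}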
In my application, I need to perform millions of queries against a MySQL database. The code looks as follows:
for (int i = 0; i < num_rows; i++) {
    String query2 = "select id from mytable where x='" + y.get(i) + "'";
    Statement stmt2 = Con0.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
    ResultSet rs2 = stmt2.executeQuery(query2);
    ... // process result in rs2
    rs2.close();
}
where num_rows is around 2 million. After about 600k iterations, Java reports an error and exits:
java.lang.OutOfMemoryError: Java heap space error.
What's wrong in my codes? How should I avoid such an error?
Thanks in advance!
Close your statements as well.
A plain Statement is not a good solution here. Try the following code:
PreparedStatement pre = Con0.prepareStatement("select id from mytable where x=?");
for (int i = 0; i < num_rows; i++) {
    pre.setString(1, y.get(i));
    ResultSet rs2 = pre.executeQuery();
    ... // process result in rs2
    rs2.close();
    pre.clearParameters();
}
pre.close();
I don't know if the answer you accepted has solved your problem, since it doesn't change anything that could cause it.
The problem occurs when the ResultSet caches all the rows returned by the query, which can happen either as you iterate through the set or through prefetching. I've had a similar problem with the PostgreSQL JDBC driver, which ignored the cursor fetch size when running in non-transactional (auto-commit) mode.
The JDBC driver should use cursors for such queries, so check the driver's documentation for the fetchSize parameter. As an alternative, you can manage cursors yourself by executing SQL commands to create a cursor and fetch the next X rows.
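For MySQL Connector/J specifically, row-by-row streaming is requested with the combination below; a minimal sketch, reusing the question's connection and query and assuming the Connector/J driver:

// Ask Connector/J to stream rows instead of buffering the whole result in memory.
PreparedStatement pre = Con0.prepareStatement(
        "select id from mytable where x = ?",
        ResultSet.TYPE_FORWARD_ONLY,
        ResultSet.CONCUR_READ_ONLY);
pre.setFetchSize(Integer.MIN_VALUE);   // Connector/J's signal for streaming mode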
Using a PreparedStatement declared outside the loop should also help, since only the value of x changes in each iteration. You're also, at least in the code shown, not closing the statement you use, which keeps the garbage collector from freeing the memory it holds.
Assuming that you are using a single connection for all your queries, and assuming your code is more complicated than what you show us, it is critical that you ensure that each Statement and each ResultSet is closed when you are finished with it. This means that you need a try/finally block like this:
for (int i = 0; i < num_rows; i++) {
    String query2 = "select id from mytable where x='" + y.get(i) + "'";
    Statement stmt2 = Con0.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
    ResultSet rs2 = null;
    try {
        rs2 = stmt2.executeQuery(query2);
        ... // process result in rs2
    } finally {
        try {
            if (rs2 != null) { rs2.close(); }
        } catch (SQLException sqle) {
            // complain to logs
        }
        try {
            stmt2.close();
        } catch (SQLException sqle) {
            // complain to logs
        }
    }
}
If you do not aggressively and deterministically close all result set and statement objects, and if you do requests quickly enough, you will run out of memory.
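On Java 7 and later, try-with-resources expresses the same discipline with less code; a minimal sketch of the loop body under that assumption:

for (int i = 0; i < num_rows; i++) {
    String query2 = "select id from mytable where x='" + y.get(i) + "'";
    // Both resources are closed automatically, in reverse order, even when an exception is thrown.
    try (Statement stmt2 = Con0.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
         ResultSet rs2 = stmt2.executeQuery(query2)) {
        // ... process result in rs2
    }
}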