I'm working on an application that pulls data from a remote Web service and then pushes it to an Oracle database.
The data is event based, so each event can result in an INSERT, UPDATE or DELETE. For this purpose I have created a single PreparedStatement object; based on the type of event, the code creates a query string and assigns it to the prepared statement.
With each retrieved event the target table on Oracle may change, and hence the query; so after retrieving each event, a query string is created based on the table being operated on, passed to the PreparedStatement object and executed.
BUT, I have 2 issues with this:
1. I guess I'm not using the prepared statement efficiently; since the query changes each time, I'm not sure the DB's caching mechanism for prepared statements will help much in this case.
2. Also, all my statements are either INSERT, UPDATE or DELETE, so I'm using pstmt.executeUpdate(), which returns the number of rows affected by the operation. But I'm getting the "maximum open cursors exceeded" error. I read through many threads, and since my statements don't return a ResultSet, and closing the prepared statement after each operation is not efficient, I'm not sure how I should handle this error. I can increase the open cursor limit in the DB, but that is not an application fix, as it would only delay the error until a bad scenario is encountered.
getEvents()
for (i = 0 -> lastEvent)
{
    if (event == condition1)
    {
        Process.condition1(arg1, arg2)
    }
    else if (event == condition2)
    {
        Process.condition2(arg1, arg2)
    }
    ...
}
pstmt.close();
connection.close();
Process Class
{
    condition1(arg1, arg2)
    {
        sqlstatement = "INSERT INTO Table1 (column1, column2, column3, ...) VALUES (?, ?, ?)";
        pstmt = connection.prepareStatement(sqlstatement);
        pstmt.setString(1, value1);
        pstmt.setInt(2, value2);
        ...
        pstmt.executeUpdate();
        connection.commit();
        return;
    }

    condition2(arg1, arg2)
    {
        sqlstatement = "INSERT INTO Table2 (column1, column2, column3, ...) VALUES (?, ?, ?)";
        pstmt = connection.prepareStatement(sqlstatement);
        pstmt.setString(1, value1);
        pstmt.setInt(2, value2);
        ...
        pstmt.executeUpdate();
        connection.commit();
        return;
    }
}
Sorry, I guess the above gives some idea of how the whole process is being done. I didn't put the actual code because it is distributed among various classes and gets the data for the positional parameters from the remote service, but the above is a summarized form of how it is done.
One thought might be to collect similar types of events in a list and then process them together, but given the way my business requirement is and the manner in which the remote service provides data, that approach is more prone to errors.
Also, some of the Oracle tables have 130+ columns, and the code will be processing 150+ events every 15 minutes.
It is not possible to "assign a query string to a prepared statement". A prepared statement is created with a query string, as shown in your code: each call to connection.prepareStatement(sqlstatement) creates a new prepared statement. This is also why the "maximum open cursors exceeded" error occurs: the previously created pstmt is never closed.
To re-use the prepared statements, instantiate the Process class and give it an init method (and a corresponding close method) that is called once to set up the prepared statements that might be used (and closes them when the work is done):
PreparedStatement insertTable1;
PreparedStatement insertTable2;

void init(Connection c) {
    insertTable1 = c.prepareStatement(INSERT_TABLE1_QUERY);
    insertTable2 = c.prepareStatement(INSERT_TABLE2_QUERY);
}

void close() {
    insertTable1.close();
    insertTable2.close();
}

void condition1(Connection c, arg1, arg2) {
    insertTable1.setString(1, value1);
    insertTable1.setInt(2, value2);
    insertTable1.executeUpdate();
    c.commit();
}

void condition2(Connection c, arg1, arg2) {
    insertTable2.setString(1, value1);
    insertTable2.setInt(2, value2);
    insertTable2.executeUpdate();
    c.commit();
}
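For completeness, a minimal sketch of how the calling code might drive this, assuming a Connection named connection and a Process instance; the Event type and its condition checks are placeholders, not from the original post:

Process process = new Process();
process.init(connection);
try {
    for (Event event : getEvents()) {        // getEvents() as in the pseudocode above
        if (event.isCondition1()) {
            process.condition1(connection, arg1, arg2);
        } else if (event.isCondition2()) {
            process.condition2(connection, arg1, arg2);
        }
        // ... further event types
    }
} finally {
    process.close();       // prepared statements are closed exactly once
    connection.close();
}

This way each query is prepared only once per run, which keeps the number of open cursors bounded and lets the database reuse the cached statements.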
I'm creating a parcel machine program. Every parcel has a unique parcelID which is written to a MySQL DB. The problem is that every time I run the program, it starts counting parcelID from 0. I'm looking for a solution that lets me check the last parcelID in the database and create the next row after it.
Right now it works like this: 1. The Java program creates a new row in the DB (successfully). 2. I close the program after some time. 3. I run the program again and I can't add another new row, because I get the error "duplicate entry '1' for key 'PRIMARY'".
public static void post() throws Exception{
int parcelID = Parcel.generateID();
int clientMPNumber = Parcel.typeClientNumber();
int orderPassword = Parcel.generatePass();
try{
Connection con = getConnection();
PreparedStatement posted = con.prepareStatement("INSERT INTO Parcels.Orders (parcelID, clientMPNumber, orderPassword) VALUES ('"+parcelID+"', '"+clientMPNumber+"', '"+orderPassword+"')");
posted.executeUpdate();
}
catch(Exception e){
System.out.println(e);
}
finally{
System.out.println("Insert completed");
}
}
and the method is:
public static int generateID(){
parcelID = parcelID + 1;
return parcelID;
}
I'd let the database do the heavy lifting for you - just define the parcelID column as SERIAL (an auto-increment column) instead of trying to set its value yourself.
You shouldn't generate the ID yourself; just create an AUTO_INCREMENT column in the database table.
As described here, define your primary key column as auto-increment so your Java code doesn't have to calculate the primary key value manually on each insert.
If that is not a possibility, you need to show how you declare and initialize parcelID. In your current code, parcelID looks to be a class-level field that gets initialized to zero on each run, so you always get the same value: 1. You need to initialize it with the last value from the database.
Also, implement the suggestion mentioned in the comment to your question regarding PreparedStatement.
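If you do have to keep generating the ID in Java, a minimal sketch of initializing it from the database at startup could look like the following (table and column names taken from the question, error handling omitted):

// Run once at program start; assumes 'con' is an open java.sql.Connection.
try (Statement st = con.createStatement();
     ResultSet rs = st.executeQuery("SELECT MAX(parcelID) FROM Parcels.Orders")) {
    if (rs.next()) {
        parcelID = rs.getInt(1);   // getInt() returns 0 when MAX() is NULL (empty table)
    }
}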
There are a couple of things to attend to.
// parcelID should be an INT AUTO_INCREMENT primary key.
try (PreparedStatement posted = con.prepareStatement(
        "INSERT INTO Parcels.Orders (clientMPNumber, orderPassword) "
        + "VALUES (?, ?)",
        Statement.RETURN_GENERATED_KEYS)) {
    posted.setInt(1, clientMPNumber);
    posted.setInt(2, orderPassword);
    posted.executeUpdate();
    try (ResultSet rsKey = posted.getGeneratedKeys()) {
        if (rsKey.next()) {
            int parcelID = rsKey.getInt(1);
            return parcelID; // Or such
        }
    }
}
The database can deal with automatic numbering best, so that two transactions at the same time do not steal the same "next" number.
You should close things like Connection, PreparedStatement and ResultSet. This is best done with the somewhat awkward try-with-resources syntax, which closes them automatically even on an exception or a return.
PreparedStatements should be used with ? placeholders. This takes care of escaping special characters like ' in the password, and it also prevents SQL injection.
Stylistically it is better to catch SQLException rather than Exception; maybe even better, declare throws SQLException.
I'm trying to find the fastest way to do batch inserts.
I tried to insert several batches with jdbcTemplate.update(String sql), where
sql was built by a StringBuilder and looked like:
INSERT INTO TABLE(x, y, i) VALUES(1,2,3), (1,2,3), ... , (1,2,3)
The batch size was exactly 1000. I inserted nearly 100 batches.
I checked the time using StopWatch and found this insert time:
min[38ms], avg[50ms], max[190ms] per batch
I was glad, but I wanted to make my code better.
After that, I tried to use jdbcTemplate.batchUpdate like this:
jdbcTemplate.batchUpdate(sql, new BatchPreparedStatementSetter() {
    @Override
    public void setValues(PreparedStatement ps, int i) throws SQLException {
        // ...
    }

    @Override
    public int getBatchSize() {
        return 1000;
    }
});
where the sql looked like:
INSERT INTO TABLE(x, y, i) VALUES(1,2,3);
and I was disappointed! jdbcTemplate executed every single insert of the 1000-line batch separately. I looked at mysql_log and found a thousand inserts there.
I checked the time using StopWatch and found this insert time:
min[900ms], avg[1100ms], max[2000ms] per Batch
So, can anybody explain to me why jdbcTemplate does separate inserts in this method? Why is the method named batchUpdate?
Or maybe I am using this method in the wrong way?
These parameters in the JDBC connection URL can make a big difference in the speed of batched statements --- in my experience, they speed things up:
?useServerPrepStmts=false&rewriteBatchedStatements=true
See: JDBC batch insert performance
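For example, the full connection URL might look like the following (host, port and schema are placeholders, not from the answer):

jdbc:mysql://localhost:3306/mydb?useServerPrepStmts=false&rewriteBatchedStatements=true

With rewriteBatchedStatements=true, the MySQL driver rewrites a batch of single-row inserts into multi-row INSERT statements on the wire, which is what makes batched statements fast.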
I found a major improvement by setting the argTypes array in the call.
In my case, with Spring 4.1.4 and Oracle 12c, for insertion of 5000 rows with 35 fields:
jdbcTemplate.batchUpdate(insert, parameters); // Takes 7 seconds
jdbcTemplate.batchUpdate(insert, parameters, argTypes); // Takes 0.08 seconds!!!
The argTypes param is an int array where you set the SQL type of each field, like this:
int[] argTypes = new int[35];
argTypes[0] = Types.VARCHAR;
argTypes[1] = Types.VARCHAR;
argTypes[2] = Types.VARCHAR;
argTypes[3] = Types.DECIMAL;
argTypes[4] = Types.TIMESTAMP;
.....
I debugged org\springframework\jdbc\core\JdbcTemplate.java and found that most of the time was spent trying to determine the type of each field, and this was done for every record.
Hope this helps!
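To illustrate the three-argument overload, here is a minimal sketch with a hypothetical three-column table (the real case above had 35 fields, but the pattern is the same; needs java.sql.Types, java.sql.Timestamp and java.math.BigDecimal):

// Hypothetical table MY_TABLE(NAME VARCHAR, PRICE DECIMAL, CREATED_AT TIMESTAMP)
int[] argTypes = { Types.VARCHAR, Types.DECIMAL, Types.TIMESTAMP };

List<Object[]> parameters = new ArrayList<>();
parameters.add(new Object[] { "ACME", new BigDecimal("12.50"),
        new Timestamp(System.currentTimeMillis()) });

String insert = "INSERT INTO MY_TABLE (NAME, PRICE, CREATED_AT) VALUES (?, ?, ?)";
jdbcTemplate.batchUpdate(insert, parameters, argTypes);   // types declared once, not probed per row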
I have also faced the same issue with the Spring JDBC template. Probably with Spring Batch the statement was executed and committed on every insert, or in chunks, and that slowed things down.
I replaced the jdbcTemplate.batchUpdate() code with the original JDBC batch insertion code and found a major performance improvement.
DataSource ds = jdbcTemplate.getDataSource();
Connection connection = ds.getConnection();
connection.setAutoCommit(false);
String sql = "insert into employee (name, city, phone) values (?, ?, ?)";
PreparedStatement ps = connection.prepareStatement(sql);
final int batchSize = 1000;
int count = 0;
for (Employee employee: employees) {
ps.setString(1, employee.getName());
ps.setString(2, employee.getCity());
ps.setString(3, employee.getPhone());
ps.addBatch();
++count;
if(count % batchSize == 0 || count == employees.size()) {
ps.executeBatch();
ps.clearBatch();
}
}
connection.commit();
ps.close();
Check this link as well
JDBC batch insert performance
Simply use a transaction. Add @Transactional on the method.
Be sure to declare the correct TX manager if you are using several datasources: @Transactional("dsTxManager"). I have a case where I insert 60000 records. It takes about 15s. No other tweaks:
@Transactional("myDataSourceTxManager")
public void save(...) {
...
jdbcTemplate.batchUpdate(query, new BatchPreparedStatementSetter() {
    @Override
    public void setValues(PreparedStatement ps, int i) throws SQLException {
        ...
    }

    @Override
    public int getBatchSize() {
        if (data == null) {
            return 0;
        }
        return data.size();
    }
});
}
Change your sql insert to a single VALUES clause with placeholders: INSERT INTO TABLE(x, y, i) VALUES(?, ?, ?). The framework creates the loop for you.
For example:
public void insertBatch(final List<Customer> customers) {
    String sql = "INSERT INTO CUSTOMER " +
            "(CUST_ID, NAME, AGE) VALUES (?, ?, ?)";
    getJdbcTemplate().batchUpdate(sql, new BatchPreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            Customer customer = customers.get(i);
            ps.setLong(1, customer.getCustId());
            ps.setString(2, customer.getName());
            ps.setInt(3, customer.getAge());
        }

        @Override
        public int getBatchSize() {
            return customers.size();
        }
    });
}
If you have something like this, Spring will do something like:
for(int i = 0; i < getBatchSize(); i++){
execute the prepared statement with the parameters for the current iteration
}
The framework first creates a PreparedStatement from the query (the sql variable), then the setValues method is called and the statement is executed. That is repeated as many times as you specify in the getBatchSize() method. So the right way to write the insert statement is with only one VALUES clause.
You can take a look at http://docs.spring.io/spring/docs/3.0.x/reference/jdbc.html
I also had a bad time with the Spring JDBC batch template. In my case it would have been, like, insane to use pure JDBC, so instead I used NamedParameterJdbcTemplate. This was a must-have in my project. But it was way too slow at inserting hundreds of thousands of lines into the database.
To see what was going on, I sampled it with VisualVM during the batch update and, voilà:
What was slowing the process down was that, while setting the parameters, Spring JDBC was querying the database for the metadata of each parameter, and it seemed to do so for each parameter of each line, every time. So I just taught Spring to ignore the parameter types (as warned about in the Spring documentation on batch operations on a list of objects):
@Bean(name = "named-jdbc-tenant")
public synchronized NamedParameterJdbcTemplate getNamedJdbcTemplate(@Autowired TenantRoutingDataSource tenantDataSource) {
    System.setProperty("spring.jdbc.getParameterType.ignore", "true");
    return new NamedParameterJdbcTemplate(tenantDataSource);
}
Note: the system property must be set before creating the JDBC template object. It would be possible to just set it in application.properties, but this solved it and I've never touched it again.
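For reference, the same property can also be passed as a JVM flag when starting the application instead of calling System.setProperty (the jar name is a placeholder):

java -Dspring.jdbc.getParameterType.ignore=true -jar my-app.jar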
I don't know if this will work for you, but here's a Spring-free way that I ended up using. It was significantly faster than the various Spring methods I tried. I even tried the JDBC template batch update method the other answer describes, but even that was slower than I wanted. I'm not sure what the deal was, and the Internet didn't have many answers either. I suspected it had to do with how commits were being handled.
This approach is just straight JDBC using the java.sql packages and PreparedStatement's batch interface. It was the fastest way I could get 24M records into a MySQL DB.
I more or less just built up collections of "record" objects and then called the code below in a method that batch-inserted all the records. The loop that built the collections was responsible for managing the batch size.
I was trying to insert 24M records into a MySQL DB, and it was going at ~200 records per second using Spring Batch. When I switched to this method, it went up to ~2500 records per second, so my 24M record load went from a theoretical 1.5 days to about 2.5 hours.
First create a connection...
Connection conn = null;
try{
Class.forName("com.mysql.jdbc.Driver");
conn = DriverManager.getConnection(connectionUrl, username, password);
}catch(SQLException e){}catch(ClassNotFoundException e){}
Then create a prepared statement and load it with batches of values for insert, and then execute as a single batch insert...
PreparedStatement ps = null;
try{
conn.setAutoCommit(false);
ps = conn.prepareStatement(sql); // INSERT INTO TABLE(x, y, i) VALUES(?, ?, ?)
for(MyRecord record : records){
try{
ps.setString(1, record.getX());
ps.setString(2, record.getY());
ps.setString(3, record.getI());
ps.addBatch();
} catch (Exception e){
ps.clearParameters();
logger.warn("Skipping record...", e);
}
}
ps.executeBatch();
conn.commit();
} catch (SQLException e){
} finally {
if(null != ps){
try {ps.close();} catch (SQLException e){}
}
}
Obviously I've removed the error handling, and the query and the Record object are notional, and whatnot.
Edit:
Since your original question was comparing the insert into foobar values (?,?,?), (?,?,?)...(?,?,?) method to Spring batch, here's a more direct response to that:
It looks like your original method is likely the fastest way to do bulk data loads into MySQL without using something like the LOAD DATA INFILE approach. A quote from the MySQL docs (http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html):
If you are inserting many rows from the same client at the same time,
use INSERT statements with multiple VALUES lists to insert several
rows at a time. This is considerably faster (many times faster in some
cases) than using separate single-row INSERT statements.
You could modify the Spring JDBC Template batchUpdate method to do an insert with multiple VALUES specified per 'setValues' call, but you'd have to manually keep track of the index values as you iterate over the set of things being inserted. And you'd run into a nasty edge case at the end when the total number of things being inserted isn't a multiple of the number of VALUES lists you have in your prepared statement.
If you use the approach I outline, you could do the same thing (use a prepared statement with multiple VALUES lists) and then when you get to that edge case at the end, it's a little easier to deal with because you can build and execute one last statement with exactly the right number of VALUES lists. It's a bit hacky, but most optimized things are.
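Here is a rough sketch of that idea in plain JDBC, including the leftover-rows edge case. The table and columns reuse the question's INSERT INTO TABLE(x, y, i); the chunk size and the int[] record shape are made up for illustration, and autocommit is assumed to be off:

static final int ROWS_PER_STMT = 100;   // arbitrary number of VALUES lists per statement

static String buildInsert(int rows) {
    StringBuilder sb = new StringBuilder("INSERT INTO TABLE(x, y, i) VALUES ");
    for (int r = 0; r < rows; r++) {
        sb.append(r == 0 ? "(?,?,?)" : ",(?,?,?)");
    }
    return sb.toString();
}

static void insertAll(Connection conn, List<int[]> records) throws SQLException {
    int fullChunks = records.size() / ROWS_PER_STMT;
    int rest = records.size() % ROWS_PER_STMT;       // the nasty edge case at the end
    int idx = 0;
    try (PreparedStatement ps = conn.prepareStatement(buildInsert(ROWS_PER_STMT))) {
        for (int c = 0; c < fullChunks; c++) {
            int p = 1;
            for (int r = 0; r < ROWS_PER_STMT; r++, idx++) {
                int[] rec = records.get(idx);
                ps.setInt(p++, rec[0]);
                ps.setInt(p++, rec[1]);
                ps.setInt(p++, rec[2]);
            }
            ps.executeUpdate();
        }
    }
    if (rest > 0) {                                   // one last statement sized exactly to fit
        try (PreparedStatement ps = conn.prepareStatement(buildInsert(rest))) {
            int p = 1;
            for (int r = 0; r < rest; r++, idx++) {
                int[] rec = records.get(idx);
                ps.setInt(p++, rec[0]);
                ps.setInt(p++, rec[1]);
                ps.setInt(p++, rec[2]);
            }
            ps.executeUpdate();
        }
    }
    conn.commit();
}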
The solution given by @Rakesh worked for me.
Significant improvement in performance: earlier it took 8 minutes, and with this solution it takes less than 2 minutes.
DataSource ds = jdbcTemplate.getDataSource();
Connection connection = ds.getConnection();
connection.setAutoCommit(false);
String sql = "insert into employee (name, city, phone) values (?, ?, ?)";
PreparedStatement ps = connection.prepareStatement(sql);
final int batchSize = 1000;
int count = 0;
for (Employee employee: employees) {
ps.setString(1, employee.getName());
ps.setString(2, employee.getCity());
ps.setString(3, employee.getPhone());
ps.addBatch();
++count;
if(count % batchSize == 0 || count == employees.size()) {
ps.executeBatch();
ps.clearBatch();
}
}
connection.commit();
ps.close();
I encountered some serious performance issues with JdbcBatchItemWriter.write() (link) from Spring Batch and eventually found that the write logic delegates to JdbcTemplate.batchUpdate().
Adding the Java system property spring.jdbc.getParameterType.ignore=true fixed the performance issue entirely (from 200 records per second to ~5000).
The fix was tested and works on both PostgreSQL and MSSQL (it might not be dialect specific).
... and ironically, Spring documented this behaviour under a "note" section: link
In such a scenario, with automatic setting of values on an underlying PreparedStatement, the corresponding JDBC type for each value needs to be derived from the given Java type. While this usually works well, there is a potential for issues (for example, with Map-contained null values). Spring, by default, calls ParameterMetaData.getParameterType in such a case, which can be expensive with your JDBC driver. You should use a recent driver version and consider setting the spring.jdbc.getParameterType.ignore property to true (as a JVM system property or in a spring.properties file in the root of your classpath) if you encounter a performance issue — for example, as reported on Oracle 12c (SPR-16139).
Alternatively, you might consider specifying the corresponding JDBC
types explicitly, either through a 'BatchPreparedStatementSetter' (as
shown earlier), through an explicit type array given to a
'List<Object[]>' based call, through 'registerSqlType' calls on a
custom 'MapSqlParameterSource' instance, or through a
'BeanPropertySqlParameterSource' that derives the SQL type from the
Java-declared property type even for a null value.
I have two methods in my class. First I call dbExecuteStatement(), which executes the SQL query. After executing the query, I get a ResultSet object. I save this ResultSet object in a static HashMap so that on my next method call, fetchResults(), I can use the existing result set to retrieve the results. The reason for saving the ResultSet object in a map is that the fetchResults() request parameter gives me the maximum fetch row size, and on the basis of that value I iterate the result set. Both of these methods are supposed to be called individually from the client side.
Now the problem I am facing is that when I iterate the ResultSet object in fetchResults(), I get a row count of zero. If I fetch the same ResultSet from the HashMap inside dbExecuteStatement(), I get the actual row count, i.e. 5 in my case. I checked the ResultSet object that I put in the hash map in fetchResults() and dbExecuteStatement(); it is the same object. But if I get the ResultSetMetaData object in fetchResults() and dbExecuteStatement(), they come out different. Can someone help me understand the cause? Why am I getting a result count of zero?
Below is the code:
public class HiveDao1 {
private static Map<Object,Map<Object,Object>> databaseConnectionDetails
= new HashMap<Object,Map<Object,Object>>();
//This method will execute the sql query and will save the ResultSet obj in a hashmap for later use
public void dbExecuteStatement(DbExecuteStatementReq dbExecuteStatementReq){
//I already have a connection object saved in map
String uniqueIdForConnectionObject = dbExecuteStatementReq.getDbUniqueConnectionHandlerId();
Map<Object,Object> dbObject = databaseConnectionDetails.get(uniqueIdForConnectionObject);
Connection connection = (Connection) dbObject.get(DatabaseConstants.CONNECTION);
try {
Statement stmt = connection.createStatement() ;
// Execute the query
ResultSet resultSet = stmt.executeQuery(dbExecuteStatementReq.getStatement().trim()) ;
//save the result set for further use, Result set will be used in fetchResult() call
dbObject.put(DatabaseConstants.RESULTSET, resultSet);
/*
* Now below is the debugging code,which I put to compare the result set
* iteration dbExecuteStatement() and fetchResults method
*/
ResultSet rs = (ResultSet) dbObject.get(DatabaseConstants.RESULTSET);
ResultSetMetaData md = (ResultSetMetaData) dbObject.get(DatabaseConstants.RESULTSETMETADATA);
System.out.println("==ResultSet fethced in dbExecuteStatement=="+rs);
System.out.println("==ResultSet metadata fetched in dbExecuteStatement ==="+rs.getMetaData());
int count = 0;
while (rs.next()) {
++count;
}
if (count == 0) {
System.out.println("No records found");
}
System.out.println("No of rows found from result set in dbExecuteStatement is "+count);
} catch (SQLException e) {
e.printStackTrace();
}
}
/*
* This method fetch the result set object from hashMap
* and iterate it on the basis of fetch size received in req parameter
*/
public void fetchResults(FetchResultsReq fetchResultsReq){
String uniqueIdForConnectionObject = fetchResultsReq.getDbUniqueConnectionHandlerId();
Map<Object,Object> dbObject = databaseConnectionDetails.get(uniqueIdForConnectionObject);
try {
//Fetch the ResultSet object that was saved by dbExecuteStatement()
ResultSet rs = (ResultSet) dbObject.get(DatabaseConstants.RESULTSET);
ResultSetMetaData md = (ResultSetMetaData) dbObject.get(DatabaseConstants.RESULTSETMETADATA);
System.out.println("ResultSet fethced in fetchResults at server side dao layer======"+rs);
System.out.println("ResultSet metadata fetched in fetchResults at server side dao layer======"+md);
int count = 0;
while (rs.next()) {
++count;
}
if (count == 0) {
System.out.println("No records found");
}
//Here the row count is not same as row count in dbExecuteStatement()
System.out.println("No of rows found from result set in fetchResults is "+count);
} catch (SQLException e) {
e.printStackTrace();
}
}
}
Expanding on my comment (and @Glenn's):
Using a ResultSet more than once
When you write debug code that iterates a ResultSet, the cursor moves to the end of the results. Of course, if you then call the same object and use next(), it will still be at the end, so you won't get any more records.
If you really need to read from the same ResultSet more than once, you need to execute the query such that it returns a scrollable ResultSet. You do this when you create the statement:
Statement stmt = connection.createStatement(
ResultSet.TYPE_SCROLL_INSENSITIVE,
ResultSet.CONCUR_READ_ONLY );
The default statement created by connection.createStatement() without parameters returns a result set of type ResultSet.TYPE_FORWARD_ONLY, and that ResultSet object can only be read once.
If your result set type is scroll insensitive or scroll sensitive, you can use a statement like rs.first() to reset the cursor and then you can fetch the records again.
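A minimal sketch of that pattern (the query text and row processing are placeholders): rs.beforeFirst() puts the cursor back before the first row, so a second while (rs.next()) loop sees all the rows again.

Statement stmt = connection.createStatement(
        ResultSet.TYPE_SCROLL_INSENSITIVE,
        ResultSet.CONCUR_READ_ONLY);
ResultSet rs = stmt.executeQuery("SELECT * FROM some_table");

int count = 0;
while (rs.next()) {        // first pass, e.g. the debug counting
    ++count;
}

rs.beforeFirst();          // reset the cursor

while (rs.next()) {        // second pass sees every row again
    // process the row
}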
Keeping the statement in scope
@Glenn's comment is extremely important. The way your program works right now, it may work fine throughout the testing phase, and then suddenly in production you'll sometimes have zero records in your ResultSet, and the error will only be reproducible occasionally - a debugging nightmare.
If the Statement object that produces the ResultSet is closed, the ResultSet itself is also closed. Since you are not closing your Statement object yourself, this will be done when the Statement object is finalized.
The stmt variable is local, and it's the only reference to that Statement that we know of. Therefore, it will be claimed by the garbage collector. However, objects that have a finalizer are relegated to a finalization queue, and there is no way of knowing when the finalizer will be called, and no way to control it. Once it happens, the ResultSet becomes closed out of your control.
So be sure to keep a reference to the statement object alongside your ResultSet. And make sure you close it properly yourself once you are done with the ResultSet and will not be using it anymore. And after you close it remember to remove the reference you have kept - both for the statement and the result set - to avoid memory leaks. Closing is important, and relying on finalizers is a bad strategy. If you don't close it yourself, you might run out of cursors at some point in your database (depending on the DBMS and its configuration).
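A minimal sketch of what that could look like with the question's dbObject map; DatabaseConstants.STATEMENT is an assumed extra key that does not exist in the original code:

// In dbExecuteStatement(): keep the Statement alongside the ResultSet.
dbObject.put(DatabaseConstants.STATEMENT, stmt);      // assumed new constant
dbObject.put(DatabaseConstants.RESULTSET, resultSet);

// In a separate cleanup method, once the client is done fetching results:
ResultSet rs = (ResultSet) dbObject.remove(DatabaseConstants.RESULTSET);
Statement st = (Statement) dbObject.remove(DatabaseConstants.STATEMENT);
if (rs != null) rs.close();
if (st != null) st.close();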
I am a beginner in Android development.
I have a question: what is the ?s term used for in the explanation below? I got it from the Android developer documentation.
public int update (String table, ContentValues values, String whereClause, String[] whereArgs)
Added in API level 1
Convenience method for updating rows in the database.
Parameters
table - the table to update in
values - a map from column names to new column values. null is a valid value that will be translated to NULL.
whereClause - the optional WHERE clause to apply when updating. Passing null will update all rows.
whereArgs - You may include ?s in the where clause, which will be replaced by the values from whereArgs. The values will be bound as Strings.
Returns
the number of rows affected
Basically it's a variable to be filled in later. You should use these everywhere data comes from a user, a file, or anything else not hardcoded into the app. Why? Because it prevents security problems due to SQL injection. The variables cannot themselves be SQL and will not be parsed as SQL by the database, so if all values sent from users to the DB are bind variables, you remove that entire class of security issues from the app.
A PreparedStatement supports a mechanism called bind variables. For example,
SELECT * FROM table WHERE id = ?
In the above query, there is a single bind parameter for an id. You might use it (to get a row where id is 100) with something like
String sql = "SELECT * FROM table WHERE id = ?";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setInt(1, 100);
    try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {
            // read the columns of the matching row here
        }
    }
} catch (SQLException e) {
    e.printStackTrace();
}
Each ? corresponds, in order, to an entry of the String[] whereArgs array passed as the method's last parameter:
update(table, values, "age > ? AND age < ?", new String[] { "18", "25" });
The documentation means literally using '?' in the whereClause. A simple example:
rawQuery("select * from todo where _id = ?", new String[] { id });
In the above statement, during execution the ? will be replaced by the value of the variable id.
This mechanism helps prevent SQL Injection.
I am using a SELECT statement to get data from a table and then insert it into another table. However, the line "stmt.executeQuery(query);" inside the loop inserts only the first row from the table, and then the loop exits. When I comment this line out, the while loop runs through all the rows and prints them out. The stack trace isn't showing any errors. Why is this happening?
try{
String query = "SELECT * FROM "+schema_name+"."+table;
rs = stmt.executeQuery(query);
while (rs.next()) {
String bundle = rs.getString("BUNDLE");
String project_cd = rs.getString("PROJECT_CD");
String dropper = rs.getString("DROPPER");
String week = rs.getString("WEEK");
String drop_dt = rs.getString("DROP_DT").replace(" 00:00:00.0","");
query = "INSERT INTO INDUCTION_INFO (BUNDLE, PROJECT_CD, DROPPER, WEEK, DROP_DT) "
+ "VALUES ("
+ bundle+","
+ "'"+project_cd+"',"
+ dropper+","
+ week+","
+ "to_date('"+drop_dt+"','YYYY-MM-DD'))";
System.out.println(query);
stmt.executeQuery(query);
}
}catch(Exception e){
e.printStackTrace();
}
You are re-using the Statement that was used to produce rs on the last line of your loop.
This will close the ResultSet rs. As stated in the documentation:
A ResultSet object is automatically closed when the Statement object that generated it is closed, re-executed, or used to retrieve the next result from a sequence of multiple results.
You need to use a second Statement object to execute the INSERT statements.
Statement objects can only do one thing at a time, so when you execute that INSERT, you invalidate the ResultSet which it generated. You'll need to create a second Statement object to perform the INSERT.
From the Statement documentation: "By default, only one ResultSet object per Statement object can be open at the same time. Therefore, if the reading of one ResultSet object is interleaved with the reading of another, each must have been generated by different Statement objects. All execution methods in the Statement interface implicitly close a statement's current ResultSet object if an open one exists."
If you use the same statement, it will invalidate the previous result set. You should use a different statement to perform the updates/inserts.
This is from the Javadoc of the Statement interface:
By default, only one ResultSet object per Statement object can be open
at the same time.
So you'd better use a second Statement, or even better a PreparedStatement.
And to execute an INSERT SQL statement you should use executeUpdate() instead of executeQuery().
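Putting both pieces of advice together (a separate PreparedStatement for the INSERT, executed with executeUpdate()), the loop from the question might look roughly like this; 'conn' is the assumed Connection, and the column handling is simplified:

String selectSql = "SELECT * FROM " + schema_name + "." + table;
String insertSql = "INSERT INTO INDUCTION_INFO (BUNDLE, PROJECT_CD, DROPPER, WEEK, DROP_DT) "
        + "VALUES (?, ?, ?, ?, to_date(?, 'YYYY-MM-DD'))";

try (Statement selectStmt = conn.createStatement();
     ResultSet rs = selectStmt.executeQuery(selectSql);
     PreparedStatement insertStmt = conn.prepareStatement(insertSql)) {
    while (rs.next()) {
        insertStmt.setString(1, rs.getString("BUNDLE"));
        insertStmt.setString(2, rs.getString("PROJECT_CD"));
        insertStmt.setString(3, rs.getString("DROPPER"));
        insertStmt.setString(4, rs.getString("WEEK"));
        insertStmt.setString(5, rs.getString("DROP_DT").replace(" 00:00:00.0", ""));
        insertStmt.executeUpdate();      // not executeQuery()
    }
}

The bind parameters also remove the need for the manual quoting and to_date string concatenation in the original code.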