I am trying to insert a large amount of data from one table into another table. The two tables are in different regions. The ID I am using to create the connection can insert data for a small number of rows, but once it inserts more than about 500 rows it throws this exception:
com.ibm.db2.jcc.b.SqlException: DB2 SQL error: SQLCODE: -551, SQLSTATE: 42501, SQLERRMC: DB2GCS;EXECUTE PACKAGE;NULLID.SYSLH203.
I cannot figure out why the same ID gets an authorization exception only when the data volume is larger.
My code snippet:
while (RSet.next()) {
    stmt = test_conn.prepareStatement("Insert Query");
    for (int i = 1; i <= columnCount; i++) {
        stmt.setString(i, RSet.getString(i));
    }
    stmt.addBatch();
}
stmt.executeBatch();
Thanks in advance for your help.
Your code is not batching correctly, and that is very likely why it breaks. You prepare the insert query over and over inside the loop without ever closing the previous statement, so statement handles pile up. The DB2 JCC driver cycles through its NULLID.SYSLHxxx packages as open statements accumulate, and the -551 most likely appears once it reaches a package your ID has not been granted EXECUTE on.
You need to prepare the statement just once, outside the loop:
test_conn.setAutoCommit(false);
stmt = test_conn.prepareStatement("INSERT INTO ...");
while (RSet.next()) {
    for (int i = 1; i <= columnCount; i++) {
        stmt.setString(i, RSet.getString(i));
    }
    stmt.addBatch();
}
stmt.executeBatch();
test_conn.commit();
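If the volume gets very large, it can also help to flush the batch in chunks instead of accumulating every row before a single executeBatch(). A minimal sketch, reusing test_conn, RSet, and columnCount from above; the chunk size of 1000 is an illustrative value to tune, not something from the original post:

final int BATCH_SIZE = 1000; // illustrative; tune for your environment
int pending = 0;

test_conn.setAutoCommit(false);
PreparedStatement stmt = test_conn.prepareStatement("INSERT INTO ...");
try {
    while (RSet.next()) {
        for (int i = 1; i <= columnCount; i++) {
            stmt.setString(i, RSet.getString(i));
        }
        stmt.addBatch();
        if (++pending % BATCH_SIZE == 0) {
            stmt.executeBatch(); // send the accumulated rows now
        }
    }
    stmt.executeBatch();         // flush the final partial chunk
    test_conn.commit();
} finally {
    stmt.close();
}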
I have developed an MIS report in my Swing application: a table whose first column holds a date, followed by several columns of invoice statuses (pending for payment, paid, pending for parking, posting, etc.).
The goal is an Excel-pivot-like report in Swing, and I have implemented it successfully.
The problem is that the row count grows day by day, and for each cell I run a query like
select count(invoice_No)
from MyTable
where ClaimStatus='columnHeader'
AND date='row1stColumn'
looping through every row and column to get the desired count and display it in the JTable.
As the row count grows day by day, the SQL Server table keeps getting bigger, so running that count query for every cell takes a long time to populate the table.
Is there any way to make the query above faster?
If anyone wants to see my code I will provide it.
The application works, but because of the huge data it takes a long time to show the MIS report.
Please see the attached picture, which is the output of my code.
And the code is:
1) Distinct dates for the first column:
q="Select distinct(Inward_Date_Short) from Inward_Master";
PreparedStatement ps=con.prepareStatement(q);
ResultSet rs=ps.executeQuery();
while(rs.next())
{
inwardDateList.add(rs.getString(1));
}
2) Static columns of the JTable:
headers.add("Pending For Digitization");
headers.add("Pending For Claim Creation");
headers.add("Resolution - Pending For Claim Creation");
headers.add("Pending For Approval");
headers.add("Pending For Parking");
headers.add("Pending For Posting");
headers.add("Objection");
headers.add("Pending For Payment");
headers.add("Paid");
headers.add("Rejected");
headers.add("Outward");
3) Now the most important part, the code I want to make faster:
for(int i=0;i<inwardDateList.size();i++)
{
Vector varsha=new Vector();
varsha.add(inwardDateList.get(i).toString());
for(int c=1;c<headers.size();c++)
{
try(Connection con=dbConnection.dbConnector();)
{
String q="";
q=headers.get(c).toString();
PreparedStatement ps=con.prepareStatement("Select COUNT_BIG(Inward_No) from Inward_Master where Inward_Date_Short='"+inwardDateList.get(i).toString()+"' AND Claim_Status='"+q+"'");
//PreparedStatement ps=con.prepareStatement("Select count(Inward_No) from(Select Inward_No from Inward_Master where Inward_Date_Short='"+inwardDateList.get(i).toString()+"' AND Claim_Status='"+q+"') X");
ResultSet rs=ps.executeQuery();
rs.next();
data.add(rs.getInt(1));
}
catch(Exception e)
{
e.printStackTrace();
}
}
rowdata.add(data);
}
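For reference, the per-cell counts can usually be collapsed into a single grouped query, so the database is hit once instead of once per date/status pair. A minimal sketch, assuming the table and column names from the snippet above and the dbConnection.dbConnector() helper from the post:

// One query returns every (date, status, count) combination at once.
// Requires java.sql.* and java.util.* imports.
String q = "SELECT Inward_Date_Short, Claim_Status, COUNT(Inward_No) "
         + "FROM Inward_Master GROUP BY Inward_Date_Short, Claim_Status";
Map<String, Map<String, Integer>> countsByDate = new HashMap<>();
try (Connection con = dbConnection.dbConnector();
     PreparedStatement ps = con.prepareStatement(q);
     ResultSet rs = ps.executeQuery()) {
    while (rs.next()) {
        countsByDate
            .computeIfAbsent(rs.getString(1), k -> new HashMap<>())
            .put(rs.getString(2), rs.getInt(3));
    }
}
// Filling each JTable cell then becomes a map lookup
// (defaulting to 0 when a status has no rows for that date).

With this shape, the JTable population loop does no further database work at all.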
Hi, I am trying to fetch 50K+ rows from one of the tables in a MySQL DB. It takes more than 20 minutes to retrieve all the data and write it to a text file. Can I use multithreading to reduce the fetch time and make the code more efficient? Any help will be appreciated.
I have used a normal JDBC connection and ResultSetMetaData to fetch rows from the table.
String row = "";
stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("select * from employee_details");
ResultSetMetaData rsmd = rs.getMetaData();
int columnCount = rsmd.getColumnCount();
while (rs.next()) {
for (int i = 1; i < columnCount; i++) {
row = row + rs.getObject(i) + "|";// check
}
row = row + "\r\n";
}
And I am writing the fetched values to a text file as below.
BufferedWriter writer = new BufferedWriter(new FileWriter(
        "C:/Users/430398/Desktop/file/abcd.txt"));
writer.write(row);
writer.close();
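One restructuring worth trying, not in the original post, is to write each row out as it is read instead of building one giant string first; the concatenation above copies the whole string on every append. A minimal sketch using the same query and file path:

// Stream rows straight to the file; no huge intermediate string.
try (Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("select * from employee_details");
     BufferedWriter writer = new BufferedWriter(new FileWriter(
             "C:/Users/430398/Desktop/file/abcd.txt"))) {
    int columnCount = rs.getMetaData().getColumnCount();
    StringBuilder row = new StringBuilder();
    while (rs.next()) {
        row.setLength(0);                        // reuse the buffer per row
        for (int i = 1; i <= columnCount; i++) {
            row.append(rs.getObject(i)).append('|');
        }
        writer.write(row.toString());
        writer.write("\r\n");
    }
}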
Remember that rs.next() may fetch results from the DB in batches of n rows, where n is defined by the JDBC driver, so every batch costs another network round trip, even on the very same machine. (The default depends on the driver: Oracle's fetches 10 rows at a time, for example, while MySQL Connector/J fetches the entire result set at once unless streaming is enabled.) Increasing the fetch size reduces the round trips and can give a faster loading time.
edit:
adding this
stmt.setFetchSize(50000);
might be it.
Be aware that this results in heavy memory consumption.
First you need to identify where the bottleneck is. Is it the SQL query? The fetching of the rows via the ResultSet? The building of the huge string? Or perhaps writing the file?
You need to measure the duration of each of those parts of your algorithm and tell us the results. Without that knowledge it is not possible to say how to speed the algorithm up.
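A rough way to get those numbers is to wrap each phase in System.nanoTime() calls. A minimal sketch; where exactly the phases begin and end in your code is an assumption:

long t0 = System.nanoTime();
ResultSet rs = stmt.executeQuery("select * from employee_details");
long t1 = System.nanoTime();   // query execution

// ... loop over rs and build the row string as in the question ...
long t2 = System.nanoTime();   // fetching + string building

// ... write the file as in the question ...
long t3 = System.nanoTime();   // file writing

System.out.printf("query: %d ms, fetch/build: %d ms, write: %d ms%n",
        (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, (t3 - t2) / 1_000_000);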
I have a program which is connected to a DB2 z/OS database. I get the following exception:
com.ibm.db2.jcc.am.SqlException: DB2 SQL Error: SQLCODE=-805, SQLSTATE=51002, SQLERRMC=NULLID.SYSLH21E.5359534C564C3031, DRIVER=3.66.46
which says my program is running out of statements. So I checked everything and summarised all SQL activity:
Connection_1
Connection_2
Resultset set1 = Connection_1.PreparedStatement(Select Table).open
try {
    while (set1.next()) {
        Resultset set2 = Connection_1.PreparedStatement(find Dataset).open
        try {
            if (set2.next()) {
                Connection_2.PreparedStatement(Insert in Table).open
                Connection_2.PreparedStatement(Insert in Table).close
                Connection_2.PreparedStatement(update in Table).open
                Connection_2.PreparedStatement(update in Table).close
            }
        } finally {
            set2.close
            PreparedStatement(find Dataset).close
        }
        if (something is true) {
            Connection_2.commit()
        }
    }
} finally {
    set1.close
    PreparedStatement(Select Table).close
    Connection_2.close
}
Then, to test it, just before the point where my program crashes I create a lot of PreparedStatements:
for (int i = 0; i< 10000; i++){
PreparedStatement statement = CONNECTION.prepareStatement("SELECT * FROM TABLE");
}
Then I get the following error:
Exception stack trace:
com.ibm.db2.jcc.am.SqlException: DB2 SQL Error: SQLCODE=-805, SQLSTATE=51002, SQLERRMC=NULLID.SYSLH219 0X5359534C564C3031, DRIVER=3.66.46
Concurrently open statements:
1. SQL string: SELECT * FROM TABLE
Number of statements: 10000
2. SQL string: SELECT * FROM OTHER_TABLE
Number of statements: 1
********************
So it looks like open statements are not the problem. Are there other possibilities for an exception like this? Maybe I am selecting too many datasets?
Resultset set1 = Connection_1.PreparedStatement(Select Table).open
This table has around 4,000,000 records.
I hope someone can help me. If you need more information, just tell me.
Kind regards!
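For what it's worth, the usual guard against leaking statement sections is try-with-resources, which closes every PreparedStatement and ResultSet even when an exception is thrown. A minimal sketch of the inner lookup step; connection1, connection2, and the SQL placeholders stand in for the Connection_1/Connection_2 and queries of the pseudocode above:

// Each statement/result set closes automatically at the end of its
// try block, so no sections stay open on the DB2 side.
try (PreparedStatement find = connection1.prepareStatement(
        "SELECT ... FROM ... WHERE ...")) {          // "find Dataset"
    try (ResultSet set2 = find.executeQuery()) {
        if (set2.next()) {
            try (PreparedStatement insert = connection2.prepareStatement(
                    "INSERT INTO ...")) {
                insert.executeUpdate();
            }
        }
    }
}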
Currently we are selecting data from one database and inserting it into a backup database (SQL Server).
The data always contains more than 15K records per select.
We are using an Enumeration to iterate over the selected data, and a JDBC PreparedStatement to insert it:
Enumeration values = ht.elements(); // ht is the Hashtable containing the selected data
while (values.hasMoreElements())
{
    pstmt = conn.prepareStatement("insert query");
    pstmt.executeUpdate();
}
I am not sure this is a correct or efficient way to do fast inserts.
Inserting 10K rows takes about 30 minutes or more.
Is there a more efficient way to make it fast?
Note: there are no indexes on the table.
Use a batch insert, but commit after every few entries; don't try to send all 10K at once. Experiment to find the best size: it's a trade-off between memory and network round trips.
Connection connection = getConnection();
Statement statement = connection.createStatement();
int i = 0;
for (String query : queries) {
    statement.addBatch(query);
    if (++i % 500 == 0) {
        // Execute now and again; don't send too many at once
        statement.executeBatch();
    }
}
statement.executeBatch(); // flush the remainder
statement.close();
connection.close();
Also, from your code I'm not sure exactly what you are doing, but use parameterised queries rather than sending 10K insert statements as text. Something like:
String q = "INSERT INTO data_table (id) values (?)";
Connection connection = getConnection();
PreparedStatement ps = connection.prepareStatement(q);
for (Data d : data) {
    ps.setString(1, d.getId());
    ps.addBatch();
}
ps.executeBatch();
ps.close();
connection.close();
You can insert all the values in one SQL command:
INSERT INTO Table1 ( Column1, Column2 ) VALUES
( V1, V2 ), ( V3, V4 ), .......
You may also insert the values in bulks of, say, 500 records if the query would otherwise become very big. It is not efficient at all to insert one row per statement remotely (over a connection). Another solution is to do the inserts with a stored procedure, passing the values to it as parameters (see the sketch after the code below).
Here is how you can do it using the INSERT command above:
Enumeration values = ht.elements(); // ht is the Hashtable containing the selected data
int i = 0;
StringBuilder sql = new StringBuilder();
while (values.hasMoreElements())
{
    if (sql.length() > 0)
        sql.append(" , ");
    sql.append("(").append(values.nextElement()).append(")"); // consume the element
    i++;
    if (i % 500 == 0) {
        pstmt = conn.prepareStatement("insert query " + sql);
        pstmt.executeUpdate();
        sql.setLength(0);
    }
}
if (sql.length() > 0) { // flush the last partial bulk
    pstmt = conn.prepareStatement("insert query " + sql);
    pstmt.executeUpdate();
}
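And for the stored-procedure alternative mentioned above, a minimal sketch using a CallableStatement; the procedure name insert_data and its single parameter are hypothetical, as is reusing the Data/getId() shape from the earlier snippet:

// Batch calls to a hypothetical stored procedure insert_data(id).
Connection connection = getConnection();
try (CallableStatement cs = connection.prepareCall("{call insert_data(?)}")) {
    for (Data d : data) {
        cs.setString(1, d.getId());  // bind this row's value
        cs.addBatch();
    }
    cs.executeBatch();               // one round trip for the whole batch
}
connection.close();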
I am using the following code to insert data into a table.
test_conn.setAutoCommit(false);
stmt = test_conn.prepareStatement("INSERT INTO ...");
while (RSet.next()) {
    for (int i = 1; i <= columnCount; i++) {
        stmt.setString(i, RSet.getString(i));
    }
    stmt.addBatch();
}
stmt.executeBatch();
test_conn.commit();
Other processing must occur only after all the above rows have been inserted successfully.
When I insert into the table using executeBatch(), if an SQLException or error occurs during the insert, is it possible to find out which record's insertion threw the exception?
You have to try-catch the stmt.executeBatch() call and check the exception for details: a failed batch throws java.sql.BatchUpdateException, whose getUpdateCounts() tells you how far the batch got. Batch execution will stop at the first error that occurs.
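A minimal sketch of that, reusing stmt and test_conn from the question; exactly what getUpdateCounts() reports on failure depends on the driver:

try {
    stmt.executeBatch();
    test_conn.commit();
} catch (BatchUpdateException e) {
    int[] counts = e.getUpdateCounts();
    // Drivers that stop at the first failure return one count per
    // successfully executed entry, so the batch entry at index
    // counts.length is the one that failed.
    System.err.println("Batch failed at entry index " + counts.length);
    // Drivers that keep going instead mark failed entries with
    // Statement.EXECUTE_FAILED:
    for (int i = 0; i < counts.length; i++) {
        if (counts[i] == Statement.EXECUTE_FAILED) {
            System.err.println("Entry " + i + " failed");
        }
    }
    test_conn.rollback();
}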