I'm trying to find the fastest way to do batch inserts.
I tried to insert several batches with jdbcTemplate.update(String sql), where the
sql was built with a StringBuilder and looked like:
INSERT INTO TABLE(x, y, i) VALUES(1,2,3), (1,2,3), ... , (1,2,3)
Batch size was exactly 1000. I inserted nearly 100 batches.
I checked the time using StopWatch and found the insert time:
min[38ms], avg[50ms], max[190ms] per batch
I was glad but I wanted to make my code better.
After that, I tried to use jdbcTemplate.batchUpdate like this:
jdbcTemplate.batchUpdate(sql, new BatchPreparedStatementSetter() {
    @Override
    public void setValues(PreparedStatement ps, int i) throws SQLException {
        // ...
    }

    @Override
    public int getBatchSize() {
        return 1000;
    }
});
where the sql looked like
INSERT INTO TABLE(x, y, i) VALUES(1,2,3);
and I was disappointed! jdbcTemplate executed every single insert of the 1000-line batch separately. I looked at the MySQL log and found a thousand individual inserts there.
I checked the time using StopWatch and found the insert time:
min[900ms], avg[1100ms], max[2000ms] per Batch
So, can anybody explain to me why jdbcTemplate does separate inserts in this method? Why is the method named batchUpdate?
Or maybe I am using this method the wrong way?
These parameters in the JDBC connection URL can make a big difference in the speed of batched statements; in my experience, they speed things up:
?useServerPrepStmts=false&rewriteBatchedStatements=true
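For example, a full MySQL connection URL with those flags might look like this (host, port and database name are placeholders, not from the original question):

String url = "jdbc:mysql://localhost:3306/mydb"
        + "?useServerPrepStmts=false&rewriteBatchedStatements=true";
Connection conn = DriverManager.getConnection(url, "user", "pass");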
See: JDBC batch insert performance
I found a major improvement setting the argTypes array in the call.
In my case, with Spring 4.1.4 and Oracle 12c, for insertion of 5000 rows with 35 fields:
jdbcTemplate.batchUpdate(insert, parameters); // Takes 7 seconds
jdbcTemplate.batchUpdate(insert, parameters, argTypes); // Takes 0.08 seconds!
The argTypes parameter is an int array where you set the SQL type of each field, like this:
int[] argTypes = new int[35];
argTypes[0] = Types.VARCHAR;
argTypes[1] = Types.VARCHAR;
argTypes[2] = Types.VARCHAR;
argTypes[3] = Types.DECIMAL;
argTypes[4] = Types.TIMESTAMP;
.....
I debugged org\springframework\jdbc\core\JdbcTemplate.java and found that most of the time was spent trying to determine the type of each field, and this was done for every record.
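For reference, a minimal sketch of the three-argument call (the table, columns and row data here are made up, not from my actual project):

String insert = "INSERT INTO MY_TABLE (col1, col2, col3) VALUES (?, ?, ?)";
List<Object[]> parameters = new ArrayList<>();
parameters.add(new Object[] { "a", "b", new BigDecimal("1.5") }); // one Object[] per row
int[] argTypes = { Types.VARCHAR, Types.VARCHAR, Types.DECIMAL };
jdbcTemplate.batchUpdate(insert, parameters, argTypes); // no per-row type lookups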
Hope this helps !
I also faced the same issue with the Spring JDBC template. In my case Spring Batch executed and committed the statement on every insert or in small chunks, which slowed things down.
I replaced the jdbcTemplate.batchUpdate() code with plain JDBC batch insertion code and found a major performance improvement.
DataSource ds = jdbcTemplate.getDataSource();
Connection connection = ds.getConnection();
connection.setAutoCommit(false);

String sql = "insert into employee (name, city, phone) values (?, ?, ?)";
PreparedStatement ps = connection.prepareStatement(sql);

final int batchSize = 1000;
int count = 0;

for (Employee employee : employees) {
    ps.setString(1, employee.getName());
    ps.setString(2, employee.getCity());
    ps.setString(3, employee.getPhone());
    ps.addBatch();
    ++count;

    if (count % batchSize == 0 || count == employees.size()) {
        ps.executeBatch();
        ps.clearBatch();
    }
}

connection.commit();
ps.close();
Check this link as well: JDBC batch insert performance
Simply use a transaction: add @Transactional to the method.
Be sure to declare the correct transaction manager if you are using several datasources, e.g. @Transactional("dsTxManager"). In my case, inserting 60000 records takes about 15 s with no other tweak:
#Transactional("myDataSourceTxManager")
public void save(...) {
...
jdbcTemplate.batchUpdate(query, new BatchPreparedStatementSetter() {
#Override
public void setValues(PreparedStatement ps, int i) throws SQLException {
...
}
#Override
public int getBatchSize() {
if(data == null){
return 0;
}
return data.size();
}
});
}
Change your SQL insert to a single-row statement with placeholders, e.g. INSERT INTO TABLE(x, y, i) VALUES(?, ?, ?). The framework creates the loop over the batch for you.
For example:
public void insertBatch(final List<Customer> customers){
String sql = "INSERT INTO CUSTOMER " +
"(CUST_ID, NAME, AGE) VALUES (?, ?, ?)";
    getJdbcTemplate().batchUpdate(sql, new BatchPreparedStatementSetter() {

        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            Customer customer = customers.get(i);
            ps.setLong(1, customer.getCustId());
            ps.setString(2, customer.getName());
            ps.setInt(3, customer.getAge());
        }

        @Override
        public int getBatchSize() {
            return customers.size();
        }
    });
}
If you have something like this, Spring will do something like:
for (int i = 0; i < getBatchSize(); i++) {
    // execute the prepared statement with the parameters for the current iteration
}
The framework first creates a PreparedStatement from the query (the sql variable), then calls the setValues method and executes the statement. That is repeated as many times as you specify in the getBatchSize() method. So the right way to write the insert statement is with only one VALUES clause.
You can take a look at http://docs.spring.io/spring/docs/3.0.x/reference/jdbc.html
I also had a bad time with the Spring JDBC batch template. In my case it would have been, like, insane to use pure JDBC, so instead I used NamedParameterJdbcTemplate. This was a must-have in my project. But it was very slow at inserting hundreds of thousands of rows into the database.
To see what was going on, I've sampled it with VisualVM during the batch update and, voilà:
What was slowing down the process was that, while setting the parameters, Spring JDBC was querying the database for the metadata of each parameter, and it seemed to do so for every parameter of every row, every time. So I just told Spring to ignore the parameter types (as warned about in the Spring documentation on batch operations with a list of objects):
@Bean(name = "named-jdbc-tenant")
public synchronized NamedParameterJdbcTemplate getNamedJdbcTemplate(@Autowired TenantRoutingDataSource tenantDataSource) {
    System.setProperty("spring.jdbc.getParameterType.ignore", "true");
    return new NamedParameterJdbcTemplate(tenantDataSource);
}
Note: the system property must be set before the JDBC template object is created. It could probably also be set in application.properties, but this solved it and I never touched it again.
I don't know if this will work for you, but here's a Spring-free way that I ended up using. It was significantly faster than the various Spring methods I tried. I even tried using the JDBC template batch update method the other answer describes, but even that was slower than I wanted. I'm not sure what the deal was and the Internets didn't have many answers either. I suspected it had to do with how commits were being handled.
This approach is just straight JDBC using the java.sql packages and PreparedStatement's batch interface. This was the fastest way that I could get 24M records into a MySQL DB.
I more or less just built up collections of "record" objects and then called the below code in a method that batch inserted all the records. The loop that built the collections was responsible for managing the batch size.
I was trying to insert 24M records into a MySQL DB and it was going ~200 records per second using Spring batch. When I switched to this method, it went up to ~2500 records per second. so my 24M record load went from a theoretical 1.5 days to about 2.5 hours.
First create a connection...
Connection conn = null;
try {
    Class.forName("com.mysql.jdbc.Driver");
    conn = DriverManager.getConnection(connectionUrl, username, password);
} catch (SQLException e) {
} catch (ClassNotFoundException e) {
}
Then create a prepared statement and load it with batches of values for insert, and then execute as a single batch insert...
PreparedStatement ps = null;
try {
    conn.setAutoCommit(false);
    ps = conn.prepareStatement(sql); // INSERT INTO TABLE(x, y, i) VALUES(?,?,?)
    for (MyRecord record : records) {
        try {
            ps.setString(1, record.getX());
            ps.setString(2, record.getY());
            ps.setString(3, record.getI());
            ps.addBatch();
        } catch (Exception e) {
            ps.clearParameters();
            logger.warn("Skipping record...", e);
        }
    }
    ps.executeBatch();
    conn.commit();
} catch (SQLException e) {
} finally {
    if (null != ps) {
        try { ps.close(); } catch (SQLException e) {}
    }
}
Obviously I've removed the error handling, and the query and Record object are notional.
Edit:
Since your original question was comparing the insert into foobar values (?,?,?), (?,?,?)...(?,?,?) method to Spring batch, here's a more direct response to that:
It looks like your original method is likely the fastest way to do bulk data loads into MySQL without using something like the "LOAD DATA INFILE" approach. A quote from the MySQL docs (http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html):
If you are inserting many rows from the same client at the same time,
use INSERT statements with multiple VALUES lists to insert several
rows at a time. This is considerably faster (many times faster in some
cases) than using separate single-row INSERT statements.
You could modify the Spring JDBC Template batchUpdate method to do an insert with multiple VALUES specified per 'setValues' call, but you'd have to manually keep track of the index values as you iterate over the set of things being inserted. And you'd run into a nasty edge case at the end when the total number of things being inserted isn't a multiple of the number of VALUES lists you have in your prepared statement.
If you use the approach I outline, you could do the same thing (use a prepared statement with multiple VALUES lists) and then when you get to that edge case at the end, it's a little easier to deal with because you can build and execute one last statement with exactly the right number of VALUES lists. It's a bit hacky, but most optimized things are.
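To make that concrete, here is a rough sketch of the multi-VALUES approach (the record type, table and column names are made up; the final partial statement is only hinted at in a comment):

int rowsPerInsert = 1000;
StringBuilder sb = new StringBuilder("INSERT INTO my_table (x, y, i) VALUES ");
for (int v = 0; v < rowsPerInsert; v++) {
    sb.append(v == 0 ? "(?,?,?)" : ", (?,?,?)");
}
try (PreparedStatement multi = conn.prepareStatement(sb.toString())) {
    int col = 1, buffered = 0;
    for (MyRecord r : records) {
        multi.setString(col++, r.getX());
        multi.setString(col++, r.getY());
        multi.setString(col++, r.getI());
        if (++buffered == rowsPerInsert) {
            multi.executeUpdate(); // one round trip for 1000 rows
            col = 1;
            buffered = 0;
        }
    }
    // Edge case: if buffered > 0 here, build and execute one last statement
    // with exactly 'buffered' VALUES lists (same loop, smaller size).
}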
The solution given by @Rakesh worked for me.
Significant improvement in performance: earlier it took 8 minutes, with this solution it takes less than 2 minutes.
DataSource ds = jdbcTemplate.getDataSource();
Connection connection = ds.getConnection();
connection.setAutoCommit(false);

String sql = "insert into employee (name, city, phone) values (?, ?, ?)";
PreparedStatement ps = connection.prepareStatement(sql);

final int batchSize = 1000;
int count = 0;

for (Employee employee : employees) {
    ps.setString(1, employee.getName());
    ps.setString(2, employee.getCity());
    ps.setString(3, employee.getPhone());
    ps.addBatch();
    ++count;

    if (count % batchSize == 0 || count == employees.size()) {
        ps.executeBatch();
        ps.clearBatch();
    }
}

connection.commit();
ps.close();
I encountered a serious performance issue with JdbcBatchItemWriter.write() (link) from Spring Batch and found out that the write logic eventually delegates to JdbcTemplate.batchUpdate().
Adding the Java system property spring.jdbc.getParameterType.ignore=true fixed the performance issue entirely (from 200 records per second to ~5000).
The fix was tested on both PostgreSQL and MS SQL (it is probably not dialect specific).
... and ironically, Spring documents this behaviour under a "note" section (link):
In such a scenario, with automatic setting of values on an underlying PreparedStatement, the corresponding JDBC type for each value needs to be derived from the given Java type. While this usually works well, there is a potential for issues (for example, with Map-contained null values). Spring, by default, calls ParameterMetaData.getParameterType in such a case, which can be expensive with your JDBC driver. You should use a recent driver version and consider setting the spring.jdbc.getParameterType.ignore property to true (as a JVM system property or in a spring.properties file in the root of your classpath) if you encounter a performance issue — for example, as reported on Oracle 12c (SPR-16139).
Alternatively, you might consider specifying the corresponding JDBC types explicitly, either through a 'BatchPreparedStatementSetter' (as shown earlier), through an explicit type array given to a 'List<Object[]>' based call, through 'registerSqlType' calls on a custom 'MapSqlParameterSource' instance, or through a 'BeanPropertySqlParameterSource' that derives the SQL type from the Java-declared property type even for a null value.
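A minimal sketch of the property approach (where exactly you set it is up to you; it just has to happen before the first JdbcTemplate is created, and the JVM flag -Dspring.jdbc.getParameterType.ignore=true or a spring.properties file on the classpath work just as well):

static {
    // set once, as early as possible in application startup
    System.setProperty("spring.jdbc.getParameterType.ignore", "true");
}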
Related
I'm creating a parcel machine program. Every parcel has a unique parcelID which is exported to a MySQL DB. The problem is that every time I run the program, it counts parcelID from 0. I'm looking for a way to check the last parcelID in the database and create the next row after it.
Right now it looks like this: 1. The Java program creates a new row in the DB (successfully). 2. I close the program after some time. 3. I run the program again and I can't add another row because of the error "duplicate entry '1' for key 'PRIMARY'".
public static void post() throws Exception {
    int parcelID = Parcel.generateID();
    int clientMPNumber = Parcel.typeClientNumber();
    int orderPassword = Parcel.generatePass();
    try {
        Connection con = getConnection();
        PreparedStatement posted = con.prepareStatement("INSERT INTO Parcels.Orders (parcelID, clientMPNumber, orderPassword) VALUES ('"+parcelID+"', '"+clientMPNumber+"', '"+orderPassword+"')");
        posted.executeUpdate();
    } catch (Exception e) {
        System.out.println(e);
    } finally {
        System.out.println("Insert completed");
    }
}
and the method is:
public static int generateID() {
    parcelID = parcelID + 1;
    return parcelID;
}
I'd let the database do the heavy lifting for you: just define the parcelID column as SERIAL (auto-increment) instead of trying to set its value yourself.
You shouldn't generate the ID yourself; just create an AUTO_INCREMENT column in the database table.
As described here, define your primary key column to auto-increment on each insert so your Java code doesn't have to calculate the primary key value manually each time.
If that is not a possibility, you need to show how you declare and initialize parcelID. In your current code, parcelID looks like a class-level field that gets initialized to zero on each run, so you always get the same value, 1. You need to initialize it with the last value from the database.
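If you really cannot use auto-increment, a rough sketch of initializing the counter from the database might look like this (con is an open Connection; note this is still racy if two program instances run at once, which is exactly why auto-increment is recommended above):

try (Statement st = con.createStatement();
     ResultSet rs = st.executeQuery("SELECT MAX(parcelID) FROM Parcels.Orders")) {
    parcelID = rs.next() ? rs.getInt(1) : 0; // generateID() then returns MAX + 1
}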
Also, implement the suggestion mentioned in the comments to your question regarding PreparedStatement.
There are a couple of things to attend to.
// parcelID should be an INT AUTO_INCREMENT primary key.
try (PreparedStatement posted = con.prepareStatement(
        "INSERT INTO Parcels.Orders (clientMPNumber, orderPassword) "
        + "VALUES (?, ?)",
        Statement.RETURN_GENERATED_KEYS)) {
    posted.setInt(1, clientMPNumber);
    posted.setInt(2, orderPassword);
    posted.executeUpdate();
    try (ResultSet rsKey = posted.getGeneratedKeys()) {
        if (rsKey.next()) {
            int parcelID = rsKey.getInt(1);
            return parcelID; // or such
        }
    }
}
The database can handle automatic numbering best, so that two transactions at the same time do not grab the same "next" number.
You should close things like Connection, PreparedStatement and ResultSet. This is best done with the slightly awkward try-with-resources syntax, which closes them automatically even on an exception or return.
PreparedStatements should be used with ? placeholders. This takes care of escaping special characters like ' in the password, and it also prevents SQL injection.
Stylistically it is better to catch SQLException rather than Exception, or maybe even better to declare throws SQLException.
I'm working on an application that pulls data from a remote web service and then pushes it to an Oracle database.
Now, the data is event based, so it could be an INSERT, UPDATE or DELETE. For this purpose I have created a single PreparedStatement object, and based on the type of event the code creates a SQL string and assigns it to the prepared statement.
With each retrieved event the table name on Oracle may change, and hence the query; so after retrieving each event a SQL string is created based on the table being operated on, passed to the preparedStatement object, and executed.
BUT, I have two issues with this:
I guess I'm not using the prepared statement efficiently, since the query changes each time; I'm not sure the DB caching mechanism for prepared statements helps much in this case.
Also, all my statements are either INSERT, UPDATE or DELETE, so I'm using pstmt.executeUpdate(), which returns the number of rows affected. But I'm getting a "maximum open cursors exceeded" error. I read through many threads, and since my statements don't return result sets, and closing the prepared statement after each operation is not efficient, I'm not sure how to handle this error. I can increase the open cursor count in the DB, but that is not an application fix, as it may only delay the error until a bad scenario is encountered.
getEvents()
for (i = 0 -> lastEvent)
{
    if (event == condition1)
    {
        Process.condition1(arg1, arg2)
    }
    else if (event == condition2)
    {
        Process.condition2(arg1, arg2)
    }
    .
    .
    .
}
pstmt.close();
connection.close();
Process Class
{
    condition1(arg1, arg2)
    {
        sqlstatement = "INSERT INTO Table1 (column1, column2, column3,...) VALUES (?, ?, ?);"
        pstmt = connection.prepareStatement(sqlstatement);
        pstmt.setString(1, value1);
        pstmt.setInt(2, value2);
        .
        .
        .
        pstmt.executeUpdate();
        connection.commit();
        return;
    }

    condition2(arg1, arg2)
    {
        sqlstatement = "INSERT INTO Table2 (column1, column2, column3,...) VALUES (?, ?, ?);"
        pstmt = connection.prepareStatement(sqlstatement);
        pstmt.setString(1, value1);
        pstmt.setInt(2, value2);
        .
        .
        .
        pstmt.executeUpdate();
        connection.commit();
        return;
    }
}
Sorry, I guess the above only gives a rough idea of how the whole process is done. I didn't include the actual code because it is distributed among various classes and gets the data for the positional parameters from the remote service, but the above is a summarized form of how it works.
Also, one thought was to collect similar types of events in a list and then process them together, but given my business requirements and the way the remote service delivers data, that is more prone to errors.
Also, some Oracle tables have 130+ columns, and the code will be processing 150+ events every 15 minutes.
It is not possible to "assign a query string to a prepared statement". A prepared statement is created with a query string, as shown in your code: each call to connection.prepareStatement(sqlstatement) creates a new prepared statement. This is also why the "too many open cursors" error occurs: the previously created pstmt is never closed.
To re-use the prepared statements, instantiate the Process class and give it an init method (and a corresponding close method) that is called once to set up the prepared statements that might be used (and close them when the work is done):
PreparedStatement insertTable1;
PreparedStatement insertTable2;

void init(Connection c) {
    insertTable1 = c.prepareStatement(INSERT_TABLE1_QUERY);
    insertTable2 = c.prepareStatement(INSERT_TABLE2_QUERY);
}

void close() {
    insertTable1.close();
    insertTable2.close();
}

void condition1(Connection c, arg1, arg2) {
    insertTable1.setString(1, value1);
    insertTable1.setInt(2, value2);
    insertTable1.executeUpdate();
    c.commit();
}

void condition2(Connection c, arg1, arg2) {
    insertTable2.setString(1, value1);
    insertTable2.setInt(2, value2);
    insertTable2.executeUpdate();
    c.commit();
}
I'm currently using the DataStax Cassandra driver for Cassandra 2 to execute CQL3. This works correctly. I started using PreparedStatements:
Session session = sessionProvider.getSession();
try {
    PreparedStatement ps = session.prepare(cql);
    ResultSet rs = session.execute(ps.bind(objects));
    if (irsr != null) {
        irsr.read(rs);
    }
}
Sometimes I get a warning from the driver in my log:
Re-preparing already prepared query. Please note that preparing the same query more than once is generally an anti-pattern and will likely affect performance. Consider preparing the statement only once.
This warning makes sense, but I'm not sure how I should reuse the PreparedStatement.
Should I just create all my PreparedStatements in a constructor/init method and then simply use them?
But does this go well when multiple threads use the same PreparedStatement at the same time (especially calling PreparedStatement.bind() to bind objects)?
You may just initialize the PreparedStatement once and cache it while the app is running. It should remain usable as long as the Cassandra cluster is up.
Using the statement from multiple threads is fine (as long as you don't modify it through the setXXX() methods). When you call bind(), the code underneath only reads the PreparedStatement and then creates a new instance of BoundStatement(), which the caller thread is then free to mutate.
Here is the source code, if you're curious (search for bind()).
We are using Cassandra in a web application with Spring. In our case we create the PreparedStatements when the bean that encapsulates the operations against a column family (our repository) is instantiated.
Here you have a snippet of the code we are using:
@Repository
public class StatsRepositoryImpl implements StatsRepository {

    @SuppressWarnings("unused")
    @PostConstruct
    private void initStatements() {
        if (cassandraSession == null) {
            LOG.error("Cassandra 2.0 not available");
        } else {
            GETSTATS_BY_PROJECT = cassandraSession.prepare(SELECTSTATS + " WHERE projectid = ?");
        }
    }

    @Override
    public Stats findByProject(Project project) {
        Stats stats = null;
        BoundStatement boundStatement = new BoundStatement(GETSTATS_BY_PROJECT);
        ResultSet rs = cassandraSession.execute(boundStatement.bind(project.getId()));
        for (Row row : rs) {
            stats = mapRowToStats(row);
        }
        return stats;
    }
This way the prepared statement is reused each time we execute the method findByProject.
The above solution works when the keyspace is fixed. In a multi-tenant scenario it will not suffice. I simply did it in the following way, where the keyspace is passed as an argument:
Check the keyspace of the cached prepared statement; if it is the same as the passed argument, do not prepare the statement again, as it has already been prepared.
private BatchStatement eventBatch(List<SomeEvent> events, String keySpace) {
    BatchStatement batch = new BatchStatement();
    String preparedStmtKeySpace = propEventPer == null ? "" : propEventPer.getQueryKeyspace();
    if (!keySpace.equals("\"" + preparedStmtKeySpace + "\"")) {
        eventStmt = cassandraOperations.getSession().prepare(colFamilyInsert(keySpace + "." + "table_name"));
    }
    ....

private RegularStatement colFamilyInsert(String colFamilyName) {
    return insertInto(colFamilyName)
            .value(PERSON_ID, bindMarker())
            .value(DAY, bindMarker());
}
03-Apr-2017 10:02:24,120 WARN [com.datastax.driver.core.Cluster] (cluster1-worker-2851) Re-preparing already prepared query is generally an anti-pattern and will likely affect performance. Consider preparing the statement only once. Query='select * from xxxx where cjpid=? and cjpsnapshotid =?'
Create a pool of PreparedStatement objects, one for each CQL query.
Then, when these queries are requested by the client, fetch the respective cached PreparedStatement and supply the values by calling bind().
As explained by Daniel, bind() creates a new BoundStatement, which makes this thread-safe.
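A minimal sketch of such a cache, assuming the DataStax Java driver's Session.prepare()/bind() API (the map and method names are illustrative):

private final ConcurrentMap<String, PreparedStatement> statementCache = new ConcurrentHashMap<>();

BoundStatement boundStatementFor(Session session, String cql, Object... values) {
    PreparedStatement ps = statementCache.computeIfAbsent(cql, session::prepare);
    return ps.bind(values); // bind() returns a fresh BoundStatement, so callers don't share state
}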
I am benchmarking a complex system and found that queries going through Spring are really slow.
It adds ~600ms.
The benchmark code compares the following:
case TEMPLATE:
{
    t = System.currentTimeMillis();
    jdbcTemplate.update(getUnnamedPreparedStatement(query), new PreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement ps) throws SQLException {
            int i = 1;
            for (Object o : queryParameters) {
                ps.setObject(i++, o);
            }
        }
    });
    break;
}
case PREPAREDSTATEMENT:
{
    Connection c = dataSource.getConnection();
    t = System.currentTimeMillis();
    PreparedStatement ps = c.prepareStatement(getUnnamedPreparedStatement(query));
    int index = 1;
    for (Object parameter : queryParameters) {
        ps.setObject(index++, parameter);
    }
    ResultSet rs = ps.executeQuery();
    rs.next();
    break;
}
Both queries give the same result and the order does not matter.
Moreover, it does not depend on the query type (i.e. SELECT, UPDATE).
I have run the test a dozen times and the results are stable.
What does the Spring jdbcTemplate do, that the PreparedStatement does not do?
Since my comment above seems to be the correct answer, I'll post it as an answer for future reference.
@Felix, reusing connections has nothing to do with Spring but with your connection pool, if you have one. So that should be taken into account.
So basically I think the connection pool was missing in the Spring project.
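For completeness, a minimal sketch of wiring a pooled DataSource under the JdbcTemplate (assuming HikariCP is available; the URL and credentials are placeholders):

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");
config.setUsername("user");
config.setPassword("pass");
DataSource pooledDataSource = new HikariDataSource(config);
JdbcTemplate jdbcTemplate = new JdbcTemplate(pooledDataSource); // connections are now reused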
Your first case executes an update query, whereas the second one executes a select query. The second one should use ps.executeUpdate() to be similar to the first one.
Read the following code:
public class selectTable {
    public static ResultSet rSet;
    public static int total = 0;

    public static ResultSet onLoad_Opetations(Connection Conn, int rownum, String sql)
    {
        int rowNum = rownum;
        int totalrec = 0;
        try
        {
            Conn = ConnectionODBC.getConnection();
            Statement stmt = Conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
            String sqlStmt = sql;
            rSet = stmt.executeQuery(sqlStmt);
            total = rSet.getRow();
        }
        catch (Exception e)
        {
            System.out.println(e.getMessage());
        }
        System.out.println("Total Number of Records=" + totalrec);
        return rSet;
    }
}
The following code doesn't show the actual total:
total = rSet.getRow();
My JTable displays 4 records, but total = 0. When I evaluate it in the debugger, it shows:
total = (int) 0;
rather than total = (int) 4.
And if I use
rSet.last(); above the line total = rSet.getRow();
then total shows the accurate value, 4, but rSet returns nothing and the JTable is empty.
Update me!
BalusC's answer is right! But to spell it out with your instance variable, call:
rSet.last();
total = rSet.getRow();
and then, which is what you are missing,
rSet.beforeFirst();
The remaining code stays the same and you will get your desired result.
You need to call ResultSet#beforeFirst() to put the cursor back to before the first row before you return the ResultSet object. This way the user will be able to use next() the usual way.
resultSet.last();
rows = resultSet.getRow();
resultSet.beforeFirst();
return resultSet;
However, you have bigger problems with the code given so far. It's leaking DB resources and it's also not a proper OOP approach. Look up the DAO pattern. Ultimately you'd like to end up with something like:
public List<Operation> list() throws SQLException {
    // Declare Connection, Statement, ResultSet, List<Operation>.
    try {
        // Use Connection, Statement, ResultSet.
        while (resultSet.next()) {
            // Add new Operation to list.
        }
    } finally {
        // Close ResultSet, Statement, Connection.
    }
    return list;
}
This way the caller has just to use List#size() to know about the number of records.
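For illustration, a fleshed-out version of that DAO sketch using try-with-resources (the Operation class, dataSource field, query and column names are made up):

public List<Operation> list() throws SQLException {
    List<Operation> operations = new ArrayList<>();
    String sql = "SELECT id, name FROM operations";
    try (Connection connection = dataSource.getConnection();
         PreparedStatement statement = connection.prepareStatement(sql);
         ResultSet resultSet = statement.executeQuery()) {
        while (resultSet.next()) {
            operations.add(new Operation(resultSet.getLong("id"), resultSet.getString("name")));
        }
    }
    return operations;
}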
The getRow() method retrieves the current row number, not the number of rows. So before starting to iterate over the ResultSet, getRow() returns 0.
To get the actual number of rows returned after executing your query, there is no free method: you are supposed to iterate over it.
Yet, if you really need to retrieve the total number of rows before processing them, you can:
ResultSet.last()
ResultSet.getRow() to get the total number of rows
ResultSet.beforeFirst()
Process the ResultSet normally
As others have answered, there is no way to get the count of rows without iterating to the end. You could do that, but you may not want to; note the following points:
For many RDBMS systems, ResultSet is a streaming API: it does not load (or maybe even fetch) all the rows from the database server. See this question on SO. By iterating to the end of the ResultSet you may add significantly to the execution time in certain cases.
A default ResultSet object is not updatable and has a cursor that moves forward only. I think this means that unless you execute the query with ResultSet.TYPE_SCROLL_INSENSITIVE, rSet.beforeFirst() will throw an SQLException. The reason is that a scrollable cursor has a cost. According to the documentation, it may throw SQLFeatureNotSupportedException even if you create a scrollable cursor.
Populating and returning a List<Operations> means that you will also need extra memory. For very large result sets this will not work at all.
So the big question is: which RDBMS? All in all, I would suggest not logging the number of records.
One better way would be to use the SQL SELECT COUNT statement.
When you need the count of rows returned, just execute another query that returns the exact number of results of the original query.
try
{
    Conn = ConnectionODBC.getConnection();
    Statement stmt = Conn.createStatement();
    String sqlRow = "SELECT COUNT(*) FROM (" + sql + ") rowquery";
    ResultSet total = stmt.executeQuery(sqlRow);
    total.next();
    int rowcount = total.getInt(1);
}
The getRow() method will always yield 0 after a query:
ResultSet.getRow()
Retrieves the current row number.
Second, you output totalrec but never assign anything to it.
You can't get the number of rows returned in a ResultSet without iterating through it. And why would you return a ResultSet without iterating through it? There'd be no point in executing the query in the first place.
A better solution would be to separate persistence from view. Create a separate Data Access Object that handles all the database queries for you. Let it get the values to be displayed in the JTable, load them into a data structure, and then return it to the UI for display. The UI will have all the information it needs then.
I have solved that problem. All I do is:
private int num_rows;
And then, in your method that uses the ResultSet, put this code:
while (this.rs.next())
{
    this.num_rows++;
}
That's all
The best way to get the number of rows from a ResultSet is to run a COUNT query against the database and then read the result with rs.getInt(1).
From my code, look at it:
String query = "SELECT COUNT(*) FROM table";
ResultSet rs = new DatabaseConnection().selectData(query);
rs.next();
int rows = rs.getInt(1);
This returns the number of rows fetched from the database as an int.
Here DatabaseConnection().selectData() is my own code for accessing the database.
I was also stuck here but then solved...