I have a problem implementing this case.
1) Some examples show that we can do it as in the following:
try {
    bd.conectarBaseDeDatos();
    String sql = "SELECT * FROM cab_pedido a, det_pedido b "
               + "WHERE a.numero = b.numero AND a.cod_agencia = b.cod_agencia";
    System.out.println(sql);
    PreparedStatement stmt = bd.conexion.prepareStatement(sql,
            ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
    // executeQuery() takes no argument on a PreparedStatement; passing the
    // SQL string again throws an SQLException on most drivers
    rs = stmt.executeQuery();
} catch (InstantiationException | IllegalAccessException | SQLException e) {
    System.out.println(e);
}
return rs;
}
This returns my ResultSet and I can use rs.next(), rs.previous(), etc. to solve my problem, but I have seen comments saying we should close the ResultSet and the DB connection. How dangerous is it not to? Can I implement this without closing the ResultSet and the connection? Because once I close the ResultSet, I will not be able to get data from it anymore.
2) Store the data in a HashMap or List.
This is another possibility, but if I want to get the last or the first values, how can I do that?
I need next, previous, last, and first functions, but I'm not sure about my first implementation.
Can anybody give me some advice on how to start this?
I need solutions and advice; marking this as a duplicate doesn't help me.
I am going to try to start an answer here since I don't have the option to comment yet. All that is understandable from your code and question is:
You have a ResultSet from which you want to read the first, last, previous, next, etc. rows of data.
Closing the DB connection resources.
Whether to use a HashMap or a List.
Well, from what you have so far, you can definitely read the data from your ResultSet into a List or HashMap.
Secondly, on closing the resources: yes! You should always close the DB connection resources to avoid leaks. You can do that by adding a finally block after the catch block in your code.
On HashMap vs. List: if you aren't sure about the type of data you will be reading from your ResultSet, go with some implementation of List, e.g. ArrayList<String>. If you know the data will have keys and values, then you can go for a HashMap. For instance, the List approach could look like the sketch below.
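This is only a minimal sketch (assuming the ResultSet rs from the question, still open, and java.sql.* / java.util.* imports); it copies each row into a map so that first/last/next/previous become simple index operations:

// Copy every row into a List while the ResultSet is still open.
List<Map<String, Object>> rows = new ArrayList<>();
ResultSetMetaData meta = rs.getMetaData();
int columnCount = meta.getColumnCount();
while (rs.next()) {
    Map<String, Object> row = new HashMap<>();
    for (int i = 1; i <= columnCount; i++) {
        row.put(meta.getColumnLabel(i), rs.getObject(i));
    }
    rows.add(row);
}
// rows.get(0) is "first", rows.get(rows.size() - 1) is "last";
// keep an index variable to implement next/previous navigation.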
These are just general guidelines. If you post some more code, maybe we can help further.
ANSWER for #1 -- This is a frequently repeated question.
When you close your DB connection, the ResultSet and Statement are usually closed as well.
But that is not guaranteed, so you should close all your DB resources explicitly.
Need for closing DB resources separately
Also consider the scenario where the maximum number of DB connections allowed to be open in the connection pool is set to a fixed number. You will eventually reach it, and your application will simply crash or stop responding correctly.
This applies not only to DB connections/resources but to all IO resources.
As a good practice, you should always close all your IO resources after you are done using them.
Not doing so simply invites resource leaks. One way to guarantee the cleanup is shown below.
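A minimal sketch using try-with-resources (Java 7+), which closes the ResultSet, Statement, and Connection automatically even when an exception is thrown (url, user, and password are placeholder variables):

String sql = "SELECT a.numero, a.cod_agencia FROM cab_pedido a";
try (Connection con = DriverManager.getConnection(url, user, password);
     PreparedStatement stmt = con.prepareStatement(sql);
     ResultSet rs = stmt.executeQuery()) {
    while (rs.next()) {
        // consume the data here, before the resources are closed
    }
} catch (SQLException e) {
    e.printStackTrace();
}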
Answer to #2
It's good to move the information/data into a proper data structure before you close the ResultSet. This is an implementation scenario we face all the time.
Which data structure to use will again depend on the scenario, but in most cases you will want a Bean class/POJO to store the relevant information for each row fetched, because you are getting multiple values per row; see the sketch below.
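As an illustration only (the Pedido class and its fields are hypothetical, modeled on the columns in the question's query):

// Hypothetical POJO for one row; adjust the fields to your actual columns.
public class Pedido {
    private final int numero;
    private final String codAgencia;

    public Pedido(int numero, String codAgencia) {
        this.numero = numero;
        this.codAgencia = codAgencia;
    }
    public int getNumero() { return numero; }
    public String getCodAgencia() { return codAgencia; }
}

// Fill a List<Pedido> while the ResultSet is still open:
List<Pedido> pedidos = new ArrayList<>();
while (rs.next()) {
    pedidos.add(new Pedido(rs.getInt("numero"), rs.getString("cod_agencia")));
}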
Another suggestion: don't do a SELECT * call via JDBC; it is not very useful. You should preferably list the column names. This lets you control the data you fetch into the ResultSet, and in many cases it makes query execution faster.
Related
While writing Java JDBC code to call a stored procedure, I am using con.setAutoCommit(false);. My question is: what is the difference between the approaches below?
Approach-1:
con = DBConnection.getConnection();
con.setAutoCommit(false);
stmt = con.prepareCall("{call updateEmp(?,?,?,?,?,?)}");
stmt.setInt(1, id);
stmt.setString(2, name);
stmt.setString(3, role);
stmt.registerOutParameter(6, java.sql.Types.VARCHAR);
stmt.executeUpdate();
con.commit();
// read the OUT parameter AFTER commit
String result = stmt.getString(6);
or Approach-2:
// read the OUT parameter BEFORE commit
String result = stmt.getString(6);
con.commit();
I think this depends on whether the stored procedure you're calling does its own commit or not. I would expect an update procedure that takes IN params and sets OUT params to commit or roll back internally.
In that case, calling setAutoCommit(true) or calling con.commit() would have no effect, and the OUT parameter would have a value regardless of when you call stmt.getString(6). If there is no commit in the stored procedure itself, I would expect your OUT parameter to be null if you call con.commit() before you call stmt.getString(6).
The major difference is that you're holding a transaction open longer than necessary. You should always try to commit as quickly as practically possible, in order to minimize the possibility of blocking other transactions. Especially if you're doing something like transferring a BLOB or large text field that could potentially tie up a lot of transaction log space (as well as take more time to transfer across the wire).
The difference is in exception handling. If getString throws an exception, then the following commit will not execute. The consequences depend on whether there are any changes in the current transaction**. If you were to trace the two versions of the code (where no exception is ever thrown) and then compare the two trace files, you would not be able to tell which version of the code created each trace unless you left markers of some kind (or kept the spids for each user process).
You have to ask yourself the question: do I want to commit even if exceptions are thrown? Then you'll know how to write the code.
** Your connection always has a transaction open. Some transactions have changes and others don't.
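To make that concrete, here is a hedged sketch (reusing con and stmt from the question, in a method that declares throws SQLException) where the commit happens only if reading the OUT parameter succeeded:

try {
    con.setAutoCommit(false);
    stmt.executeUpdate();
    String result = stmt.getString(6); // read the OUT parameter inside the try
    con.commit();                      // reached only if nothing above threw
} catch (SQLException e) {
    con.rollback();                    // undo the procedure's work on failure
    throw e;
}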
It seems to me that when the OUT parameter is a simple type, it's largely a matter of style. However, the OUT parameter can be, for instance, a cursor. With autocommit, a commit occurs only when all of the result sets returned by cursor-type output parameters or by the stored procedure are closed. If the commit is issued before the cursor is completely fetched, data consistency is questionable. To avoid such ambiguity, I'd suggest committing/rolling back the transaction after all OUT parameters are read.
Summing up the above answers for a quick reference:
1- If you commit before all OUT parameters are read, data consistency is questionable, so approach-1 is not advisable here.
2- Suppose an exception occurs while reading the output parameters; then the transaction won't be committed. On the other hand, if we want the transaction to be committed regardless of the OUT parameters, we can commit before reading them.
3- In approach-2 we hold the transaction open for a longer time. In the sample code above that's not a big deal, but it may be a problem when we do a lot of work before committing.
JavaDoc says: "The constant indicating the type for a ResultSet object that is scrollable but generally not sensitive to changes to the data that underlies the ResultSet"
I am clear about the scrollable part but have doubts regarding the latter part of the statement.
I am using the following code snippet to validate my understanding:
conn = getConnection();
Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                                      ResultSet.CONCUR_UPDATABLE);
String query = "select * from vehicle";
ResultSet rs = stmt.executeQuery(query);
rs.absolute(2);
System.out.print(rs.getString(2));
System.out.println("Waiting........");
Thread.sleep(20000);                 // 1: manually changed a database entry here
rs.refreshRow();
System.out.println(rs.getString(2)); // 2: surprisingly, the change is reflected
At comment 1, I made a manual change in the database, then called rs.refreshRow(). After this, at comment 2, when I accessed the value of the second column, the change was surprisingly reflected. As per my understanding, this change should not be reflected, since the ResultSet 'is insensitive to changes made by others' (as per the JavaDoc). Can anybody explain its actual usage?
I investigated this a while ago, specifically with regard to MySQL Connector/J. As far as I could tell, the settings ResultSet.TYPE_SCROLL_SENSITIVE and ResultSet.TYPE_SCROLL_INSENSITIVE did not actually affect the behaviour when retrieving data from MySQL.
Several similar questions and blog posts I found referred to the MySQL Connector/J documentation, where in the section on JDBC API Implementation Notes it says that
By default, ResultSets are completely retrieved and stored in memory. In most cases this is the most efficient way to operate and, due to the design of the MySQL network protocol, is easier to implement.
It goes on to talk about using ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY, and stmt.setFetchSize(Integer.MIN_VALUE); as "a signal to the driver to stream result sets row-by-row", but even in that case my testing showed that the entire ResultSet was still being retrieved as soon as I did stmt.executeQuery(...). (Although perhaps I missed some other connection setting that wasn't explicitly mentioned in that section of the MySQL Connector/J documentation.)
Finally I came to the conclusion that the ResultSet.TYPE_SCROLL_[IN]SENSITIVE setting really doesn't make any difference under MySQL Connector/J. While simply scrolling around the ResultSet it always seems to act like it is INSENSITIVE (ignoring any changes to existing rows that were made by other processes), but rs.refreshRow(); always returns the latest data (including changes made by other processes) as though it is SENSITIVE even if the ResultSet is supposed to be INSENSITIVE.
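For reference, the row-by-row streaming "signal" that the Connector/J documentation describes looks like the sketch below (though, as noted above, my own tests still saw the whole result set retrieved):

Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                      ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE); // Connector/J's documented streaming hint
ResultSet rs = stmt.executeQuery("SELECT * FROM vehicle");
while (rs.next()) {
    // in theory the driver now streams rows one at a time
    // instead of buffering the entire result set in memory
}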
I am programming software in Java, using an Oracle DB.
Normally we obtain the values from the database using a loop like:
ResultSet rt = (ResultSet) cs.getObject(1);
while (rt.next()) {
    ....
}
But it seems slow when fetching thousands of rows from the database.
My question is:
In the Oracle DB I created a procedure like the following; it opens a cursor over the data.
Ex.
procedure test_pro(info out sys_refcursor) as
begin
  open info for select * from user_tbl ......
end test_pro;
In the Java code, as I mentioned before, I iterate over the ResultSet to obtain the values. But on the database side I already selected the values, so why should I use a loop to get them?
(Another point: the .NET framework has a data-binding concept for databases. Is there any way in Java to bind database procedures like .NET does, without iterating?)
Depending on what you are going to do with that data and at what frequency, the choice of a ref cursor might be a good or a bad one. Ref cursors are intended to give non-Oracle-aware programs a way to pass them data, for reporting purposes.
In your case, stick to the looping, but don't forget to implement array fetching, because it has a tremendous effect on performance. The database passes blocks of rows to your JDBC buffer at the client, and your code fetches rows from that buffer. By the time you hit the end of the buffer, the JDBC layer requests the next chunk of rows from the database, eliminating lots of network round trips. The default already fetches 10 rows at a time. For larger sets, use bigger numbers, if memory can provide the room.
See Oracle® Database JDBC Developer's Guide and Reference
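A minimal sketch of the array-fetching advice (the table name user_tbl is taken from the question; the fetch size is an illustrative value):

PreparedStatement ps = conn.prepareStatement("SELECT * FROM user_tbl");
ps.setFetchSize(500); // fetch 500 rows per round trip instead of the default 10
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    // the loop itself is unavoidable in JDBC, but each next() now mostly
    // reads from the local buffer instead of doing a network round trip
}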
If you know for sure there will always be exactly one result, like in this case, you can even skip the if and just call rs.next() once:
For example:
ResultSet resultset = statement.executeQuery("SELECT MAX(custID) FROM customer");
resultset.next(); // exactly one result, so this is allowed
int max = resultset.getInt(1); // use indexed retrieval since the column has no name
Yes, you can call a procedure from Java:
http://www.mkyong.com/jdbc/jdbc-callablestatement-stored-procedure-out-parameter-example/
You can't avoid looping. For performance reasons you should adjust the prefetch on the Statement or ResultSet object (100 is a solid starting point).
Why is it done this way? It's similar to reading streams: you never know how big the result can be, so you read it chunk by chunk, one buffer after another.
I have a web application that needs a database back-end.
My back-end is really small (4 tables at most) and there aren't many SQL operations.
So I decided that a robust ORM solution would be like hitting a mosquito with a hammer, and I am just going to use a little DAO pattern so that the code is cleaner (instead of hitting the db directly with SQL commands).
So far it works, but I am not sure whether I have stepped into a pitfall without knowing it.
I use Tomcat's connection pool and I expect concurrent access to the database.
My question is related to concurrency and the use of the java.sql objects.
Example:
I do the following:
1) do a query
2) get a result set and use it to build an object (DTO)
3) while building this object, I run a new SQL query (using the same connection, with the previous ResultSet still open)
Is this correct/safe?
Also, can I reuse the same connection in a re-entrant manner?
I assume it is no problem to use it from multiple threads, right?
Generally, any tips/guidance to get on the right track are welcome.
Regarding the connections: as long as you use the connection pool, you guarantee that each thread takes its own connection, so from that point of view there is no problem with your approach in a multithreaded environment (you can check Is java.sql.Connection thread safe?). A sketch of the per-request pattern follows.
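A hedged sketch, assuming a Tomcat JNDI DataSource named jdbc/myapp (a hypothetical name), with javax.naming and javax.sql imports:

// Each request/thread borrows its own connection and returns it on close().
Context ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/myapp");
try (Connection con = ds.getConnection()) {
    // all JDBC work for this request happens on this connection;
    // close() hands it back to the pool instead of closing the socket
}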
With respect to the ResultSet and the second query you are performing, take into account that a ResultSet maintains a cursor pointing to its current row of data. So the key point in your question is whether you are reusing the same Statement for both queries, because in that case you would share the same cursor and problems may arise.
Check ResultSet's javadoc, especially this sentence:
A ResultSet object is automatically closed when the Statement object that generated it is closed, re-executed, or used to retrieve the next result from a sequence of multiple results.
and How can I avoid ResultSet is closed exception in Java?.
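One way to stay out of trouble is to use a separate Statement for the inner query, as in this sketch (the orders and order_lines tables are hypothetical):

try (Statement outer = con.createStatement();
     ResultSet orders = outer.executeQuery("SELECT id FROM orders")) {
    while (orders.next()) {
        // a second statement, so the outer ResultSet is never re-executed
        try (PreparedStatement inner = con.prepareStatement(
                "SELECT * FROM order_lines WHERE order_id = ?")) {
            inner.setInt(1, orders.getInt("id"));
            try (ResultSet lines = inner.executeQuery()) {
                while (lines.next()) {
                    // build the DTO from both result sets here
                }
            }
        }
    }
}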
I have a lot of rows in a database that must be processed, but I can't load all the data into memory due to memory limitations.
At the moment, I am using LIMIT and OFFSET to retrieve the data in specified intervals.
I want to know whether this is the fastest way, or whether there is another method to get all the data from a table. No filter will be applied; all the rows will be processed.
SELECT * FROM table ORDER BY column
There's no reason to be sucking the entire table into RAM. Simply open a cursor and start reading. You can play games with fetch sizes and whatnot, but the DB will happily keep its place while you process your rows.
Addendum:
OK, if you're using Java, then I have a good idea what your problem is.
First, just by using Java, you're using a cursor. That's basically what a ResultSet is in Java. Some ResultSets are more flexible than others, but 99% of them are simple, forward-only ResultSets on which you call next() to get each row.
Now, as to your problem.
The problem is specifically with the Postgres JDBC driver. I don't know why it does this (perhaps it's the spec, perhaps it's something else), but regardless, Postgres has the curious characteristic that if your Connection has autoCommit set to true, it sucks in the entire result set on either the execute method or the first next method. Where exactly is not really important; what matters is that if you have a gazillion rows, you get a nice OOM exception. Not helpful.
This can easily be exactly what you're seeing, and I appreciate how frustrating and confusing it can be.
Most Connections default to autoCommit = true. Instead, simply set autoCommit to false.
Connection con = ...get Connection...
con.setAutoCommit(false);
PreparedStatement ps = con.prepareStatement("SELECT * FROM table ORDER BY column");
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    String col1 = rs.getString(1);
    ...and away you go here...
}
rs.close();
ps.close();
con.close();
Note the distinct lack of exception handling, left as an exercise for the reader.
If you want more control over how many rows are fetched at a time into memory, you can use:
ps.setFetchSize(numberOfRowsToFetch);
Playing around with that might improve your performance.
Make sure you have an appropriate index on the column you use in the ORDER BY if you care about sequencing at all.
Since it's clear you're using Java, based on your comments:
If you are using JDBC you will want to use:
http://download.oracle.com/javase/1.5.0/docs/api/java/sql/ResultSet.html
If you are using Hibernate it gets trickier:
http://docs.jboss.org/hibernate/core/3.3/reference/en/html/batch.html