In my application I have implemented a method to get the favourites of a particular user. If the user is a new one, there will not be an entry in the table; if so, I add default favourites to the table. The code is shown below.
public String getUserFavourits(String username) {
    String s = "SELECT FAVOURITS FROM USERFAVOURITS WHERE USERID='" +
            username.trim() + "'";
    String a = "";
    Statement stm = null;
    ResultSet reset = null;
    DatabaseConnectionHandler handler = null;
    Connection conn = null;
    try {
        handler = DatabaseConnectionHandler.getInstance();
        conn = handler.getConnection();
        stm = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
        reset = stm.executeQuery(s);
        if (reset.next()) {
            a = reset.getString("FAVOURITS").toString();
        }
        reset.close();
        stm.close();
    }
    catch (SQLException ex) {
        ex.printStackTrace();
    }
    catch (Exception ex) {
        ex.printStackTrace();
    }
    finally {
        try {
            handler.returnConnectionToPool(conn);
            if (stm != null) {
                stm.close();
            }
            if (reset != null) {
                reset.close();
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
    if (a.equalsIgnoreCase("")) {
        a = updateNewUserFav(username);
    }
    return a;
}
You can see that after the finally block the updateNewUserFav(username) method is used to insert the default favourites into the table. Normally users are forced to change these at their first login.
My problem is that many users have complained to me that they have lost their customized favourites and the defaults were loaded at their login. When I go through the code I notice that this can only happen if an exception occurs in the try block. When I debug, the code works fine. Can this be caused when the DB is busy?
Normally there are more than 1000 concurrent users in the system. Since it is a real-time application there is a huge number of requests coming to the database (the DB is Oracle).
Can someone please explain?
Firstly, use jonearles' suggestion about bind variables. If a lot of your code is like this, with 1000 concurrent users, I'd hate to think what performance is like.
Secondly, if it is busy then there is a chance of time-outs. As you say, if an exception is encountered then the code falls back to updateNewUserFav.
Really, it should only call that if NO exception is raised.
If an exception is raised, the function should fail. The current code is similar to
"TURN THE IGNITION KEY TO START THE CAR"
"IF THERE IS A PROBLEM, RING GARAGE AND BOOK APPOINTMENT"
"PUT CAR INTO GEAR AND RELEASE HAND_BRAKE"
You really only want to release the hand-brake once the car has successfully started, otherwise you'll end up rolling down the hill until the sudden stop at the end (often involving an expensive CRUNCH sound).
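Restructured, it could look something like the sketch below: the default favourites are only inserted when the query succeeds and genuinely finds no row, and the lookup uses a bind variable. This is only a sketch; I'm keeping your DatabaseConnectionHandler and updateNewUserFav as they are and letting SQLException propagate to the caller instead of swallowing it.

public String getUserFavourits(String username) throws SQLException {
    String sql = "SELECT FAVOURITS FROM USERFAVOURITS WHERE USERID = ?";
    DatabaseConnectionHandler handler = DatabaseConnectionHandler.getInstance();
    Connection conn = handler.getConnection();
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setString(1, username.trim());
        try (ResultSet rs = ps.executeQuery()) {
            if (rs.next()) {
                // existing user: return the stored favourites untouched
                return rs.getString("FAVOURITS");
            }
        }
        // the query succeeded but found no row: genuinely a new user
        return updateNewUserFav(username);
    } finally {
        // on any failure the SQLException propagates to the caller
        // instead of silently loading the defaults
        handler.returnConnectionToPool(conn);
    }
}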
Related
We use a connection pool in our application, and I understand that since we are using a pool we should get and close connections as needed. I implemented a cache update mechanism by receiving Postgres LISTEN notifications. The code is pretty much the same as the canonical example given in the documentation.
As you can see in the code, the query is issued in the constructor and the connection is reused. This may pose a problem when the connection is closed out of band for any reason. One solution is to get a connection before every use, but as you can see the LISTEN statement is only executed once in the constructor, and yet I can still receive notifications in the polling loop. So if I get a fresh connection every time, it will force me to re-issue the statement on every iteration (after the delay). I'm not sure whether that's an expensive operation.
What is the middle ground here?
class Listener extends Thread
{
    private Connection conn;
    private org.postgresql.PGConnection pgconn;
    // polling interval in milliseconds; not shown in the original snippet
    private long delay = 500;

    Listener(Connection conn) throws SQLException
    {
        this.conn = conn;
        this.pgconn = conn.unwrap(org.postgresql.PGConnection.class);
        Statement stmt = conn.createStatement();
        stmt.execute("LISTEN mymessage");
        stmt.close();
    }

    public void run()
    {
        try
        {
            while (true)
            {
                org.postgresql.PGNotification notifications[] = pgconn.getNotifications();
                if (notifications != null)
                {
                    for (int i = 0; i < notifications.length; i++) {
                        // use notification
                    }
                }
                Thread.sleep(delay);
            }
        }
        catch (SQLException sqle)
        {
            // handle
        }
        catch (InterruptedException ie)
        {
            // handle
        }
    }
}
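For reference, this is roughly how the listener is started: the connection is fetched from the pool once and then handed to the thread (dataSource here stands in for our Hikari pool; the names are only for illustration).

// borrow one connection from the pool and dedicate it to LISTEN
Connection listenConn = dataSource.getConnection();
Listener listener = new Listener(listenConn);
listener.start();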
In addition to this, there is also another, similar document that has a query in the run method as well as the one in the constructor. I'm wondering if someone could enlighten me on the purpose of that additional query within the method.
public void run() {
    while (true) {
        try {
            // this query is additional to the one in the constructor
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT 1");
            rs.close();
            stmt.close();

            org.postgresql.PGNotification notifications[] = pgconn.getNotifications();
            if (notifications != null) {
                for (int i = 0; i < notifications.length; i++) {
                    System.out.println("Got notification: " + notifications[i].getName());
                }
            }

            // wait a while before checking again for new notifications
            Thread.sleep(delay);
        } catch (SQLException sqle) {
            // handle
        } catch (InterruptedException ie) {
            // handle
        }
    }
}
I experimented with closing the connection on every iteration (but without getting another one). That still worked; perhaps that's due to the unwrap that was done.
Stack:
Spring Boot, JPA, Hikari, Postgres JDBC Driver (not pgjdbc-ng)
The connection pool is the servant, not the master. Keep the connection for as long as you are using it to LISTEN on, i.e. ideally forever. If the connection ever does close, then you will miss whatever notices were sent while it was closed. So to keep the cache in good shape, you would need to discard the whole thing and start over. Obviously not something you would want to do on a regular basis, or what would be the point of having it in the first place?
The other doc you show is just an ancient version of the first one. The dummy query just before polling is there to poke the underlying socket code to make sure it has absorbed all the messages. This is no longer necessary. I don't know if it ever was necessary, it might have just been some cargo cult that found its way into the docs.
You would probably be better off with the blocking version of this code, by using getNotifications(0) and getting rid of sleep(delay). This will block until a notice becomes available, rather than waking up twice a second and consuming some (small) amount of resources before sleeping again. Also, once a notice does arrive it will be processed almost immediately, instead of waiting for what is left of a half-second timeout to expire (so, on average, about a quarter second).
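Sketched out, the loop would reduce to something like this (assuming a pgjdbc version recent enough to have the timeout overload of getNotifications):

public void run()
{
    try
    {
        while (true)
        {
            // blocks until at least one notification arrives
            org.postgresql.PGNotification[] notifications = pgconn.getNotifications(0);
            if (notifications != null)
            {
                for (org.postgresql.PGNotification n : notifications)
                {
                    // use notification
                }
            }
        }
    }
    catch (SQLException sqle)
    {
        // handle
    }
}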
I have a Java package which connects to a database and fetches some data. In some rare cases I get a heap memory exception, because the size of the fetched query data exceeds the Java heap space. Increasing the Java heap space is not something the business can consider for now.
The other option is to catch the exception and continue the flow without stopping the whole execution. (I know catching OOME is not a good idea, but here only my local variables are affected.) My code is below:
private boolean stepCollectCustomerData() {
    try {
        boolean biResult = generateMetricCSV();
    } catch (OutOfMemoryError e) {
        log.error("OutOfMemoryError while collecting data ");
        log.error(e.getMessage());
        return false;
    }
    return true;
}
private boolean generateMetricCSV() {
    // Executing the PAC & BI cluster SQL queries.
    try (Connection connection = DriverManager.getConnection("connectionURL", "username", "password")) {
        connection.setAutoCommit(false);
        for (RedshiftQueryDefinition redshiftQueryDefinition : redshiftQueryDefinitions) {
            File csvFile = new File(dsarConfig.getDsarHomeDirectory() + dsarEntryId, redshiftQueryDefinition.getCsvFileName());
            log.info("Running the query for metric: " + redshiftQueryDefinition.getMetricName());
            try (PreparedStatement preparedStatement = createPreparedStatement(connection,
                     redshiftQueryDefinition.getSqlQuery(), redshiftQueryDefinition.getArgumentsList());
                 ResultSet resultSet = preparedStatement.executeQuery();
                 CSVWriter writer = new CSVWriter(new FileWriter(csvFile))) {
                if (resultSet.next()) {
                    resultSet.beforeFirst();
                    log.info("Writing the data to CSV file.");
                    writer.writeAll(resultSet, true);
                    log.info("Metric written to csv file: " + csvFile.getAbsolutePath());
                    filesToZip.put(redshiftQueryDefinition.getCsvFileName(), csvFile);
                } else {
                    log.info("There is no data for the metric " + redshiftQueryDefinition.getCsvFileName());
                }
            } catch (SQLException | IOException e) {
                log.error("Exception while generating the CSV file: " + e);
                e.printStackTrace();
                return false;
            }
        }
    } catch (SQLException e) {
        log.error("Exception while creating connection to the Redshift cluster: " + e);
        return false;
    }
    return true;
}
We get the exception at the line ResultSet resultSet = preparedStatement.executeQuery() in the latter method, and I am catching it in the parent method. Now I need to know: when the exception is caught in the parent method, has the GC already been triggered and cleared the memory held by the local variables (such as the connection and result set variables)? If not, when will that happen?
I am worried about the Java heap space because this is a continuous flow and I need to keep fetching data for other users.
The code I have provided is only there to explain the underlying issue and flow, so kindly ignore syntax, etc. I am using JDK 8.
Thanks in advance.
This is my code to execute an update query:
public boolean executeQuery(Connection con, String query) throws SQLException
{
    boolean flag = false;
    try
    {
        Statement st = con.createStatement();
        flag = st.execute(query);
        st.close();
        st = null;
        flag = true;
    }
    catch (Exception e)
    {
        flag = false;
        e.printStackTrace();
        throw new SQLException(" UNABLE TO FETCH INSERT");
    }
    return flag;
}
The maximum open cursors setting is 4000.
The code executes the update query
update tableA set colA ='x',lst_upd_date = trunc(sysdate) where trunc(date) = to_date('"+date+"','dd-mm-yyyy')
around 8000 times, but after around 2000 days it throws the exception "maximum open cursors exceeded".
Please suggest code changes for this.
@TimBiegeleisen here is the code that gets the connection:
public Connection getConnection(String sessId)
{
    Connection connection = null;
    setLastAccessed(System.currentTimeMillis());
    connection = (Connection) sessionCon.get(sessId);
    try
    {
        if (connection == null || connection.isClosed())
        {
            if (ds == null)
            {
                InitialContext ic = new InitialContext();
                ds = (DataSource) ic.lookup("java:comp/env/iislDB");
            }
            connection = ds.getConnection();
            sessionCon.put(sessId, connection);
        }
    }
    catch (SQLException e)
    {
        e.printStackTrace();
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
    return connection;
}
The error stack is as below:
java.sql.SQLException: ORA-01000: maximum open cursors exceeded
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:180)
at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:208)
at oracle.jdbc.ttc7.Oopen.receive(Oopen.java:118)
at oracle.jdbc.ttc7.TTC7Protocol.open(TTC7Protocol.java:472)
at oracle.jdbc.driver.OracleStatement.<init>(OracleStatement.java:499)
at oracle.jdbc.driver.OracleConnection.privateCreateStatement(OracleConnection.java:683)
at oracle.jdbc.driver.OracleConnection.createStatement(OracleConnection.java:560)
at org.apache.tomcat.dbcp.dbcp.DelegatingConnection.createStatement(DelegatingConnection.java:257)
at org.apache.tomcat.dbcp.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.createStatement(PoolingDataSource.java:216)
at com.iisl.business.adminbo.computeindex.MoviIndexComputeBO.calculateMoviValue(MoviIndexComputeBO.java:230)
Your code has a cursor leak. That's what is causing the error. It seems unlikely that your code can really go 2000 days (about 5.5 years) before encountering the error. If that was the case, I'd wager that you'd be more than happy to restart a server twice a decade.
In your try block, you create a Statement. If an exception is thrown between the time that the statement is created and the time that st.close() is called, your code will leave the statement open and you will have leaked a cursor. Once a session has leaked 4000 cursors, you'll get the error. Increasing max_open_cursors will merely delay when the error occurs, it won't fix the underlying problem.
The underlying problem is that your try/catch block needs a finally that closes the Statement if it was left open by the try. For this to work, you'd need to declare st outside of the try:
finally {
    if (st != null) {
        st.close();
    }
}
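Put together, it would look something like this (a sketch of the same method; I've also dropped the redundant flag variable, since the method either returns true or throws):

public boolean executeQuery(Connection con, String query) throws SQLException
{
    Statement st = null;
    try
    {
        st = con.createStatement();
        st.execute(query);
        return true;
    }
    catch (Exception e)
    {
        e.printStackTrace();
        throw new SQLException(" UNABLE TO FETCH INSERT");
    }
    finally
    {
        // runs whether or not execute() threw, so the cursor is always released
        if (st != null)
        {
            st.close();
        }
    }
}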
As mentioned in another response you will leak cursors if an exception is thrown during the statement execution because st.close() won't be executed. You can use Java's try-with-resources syntax to be sure that your statement object is closed:
try (Statement st = con.createStatement())
{
    flag = st.execute(query);
    flag = true;
}
catch (Exception e)
{
    flag = false;
    e.printStackTrace();
    throw new SQLException(" UNABLE TO FETCH INSERT");
}
return flag;
One of the quickest solutions is to increase the number of cursors each connection can handle, by issuing the following command at the SQL prompt:
alter system set open_cursors = 1000
Also, add a finally block to your code and close the connection, to help close cursors whenever an exception occurs.
Also, run this query to see where cursors are actually being opened:
select sid ,sql_text, count(*) as "OPEN CURSORS", USER_NAME from v$open_cursor
finally {
    if (connection != null) {
        connection.close();
    }
}
I've been looking all over for an answer to this and have yet to find one.
Basically I am trying to connect to a database server through a GUI. My boss wants to be able to enter all fields and then check whether the entries are valid; if any entry is invalid, he wants me to turn its text red, indicating that the field is invalid. I have the try statement catch ClassNotFoundException and SQLException. Because there are multiple fields that need to be checked, I have a set of if statements that check the connection info. Here is the code below; I hope this makes sense...
// The cancel boolean values in this code are used elsewhere to regulate the Threads
try
{
    // attempt connection here
}
catch (ClassNotFoundException | SQLException e)
{
    // This creates a String array of the errors it catches, which later gets passed
    // to a method that displays the messages in a JOptionPane.showMessageDialog()
    String[] errors = new String[4];
    if (e.getMessage().startsWith("The TCP/IP connection to the host"))
    {
        errors[0] = "SQL CONNECTION FAILED: Please check the server URL you entered to make sure it is correct.";
        cancel = true;
        mGUI.serverNameTextField.setForeground(Color.RED);
    }
    if (e.getMessage().startsWith("Login failed for user"))
    {
        errors[1] = "LOGIN FAILED: You do not have sufficient access to the server.";
        cancel = true;
    }
    if (e.getMessage().startsWith("Cannot open database"))
    {
        errors[2] = "SQL CONNECTION FAILED: Please check the database name you entered to make sure it is correct.";
        cancel = true;
        mGUI.dbNameTextField.setForeground(Color.RED);
    }
    // Method where it reports the String[] array of errors. However, the 'errors'
    // parameter only returns one error message at a time, which is the problem.
    mGUI.reportErrors(errors);
}
Thanks for any help!
EDIT:
I found a solution, so hopefully this will help someone. I changed my if statements to add an AND condition checking for the specific error code. You can find the error code by either setting a break point and looking at the debug perspective, or you can do what I did and add a print statement to show the error code. Here is the print statement:
System.out.println(((SQLException) e).getErrorCode());
Here are my new if statements:
try
{
    // attempt connection here
}
catch (SQLException | ClassNotFoundException e)
{
    if (e instanceof SQLServerException && ((SQLServerException) e).getErrorCode() == 0)
    {
        // code here
    }
    else {
        // code here
    }
    System.out.println(((SQLException) e).getErrorCode()); // Here is the print statement to see the error code.
    if (e instanceof SQLServerException && ((SQLServerException) e).getErrorCode() == 4060)
    {
        // code here
    } else {
        // code here
    }
    if (cancel != true)
    {
        // code here
    }
}
You can do it in multiple ways.
1. Have more than one catch with a common function:
} catch (ClassNotFoundException e) {
    handleError(e);
} catch (SQLException e) {
    handleError(e);
}
where handleError takes the exception as the argument.
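For instance, a minimal handleError could just record the message and set your flag (a sketch, assuming an errors list field and the cancel flag from your snippet; adapt it to whatever your reportErrors expects):

private void handleError(Exception e)
{
    cancel = true;
    errors.add("Connection failed: " + e.getMessage());
    e.printStackTrace();
}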
2. You don't seem to do anything else, so you can just combine them both into a single catch:
} catch (Exception e) {
}
which will catch everything, but you have MUCH less control over the error handling.
A general principle of exceptions is that they should be handled at the point where they can best be handled.
You seem to have very disparate exceptions, and presumably a TCP exception thrown somewhere in the code is not the same as the SQLException thrown when connecting to a database (I might be wrong here, since I don't know what the rest of the code looks like). So wouldn't a set of exception handlers, one for each type, make more sense? Also, to reiterate Bryan Roach's point, disambiguating on the exception text is not a good idea.
try {
    ...
} catch (java.net.SocketException e) {
    errors[0] = "tcp error";
} catch (java.sql.SQLException e) {
    errors[1] = "sql exception happened";
}
Also, your string array seems a bit risky; possibly a
ArrayList<String> errors = new ArrayList<>();
errors.add("Some tcp error");
errors.add("Some db error");
and then, for your error reporting,
mGUI.reportErrors(errors.toArray(new String[0]));
would preserve your interface and not force you to allocate extra elements in the array and leave empty entries. I don't know exactly what your question is, but you allude to the GUI not displaying multiple errors. Possibly there is a check which stops at the first empty element in the array: say e[2] and e[4] are populated; it might stop when it iterates over the errors because e[3] is empty. I'm presuming again, since I don't know what that code looks like.
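Put together, it could look something like this (a sketch only; whether a SocketException can actually surface there depends on your driver, as noted above):

List<String> errors = new ArrayList<>();
try {
    // attempt connection here
} catch (java.net.SocketException e) {
    errors.add("SQL CONNECTION FAILED: Please check the server URL you entered.");
    mGUI.serverNameTextField.setForeground(Color.RED);
    cancel = true;
} catch (java.sql.SQLException e) {
    errors.add("SQL CONNECTION FAILED: " + e.getMessage());
    cancel = true;
}
// report everything that was collected, not just the first entry
if (!errors.isEmpty()) {
    mGUI.reportErrors(errors.toArray(new String[0]));
}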
From the comments above it sounds like what you want to do is have different logic for the various Exception types you are catching within a single catch block. If this is the case, you could go:
...
catch (ClassNotFoundException | SQLException e) {
    String[] errors = new String[4];
    if (e instanceof ClassNotFoundException) {
        // do something here
    }
    if (e instanceof SQLException) {
        // do something else here
    }
    ...etc
}
This should work, but it's probably just as easy to use multiple catch blocks as others have suggested:
} catch (ClassNotFoundException e) {
    handleError(e);
} catch (SQLException e) {
    handleError(e);
}
I don't mean any offense, but the way the code handles exceptions might cause some headaches down the road. For example:
if (e.getMessage().startsWith("Cannot open database")) {
Here the code relies on the supporting library that throws the exception to use the same text description, but this description might change if you switch to another JVM version, use a different database driver, etc. It might be safer to go by the exception type, rather than the exception description.
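For example, something along these lines keeps the decision out of the message text: branch on the exception type, and within SQLException on the vendor error code (or getSQLState()). The codes shown are just the ones the asker reported seeing, so treat them as placeholders:

catch (ClassNotFoundException e)
{
    errors[3] = "DRIVER NOT FOUND: Please check that the JDBC driver is on the classpath.";
    cancel = true;
}
catch (SQLException e)
{
    // branch on the vendor error code instead of the message string
    if (e.getErrorCode() == 4060)       // "Cannot open database" in the asker's tests
    {
        errors[2] = "SQL CONNECTION FAILED: Please check the database name you entered.";
        mGUI.dbNameTextField.setForeground(Color.RED);
    }
    else if (e.getErrorCode() == 0)     // the TCP/IP connection failure in the asker's tests
    {
        errors[0] = "SQL CONNECTION FAILED: Please check the server URL you entered.";
        mGUI.serverNameTextField.setForeground(Color.RED);
    }
    cancel = true;
}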
As I've stated in the title, while I'm querying for user data in my Java application, I get the following message: "Operation not allowed after ResultSet closed".
I know that this happens if you try to have multiple ResultSets open at the same time.
Here is my current code:
The app calls getProject("..."); the other two methods are there just as helpers. I'm using two classes because there is much more code; this is just one example of an exception I get.
Please note that I've translated variable names, etc. for better understanding; I hope I didn't miss anything.
/* Class which reads project data */
public Project getProject(String name) {
    ResultSet result = null;
    try {
        // executing query for project data
        // SELECT * FROM Project WHERE name=name
        result = statement.executeQuery(generateSelect(tProject.tableName,
                "*", tProject.name, name));
        // if cursor can't move to first place,
        // that means that project was not found
        if (!result.first())
            return null;
        return user.usersInProject(new Project(result.getInt(1), result
                .getString(2)));
    } catch (SQLException e) {
        e.printStackTrace();
        return null;
    } catch (BadAttributeValueExpException e) {
        e.printStackTrace();
        return null;
    } finally {
        // closing the ResultSet
        try {
            if (result != null)
                result.close();
        } catch (SQLException e) {
        }
    }
}
/* End of class */
/* Class which reads user data */
public Project usersInProject(Project p) {
    ResultSet result = null;
    try {
        // executing query for users in project
        // SELECT ID_User FROM Project_User WHERE ID_Project=p.getID()
        result = statement.executeQuery(generateSelect(
                tProject_User.tableName, tProject_User.id_user,
                tProject_User.id_project, String.valueOf(p.getID())));
        ArrayList<User> alUsers = new ArrayList<User>();
        // looping through all results and adding them to array
        while (result.next()) { // here java gets ResultSet closed exception
            int id = result.getInt(1);
            if (id > 0)
                alUsers.add(getUser(id));
        }
        // if no user data was read, project from parameter is returned
        // without any new user data
        if (alUsers.size() == 0)
            return p;
        // array of users is added to the object,
        // then whole object is returned
        p.addUsers(alUsers.toArray(new User[alUsers.size()]));
        return p;
    } catch (SQLException e) {
        e.printStackTrace();
        return p;
    } finally {
        // closing the ResultSet
        try {
            if (result != null)
                result.close();
        } catch (SQLException e) {
        }
    }
}
public User getUser(int id) {
    ResultSet result = null;
    try {
        // executing query for user:
        // SELECT * FROM User WHERE ID=id
        result = statement.executeQuery(generateSelect(tUser.tableName,
                "*", tUser.id, String.valueOf(id)));
        if (!result.first())
            return null;
        // new user is constructed (ID, username, email, password)
        User usr = new User(result.getInt(1), result.getString(2),
                result.getString(3), result.getString(4));
        return usr;
    } catch (SQLException e) {
        e.printStackTrace();
        return null;
    } catch (BadAttributeValueExpException e) {
        e.printStackTrace();
        return null;
    } finally {
        // closing the ResultSet
        try {
            if (result != null)
                result.close();
        } catch (SQLException e) {
        }
    }
}
/* End of class */
The statements in both classes are set up in their constructors by calling connection.getStatement() when each class is constructed.
tProject and tProject_User are my enums; I'm using them for easier name handling. generateSelect is my method and should work as expected. I'm using this because I only found out about prepared statements after I had written most of my code, so I left it as it is.
I am using the latest Java MySQL connector (5.1.21).
I don't know what else to try. Any advice will be appreciated.
Quoting from #aroth's answer:
There are many situations in which a ResultSet will be automatically closed for you. To quote the official documentation:
http://docs.oracle.com/javase/6/docs/api/java/sql/ResultSet.html
A ResultSet object is automatically closed when the Statement object that generated
it is closed, re-executed, or used to retrieve the next result from a sequence of
multiple results.
Here in your code, you are creating a new ResultSet in the method getUser using the same Statement object that created the result set in the usersInProject method, and this closes the ResultSet in usersInProject.
Solution:
Create another Statement object and use it in getUser to create its ResultSet.
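For example, getUser could create and close its own Statement rather than reusing the shared field (a sketch; I'm assuming the Connection object is reachable in that class as connection, and I've used next() instead of first() since only one row is read):

public User getUser(int id) {
    // a dedicated statement, so iterating usersInProject's ResultSet is not disturbed
    try (Statement userStatement = connection.createStatement();
         ResultSet result = userStatement.executeQuery(generateSelect(tUser.tableName,
                 "*", tUser.id, String.valueOf(id)))) {
        if (!result.next())
            return null;
        // new user is constructed (ID, username, email, password)
        return new User(result.getInt(1), result.getString(2),
                result.getString(3), result.getString(4));
    } catch (SQLException e) {
        e.printStackTrace();
        return null;
    }
}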
It's not really possible to say definitively what is going wrong without seeing your code. However note that there are many situations in which a ResultSet will be automatically closed for you. To quote the official documentation:
A ResultSet object is automatically closed when the Statement object
that generated it is closed, re-executed, or used to retrieve the next
result from a sequence of multiple results.
Probably you've got one of those things happening. Or you're explicitly closing the ResultSet somewhere before you're actually done with it.
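A minimal illustration of the first case (table and column names made up):

Statement stmt = connection.createStatement();
ResultSet first = stmt.executeQuery("SELECT ID FROM Project");
// re-executing on the same Statement implicitly closes 'first'
ResultSet second = stmt.executeQuery("SELECT ID FROM User");
first.next(); // fails with "Operation not allowed after ResultSet closed"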
Also, have you considered using an ORM framework like Hibernate? In general something like that is much more pleasant to work with than the low-level JDBC API.