I am very new to Java.
I have a Java class which implements the database-related functionality (Postgres).
The problem is that if the database is stopped and then restarted, this class throws an SQLException because the connection got reset (even though the database is up and running again).
Is there any way that, after the database is restarted, my class automatically reconnects to the database and works as expected instead of throwing an SQLException?
Is there any way to do this with a Properties parameter to DriverManager.getConnection()?
Thanks
MAP
Use a try/catch block to handle the SQLException. When you catch an SQLException, the program can wait a specified period of time and then try to reconnect; you can loop this for as long as you want.
boolean connected = false;
// repeat until connected is true
while (!connected) {
    try {
        // put your connection code here
        connected = true;
    } catch (SQLException se) {
        // sleep for 10 seconds before retrying
        try {
            Thread.sleep(10000);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            break;
        }
    }
}
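To address the Properties part of the question, here is a minimal sketch of the same retry loop with the connection code filled in, using the DriverManager.getConnection(String, Properties) overload. The URL, user name and password are placeholders, not values from the question.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class ReconnectExample {

    public static Connection connectWithRetry() throws InterruptedException {
        Properties props = new Properties();
        props.setProperty("user", "myuser");         // placeholder
        props.setProperty("password", "mypassword"); // placeholder

        Connection conn = null;
        while (conn == null) {
            try {
                // placeholder URL; use your own host, port and database name
                conn = DriverManager.getConnection(
                        "jdbc:postgresql://localhost:5432/mydb", props);
            } catch (SQLException se) {
                // database not reachable yet: wait 10 seconds, then try again
                Thread.sleep(10000);
            }
        }
        return conn;
    }
}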
We use a connection pool in our application. I understand that, since we are using a connection pool, we should get and close connections as needed. I implemented a cache update mechanism that receives Postgres LISTEN notifications. The code is pretty much the same as the canonical example given in the documentation.
As you can see in the code, the query is issued in the constructor and the connection is reused. This may pose a problem when the connection is closed out of band for any reason. One solution is to get a connection before every use, but the statement is only executed once in the constructor, yet I can still receive the notifications in the polling loop. So if I get a fresh connection every time, it will force me to reissue the LISTEN statement for every iteration (after the delay), and I'm not sure whether that's an expensive operation.
What is the middle ground here?
class Listener extends Thread
{
    private Connection conn;
    private org.postgresql.PGConnection pgconn;
    private long delay = 500; // polling interval in ms; not shown in the original snippet

    Listener(Connection conn) throws SQLException
    {
        this.conn = conn;
        this.pgconn = conn.unwrap(org.postgresql.PGConnection.class);
        Statement stmt = conn.createStatement();
        stmt.execute("LISTEN mymessage");
        stmt.close();
    }

    public void run()
    {
        try
        {
            while (true)
            {
                org.postgresql.PGNotification[] notifications = pgconn.getNotifications();
                if (notifications != null)
                {
                    for (int i = 0; i < notifications.length; i++)
                    {
                        // use notification
                    }
                }
                Thread.sleep(delay);
            }
        }
        catch (SQLException sqle)
        {
            // handle
        }
        catch (InterruptedException ie)
        {
            // handle
        }
    }
}
In addition to this, there is another, similar document that has an extra query in the run method, in addition to the one in the constructor. I'm wondering if someone could enlighten me as to the purpose of this second query within the method.
public void run() {
while (true) {
try {
//this query is additional to the one in the constructor
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("SELECT 1");
rs.close();
stmt.close();
org.postgresql.PGNotification notifications[] = pgconn.getNotifications();
if (notifications != null) {
for (int i=0; i<notifications.length; i++) {
System.out.println("Got notification: " + notifications[i].getName());
}
}
// wait a while before checking again for new
// notifications
Thread.sleep(delay);
} catch (SQLException sqle) {
//handle
} catch (InterruptedException ie) {
//handle
}
}
}
I experimented with closing the connection in every iteration (but without getting another one). That still worked, perhaps because of the unwrap that was done earlier.
Stack:
Spring Boot, JPA, Hikari, Postgres JDBC Driver (not pgjdbc-ng)
The connection pool is the servant, not the master. Keep the connection for as long as you are using it to LISTEN on, i.e. ideally forever. If the connection ever does close, then you will miss whatever notices were sent while it was closed. So to keep the cache in good shape, you would need to discard the whole thing and start over. Obviously not something you would want to do on a regular basis, or what would be the point of having it in the first place?
The other doc you show is just an ancient version of the first one. The dummy query just before polling is there to poke the underlying socket code to make sure it has absorbed all the messages. This is no longer necessary. I don't know if it ever was necessary, it might have just been some cargo cult that found its way into the docs.
You would probably be better off with the blocking version of this code, by using getNotifications(0) and getting rid of sleep(delay). This will block until a notice becomes available, rather than waking up twice a second and consuming some (small) amount of resources before sleeping again. Also, once a notice does arrive it will be processed almost immediately, instead of waiting for what is left of a half-second timeout to expire (so, on average, about a quarter second).
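For illustration, a minimal sketch of that blocking variant, assuming the same pgconn field as in the Listener class above and the timed getNotifications(int) overload of the Postgres JDBC driver (a timeout of 0 blocks until a notification arrives):
public void run()
{
    try
    {
        while (true)
        {
            // blocks until at least one notification is available
            org.postgresql.PGNotification[] notifications = pgconn.getNotifications(0);
            if (notifications != null)
            {
                for (org.postgresql.PGNotification notification : notifications)
                {
                    // use notification, e.g. notification.getName() / getParameter()
                }
            }
        }
    }
    catch (SQLException sqle)
    {
        // the connection is probably gone; the listener (and cache) must be rebuilt
    }
}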
This is my code to execute an update query:
public boolean executeQuery(Connection con,String query) throws SQLException
{
boolean flag=false;
try
{
Statement st = con.createStatement();
flag=st.execute(query);
st.close();
st=null;
flag=true;
}
catch (Exception e)
{
flag=false;
e.printStackTrace();
throw new SQLException(" UNABLE TO FETCH INSERT");
}
return flag;
}
The maximum open cursors setting is 4000.
The code executes the following update query around 8000 times:
update tableA set colA ='x',lst_upd_date = trunc(sysdate) where trunc(date) = to_date('"+date+"','dd-mm-yyyy')
but after around 2000 days it throws an exception: "maximum open cursors exceeded".
Please suggest code changes to fix this.
@TimBiegeleisen here is the code to get the connection:
public Connection getConnection(String sessId)
{
Connection connection=null;
setLastAccessed(System.currentTimeMillis());
connection=(Connection)sessionCon.get(sessId);
try
{
if(connection==null || connection.isClosed() )
{
if ( ds == null )
{
InitialContext ic = new InitialContext();
ds = (DataSource) ic.lookup("java:comp/env/iislDB");
}
connection=ds.getConnection();
sessionCon.put(sessId, connection);
}
}
catch (SQLException e)
{
e.printStackTrace();
}
catch (Exception e)
{
e.printStackTrace();
}
return connection;
}
The error stack is as below:
java.sql.SQLException: ORA-01000: maximum open cursors exceeded
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:180)
at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:208)
at oracle.jdbc.ttc7.Oopen.receive(Oopen.java:118)
at oracle.jdbc.ttc7.TTC7Protocol.open(TTC7Protocol.java:472)
at oracle.jdbc.driver.OracleStatement.<init>(OracleStatement.java:499)
at oracle.jdbc.driver.OracleConnection.privateCreateStatement(OracleConnection.java:683)
at oracle.jdbc.driver.OracleConnection.createStatement(OracleConnection.java:560)
at org.apache.tomcat.dbcp.dbcp.DelegatingConnection.createStatement(DelegatingConnection.java:257)
at org.apache.tomcat.dbcp.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.createStatement(PoolingDataSource.java:216)
at com.iisl.business.adminbo.computeindex.MoviIndexComputeBO.calculateMoviValue(MoviIndexComputeBO.java:230)
Your code has a cursor leak. That's what is causing the error. It seems unlikely that your code can really go 2000 days (about 5.5 years) before encountering the error. If that was the case, I'd wager that you'd be more than happy to restart a server twice a decade.
In your try block, you create a Statement. If an exception is thrown between the time that the statement is created and the time that st.close() is called, your code will leave the statement open and you will have leaked a cursor. Once a session has leaked 4000 cursors, you'll get the error. Increasing open_cursors will merely delay when the error occurs; it won't fix the underlying problem.
The underlying problem is that your try/catch block needs a finally block that closes the Statement if the try left it open. For this to work, you'd need to declare st outside of the try:
finally {
if (st != null) {
st.close();
}
}
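For clarity, a minimal sketch of the whole method rewritten along these lines, with st declared outside the try so the finally block can reach it (the method already declares throws SQLException, so the close() call needs no extra handling):
public boolean executeQuery(Connection con, String query) throws SQLException
{
    boolean flag = false;
    Statement st = null; // declared outside the try so finally can see it
    try
    {
        st = con.createStatement();
        st.execute(query);
        flag = true;
    }
    catch (Exception e)
    {
        e.printStackTrace();
        throw new SQLException(" UNABLE TO FETCH INSERT");
    }
    finally
    {
        if (st != null)
        {
            st.close(); // runs whether or not an exception was thrown
        }
    }
    return flag;
}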
As mentioned in another response, you will leak cursors if an exception is thrown during statement execution, because st.close() won't be executed. You can use Java's try-with-resources syntax to be sure that your Statement object is closed:
try (Statement st = con.createStatement())
{
flag=st.execute(query);
flag=true;
}
catch (Exception e)
{
flag=false;
e.printStackTrace();
throw new SQLException(" UNABLE TO FETCH INSERT");
}
return flag;
One of the quickest solutions is to increase the number of cursors each connection can handle by issuing the following command at the SQL prompt:
alter system set open_cursors = 1000
Also, add a finally block in your code and close the connection there, which helps close cursors whenever an exception occurs.
Also, run this query to see where cursors are actually being opened:
select sid, sql_text, count(*) as "OPEN CURSORS", user_name from v$open_cursor group by sid, sql_text, user_name
finally {
    if (connection != null) {
        connection.close();
    }
}
Can somebody tell me what can cause this?
The server doesn't do anything anymore:
server.network.http-listener-1.thread-pool.currentthreadcount-count = 500
server.network.http-listener-1.thread-pool.currentthreadsbusy-count = 500
Lots of this in the log:
[#|2013-05-06T13:06:07.917+0200|WARNING|glassfish3.0.1|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=16;_ThreadName=Thread-1;|Interrupting idle Thread: http-thread-pool-8083-(498)|#]
[#|2013-05-06T13:06:09.917+0200|WARNING|glassfish3.0.1|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=16;_ThreadName=Thread-1;|Interrupting idle Thread: http-thread-pool-8083-(499)|#]
[#|2013-05-06T13:06:10.917+0200|WARNING|glassfish3.0.1|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=16;_ThreadName=Thread-1;|Interrupting idle Thread: http-thread-pool-8083-(500)|#]
Normal behaviour:
server.network.http-listener-1.thread-pool.currentthreadcount-count = 427
server.network.http-listener-1.thread-pool.currentthreadsbusy-count = 8
server.network.http-listener-1.connection-queue.countqueued1minuteaverage-count = 184
server.network.http-listener-1.connection-queue.countqueued5minutesaverage-count = 3014
server.network.http-listener-1.connection-queue.countqueued15minutesaverage-count = 10058
If you run out of connections, it may indicate that your connections are not closed after they've been used. It's hard to give an example without seeing your implementation of opening/closing the database connections, but normally you want to make sure that you close the connection in a finally clause; I'll provide an example below:
try {
    // Some logic that reads/writes to the database
} catch (SQLException e) {
    // Rollback in case something goes wrong
    try {
        System.out.println("Rolling back current db transaction");
        conn.rollback();
    } catch (SQLException e1) {
        System.out.println(e1);
    }
} finally {
    // Close the connection
    conn.close();
    System.out.println("DB connection is closed");
}
This way the database connection will be closed regardless of whether the reading/writing fails. If you fail to close these connections correctly, they will remain open (or time out, depending on your settings) and you will eventually run out of connections.
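As an alternative on Java 7 and later, a try-with-resources block releases the connection without an explicit finally. A minimal sketch, assuming dataSource is your pool's javax.sql.DataSource:
try (Connection conn = dataSource.getConnection()) {
    try {
        // some logic that reads/writes to the database
    } catch (SQLException ex) {
        // rollback in case something goes wrong, then rethrow
        System.out.println("Rolling back current db transaction");
        conn.rollback();
        throw ex;
    }
} catch (SQLException e) {
    System.out.println(e);
}
// the connection is closed (returned to the pool) here, even if an exception was thrown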
I have been learning about using MySQL within Java via Oracle's JDBC, and I am trying to get into the mindset of try/catch and pool cleanup.
I am wondering if the following code is the correct way to clean everything up perfectly, or if you notice holes in my code where I've missed something. For the record, I intend to use InnoDB and its row-locking mechanism, which is why I turn auto-commit off.
try
{
connection = getConnection(); // obtains Connection from a pool
connection.setAutoCommit(false);
// do mysql stuff here
}
catch(SQLException e)
{
if(connection != null)
{
try
{
connection.rollback(); // undo any changes
}
catch (SQLException e1)
{
this.trace(ExtensionLogLevel.ERROR, e1.getMessage());
}
}
}
finally
{
if(connection != null)
{
try
{
if(!connection.isClosed())
{
connection.close(); // free up Connection so others using the connection pool can make use of this object
}
}
catch (SQLException e)
{
this.trace(ExtensionLogLevel.ERROR, e.getMessage());
}
}
}
getConnection() returns a Connection object from a pool, and connection.close() closes it, releasing it back to the pool (so I've been told; I'm still new to this, so apologies if I am talking rubbish). Any help on any of this would be greatly appreciated!
Thank you!
I recommend not setting autocommit back to true in the finally block. Your other threads that rely on autocommit being true should not assume that connections in the pool are in that state; instead, they should set autocommit to true before using a connection (just as this thread sets it to false).
In addition, you should check the connection's isClosed property before calling close() on it.
Other than that, I don't see any problems.
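For example, a minimal sketch of what a thread that relies on auto-commit might do before using a pooled connection; getConnection() is assumed to be the same pool helper used in the question:
void doAutoCommittedWork() throws SQLException
{
    Connection connection = getConnection(); // obtains a Connection from the pool
    try
    {
        connection.setAutoCommit(true); // don't assume the state the pool hands back
        // do work that relies on auto-commit here
    }
    finally
    {
        if (!connection.isClosed())
        {
            connection.close(); // release the connection back to the pool
        }
    }
}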
I am (successfully) connecting to a database using the following:
java.sql.Connection connect = DriverManager.getConnection(
"jdbc:mysql://localhost/some_database?user=some_user&password=some_password");
What should I be checking to see if the connection is still open and up after some time?
I was hoping for something like connect.isConnected(); available for me to use.
Your best bet is to just perform a simple query against one table, e.g.:
select 1 from SOME_TABLE;
Oh, I just saw there is a new method available since Java 1.6:
java.sql.Connection.isValid(int timeoutSeconds):
Returns true if the connection has not been closed and is still valid.
The driver shall submit a query on the connection or use some other
mechanism that positively verifies the connection is still valid when
this method is called. The query submitted by the driver to validate
the connection shall be executed in the context of the current
transaction.
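For example, a short sketch of how isValid could be used with the connect variable from the question; the 2-second timeout is an arbitrary choice:
// check the existing connection and reopen it if it is no longer valid
if (!connect.isValid(2)) // waits at most 2 seconds for the check
{
    connect = DriverManager.getConnection(
        "jdbc:mysql://localhost/some_database?user=some_user&password=some_password");
}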
Nothing. Just execute your query. If the connection has died, either your JDBC driver will reconnect (if it supports reconnection and you have enabled it in your connection string; most don't support it) or you'll get an exception.
If you check that the connection is up, it might still fall over before you actually execute your query, so you gain absolutely nothing by checking.
That said, a lot of connection pools validate a connection by doing something like SELECT 1 before handing connections out. But this is nothing more than just executing a query, so you might just as well execute your business query.
Use the Connection.isClosed() method.
The JavaDoc states:
Retrieves whether this Connection object has been closed. A
connection is closed if the method close has been called on it or if
certain fatal errors have occurred. This method is guaranteed to
return true only when it is called after the method Connection.close
has been called.
You can also use:
public boolean isDbConnected(Connection con) {
try {
return con != null && !con.isClosed();
} catch (SQLException ignored) {}
return false;
}
If you are using MySQL
public static boolean isDbConnected() {
    final String CHECK_SQL_QUERY = "SELECT 1";
    boolean isConnected = false;
    // db is the existing java.sql.Connection field; try-with-resources closes the statement
    try (PreparedStatement statement = db.prepareStatement(CHECK_SQL_QUERY)) {
        statement.executeQuery();
        isConnected = true;
    } catch (SQLException | NullPointerException e) {
        // handle SQL error here!
    }
    return isConnected;
}
I have not tested with other databases. Hope this is helpful.
The low-cost method, regardless of the vendor implementation, would be to select something from the process memory or the server memory, like the DB version or the name of the current database. isClosed() is very poorly implemented.
Example:
java.sql.Connection conn = <connect procedure>;
conn.close();
try {
conn.getMetaData();
} catch (Exception e) {
System.out.println("Connection is closed");
}
Here is a simple solution if you are using JDBC to get the default connection
private Connection getDefaultConnection() throws SQLException, ApiException {
Connection connection = null;
try {
connection = dataSource.getConnection ();
}catch (SQLServerException sqlException) {
// DB_UNAVAILABLE EXCEPTION
}
return connection;
}