I am using commons-dbcp2 to create a connection pool to the database. When the database is down, the dataSource.getConnection() method takes 20 seconds before throwing an exception. I want to configure the DataSource to use a shorter timeout, e.g. 5 seconds.
I tried dataSource.setLoginTimeout(), but it is not supported by BasicDataSource:
BasicDataSource dataSource = new BasicDataSource();
dataSource.setDriverClassName(driverName);
dataSource.setUrl(url);
dataSource.setUsername(username);
dataSource.setPassword(password);
dataSource.setInitialSize(3);
dataSource.setMaxTotal(100);
dataSource.setValidationQuery(validationquery);
dataSource.setTestOnBorrow(true);
dataSource.setRemoveAbandonedOnBorrow(true);
try (Connection connection = dataSource.getConnection()) {
    // use the connection
} catch (Exception e) {
    // with the database down, this is reached only after ~20 seconds
}
I want it to throw the exception after 5 seconds (as configured).
You can try the validationQueryTimeout parameter, which times out the validation query after the given number of seconds:
dataSource.setValidationQueryTimeout(5);
dataSource.setTestOnBorrow(true);
You don't have to set a validation SQL query; modern JDBC drivers implement Connection.isValid().
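For example, a minimal sketch under that assumption (a JDBC 4.0+ driver; to my understanding DBCP2 falls back to Connection.isValid() when no validation query is configured):

dataSource.setTestOnBorrow(true);
// With no validationQuery set, DBCP2 validates borrowed connections via
// the driver's Connection.isValid(), bounded by this timeout (seconds).
dataSource.setValidationQueryTimeout(5);
// note: setValidationQuery(...) is deliberately not called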
Unfortunately, the DBCP pool has issues with the operating system's TCP timeout limit, as described in "Bad Behavior: Handling Database Down". When the test was done in 2017:
Dbcp2 did not return a connection, and also did not timeout. The execution of the validation query is stuck due to unacknowledged TCP traffic. Subsequently, the SQL Statement run on the (bad) connection by the test harness hangs (due to unacknowledged TCP). setMaxWait(5000) is seemingly useless for handling network outages. There are no other meaningful timeout settings that apply to a network failure.
You can set dataSource.setValidationQueryTimeout(), but keep in mind that this applies to query execution. If you have network issues, you might still be stuck. For those, you also need to set a socket timeout (setSoTimeout() at the socket level); its default value is 0, which means infinite.
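As a hedged sketch, driver-level timeouts can usually be passed through the pool as connection properties; the property names below ("connectTimeout" and "socketTimeout", both in milliseconds) are the MySQL Connector/J names and are assumptions here, since other drivers use different names (e.g. Oracle's oracle.net.READ_TIMEOUT):

// Driver-specific properties forwarded by BasicDataSource to the driver;
// names and values are illustrative (MySQL Connector/J conventions).
dataSource.addConnectionProperty("connectTimeout", "5000"); // TCP connect
dataSource.addConnectionProperty("socketTimeout", "5000");  // socket reads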
I have a WebSphere application server with an MSSQL database connection and a connection pool of 10. When my worker thread runs, it executes a process like the following:
Pseudocode:
for (String item : items) {
    Connection connection = null;
    Statement statement = null;
    try {
        connection = getConnection();
        statement = connection.createStatement();
        String sql = "exec someStoredProcedure '" + item + "';";
        statement.execute(sql);
    } finally {
        // real code includes null checks and try/catch for errors
        statement.close();
        connection.close();
    }
}
My problem is that if my loop is larger than 10, it hangs until it hits the timeout and errors with "ConnectionWaitTimeoutException: J2CA1010E: Connection not available; timed out waiting for 180 seconds".
I would think that closing the statements and connections would allow me to reuse the connections from the pool, and the connections do get closed/collected after the thread finishes. I've updated my code to reuse the same connection throughout the for loop, and I can run the process rapid-fire over and over without problems, because each thread seems to clean up after itself. Any idea what's going on or how to resolve it? I'm worried that in the future I'll have a process that needs more than 10 connections over the course of a run.
The behavior that you describe sounds inconsistent with how the connection pool is supposed to work. However, there are a number of important details that impact connection reuse which are not apparent from your writeup.
You are either using shareable or unshareable connections. You can find out which by looking at the resource reference that you used for the DataSource lookup. For example, @Resource with shareable=false, or a deployment descriptor resource reference with <res-sharing-scope>Unshareable</res-sharing-scope>, is unshareable; if you set true for the former or Shareable for the latter, or omit the attribute, then the connection is shareable. If you didn't use a resource reference, connections will typically be shareable, unless you explicitly overrode that with advanced configuration.
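For illustration, a minimal sketch of the annotation form (the JNDI name jdbc/myDS is a hypothetical placeholder, not something from the question):

// Hypothetical resource reference requesting unshareable connections;
// "jdbc/myDS" is an illustrative JNDI name.
@Resource(name = "jdbc/myDS", shareable = false)
private DataSource dataSource;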
If you are using an unshareable connection and the entire code block is not encompassed within a global transaction, then every connection.close will return the connection to the pool, making it immediately reusable. If you are in a global transaction, then each connection will remain unavailable after close, because there is outstanding work on it that still needs to be committed or rolled back.
If you are using a shareable connection and all of the connection requests match (same user/password, or the absence thereof, for all), then you should keep getting the same underlying connection back and should not run out of connections. If, however, the connection requests do not match, then the connections are kept around for the duration of the scope (which could be a transaction, or a request scope such as a servlet boundary), and you will keep getting a new connection with each request until you run out.
If the above isn't enough to figure out the issue, add more detail to your scenario indicating the shareability, how you request each connection, and the placement of transaction boundaries and request boundaries, so that a more detailed answer for your specific scenario can be given.
I am using Hive JDBC 1.0 in my Java application to create a connection to the Hive server and execute queries. I want to set the idle Hive connection timeout from Java code. Say the user first creates the Hive connection; if it remains idle for the next 10 minutes, the connection object should expire. If the user then uses this same connection object to execute a query, Hive JDBC should throw an error. Can you please tell me how to achieve this through Java code?
I know there is a property hive.server2.idle.session.timeout in Hive, but I don't know whether this is the right property to set from Java code or whether there is some other property. I tried setting this property in the JDBC connection string, but it did not work.
try {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
} catch (ClassNotFoundException e) {
    LOG.error(ExceptionUtils.getStackTrace(e));
}

String jdbcurl = "jdbc:hive2://localhost:10000/idw?hive.server2.idle.session.timeout=1000ms";
Connection con = DriverManager.getConnection(jdbcurl, "root", "");
Thread.sleep(3000);
Below, I use the connection object. Hive JDBC should throw an error here, since I used the connection after 3000 ms while the idle timeout was set to 1000 ms, but no error was thrown:
ResultSet rs = con.createStatement().executeQuery("select * from idw.emp");
Need help on this.
The hive.server2.idle.session.timeout property causes a session to be terminated when it is not accessed for the specified duration. However, it needs to be paired with hive.server2.session.check.interval set to a positive value: the server only checks for idle sessions at that interval, so both properties must be set for the session to actually be closed.
More details can be found at https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.server2.session.check.interval
You are looking at the problem the wrong way. These properties are not set in the URL, as described in the Hive AdminManual...
In the Hadoop ecosystem, server defaults are set in XML config files -- in this case /etc/hive/conf/hive-site.xml and/or hiveserver2-site.xml.
Once your session is open, you can set custom values with the set <prop>=<value> statement (somewhat similar to the Oracle ALTER SESSION).
Caveat: some properties are defined as "final" in the config files and cannot be overriden. Check with your Hadoop Admin if in doubt.
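A rough sketch of that session-level approach (whether hive.server2.idle.session.timeout in particular may be overridden per session depends on your server's configuration, so treat this as an assumption):

// Hedged sketch: issuing a set statement on the open session. The property
// may be final on the server, in which case this will be rejected.
try (Connection con = DriverManager.getConnection(
             "jdbc:hive2://localhost:10000/idw", "root", "");
     Statement stmt = con.createStatement()) {
    stmt.execute("set hive.server2.idle.session.timeout=600000ms");
}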
I'm developing a web app on Tomcat 8 with Maven. I'm using c3p0 to handle connections on the main thread and on two other concurrent threads. My connection manager class asks a DataSource singleton class I've implemented for synchronized connections, like so:
public synchronized Connection getConnection() {
try {
return cpds.getConnection();
} catch (SQLException ex) {
logger.error("Error while issuing a pooled connection", ex);
}
return null;
}
But when I try to use these connections, they start to either interrupt:
09:47:17.164 [QuartzScheduler_Worker-4] ERROR com.myapp.providers.DataSource - Error while issuing a pooled connection
java.sql.SQLException: An SQLException was provoked by the following failure: java.lang.InterruptedException
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106) ~[c3p0-0.9.1.2.jar:0.9.1.2]
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:65) ~[c3p0-0.9.1.2.jar:0.9.1.2]
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:62) ~[c3p0-0.9.1.2.jar:0.9.1.2]
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:531) ~[c3p0-0.9.1.2.jar:0.9.1.2]
at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128) ~[c3p0-0.9.1.2.jar:0.9.1.2]
or close mid-transaction, breaking any statements and result sets that are in use at the time.
I'm configuring the DataSource object like so:
cpds = new ComboPooledDataSource();
cpds.setDriverClass("oracle.jdbc.driver.OracleDriver");
cpds.setJdbcUrl("jdbc:oracle:thin:@xx.xxx.xxx.xxx:1521:XE");
cpds.setUser("username");
cpds.setPassword("password");
// database connection properties
cpds.setInitialPoolSize(10);
cpds.setAcquireIncrement(3);
cpds.setMaxPoolSize(100);
cpds.setMinPoolSize(15);
cpds.setMaxStatements(75);
// connection pool preferences
cpds.setIdleConnectionTestPeriod(60);
cpds.setMaxIdleTime(30000);
cpds.setAutoCommitOnClose(false);
cpds.setPreferredTestQuery("SELECT 1 FROM DUAL");
cpds.setTestConnectionOnCheckin(false);
cpds.setTestConnectionOnCheckout(false);
cpds.setAcquireRetryAttempts(30);
cpds.setAcquireRetryDelay(1000);
cpds.setBreakAfterAcquireFailure(false);
I've also written a small test method that runs in a loop and queries the database n times, but that works fine.
c3p0-0.9.1.2 is very, very old; please consider upgrading to 0.9.5.1, the current production version.
The problem is both clear and not so clear. The clear part is that something is calling interrupt() on client Threads that are waiting to acquire Connections. The not-so-clear part is who is doing that and why.
A guess is that Tomcat itself is doing that because the client Threads are hung too long. If the Threads are hanging at getConnection(), that could be due to a Connection leak and pool exhaustion. We see above how you acquire Connections. Are you vigilant about ensuring that they are reliably close()ed in finally blocks?
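For reference, a minimal leak-safe sketch using try-with-resources (the query string is illustrative; cpds is the pool configured above):

// try-with-resources closes the ResultSet, Statement, and Connection in
// reverse order even when the query throws, avoiding pool exhaustion.
try (Connection con = cpds.getConnection();
     Statement st = con.createStatement();
     ResultSet rs = st.executeQuery("SELECT 1 FROM DUAL")) {
    while (rs.next()) {
        // use the row
    }
}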
A thing you might try is to set a checkoutTimeout, e.g.
cpds.setCheckoutTimeout( 5000 ); // 5 secs
This won't actually solve the problem if Connection checkouts are hanging. But rather than a problem provoked by mysterious interrupts, you'll see c3p0 TimeoutExceptions instead. That will verify that the issue is long hangs on checkout, though, which would most likely be due to pool exhaustion, either from a Connection leak (missing calls to close()), or simply from a maxPoolSize value too low for your load.
If there does seem to be a Connection leak, please see unreturnedConnectionTimeout and debugUnreturnedConnectionStackTraces for help tracking it down. See also "Configuring to Debug and Workaround Broken Client Applications"
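A hedged sketch of those two settings (the 30-second value is an arbitrary illustration):

// Reclaim connections not returned within 30 seconds, and log the stack
// trace of the checkout that leaked them; values are illustrative.
cpds.setUnreturnedConnectionTimeout(30);
cpds.setDebugUnreturnedConnectionStackTraces(true);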
I'm connecting my Java application via the JDBC driver and Tomcat configuration. I used this class to define my configuration. But sometimes I get the following exceptions:
com.mysql.jdbc.exceptions.MySQLTimeoutException: Statement cancelled due to timeout or client request
java.sql.SQLException: Query execution was interrupted
java.sql.BatchUpdateException: Statement cancelled due to timeout or client request
But there is not much load on the database, so I think the problem is with my configuration. Here are some of my settings:
maxActive = 100
minIdle = 10
initialSize = 10
maxWait = 10000
maxIdle = 15
This is a heavily loaded system, so I need to observe whether my pool size is sufficient, along with other metrics such as the available connection count at any time. Is there a nice way of monitoring the inside of connection pools?
For the timeout exceptions, consider the following pseudocode:
if (!connection.isValid(5)) {   // 5-second validation timeout
    connection = getNewConnection();
}
ResultSet rs = connection.createStatement().executeQuery(qry);
Basically, check whether your connection is still valid before you execute the query.
If your query was interrupted, there's not much you can do other than roll back the transaction and try again.
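A minimal sketch of that rollback-and-retry idea, reusing the connection and qry names from the pseudocode above (a real implementation would bound the retries):

// Assumes autoCommit is off; rolls back and retries once on failure.
try (Statement st = connection.createStatement()) {
    st.execute(qry);
} catch (SQLException e) {
    connection.rollback();
    try (Statement st = connection.createStatement()) {
        st.execute(qry); // single retry for illustration
    }
}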
Enable JMX and Tomcat will register your data sources with its MBean server. You can then use jconsole to look at the connection pool details.
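If you build the pool programmatically, a hedged sketch of enabling JMX on the Tomcat JDBC pool (assuming org.apache.tomcat.jdbc.pool; the URL and sizes are illustrative):

// With jmxEnabled, the pool registers an MBean exposing metrics such as
// Active, Idle, and WaitCount that jconsole can display.
PoolProperties p = new PoolProperties();
p.setUrl("jdbc:mysql://localhost:3306/mydb"); // illustrative URL
p.setMaxActive(100);
p.setJmxEnabled(true);
org.apache.tomcat.jdbc.pool.DataSource ds =
        new org.apache.tomcat.jdbc.pool.DataSource(p);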
A console application executing under:
1). Multiple threads
2). Connection pooling with DBCP (the number of database connections could range from 5 to 30) against Microsoft Access databases.
When I execute this application at my end (I have not tested the database limit), it works fine. But whenever I deploy the same application on one of the other machines, it generates an error.
I'm wondering why this is happening, as the only difference here is the machine; it works perfectly at my end.
I don't know much about connection pooling, but I have implemented what I understood as follows:
public class TestDatabases implements Runnable {
    public static Map<String, Connection> correctDatabases;

    @Override
    public void run() {
        // validating the databases using DBCP
        datasource.getConnection(); // obtain the java.sql.Connection from the DataSource
        // if validated successfully, put the connection into correctDatabases
    }
}
The above is run on an ExecutorService whose thread count equals the number of databases.
Finally, I'm putting them in a static collection of type Map<String, Connection> and making use of it throughout the application. In other words, I'm collecting the connection string along with the Connection in a Map.
In other parts of my application, I'm simply dealing with multiple threads coming along with the connection URL. So, to perform any database operations I call:
Connection con = TestDatabases.correctDatabases.get(connectUrl);
On that machine, this application works fine for around 5 databases. The error is always generated when I try to fire the query using the above Connection (con) via stmt.executeQuery(query);
As I'm not able to reproduce this issue at my end, it seems something is going wrong with the connection pooling, or I have not configured my application to deal with connection pooling correctly.
Just for your information, I'm correctly closing the Connection in a finally block where my application terminates, and this application uses Quartz Scheduler as well. For connection pooling, the TestDatabases class calls the following setUp method:
public synchronized DataSource setUp() throws Exception {
    Class.forName(RestConnectionValidator.prop.getProperty("driverClass")).newInstance();
    log.debug("Class Loaded.");
    connectionPool = new GenericObjectPool();
    log.debug("Connection pool made.");
    connectionPool.setMaxActive(100);
    ConnectionFactory cf = new DriverManagerConnectionFactory(
            RestConnectionValidator.prop.getProperty("connectionUrl") + get().toString().trim(),
            "", "");
    PoolableConnectionFactory pcf =
            new PoolableConnectionFactory(cf, connectionPool, null, null, false, true);
    return new PoolingDataSource(connectionPool);
}
Following is the error I'm getting (on the other machine):
java.sql.SQLException: [Microsoft][ODBC Microsoft Access Driver] System resource exceeded.
Following is the Database Path:
jdbc:odbc:DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=D:\\DataSources\\PR01.mdb
Each of those databases is fairly small (~5 to 15 MB in total size).
So, I'm left with the following solutions:
1). Correct the connection pooling, or migrate to a newer pool like c3p0, DBPool, or BoneCP.
2). Introduce a batching concept, in which I schedule my application to run on groups of 4 databases at a time. This could be very expensive to deal with, as at any time the other schedule may also collapse.
I'm pretty sure that this is a Java-related error, but I can't fathom why.
I've just migrated to BoneCP, which solved my problem. I guess that, in the multi-threaded environment, DBCP was not providing connections from the pool but instead was trying to hit the database again and again. Maybe I could have solved the DBCP issue, but migrating to BoneCP also provides a performance advantage.
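For completeness, a hedged sketch of a minimal BoneCP setup against the same ODBC URL (the partition sizes are illustrative assumptions):

// Minimal BoneCP configuration; pool sizing is illustrative only.
BoneCPConfig config = new BoneCPConfig();
config.setJdbcUrl("jdbc:odbc:DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=D:\\DataSources\\PR01.mdb");
config.setMinConnectionsPerPartition(2);
config.setMaxConnectionsPerPartition(10);
BoneCP pool = new BoneCP(config);
Connection con = pool.getConnection();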