setMaxConns and setMaxConnsPerHost in Astyanax client - java

I am using the Astyanax client to read data from a Cassandra database. I have a single cluster with four nodes and a replication factor of 2. I am trying to understand the difference between the
setMaxConns and setMaxConnsPerHost
methods in the Astyanax client. I cannot find proper documentation on this.
I have multithreaded code which spawns multiple threads, creates the connection to the Cassandra database only once (as it is a singleton), and then keeps reusing it for subsequent requests.
Now I am trying to understand how the above two methods play a role in read performance, and how those values should be set.
And if I set those two methods as
setMaxConns(-1) and setMaxConnsPerHost(20)
then what does that mean? Any explanation will be of great help.
Updated code:
Below is the code I am using to make the connection:
private CassandraAstyanaxConnection() {
    context = new AstyanaxContext.Builder()
        .forCluster(ModelConstants.CLUSTER)
        .forKeyspace(ModelConstants.KEYSPACE)
        .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
            .setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE)
        )
        .withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("MyConnectionPool")
            .setPort(9160)
            .setMaxConnsPerHost(20)
            .setMaxConns(-1)
            .setSeeds("host1:9160,host2:9160,host3:9160,host4:9160")
        )
        .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
            .setCqlVersion("3.0.0")
            .setTargetCassandraVersion("1.2"))
        .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
        .buildKeyspace(ThriftFamilyFactory.getInstance());

    context.start();
    keyspace = context.getEntity();
    emp_cf = ColumnFamily.newColumnFamily(
        ModelConstants.COLUMN_FAMILY,
        StringSerializer.get(),
        StringSerializer.get());
}
When I debug this code, it never even hits the BagOfConnectionsConnectionPoolImpl class. I put a lot of breakpoints in that class to see how it uses the connections and other default parameters, but I don't know why it is never reached.

The behavior of these configuration properties depends on the connection pool implementation.
BagOfConnectionsConnectionPoolImpl
BagOfConnectionsConnectionPoolImpl is the only implementation at the moment that honors both these properties. It behaves as follows:
A connection is borrowed from the pool for every cassandra operation (query or mutation) and returned to the pool upon completion of the operation.
maxConnsPerHost - maximum number of connections per single cassandra host.
maxConns - maximum number of connections in the pool.
Both these numbers must be positive, so setMaxConns(-1) just won't work.
On an attempt to borrow a connection from the pool, the pool checks the active connection count against maxConns. If the limit is exceeded, it waits until some connection is released. If no connection becomes available within the specified timeout, the pool throws PoolTimeoutException.
If the maxConns limit is not exceeded, the pool attempts to find a cassandra host it's aware of (specified as a seed or found during discovery) whose number of active connections is below maxConnsPerHost, and connects to it. If all hosts have reached the connection limit, the pool throws NoAvailableHostsException.
For example, let's take a client that connects to a cluster of 4 nodes:
setMaxConns(100); setMaxConnsPerHost(10): The effective maximum number of connections is 40 (10 connections per node; no further connection attempts will be made). Once every host is at its limit, NoAvailableHostsException will be thrown.
setMaxConns(20); setMaxConnsPerHost(10): The effective maximum number of connections is 20. The connections will be distributed across hosts uniformly, but not necessarily equally. Once the pool-wide limit is reached, PoolTimeoutException will be thrown.
Things get more complicated if nodes join or leave the cluster, but the general idea is the same.
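Note that Astyanax only uses BagOfConnectionsConnectionPoolImpl if you select it explicitly; the pool type defaults to round-robin, which is presumably why your breakpoints in that class are never hit. A minimal sketch of the extra configuration line, slotted into the posted builder (everything else unchanged):
.withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
    .setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE)
    // BAG selects BagOfConnectionsConnectionPoolImpl; the default is ROUND_ROBIN
    .setConnectionPoolType(ConnectionPoolType.BAG)
)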
TokenAwareConnectionPoolImpl & RoundRobinConnectionPoolImpl
Both TokenAwareConnectionPoolImpl and RoundRobinConnectionPoolImpl ignore the maxConns configuration property. They just select a host (based on the row token, or randomly) and attempt to connect to it.
If the number of active connections to that host exceeds maxConnsPerHost, the pool waits until some connection is released. If no connection becomes available within the specified timeout, another connection attempt to (potentially) another host is made as part of failover.

Related

setting connectionTimedOut to 1 sec is throwing Socket Timed Out error

I am working on a web application which runs in a PCF environment and has approximately 100 users. I am using the HikariCP library to manage database connections, and I customized the connectionTimeout property by setting it to 1 sec in the application code. The connection pool size is set to 100.
In one scenario, making a call to a stored procedure, I am explicitly creating a
Connection conn = DriverManager.getConnection(...)
object, as ArrayDescriptor() expects a connection object.
I am using ArrayDescriptor because the stored procedure requires an array of objects as input.
However, this code randomly throws a Socket Read Timed Out error.
The same code was working fine when configured with a DBCP-managed connection pool.
Can anyone help? What's the problem with the HikariCP library?
As per compliance rules I can't post code on public domains.
connectionTimeout
This property controls the maximum number of milliseconds that a
client (that's you) will wait for a connection from the pool. If this
time is exceeded without a connection becoming available, a
SQLException will be thrown. Lowest acceptable connection timeout is
250 ms. Default: 30000 (30 seconds)
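For reference, a minimal HikariCP sketch with this property (the JDBC URL and pool size are placeholders, not taken from the question):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/SVC"); // placeholder URL
config.setMaximumPoolSize(100);
// 1000 ms is barely above the 250 ms minimum; any wait for a pooled
// connection longer than this fails with a SQLException.
config.setConnectionTimeout(1000);
HikariDataSource ds = new HikariDataSource(config);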

Is connectionPoolSetting for single mongos instance or the whole cluster?

I am a server-side developer, working on a project which uses a mongo cluster as its persistent database.
I have a question for https://mongodb.github.io/mongo-java-driver/3.8/javadoc/com/mongodb/connection/ConnectionPoolSettings.html
It says the settings apply to a MongoDB server.
But what if I have a connection string like the following one:
mongodb://user:pwd@mongos1:port,mongos2:port,mongos3:port,mongos4:port,mongos5:port,mongos6:port/admin?readPreference=secondaryPreferred
That is, a MongoDB sharded cluster which has 6 mongos instances.
Question:
Is the connectionPoolSetting related to one mongos server, or to all mongos servers?
E.g. if we have maxSize = 10 in this setting, does it mean a single client has a max connection pool of 10 per mongos server (so a max pool of 60 for my 6-mongos cluster)? Or a max connection pool of 10 for the whole cluster, no matter how many mongos servers we have?
max connection pool = 10 means that in the client pool there will be at most 10 connections, no matter how many servers are part of your cluster.
Mongo Client
com.mongodb.client.MongoClient interface:
A client-side representation of a MongoDB cluster. Instances can represent either a standalone MongoDB instance, a replica set, or a sharded cluster. Instances of this class are responsible for maintaining an up-to-date state of the cluster, and possibly cache resources related to this, including background threads for monitoring, and connection pools.
The MongoClient object is used to get access to the database, using the getDatabase() method, and to work with the collections and documents in it.
From the documentation:
The MongoClient instance represents a pool of connections to the
database; you will only need one instance of class MongoClient even
with multiple threads.
IMPORTANT
Typically you only create one MongoClient instance for a
given MongoDB deployment (e.g. standalone, replica set, or a sharded
cluster) and use it across your application. However, if you do create
multiple instances:
All resource usage limits (e.g. max connections, etc.) apply per
MongoClient instance.
To dispose of an instance, call MongoClient.close() to clean up resources.
The following code creates a MongoDB client connection object with connection pooling to connect to a MongoDB instance.
MongoClient mongoClient = MongoClients.create();
MongoDatabase database = mongoClient.getDatabase("test");
The MongoClients.create() static method creates a connection object using the default host (localhost) and port (27017). You can explicitly specify other settings with MongoClientSettings, which controls the behavior of a MongoClient.
MongoClient mongoClient = MongoClients.create(MongoClientSettings settings)
Connection Pool Settings:
The ConnectionPoolSettings object specifies all settings that relate to the pool of connections to a MongoDB server. The application creates this connection pool when the client object is created; how the pool is created is driver specific.
ConnectionPoolSettings.Builder, a builder for ConnectionPoolSettings, has methods to specify the connection pool properties, e.g. maxSize(int maxSize): the maximum number of connections allowed (default 100). Other methods include minSize, maxConnectionIdleTime, etc.
Code to instantiate a MongoClient with connection pool settings:
MongoClientSettings settings = MongoClientSettings.builder()
    .applyToConnectionPoolSettings(builder ->
        builder.maxSize(20).minSize(10))
    .build();
MongoClient mongoClient = MongoClients.create(settings);
// ...
// Verify the connection pool settings max size:
settings.getConnectionPoolSettings().getMaxSize();
Question: Is the connectionPoolSetting related to one mongos server?
or related to all mongos servers?
A client or application connects to the sharded cluster (including all its shards) via a mongos router. The client program specifies the URL connection string and other options for the connection. In a sharded cluster, a client may connect through a set of mongos routers or a single mongos, or multiple clients can connect through a single mongos; it depends upon your application architecture.
If you are connecting via a single mongos, you specify that mongos's host, port, user/password, etc., in the connection string. If there are multiple mongos routers, then multiple host/port values. Irrespective of the number of mongos routers listed, the client program connects to the cluster via only one mongos.
The connection pool setting is for one mongos router only, as an application connects to one mongos irrespective of the number of mongos routers specified in the connection string.
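As an illustration, the pool bounds can also be passed in the connection string itself (hosts, ports, and credentials below are placeholders; maxPoolSize corresponds to ConnectionPoolSettings.maxSize):
MongoClient client = MongoClients.create(
    "mongodb://user:pwd@mongos1:27017,mongos2:27017/admin"
        + "?readPreference=secondaryPreferred&maxPoolSize=10&minPoolSize=5");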

JDBC Connection Pool test query "SELECT 1" does not catch AWS RDS Writer/Reader failover

We are running an AWS RDS Aurora/MySQL database in a cluster with a writer and a reader instance where the writer is replicated to the reader.
The application accessing the database is a standard java application using a HikariCP Connection Pool. The pool is configured to use a "SELECT 1" test query on checkout.
What we noticed is that once in a while RDS fails over the writer to the reader. The failover can also be replicated manually by clicking "Instance Actions/Failover" in the AWS console.
The connection pool is not able to detect the failover and the fact that it is now connected to a reader database, as the "SELECT 1" test queries still succeed. However any subsequent database updates fail with "java.sql.SQLException: The MySQL server is running with the --read-only option so it cannot execute this statement" errors.
It appears that instead of a "SELECT 1" test query, the Connection Pool can detect that it is now connected to the reader by using a "SELECT count(1) FROM test_table WHERE 1 = 2 FOR UPDATE" test query.
Has anybody experienced the same issue?
Are there any downsides on using "FOR UPDATE" in the test query?
Are there any alternate or better approaches of handling an AWS RDS cluster writer/reader failover?
Your help is much appreciated
Bernie
I've been giving this a lot of thought in the two months since my original reply...
How Aurora endpoints work
When you start up an Aurora cluster you get multiple hostnames to access the cluster. For the purposes of this answer, the only two that we care about are the "cluster endpoint," which is read-write, and the "read-only endpoint," which is (you guessed it) read-only. You also have an endpoint for each node within the cluster, but accessing nodes directly defeats the purpose of using Aurora, so I won't mention them again.
For example, if I create a cluster named "example", I'll get the following endpoints:
Cluster endpoint: example.cluster-x91qlr44xxxz.us-east-1.rds.amazonaws.com
Read-only endpoint: example.cluster-ro-x91qlr44xxxz.us-east-1.rds.amazonaws.com
You might think that these endpoints would refer to something like an Elastic Load Balancer, which would be smart enough to redirect traffic on failover, but you'd be wrong. In fact, they're simply DNS CNAME entries with a really short time-to-live:
dig example.cluster-x91qlr44xxxz.us-east-1.rds.amazonaws.com
; <<>> DiG 9.11.3-1ubuntu1.3-Ubuntu <<>> example.cluster-x91qlr44xxxz.us-east-1.rds.amazonaws.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40120
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;example.cluster-x91qlr44xxxz.us-east-1.rds.amazonaws.com. IN A
;; ANSWER SECTION:
example.cluster-x91qlr44xxxz.us-east-1.rds.amazonaws.com. 5 IN CNAME example.x91qlr44xxxz.us-east-1.rds.amazonaws.com.
example.x91qlr44xxxz.us-east-1.rds.amazonaws.com. 4 IN CNAME ec2-18-209-198-76.compute-1.amazonaws.com.
ec2-18-209-198-76.compute-1.amazonaws.com. 7199 IN A 18.209.198.76
;; Query time: 54 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Fri Dec 14 18:12:08 EST 2018
;; MSG SIZE rcvd: 178
When a failover happens, the CNAMEs are updated (from example to example-us-east-1a):
; <<>> DiG 9.11.3-1ubuntu1.3-Ubuntu <<>> example.cluster-x91qlr44xxxz.us-east-1.rds.amazonaws.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27191
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;example.cluster-x91qlr44xxxz.us-east-1.rds.amazonaws.com. IN A
;; ANSWER SECTION:
example.cluster-x91qlr44xxxz.us-east-1.rds.amazonaws.com. 5 IN CNAME example-us-east-1a.x91qlr44xxxz.us-east-1.rds.amazonaws.com.
example-us-east-1a.x91qlr44xxxz.us-east-1.rds.amazonaws.com. 4 IN CNAME ec2-3-81-195-23.compute-1.amazonaws.com.
ec2-3-81-195-23.compute-1.amazonaws.com. 7199 IN A 3.81.195.23
;; Query time: 158 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Fri Dec 14 18:15:33 EST 2018
;; MSG SIZE rcvd: 187
The other thing that happens during a failover is that all of the connections to the "cluster" endpoint get closed, which will fail any in-process transactions (assuming that you've set reasonable query timeouts).
The connections to the "read-only" endpoint don't get closed, which means that whatever node gets promoted will get read-write traffic in addition to read-only traffic (assuming, of course, that your application doesn't just send all requests to the cluster endpoint). Since read-only connections are typically used for relatively expensive queries (eg, reporting), this may cause performance problems for your read-write operations.
The Problem: DNS Caching
When failover happens, all in-process transactions will fail (again, assuming that you've set query timeouts). There will be a short period during which any new connections will also fail, as the connection pool attempts to connect to the same host before it's done with recovery. In my experience, failover takes around 15 seconds, during which time your application shouldn't expect to get a connection.
After that 15 seconds (or so), everything should return to normal: your connection pool attempts to connect to the cluster endpoint, it resolves to the IP address of the new read-write node, and all is well. But if anything prevents resolving that chain of CNAMEs, you may find that your connection pool makes connections to a read-only endpoint, which will fail as soon as you try an update operation.
In the case of the OP, he had his own CNAME with a longer time-to-live. So rather than connect to the cluster endpoint directly, he would connect to something like database.example.com. This is a useful technique in a world where you would manually fail over to a replica database; I suspect it's less useful with Aurora. Regardless, if you use your own CNAMEs to refer to database endpoints, you need them to have short time-to-live values (certainly no more than 5 seconds).
In my original answer, I also pointed out that Java caches DNS lookups, in some cases forever. The behavior of this cache depends on (I believe) the version of Java, and also whether you're running with a security manager installed. With OpenJDK 8 running as an application, it appears that the JVM will delegate all naming lookups and not cache anything itself. However, you should be familiar with the networkaddress.cache.ttl system property, as described in this Oracle doc and this SO question.
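If you want to pin that cache down rather than rely on the defaults, one option is to set the security property programmatically before the first lookup; a sketch (the 5-second value is just an example):
import java.security.Security;

// Caches successful DNS lookups for at most 5 seconds. Must run before
// the first InetAddress lookup to take effect.
Security.setProperty("networkaddress.cache.ttl", "5");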
However, even after you've eliminated any unexpected caches, there may still be times where the cluster endpoint is resolved to a read-only node. That leaves the question of how you handle this situation.
Not-so-good solution: use a read-only test on checkout
The OP was hoping to use a database connection test to verify that his application was running on a read-only node. This is surprisingly hard to do: most connection pools (including HikariCP, which is what the OP is using) simply verify that the test query executes successfully; there's no ability to look at what it returns. This means that any test query has to throw an exception to fail.
I haven't been able to come up with a way to make MySQL throw an exception with just a stand-alone query. The best I've come up with is to create a function:
DELIMITER EOF
CREATE FUNCTION throwIfReadOnly() RETURNS INTEGER
BEGIN
    IF @@innodb_read_only THEN
        SIGNAL SQLSTATE 'ERR0R' SET MESSAGE_TEXT = 'database is read_only';
    END IF;
    RETURN 0;
END;
EOF
DELIMITER ;
Then you call that function in your test query:
select throwIfReadOnly()
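With HikariCP, that would presumably be wired up as the pool's test query, config being the HikariConfig used to build the pool:
config.setConnectionTestQuery("select throwIfReadOnly()");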
This works, mostly. When running my test program I could see a series of "failed to validate connection" messages, but then, inexplicably, the update query would run with a read-only connection. Hikari doesn't have a debug message to indicate which connection it hands out, so I couldn't identify whether it had allegedly passed validation.
But aside from that possible problem, there's a deeper issue with this implementation: it hides the fact that there's a problem. A user makes a request, and maybe waits for 30 seconds to get a response. There's nothing in the log (unless you enable Hikari's debug logging) to give a reason for this delay.
Moreover, while the database is inaccessible Hikari is furiously trying to make connections: in my single-threaded test, it would attempt a new connection every 100 milliseconds. And these are real connections, they simply go to the wrong host. Throw in an app-server with a few dozen or hundred threads, and that could cause a significant ripple effect on the database.
Better solution: use a read-only test on checkout, via a wrapper Datasource
Rather than let Hikari silently retry connections, you could wrap the HikariDataSource in your own DataSource implementation and test/retry yourself. This has the benefit that you can actually look at the results of the test query, which means that you can use a self-contained query rather than calling a separately-installed function. It also lets you log the problem using your preferred log levels, lets you pause between attempts, and gives you a chance to change pool configuration.
private static class WrappedDataSource implements DataSource {

    private HikariDataSource delegate;

    public WrappedDataSource(HikariDataSource delegate) {
        this.delegate = delegate;
    }

    @Override
    public Connection getConnection() throws SQLException {
        while (true) {
            Connection cxt = delegate.getConnection();
            try (Statement stmt = cxt.createStatement()) {
                try (ResultSet rslt = stmt.executeQuery("select @@innodb_read_only")) {
                    if (rslt.next() && ! rslt.getBoolean(1)) {
                        return cxt;
                    }
                }
            }
            // evict connection so that we won't get it again
            // should also log here
            delegate.evictConnection(cxt);
            try {
                Thread.sleep(1000);
            }
            catch (InterruptedException ignored) {
                // if we're interrupted we just retry
            }
        }
    }

    // all other methods can just delegate to HikariDataSource
}
This solution still suffers from the problem that it introduces a delay into user requests. True, you know that it's happening (which you didn't with the on-checkout test), and you could introduce a timeout (limit the number of times through the loop). But it still represents a bad user experience.
The best (imo) solution: switch into "maintenance mode"
Users are incredibly impatient: if it takes more than a few seconds to get a response back, they'll probably try to reload the page, or submit the form again, or do something that doesn't help and may hurt.
So I think the best solution is to fail quickly and let them know that something's wrong. Somewhere near the top of the call stack you should already have some code that responds to exceptions. Maybe you just return a generic 500 page now, but you can do a little better: look at the exception, and return a "sorry, temporarily unavailable, try again in a few minutes" page if it's a read-only database exception.
At the same time, you should send a notification to your ops staff: this may be a normal maintenance window failover, or it may be something more serious (but don't wake them up unless you have some way of knowing that it's more serious).
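A sketch of what that top-of-stack check could look like; the helper, its placement, and the use of error code 1290 (MySQL's ER_OPTION_PREVENTS_STATEMENT, raised for --read-only violations) are my assumptions, not something from the question:
// Hypothetical helper for a top-level exception handler (servlet filter,
// Spring @ExceptionHandler, or similar).
private static boolean isReadOnlyFailure(Throwable t) {
    while (t != null) {
        if (t instanceof SQLException
                && ((SQLException) t).getErrorCode() == 1290) { // ER_OPTION_PREVENTS_STATEMENT
            // Render the "temporarily unavailable" page and notify ops.
            return true;
        }
        t = t.getCause();
    }
    return false;
}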
Set the connection pool's idle connection timeout in your Java data source configuration; set it to around 1000 ms.
Aurora failover
As Sayantan Mandal hints in his comments: when using Aurora, just use the MariaDB driver; it has support for failover.
It is documented here:
https://aws.amazon.com/blogs/database/using-the-mariadb-jdbc-driver-with-amazon-aurora-with-mysql-compatibility/
And here:
https://mariadb.com/kb/en/failover-and-high-availability-with-mariadb-connector-j/#aurora-endpoints-and-discovery
Your connection string will start with jdbc:mariadb:aurora// or jdbc:mysql:aurora//.
The connection pool normally calls JDBC4Connection#isValid which should correctly return false with this driver when on a read only replica.
No custom code required.
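A sketch of what that looks like with a HikariCP pool (the endpoint reuses the example cluster endpoint from above; the database name and credentials are placeholders):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setDriverClassName("org.mariadb.jdbc.Driver");
// The aurora keyword enables the driver's endpoint discovery and failover.
config.setJdbcUrl("jdbc:mariadb:aurora//example.cluster-x91qlr44xxxz.us-east-1.rds.amazonaws.com/mydb");
config.setUsername("user");
config.setPassword("pwd");
HikariDataSource ds = new HikariDataSource(config);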
DNS Caching
As for DNS caching (networkaddress.cache.ttl): depending on your JVM, the default is 30 or 60 seconds, according to whether a security manager is present.
You can retrieve the value at runtime with this snippet if unsure:
Class.forName("sun.net.InetAddressCachePolicy").getMethod("get").invoke(null)
With a 30 s DNS cache, your connections will start to arrive at the read-write replica at most 30 s after the failover happens.

Timeout implementation of JPA transactions and Session invalidation

I have been maintaining an application which uses Wicket + JPA + Spring. Recently we got many 5XX errors in the logs (greater than the threshold). During that time, there were some general problems due to unstable response times of the mainframe DB2 which is the backend for our application.
But even after the mainframe was OK again, the application servers did not return to normal.
There are a lot of hanging transactions (from my application).
There are many threads in the server that may be hung.
As users keep logging in or accessing links in the application during that time, the situation becomes worse.
When I look at the WebSphere logs I find the following exceptions:
00000035 ThreadMonitor W WSVR0605W: Thread "WebContainer : 88" (000005ac)
has been active for 637111 milliseconds and may be hung.
There is/are 43 thread(s) in total in the server that may be hung.
In the application logs I found the following exceptions:
-->CouldNotLockPageException: Could not lock page 4. Attempt lasted 3 minutes
-->DefaultExceptionMapper - Connection lost, give up responding.
org.apache.wicket.protocol.http.servlet.ResponseIOException:
com.ibm.wsspi.webcontainer.ClosedConnectionException: OutputStream encountered error during
write.
--> JDBCExceptionReporter - [jcc][t4][2030][11211][3.67.27] A communication error occurred
during operations on the connection's underlying socket, socket input stream,
or socket output stream.
Error location: Reply.fill() - socketInputStream.read (-1). Message:
Connection reset. ERRORCODE=-4499, SQLSTATE=08001DSRA0010E: SQL State = 08001, Error Code = - 4.499
Now we are working on solutions to this problem. The following are the two solutions we are considering as of now.
1. I have gone through many forums and found that whenever we get CouldNotLockPageException it would be better to invalidate the session and force the user back to the login page. Currently we do not have a session invalidation (logout) mechanism, so we will implement one.
2. We need to implement transaction timeouts so that we can stop hanging transactions.
I need a solution to this problem from the Java or server side. We are using the Wicket, JPA, and Spring frameworks. I have a few queries:
1. How can we implement transaction timeouts in the above frameworks?
2. Will invalidating the session stop hanging transactions or threads that may be hung?
Since you are already using Spring, it's as simple as:
@Transactional(timeout = 300)
The @Transactional annotation allows you to supply a timeout value (in seconds), and the transaction manager will forward it to the JTA transaction manager or your data source connection pool. It works nicely with the Bitronix Transaction Manager, which picks it up automatically.
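A minimal sketch of where the annotation goes (the service class and method are hypothetical names):
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AccountService {

    // The transaction manager aborts (and rolls back) the transaction
    // if it runs longer than 300 seconds.
    @Transactional(timeout = 300)
    public void transferFunds(long fromId, long toId, long amount) {
        // JPA operations here all run inside the timed transaction.
    }
}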
You also need to make sure that java.sql.Connection objects are always closed, and transactions are always committed (when all operations succeed) or rolled back on failure.
Invalidating the user HTTP session has nothing to do with JDBC connections. Your JDBC connection should always be committed/rolled back and closed (which, in the case of connection pooling, releases the connection back to the pool).
And make sure the max pool size is not greater than your DB's max concurrent connections setting.

Java JDBC connections and Oracle

I have a scenario, and the question follows.
The application server has two connection pools to the DB: A and B.
A points to -> DatabaseA -> has 128 connections
A has stored procedures which access tables residing in DatabaseB over a DB link
B points to -> DatabaseB -> has 36 connections
Now let's say that the Java code calls a stored proc in DatabaseA using connection pool A. This stored proc gets data over the DB link from DatabaseB.
Question:
Based on this scenario, if we get connection closed errors on the front end, is it viable to say that even though Java is calling the SP (in DatabaseA) from pool A (128 connections), the SP is bringing data from DatabaseB, which has fewer connections (36)?
Basically I want to know: when the data is brought over the DB link like this, does it take away from the 36 connections assigned to pool B pointing to DatabaseB?
Exact Exception
Exact exception I get is: --- Cause: java.sql.SQLException: Closed Connection
Some stack trace:
Caused by: java.sql.SQLException: Closed Connection
    at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryWithCallback(GeneralStatement.java:185)
    at com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryForList(GeneralStatement.java:123)
    at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:614)
    at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:588)
    at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForList(SqlMapSessionImpl.java:118)
    at org.springframework.orm.ibatis.SqlMapClientTemplate$3.doInSqlMapClient(SqlMapClientTemplate.java:268)
    at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:193)
    at org.springframework.orm.ibatis.SqlMapClientTemplate.executeWithListResult(SqlMapClientTemplate.java:219)
    at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForList(SqlMapClientTemplate.java:266)
Also, I am using iBatis, so I don't have explicit try..catch..finally blocks.
The stored procedure is running in the database; when it makes the connection to the other database it makes a direct connection and doesn't go through the app server's pool. In fact, it could make a connection to any database that is linked to A, regardless of whether there's a connection pool to that database maintained by the app server.
This exception indicates a resource leak, i.e. the JDBC code is not properly closing connections in the finally block (to ensure that they're closed even in case of an exception), or the connection is being shared among multiple threads. If two threads share the same connection from the pool and one thread closes it, then this exception will occur when the other thread uses the connection.
The JDBC code should be written so that connections (and statements and resultsets) are acquired and closed (in reversed order) in the very same method block. E.g.
Connection connection = null;
// ...
try {
    connection = database.getConnection();
    // ...
} finally {
    // ...
    if (connection != null) try { connection.close(); } catch (SQLException logOrIgnore) {}
}
Another possible cause is that the pool is holding connections idle for too long and not testing/verifying them before handing them out. This is configurable in a decent connection pool; consult its documentation.
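For example, with Commons DBCP (shown only as an illustration; the question doesn't say which pool is in use, and the property names below are DBCP's):
import org.apache.commons.dbcp.BasicDataSource;

BasicDataSource ds = new BasicDataSource();
ds.setValidationQuery("SELECT 1 FROM DUAL"); // Oracle-flavored validation query
ds.setTestOnBorrow(true);                    // verify a connection before handing it out
ds.setTestWhileIdle(true);                   // also verify connections sitting idle
ds.setTimeBetweenEvictionRunsMillis(60000);  // how often the idle checker runs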
"Basically I want to know when the data is brought over the DB link like this...does it take away from 36 connections assigned to pool B pointint to DatabaseB?"
No. The database server will make a distinct connection to the other database server irrespective of any connection pool.
I have to suffer a firewall that cuts off connections after a period of inactivity, so I see this error quite a lot. Look into dbms_session.close_database_link, since the database link connection would generally remain for the duration of the session (and since you have a connection pool, that session probably sits around for a very long time).
