I'm connecting my Java application to the database through a JDBC driver configured as a Tomcat connection pool. I used this class to define my configuration, but sometimes I get the following exceptions:
com.mysql.jdbc.exceptions.MySQLTimeoutException: Statement cancelled due to timeout or client request
java.sql.SQLException: Query execution was interrupted
java.sql.BatchUpdateException: Statement cancelled due to timeout or client request
But there is not much load on the database, so I think the problem is with my configuration. Here are some of my settings:
maxActive = 100
minIdle = 10
initialSize = 10
maxWait = 10000
maxIdle = 15
These settings are for a heavily loaded system, so I need to observe whether my pool size is enough, along with other things like the number of available connections at any given time. Is there a nice way of monitoring the inside of a connection pool?
For the timeout exceptions, consider the following pseudocode:
// isValid takes a timeout in seconds; re-acquire the connection if it has gone stale
if (!connection.isValid(5)) {
    connection = getNewConnection();
}
Statement statement = connection.createStatement();
ResultSet rs = statement.executeQuery(qry);
Basically check if your connection has timed out before you execute the query.
If your query was interrupted, there's not much you can do other than roll back the transaction and try again.
Enable JMX and Tomcat will register your data sources with its MBean server. You can then use jconsole to look at the connection pool details.
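If you also want to read those numbers programmatically (for example from a health check), you can query the platform MBean server yourself. Below is a minimal sketch; the ObjectName pattern and the numActive/numIdle attribute names are assumptions that depend on the pool implementation, so verify them in jconsole first.
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class PoolMonitor {
    // Prints busy/idle counts for every data source Tomcat registered under the Catalina domain.
    public static void dumpDataSourceMBeans() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        Set<ObjectName> names = server.queryNames(new ObjectName("Catalina:type=DataSource,*"), null);
        for (ObjectName name : names) {
            System.out.println(name);
            System.out.println("  active: " + server.getAttribute(name, "numActive"));
            System.out.println("  idle:   " + server.getAttribute(name, "numIdle"));
        }
    }
}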
Related
I am trying to insert data into a table in SQL Server hosted on AWS RDS.
It was working fine, and then suddenly I started getting an issue. It seems intermittent, but I am unable to see why it is happening:
Fail to read any response from the server, the underlying connection might get lost unexpectedly.
This is how I am creating the database connection:
public static MSSQLPool createMssqlDbPool(Vertx vertx, ConfigModel configModel) {
    MSSQLConnectOptions connectOptions = new MSSQLConnectOptions()
            .setHost(System.getenv().getOrDefault("DB_HOST", configModel.getDbConfig().getHost()))
            .setPort(Integer.parseInt(System.getenv().getOrDefault("DB_PORT", configModel.getDbConfig().getPort())))
            .setDatabase(System.getenv().getOrDefault("DB_NAME", configModel.getDbConfig().getDatabase()))
            .setUser(System.getenv().getOrDefault("DB_USER", configModel.getDbConfig().getUser()))
            .setPassword(System.getenv().getOrDefault("DB_PASSWORD", configModel.getDbConfig().getPassword()));

    // Pool options
    PoolOptions poolOptions = new PoolOptions()
            .setMaxSize(4);

    LOG.info("DB connection : {}", connectOptions.toJson());

    return MSSQLPool.pool(vertx, connectOptions, poolOptions);
}
I have read threads on GitHub about adding a timeout, but they are not definitive.
If you see this error, it is likely that a pooled connection has been idle for too long and was closed by some intermediate proxy.
Change your pool options to close idle connections eagerly:
// Pool options
PoolOptions poolOptions = new PoolOptions()
        .setMaxSize(4)
        .setIdleTimeout(5)
        .setIdleTimeoutUnit(TimeUnit.MINUTES);
5 minutes is just an example. The longer a connection lives the better, so pick the largest value that still stays below the idle timeout of whatever proxy or firewall sits between your application and the database.
I have a WebSphere application server with an MSSQL database connection and a connection pool of 10. When my worker thread runs, I have a process that does the following:
Pseudocode:
for (String item : items) {
    Connection connection = null;
    Statement statement = null;
    try {
        connection = getConnection();
        statement = connection.createStatement();
        String sql = "exec someStoredProcedure '" + item + "';";
        statement.execute(sql);
    } finally {
        // real code includes null checks and try/catch for errors
        if (statement != null) statement.close();
        if (connection != null) connection.close();
    }
}
My problem is that if my loop iterates more than 10 times, it hangs until it hits the timeout and fails with "ConnectionWaitTimeoutException: J2CA1010E: Connection not available; timed out waiting for 180 seconds".
I would think that closing the statements and connections would allow me to reuse the connections from the pool. The connections do get closed/collected after the thread finishes. I've updated my code to reuse the same connection throughout the for loop, and I can run the process rapid-fire over and over without problems, because each thread seems to clean up after itself. Any idea what's going on or how to resolve it? I'm worried that in the future I'll have a process that needs more than 10 threads over the course of a run.
The behavior that you describe sounds inconsistent with how the connection pool is supposed to work. However, there are a number of important details that impact connection reuse which are not apparent from your writeup.
You are either using sharable or unsharable connections. You can find that out by looking at the resource reference that you used for the DataSource lookup. For example, @Resource with shareable=false, or a deployment descriptor resource reference with <res-sharing-scope>Unshareable</res-sharing-scope>, is unsharable, whereas if you set true for the former, or Shareable for the latter, or omit the attribute, then the connection is sharable. If you didn't use a resource reference, connections will typically be sharable unless you explicitly overrode that with advanced configuration.
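For illustration, an unsharable resource reference declared with the annotation could look like the sketch below; the JNDI name jdbc/OrderDS and the OrderDao class are placeholders.
import javax.annotation.Resource;
import javax.sql.DataSource;

public class OrderDao {
    // shareable = false makes every connection obtained from this reference unsharable
    @Resource(name = "jdbc/OrderDS", shareable = false)
    private DataSource dataSource;
}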
If you are using an unsharable connection and the entire code block is not encompassed within a global transaction, then every connection.close will return the connection to the pool to become immediately reusable. If you are in a global transaction, then each connection will remain unavailable after close because there is outstanding work on it that still needs to be committed or rolled back.
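To make the global-transaction case concrete, here is a rough sketch extending the OrderDao class above with the JTA UserTransaction API; inside the transaction, close() alone does not make the pooled connection reusable.
@Resource
private UserTransaction userTransaction;

void runInGlobalTransaction() throws Exception {
    userTransaction.begin();
    try (Connection connection = dataSource.getConnection();
         Statement statement = connection.createStatement()) {
        statement.execute("exec someStoredProcedure 'item1';");
    }                         // connection is closed here but still tied to the transaction
    userTransaction.commit(); // only now is the pooled connection released for reuse
}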
If you are using a sharable connection and all of the connection requests match (same user/password, or the absence thereof, for all), then you should keep getting the same underlying connection back and should not run out of connections. If, however, the connection requests do not match, then the connections are kept around for the duration of the scope (which could be a transaction or a request scope such as a servlet boundary) and you will keep getting a new connection with each request until you run out.
If the above isn't enough to figure out the issue, add more detail to your scenario indicating the sharability, how you request each connection, and the placement of transaction and request boundaries, so that a more detailed answer for your specific scenario can be given.
I am working on a web application which runs in a PCF environment and has approximately 100 users. I am using the HikariCP library to manage database connections, and I customized the connectionTimeout property by setting it to 1 second in the application code. The connection pool size is set to 100.
In one scenario, I make a call to a stored procedure where I explicitly create a connection with
Connection connection = DriverManager.getConnection(url, user, password);
because ArrayDescriptor() expects a Connection object.
I am using ArrayDescriptor because the stored procedure takes an array of objects as input.
However, this code throws a Socket Read Timed Out error randomly.
The same code was working fine when it was configured with a DBCP-managed connection pool.
Can anyone help? What's the problem with the HikariCP library?
As per compliance rules I can't post the code in a public domain.
connectionTimeout
This property controls the maximum number of milliseconds that a client (that's you) will wait for a connection from the pool. If this time is exceeded without a connection becoming available, a SQLException will be thrown. Lowest acceptable connection timeout is 250 ms. Default: 30000 (30 seconds)
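For reference, connectionTimeout is set on the HikariConfig; here is a minimal sketch with a placeholder JDBC URL and credentials, leaving the timeout at the default 30 seconds rather than 1 second.
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/APP"); // placeholder URL
config.setUsername("app_user");                            // placeholder credentials
config.setPassword("secret");
config.setMaximumPoolSize(100);
config.setConnectionTimeout(30_000); // milliseconds to wait for a free pooled connection
HikariDataSource dataSource = new HikariDataSource(config);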
I am using commons-dbcp2 to create a connection pool to the database. When the database is down, the dataSource.getConnection() method takes 20 seconds and then throws an exception. I want to configure the DataSource so that I can change this timeout dynamically, e.g. to 5 seconds.
I tried dataSource.setLoginTimeout(), but it is not supported by BasicDataSource:
BasicDataSource dataSource = new BasicDataSource();
dataSource.setDriverClassName(driverName);
dataSource.setUrl(url);
dataSource.setUsername(username);
dataSource.setPassword(password);
dataSource.setInitialSize(3);
dataSource.setMaxTotal(100);
dataSource.setValidationQuery(validationquery);
dataSource.setTestOnBorrow(true);
dataSource.setRemoveAbandonedOnBorrow(true);
try (Connection connection = dataSource.getConnection()) {
} catch (Exception e) {
}
I want it to throw the exception after 5 seconds, as I configured.
You can try the validationQueryTimeout parameter, which lets you time out the validation query after X seconds:
dataSource.setValidationQueryTimeout(5);
dataSource.setTestOnBorrow(true);
You don't have to set a validation SQL query; modern JDBC drivers implement Connection.isValid().
Unfortunately the DBCP pool has issues, as described in Bad Behavior: Handling Database Down, due to the operating-system TCP timeout limit. When the test was done in 2017:
Dbcp2 did not return a connection, and also did not timeout. The execution of the validation query is stuck due to unacknowledged TCP traffic. Subsequently, the SQL Statement run on the (bad) connection by the test harness hangs (due to unacknowledged TCP). setMaxWait(5000) is seemingly useless for handling network outages. There are no other meaningful timeout settings that apply to a network failure.
You can set dataSource.setValidationQueryTimeout(), but keep in mind that this applies to query execution. If you have network issues, you might still be stuck. For that you also need a socket timeout (setSoTimeout()), which applies to the socket itself; its default value is 0, which means infinity.
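As a sketch with commons-dbcp2, you can combine the validation timeout with a pool wait limit and a driver-level socket timeout passed as connection properties; the property names below are the MySQL Connector/J ones ("connectTimeout"/"socketTimeout", in milliseconds), so treat them as assumptions and substitute your driver's equivalents.
dataSource.setValidationQueryTimeout(5);                     // seconds allowed for the validation check
dataSource.setMaxWaitMillis(5000);                           // wait at most 5 s for a connection from the pool
dataSource.addConnectionProperty("connectTimeout", "5000");  // driver-specific: TCP connect timeout
dataSource.addConnectionProperty("socketTimeout", "5000");   // driver-specific: socket read timeout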
I have been maintaining an application which uses the Wicket + JPA + Spring technologies. Recently we got many 5XX errors in the logs (greater than the threshold). During that time there were some general problems due to unstable response times of the mainframe DB2, which is the backend for our application.
But even after the mainframe was OK again, the application servers did not return to normal.
There are a lot of hanging transactions (from my application).
There are many threads in the server that may be hung.
As users keep logging in or accessing links in the application during that time, the situation becomes worse.
When I looked at the WebSphere logs I found the following exceptions:
00000035 ThreadMonitor W WSVR0605W: Thread "WebContainer : 88" (000005ac)
has been active for 637111 milliseconds and may be hung.
There is/are 43 thread(s) in total in the server that may be hung.
In the application logs I found the following exceptions:
-->CouldNotLockPageException: Could not lock page 4. Attempt lasted 3 minutes
-->DefaultExceptionMapper - Connection lost, give up responding.
org.apache.wicket.protocol.http.servlet.ResponseIOException:
com.ibm.wsspi.webcontainer.ClosedConnectionException: OutputStream encountered error during
write.
--> JDBCExceptionReporter - [jcc][t4][2030][11211][3.67.27] A communication error occurred
during operations on the connection's underlying socket, socket input stream,
or socket output stream.
Error location: Reply.fill() - socketInputStream.read (-1). Message:
Connection reset. ERRORCODE=-4499, SQLSTATE=08001 DSRA0010E: SQL State = 08001, Error Code = -4,499
Now we are working on solutions to this problem. The following are the two solutions we are considering as of now:
1. I have gone through many forums and found that whenever we get CouldNotLockPageException it would be better to invalidate the session and force the user back to the login page. Currently we do not have a session invalidation (logout) mechanism, so we will implement one.
2. We need to implement transaction timeouts so that we can stop hanging transactions.
I need a solution to this problem from the Java or server side. We are using the Wicket, JPA and Spring frameworks here. I have a few questions:
1. How can we implement transaction timeouts in the above frameworks?
2. Will invalidating the session stop hanging transactions or threads that may be hung?
Since you are already using Spring, it's as simple as this:
@Transactional(timeout = 300)
The @Transactional annotation allows you to supply a timeout value (in seconds), and the transaction manager will forward it to the JTA transaction manager or your DataSource connection pool. It works nicely with the Bitronix Transaction Manager, which picks it up automatically.
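For example, on a Spring-managed service it could look like the following sketch (the class and method names are made up):
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    // Roll the transaction back automatically if it runs longer than 5 minutes.
    @Transactional(timeout = 300)
    public void generateReport(long reportId) {
        // JPA/JDBC work executed here runs inside the timed transaction
    }
}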
You also need to make sure that every java.sql.Connection is always closed and that every transaction is either committed (when all operations succeed) or rolled back on failure.
Invalidating the user HTTP session has nothing to do with JDBC connections. Your JDBC connection should always be committed/rolled back and closed (which, in the case of connection pooling, releases the connection back to the pool).
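A minimal commit/rollback pattern with plain JDBC, assuming a pooled DataSource named dataSource, could look like this:
try (Connection connection = dataSource.getConnection()) {
    connection.setAutoCommit(false);
    try {
        // ... execute statements ...
        connection.commit();
    } catch (SQLException e) {
        connection.rollback(); // undo the work on failure
        throw e;
    }
} // close() returns the connection to the pool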
And make sure the max pool size is not greater than your database's max concurrent connections setting.