MyBatis setting for MySQL time-out on Tomcat server - Java

I am using Java 7, Tomcat 7, and MyBatis as the ORM.
My config.xml looks like this:
<transactionManager type="JDBC" />
<dataSource type="POOLED">
    <property name="driver" value="com.mysql.jdbc.Driver" />
    <property name="url" value="jdbc:mysql://localhost:3306/xxxxdb" />
    <property name="username" value="xxxxxxx" />
    <property name="password" value="xxxxxxx" />
    <property name="poolPingEnabled" value="true" />
    <property name="poolPingQuery" value="SELECT 1" />
</dataSource>
</environment>
The MySQL settings are all at their defaults, so interactive_timeout is 28800 seconds.
When I log in to my service, it fails the first time and then succeeds the second time.
The error below sometimes happens even when I re-login within 28800 seconds.
Here is the error message from the server:
2015 10:03:49 org.apache.ibatis.datasource.pooled.PooledDataSource warn
WARN: Execution of ping query 'SELECT 1' failed: Communications link failure
The last packet successfully received from the server was 30,572,290 milliseconds ago. The last packet sent successfully to the server was 1 milliseconds ago.
org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 36,001,604 milliseconds ago. The last packet sent successfully to the server was 36,001,633 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
### The error may exist in sql.xml
### The error may involve com.xxx.isRegistered-Inline
### The error occurred while setting parameters
### SQL: [query for login];
### Cause: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 36,001,604 milliseconds ago. The last packet sent successfully to the server was 36,001,633 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
I tried adding "autoReconnect=true" to the end of the connection URL, but it doesn't solve the problem.

Possible duplicate of Java app handling for connections getting dropped
Quoting BalusC's answer here:
This exception suggests that you're opening the connection only once during the application's startup and keeping it open forever during the application's lifetime. This is bad. The DB will reclaim the connection sooner or later because it has been open for too long. You should close connections properly in the finally block of the very same try block in which you open them and execute the query.
E.g.
public Entity find(Long id) throws SQLException {
    Connection connection = null;
    // ...
    try {
        connection = database.getConnection();
        // ...
    } finally {
        // ...
        if (connection != null) try { connection.close(); } catch (SQLException logOrIgnore) {}
    }
    return entity;
}
If you have a performance concern about this (which is very reasonable, as connecting is the most expensive task), then you should be using a connection pool. It also transparently handles this kind of "connection dropped" problem. For example, BoneCP. Please note that even with a connection pool you should still close connections in the finally block as per the above JDBC code idiom; that is what returns them to the pool for reuse.
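As a rough illustration of that advice, here is a minimal sketch using BoneCP (the pool named in the answer); the class name, URL, and credentials are placeholders, not values from the question:

import java.sql.Connection;
import java.sql.SQLException;
import com.jolbox.bonecp.BoneCP;
import com.jolbox.bonecp.BoneCPConfig;

public class Database {

    private final BoneCP pool;

    public Database() throws SQLException {
        BoneCPConfig config = new BoneCPConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/xxxxdb"); // placeholder URL
        config.setUsername("xxxxxxx");                           // placeholder user
        config.setPassword("xxxxxxx");                           // placeholder password
        this.pool = new BoneCP(config);
    }

    public Connection getConnection() throws SQLException {
        // Borrowed from the pool; close() hands it back instead of dropping it.
        return pool.getConnection();
    }
}

Callers still use the try/finally (or try-with-resources) idiom shown above; close() simply returns the connection to the pool.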

Related

CommunicationsException: The last packet sent successfully to the server xxx milliseconds ago

I have a question about MySQL/JDBC connections in Java.
I wrote an application that successfully communicates with a database, but I recently found out that my DB connection keeps dropping, and I need the application to have a connection to the DB at all times.
This is a small snippet of the error I was getting:
com.mysql.cj.jdbc.exceptions.CommunicationsException: The last packet successfully received from the server was 89,225,584 milliseconds ago. The last packet sent successfully to the server was 89,225,584 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:953)
at com.mysql.cj.jdbc.ClientPreparedStatement.executeQuery(ClientPreparedStatement.java:1003)
This is also a snippet of the constructor of my DBConnector class:
private final String url = "jdbc:mysql://localhost:3306/", database.....;
private Connection connection;

public DBConnector() {
    try {
        // Class.forName("com.mysql.jdbc.Driver");
        connection = DriverManager.getConnection(url + database, username, password);
    } catch (Exception ex) {
        System.out.println("Error: " + ex);
    }
}
In the errors section I noticed it's telling me to add autoReconnect=true, so I wondered: will my connection stay up for longer if I structure the connection like this:
connection = DriverManager.getConnection(url+database+"?autoReconnect=true", username, password);
If not, what else could I do to make sure my connection doesn't drop?
What I would suggest is to use a connection pool (Apache DBCP or HikariCP; the latter currently has the best performance of the solutions on the market), configured to test connections before borrowing them from the pool. Depending on the library there should be an option like setTestOnBorrow(true); see the sketch below.
In real applications you should always use a connection pool instead of handling connections manually.
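A minimal sketch of that idea with Commons DBCP 2; the class name, URL, and credentials are placeholders, and setValidationQuery/setTestOnBorrow are the relevant BasicDataSource properties:

import java.sql.Connection;
import java.sql.SQLException;
import org.apache.commons.dbcp2.BasicDataSource;

public class PooledConnector {

    private final BasicDataSource dataSource = new BasicDataSource();

    public PooledConnector(String url, String user, String password) {
        dataSource.setDriverClassName("com.mysql.cj.jdbc.Driver");
        dataSource.setUrl(url);
        dataSource.setUsername(user);
        dataSource.setPassword(password);
        dataSource.setValidationQuery("SELECT 1"); // cheap query used to test connections
        dataSource.setTestOnBorrow(true);          // validate before handing a connection out
    }

    public Connection getConnection() throws SQLException {
        // Borrow from the pool; calling close() on it returns it instead of dropping it.
        return dataSource.getConnection();
    }
}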

Spring Integration tcp/ip connection delay

We are using Spring Integration 4.1.3.
Sometimes it takes more than 5 seconds to open a connection to a particular server.
What is happening between step 1 and step 2?
Why is it delayed?
Client Log
step1 :▶ DEBUG 11.28 18:14:33.237 [ajp-bio-8109-exec-3] org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory[obtainNewConnection:98] - Opening new socket connection to 10.0.12.111:36401
step2 :▶ DEBUG 11.28 18:14:38.306 [ajp-bio-8109-exec-3] org.springframework.integration.ip.tcp.connection.TcpNetConnection[<init>:138] - New connection 10.0.12.111:36401:2701:561f3524-c421-45ba-9ea5-76a7ddf96430
Client Config
<int:gateway id="gw-vacct-tcp-sender"
service-interface="com.mainpay.pay.service.TcpSendVacctGateway"
default-request-channel="vacct-input"
default-reply-channel="vacct-reply"
/>
<int-tcp:tcp-connection-factory id="vacct-client"
type="client"
host="#{springSetting['pay.pg.ngin.vip']}"
port="#{springSetting['pay.pg.ngin.vacct.port']}"
serializer="TCPJsonSerializer8"
deserializer="TCPJsonDeserializer8"
single-use="true"
so-timeout="20000"
/>
<int:channel id="vacct-input" />
<int-tcp:tcp-outbound-gateway id="vacct-outGateway"
request-channel="vacct-input"
reply-channel="vacct-reply"
connection-factory="vacct-client"
reply-timeout="20000"
/>
<int:channel id="vacct-reply" datatype="java.lang.String" />
Try setting lookup-host to false; perhaps there is a problem in the network with reverse host lookups. It appears that the lookup failed, since the connection id contains an IP address:
10.0.12.111:36401:2701:561f3524-c421-45ba-9ea5-76a7ddf96430
See the documentation.
By default, reverse DNS lookups are done on inbound packets to convert IP addresses to hostnames for use in message headers. In environments where DNS is not configured, this can cause connection delays. You can override this default behavior by setting the lookup-host attribute to false.
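A sketch of that change applied to the factory from the question; only the lookup-host attribute is new, the other attributes are unchanged:

<int-tcp:tcp-connection-factory id="vacct-client"
    type="client"
    host="#{springSetting['pay.pg.ngin.vip']}"
    port="#{springSetting['pay.pg.ngin.vacct.port']}"
    serializer="TCPJsonSerializer8"
    deserializer="TCPJsonDeserializer8"
    single-use="true"
    so-timeout="20000"
    lookup-host="false"
/>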

JDBC MySQL Connection Issue - Attempted reconnect 3 times. Giving up

I have a REST service application running on the Java Spring framework. The application depends on a connection to an external MySQL DB, which is connected via JDBC.
My issue is maintaining a solid connection between the REST service and the MySQL DB. I have what I consider a rudimentary connection failsafe in place that looks something like this:
public Connection getConnection() throws SQLException {
    if (connection == null) {
        this.buildConnection();
    } else if (!connection.isValid(10)) { // Rebuild connection if it is no longer valid
        connection.close();
        this.buildConnection();
    }
    return connection;
}
Using this method should ensure that the connection is valid before any query is executed. My problem is that I periodically get an exception thrown when calling this method:
Could not create connection to database server. Attempted reconnect 3 times. Giving up. SQLState: 08001. ErrorCode: 0.
The things that have me super perplexed about this are:
This error only happens periodically; many times the connection works just fine.
I test this same application on my developer machine and the error never occurs.
I custom-configured the MySQL DB on my own server, so I control all its config options. From this I know that this issue isn't related to the maximum number of allowed connections or to a connection timeout.
Edit - Update 1:
This service is hosted as a Cloud Service on the Microsoft Azure platform.
I accidentally set it up as an instance in Northern Europe, while the DB is located in North America - probably not related, but trying to paint the whole picture.
Tried the advice at this link with no success. Not using thread pools, and all ResultSets and Statements/PreparedStatements are closed after use.
Edit - Update 2
After some refactoring, I was able to successfully implement a HikariCP connection pool as outlined by @M.Deinum below. Unfortunately, the same problem persists. Everything works great on my local machine and all unit tests pass, but as soon as I push it to Azure and wait more than a few minutes between requests, I get this error when trying to grab a connection from the pool:
springHikariCP - Connection is not available, request timed out after 38268ms. SQLState: 08S01. ErrorCode: 0.
My HikariCP configuration is as follows:
// Set up connection pool
HikariConfig config = new HikariConfig();
config.setDriverClassName("com.mysql.jdbc.Driver");
config.setJdbcUrl("jdbc:mysql://dblocation");

// Connection pool properties
Properties prop = new Properties();
prop.setProperty("user", "Username");
prop.setProperty("password", "Password");
prop.setProperty("verifyServerCertificate", "false");
prop.setProperty("useSSL", "true");
prop.setProperty("requireSSL", "true");
config.setDataSourceProperties(prop);
config.setMaximumPoolSize(20);
config.setConnectionTestQuery("SELECT 1");
config.setPoolName("springHikariCP");
config.setLeakDetectionThreshold(5000);
config.addDataSourceProperty("dataSource.cachePrepStmts", "true");
config.addDataSourceProperty("dataSource.prepStmtCacheSize", "250");
config.addDataSourceProperty("dataSource.prepStmtCacheSqlLimit", "2048");
config.addDataSourceProperty("dataSource.useServerPrepStmts", "true");
dataSource = new HikariDataSource(config);
Any help would be greatly appreciated.
I suggest using a proper JDBC connection pool like HikariCP, which, together with a validation query executed at the correct intervals, should give you fresh and valid connections each time.
Assuming you are using Spring and XML to configure the datasource:
<bean id="dataSource" class="com.zaxxer.hikari.HikariDataSource">
<property name="poolName" value="springHikariCP" />
<property name="dataSourceClassName" value="com.mysql.jdbc.jdbc2.optional.MysqlDataSource" />
<property name="dataSourceProperties">
<props>
<prop key="url">${jdbc.url}</prop>
<prop key="user">${jdbc.username}</prop>
<prop key="password">${jdbc.password}</prop>
</props>
</property>
</bean>
By default it validates connections on checkout, so I suggest giving it a try.
As you are using Java-based config, I suggest the following:
@Bean
public DataSource dataSource() {
    HikariDataSource ds = new HikariDataSource();
    ds.setPoolName("springHikariCP");
    ds.setMaximumPoolSize(20);
    ds.setLeakDetectionThreshold(5000);
    ds.setDataSourceClassName("com.mysql.jdbc.jdbc2.optional.MysqlDataSource");
    ds.addDataSourceProperty("url", url);
    ds.addDataSourceProperty("user", username);
    ds.addDataSourceProperty("password", password);
    ds.addDataSourceProperty("cachePrepStmts", true);
    ds.addDataSourceProperty("prepStmtCacheSize", 250);
    ds.addDataSourceProperty("prepStmtCacheSqlLimit", 2048);
    ds.addDataSourceProperty("useServerPrepStmts", true);
    ds.addDataSourceProperty("verifyServerCertificate", false);
    ds.addDataSourceProperty("useSSL", true);
    ds.addDataSourceProperty("requireSSL", true);
    return ds;
}
It seems to be caused by the system variable wait_timeout of MySQL.
For MySQL 5.0, 5.1, 5.5, 5.6, the default value for wait_timeout is 28800 seconds (8 hours), and the maximum value for wait_timeout:
Linux : 31536000 seconds (365 days, one year)
Windows : 2147483 seconds (2^31 milliseconds, 24 days 20 hours 31 min 23 seconds)
The number of seconds the server waits for activity on a noninteractive connection before closing it. This timeout applies only to TCP/IP and Unix socket file connections, not to connections made using named pipes, or shared memory.
So I think you can either keep pinging the JDBC connection every few seconds, or directly use a JDBC connection pool framework to manage the connections automatically.
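As a rough sketch of the first option (the class name and the 5-minute interval are illustrative assumptions, not values from the question):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ConnectionKeepAlive {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    /** Pings the connection periodically so wait_timeout never elapses. */
    public void start(Connection connection) {
        scheduler.scheduleAtFixedRate(() -> {
            try (Statement statement = connection.createStatement()) {
                statement.execute("SELECT 1"); // cheap no-op query
            } catch (SQLException e) {
                // connection is already dead; reconnect here or switch to a pool
            }
        }, 5, 5, TimeUnit.MINUTES);
    }
}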
Hope it helps. Best regards.
Assuming the code you have now for '//Set up connection pool' is called only once (for example during bean creation) to initialize dataSource,
your getConnection() would then be just:
public Connection getConnection() throws SQLException {
    return dataSource.getConnection();
}
Also make sure that wait_timeout in MySQL is set at least a minute higher than maxLifetime in HikariCP, which defaults to 30 minutes.
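A sketch of that relationship, extending the HikariConfig object from the question; the concrete values are assumptions (they suppose a server-side wait_timeout of 30 minutes) and require java.util.concurrent.TimeUnit:

// Assumed example: server has wait_timeout = 1800 s (30 min).
// Retire pooled connections before the server can kill them.
config.setMaxLifetime(TimeUnit.MINUTES.toMillis(29)); // must stay below wait_timeout
config.setIdleTimeout(TimeUnit.MINUTES.toMillis(10)); // optionally drop idle connections sooner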
Hope this helps.

Oracle data source connection pooling not working when used with Spring and JdbcTemplate

Question: there are lots of active, unclosed physical connections to the database even with connection pooling. Can someone tell me why?
I configured the connection pool settings using oracle.jdbc.pool.OracleDataSource. However, it seems the physical connections are not getting closed after use.
I thought that, since this is connection pooling, connections would be reused from the pool, so this many physical connections would not be made, but that is not what is happening.
There are 100+ active physical connections in the database originating from the application [not from PL/SQL Developer or any such client tools], due to which a TNS error is thrown when trying to do write operations on the database, whereas read operations are fine even with the large number of active connections.
Here is the Spring configuration:
<bean id="oracleDataSource" class="oracle.jdbc.pool.OracleDataSource" destroy-method="close"
p:URL="${url}"
p:user="${username}"
p:password="${password}"
p:connectionCachingEnabled="true">
<property name="connectionProperties">
<props merge="default">
<prop key="AutoCommit">false</prop>
</props>
</property>
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate"
p:dataSource-ref="oracleDataSource" />
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager"
p:dataSource-ref="oracleDataSource">
</bean>
The SQL that returned the 100+ active connections is ,
select username, terminal,schemaname, osuser,program from v$session where username = 'grduser'
You should configure the connection cache; the default value of max connections for the implicit connection cache is the maximum number of database sessions configured for the database.
Thanks to @Evgeniy Dorofeev.
Solution in detail:
The connection cache was enabled, but its properties were not set.
Set the properties as written below:
<bean id="oracleDataSource" class="oracle.jdbc.pool.OracleDataSource" destroy-method="close"
      p:URL="${url}"
      p:user="${username}"
      p:password="${password}"
      p:connectionCachingEnabled="true">
    <property name="connectionProperties">
        <props merge="default">
            <prop key="AutoCommit">false</prop>
        </props>
    </property>
    <property name="connectionCacheProperties">
        <props>
            <prop key="MinLimit">5</prop>
            <prop key="MaxLimit">10</prop>
            <prop key="InactivityTimeout">2</prop>
        </props>
    </property>
</bean>
Now, for every operation in the application that requires a connection, it will try to get one from the pool if one is available and ready to use, and it is guaranteed that the database will have a maximum of only 10 active physical connections. Any attempt to get an extra physical connection will lead to a database error on the application side.
Even if you have set up the connection cache, make sure your application is not explicitly trying to get a connection, like
Connection connection = getJdbcTemplate().getDataSource().getConnection();
This is the alarming part: JdbcTemplate does not manage the closing of this connection. You have to close it yourself after use, otherwise the physical connection remains active and unclosed. So the next time you call this again, it tries to get a new physical connection, which also remains unclosed, resulting in active connections piling up until the maxLimit is reached.
The connection might be explicitly needed when you want to pass it as a parameter to some other function, say in the case of an ArrayDescriptor [if you talk to PL/SQL stored procedures that have an IN parameter accepting an array of values, an array of VARCHAR or an array of RAW]. If you need to create an ArrayDescriptor:
ArrayDescriptor arrayDescriptor = ArrayDescriptor.createDescriptor(
        "SOME_TYPE_NAME", connection);
ARRAY sqlArray = new ARRAY(arrayDescriptor, connection, arrayString);
Hence do a connection.close() explicitly here, as in the sketch below.
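A minimal sketch of that pattern; SOME_TYPE_NAME and arrayString come from the snippet above, everything else is an illustrative assumption:

Connection connection = null;
try {
    // Explicitly obtained connection: Spring will NOT close this one for us.
    connection = getJdbcTemplate().getDataSource().getConnection();
    ArrayDescriptor arrayDescriptor =
            ArrayDescriptor.createDescriptor("SOME_TYPE_NAME", connection);
    ARRAY sqlArray = new ARRAY(arrayDescriptor, connection, arrayString);
    // ... pass sqlArray to the stored procedure call ...
} finally {
    if (connection != null) {
        try {
            connection.close(); // returns the physical connection to the cache
        } catch (SQLException ignore) {
        }
    }
}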
Additional info:
Connection connection = getJdbcTemplate().getDataSource().getConnection()
This attempts to establish a connection with the data source that this DataSource object represents.
Calling this line of code once will attempt to establish a new connection; calling it again will establish a second connection. For each request it creates a new connection! So if your maxLimit is 10, the calls will succeed until there are 10 active physical connections in the database, but note that all of those connections stay active [not closed].
So let's say there are now 10 active DB connections, because maxLimit is set to 10. Any request that requires a database operation and goes through the normal route of accessing a connection via the JdbcTemplate will pick up an already established connection [from the 10 connections]. However, any request that calls getJdbcTemplate().getDataSource().getConnection() to access a connection will attempt to establish a NEW connection and will fail, resulting in an exception.
The only way to resolve this is to explicitly close the connection whenever we explicitly create one, i.e. by calling connection.close().
When we don't explicitly create the connection and it is managed by Spring, then Spring takes care of closing the connections too. When using Oracle data source pooling together with JdbcTemplate, closing the connections [returning them to the pool] is managed by Spring.
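For contrast, a minimal sketch of that "normal route", where Spring manages the connection for you; the table and column names are illustrative:

// No getConnection()/close() needed: JdbcTemplate borrows a connection from the
// pooled DataSource, runs the query, and returns the connection automatically.
int activeUsers = getJdbcTemplate().queryForObject(
        "select count(*) from users where active = 1", Integer.class);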

Data source rejected establishment of connection, message from server: "Too many connections"

I am using Hibernate and Spring, and I get the exception below when we hit the service from JMeter with 250 users:
"Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Data source rejected establishment of connection, message from server: "Too many connections"
hibernate_cfg.xml
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
<property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="hibernate.connection.url">jdbc:mysql://localhost:3306/my_db</property>
<property name="hibernate.connection.username">user1</property>
<property name="hibernate.connection.pool_size">1</property>
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.max_size">50</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">500</property>
<property name="hibernate.c3p0.idle_test_period">3000</property>
<property name="hibernate.current_session_context_class">thread</property>
<property name="hibernate.hbm2ddl.auto">update</property>`
Spring
<bean id="dataSource" scope="prototype" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName">
<value>${dbDriver}</value>
</property>
<property name="url">
<value>${dbURL}</value>
</property>
<property name="username">
<value>${dbUsername}</value>
</property>
<property name="password">
<value>${dbPassword}</value>
</property>
</bean>
This is a message from the server, so I'd check the number of connected clients the server is reporting. If this is an expected number, like 500 or so, then I'd increase this limit on the server, if you really expect that level of concurrency for your application. Otherwise, reduce the number of clients.
A bit of background on how it works: each client request is served by a thread on the server, and each thread will consume at least one connection. If you are doing it right, the connection returns to the pool once the thread finishes (i.e. once the response is sent to the client). So, in the best case, you'd have around 500 connections if you have around 500 users connected. If you are seeing a number close to a multiple of the number of concurrent users (e.g. 2 users, 4 connections), then you might be consuming more than one connection per thread (that's the price you pay for not using the data source provided by your application server, if you are using one). If you are seeing a really high number (like 10 times the number of users), then you might have a connection leak somewhere, which can happen if you forget to close connections.
I'd really suggest using an EntityManager managed by your application server, and a DataSource also provided by it. Then you would not have to worry about managing the connection pooling.
I think your problem is that your datasource doesn't have a connection pool.
From the org.springframework.jdbc.datasource.DriverManagerDataSource javadocs:
NOTE: This class is not an actual connection pool; it does not actually
pool Connections.
The javadocs of that class also say to use Apache's Jakarta Commons DBCP in case you do need a connection pool:
If you need a "real" connection pool outside of a J2EE container,
consider Apache's
Jakarta Commons DBCP or C3P0. Commons DBCP's
BasicDataSource and C3P0's ComboPooledDataSource are full connection
pool beans, supporting the same basic properties as this class plus
specific settings (such as minimal/maximal pool size etc).
I used it and it worked like a charm :)
Hope I helped.
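As a rough sketch (not a drop-in guarantee), the DriverManagerDataSource bean from the question could be swapped for Commons DBCP's BasicDataSource, reusing the same placeholders; the pool-size and validation values below are illustrative assumptions:

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${dbDriver}" />
    <property name="url" value="${dbURL}" />
    <property name="username" value="${dbUsername}" />
    <property name="password" value="${dbPassword}" />
    <!-- assumed values: cap the pool and validate connections before use -->
    <property name="maxActive" value="50" />
    <property name="validationQuery" value="SELECT 1" />
    <property name="testOnBorrow" value="true" />
</bean>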
If a new connection factory is created every time, it may cause this problem. The solution is simple: use a single one for every session. I am publishing some code at the bottom of my post; check the incorrect and the correct coding. 1 = incorrect, 2 = correct.
// 1 - incorrect: a new EntityManagerFactory (and connection pool) is created for every DAO instance
private EntityManagerFactory emf = null;

@SuppressWarnings("unchecked")
public BaseDAO() {
    emf = Persistence.createEntityManagerFactory("aaHIBERNATE");
    persistentClass = (Class<T>) ((ParameterizedType) getClass()
            .getGenericSuperclass()).getActualTypeArguments()[0];
}

// 2 - correct: the EntityManagerFactory is static and created only once
private static EntityManagerFactory emf = null;

@SuppressWarnings("unchecked")
public BaseDAO() {
    if (emf == null) {
        emf = Persistence.createEntityManagerFactory("aaHIBERNATE");
    }
    persistentClass = (Class<T>) ((ParameterizedType) getClass()
            .getGenericSuperclass()).getActualTypeArguments()[0];
}
You need to trace it on both the application and the database server.
Check the database server configuration for the maximum allowed open connections.
If other clients use the same database, check the currently open connections on the database server.
Check whether your application is opening too many connections and not closing them.
You may need to increase the value of max open connections in the database server settings.
To change the maximum open connections, edit max_connections and max_user_connections in the my.cnf file on the database server; a sketch follows below.
You can also grant/edit the maximum number of connections per user; more info is available here.
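A hedged sketch of checking and raising the limit; the value 500 is an illustrative assumption, and the change should also be persisted under [mysqld] in my.cnf as mentioned above:

-- Check the configured limit and the current usage
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Threads_connected';

-- Raise the limit at runtime (lost on restart unless set in my.cnf)
SET GLOBAL max_connections = 500;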
