Tomcat/TomEE/OpenEJB Datasource Connection to SQL Server - CPU Creep - java

We are using TomEE (Tomcat with OpenEJB) to connect to a SQL Server database. The connection works fine, but we are seeing an odd CPU-creep issue. The application runs and performs well for about 24 hours (SQL Server CPU at about 15%-20%), and then performance starts to suffer. When we check, the CPU is at 60%. If we shut down the running jobs (but leave Tomcat running), the CPU drops to almost zero. If we restart the processing, the CPU immediately jumps back up to 60%. If we stop and restart the Tomcat instance, the CPU usage on the SQL Server machine stays low for about 24 hours and then begins to creep back up again.
I think that this must be due to the datasource configuration, but I'm not sure what changes we should make. Can someone help out? Here is the datasource configuration we are using:
<Resource id="Datasource" type="javax.sql.DataSource">
accessToUnderlyingConnectionAllowed = false
defaultAutoCommit = true
ignoreDefaultValues = false
initialSize = 0
jdbcDriver = com.microsoft.sqlserver.jdbc.SQLServerDriver
jdbcUrl = jdbc:sqlserver://{url}
jtaManaged = true
maxActive = 200
maxIdle = 20
maxOpenPreparedStatements = 0
maxWaitTime = -1 millisecond
minEvictableIdleTime = 30 minutes
minIdle = 0
numTestsPerEvictionRun = 3
password = {password}
passwordCipher = Static3DES
poolPreparedStatements = false
testOnBorrow = true
testOnReturn = false
testWhileIdle = false
timeBetweenEvictionRuns = -1 millisecond
userName = {user}
validationQuery = SELECT 1
</Resource>
Thanks in advance!
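One thing worth illustrating about the configuration above (an observation, not a confirmed fix): with timeBetweenEvictionRuns = -1 and testWhileIdle = false the idle evictor never runs, and maxActive = 200 against maxIdle = 20 means a burst of work can open far more connections than the pool will ever trim back. A rough sketch of the same Resource with idle eviction switched on; the values are placeholders to tune, not recommendations:
<Resource id="Datasource" type="javax.sql.DataSource">
jdbcDriver = com.microsoft.sqlserver.jdbc.SQLServerDriver
jdbcUrl = jdbc:sqlserver://{url}
userName = {user}
password = {password}
passwordCipher = Static3DES
jtaManaged = true
maxActive = 100
maxIdle = 20
timeBetweenEvictionRuns = 5 minutes
minEvictableIdleTime = 15 minutes
numTestsPerEvictionRun = 3
testOnBorrow = true
testWhileIdle = true
validationQuery = SELECT 1
</Resource>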

Related

Connection not released back to the tomcat jdbc pool

I have a Grails 2 based application which is using the Tomcat JDBC pool. Recently I have been running into a problem where all the connections in the pool get used up and I start getting:
org.springframework.transaction.CannotCreateTransactionException: Could not open Hibernate Session for transaction; nested exception is org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-nio-8443-exec-38] Timeout: Pool empty. Unable to fetch a connection in 10 seconds, none available[size:100; busy:100; idle:0; lastwait:10000].; nested exception is org.springframework.transaction.CannotCreateTransactionException: Could not open Hibernate Session for transaction; nested exception is org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-nio-8443-exec-38] Timeout: Pool empty. Unable to fetch a connection in 10 seconds, none available[size:100; busy:100; idle:0; lastwait:10000].
I have a few queries that require heavy joins and some stored procedures that execute for about 2-3 minutes. For these I manually get the connection from the datasource bean:
currentConnection = dataSource.connection
sqlInstance = new Sql(currentConnection)
sqlInstance.execute(query)
sqlInstance.close()
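One thing to note about the snippet above: if execute() throws, the close() call is never reached and the borrowed connection is never returned to the pool. A minimal leak-safe sketch of the same call (assuming Groovy's groovy.sql.Sql and the same injected dataSource bean), wrapping it in try/finally:
def currentConnection = dataSource.connection
try {
    def sqlInstance = new Sql(currentConnection)
    sqlInstance.execute(query)
} finally {
    // Always hand the borrowed connection back to the pool, even when execute() throws.
    currentConnection.close()
}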
I've logged the total active connections to stdout, and I see that the number of active connections keeps rising and never drops. It eventually reaches 100, which is the maximum number of active connections allowed, and then I start getting the pool exhaustion issue. Can anyone give me an idea of what I might be missing or where the connections might be leaking? Here are my connection details:
dataSource {
    pooled = true
    driverClassName = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://something:3306/something?zeroDateTimeBehavior=convertToNull&autoReconnect=true&relaxAutoCommit=true"
    username = "#####"
    password = '#$#$$$$$$$'
    dbCreate = "update"
    properties {
        initialSize = 5
        maxActive = 100
        minIdle = 5
        maxIdle = 25
        maxWait = 10000
        maxAge = 10 * 60000
        timeBetweenEvictionRunsMillis = 5000
        minEvictableIdleTimeMillis = 60000
        validationQuery = "SELECT 1"
        validationInterval = 15000
        testWhileIdle = true
        testOnBorrow = true
        testOnReturn = true
        removeAbandoned = true
        removeAbandonedTimeout = 400
        logAbandoned = true
        jdbcInterceptors = "ConnectionState"
        defaultTransactionIsolation = java.sql.Connection.TRANSACTION_READ_COMMITTED
    }
}
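As a side note on the counters the poster is already logging: when the Grails dataSource is backed by Tomcat's org.apache.tomcat.jdbc.pool.DataSource (as the settings above suggest), the pool exposes live counters that can be printed periodically to confirm whether busy connections ever come back. A rough sketch, with the cast being an assumption about how the bean is wired:
// Sketch: log the Tomcat JDBC pool state from time to time (e.g. in a scheduled job).
def pool = (org.apache.tomcat.jdbc.pool.DataSource) dataSource
println "pool state: active=${pool.getActive()} idle=${pool.getIdle()} waiting=${pool.getWaitCount()}"
If the active count keeps climbing while the application is quiet, that points at code paths that borrow a connection and never close it.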

Jetty Websocket IdleTimeout

I've been working on annotated websockets lately, with the Jetty API (9.4.5 release), and made a chat with it.
However I ran into an issue: after 5 minutes (which I believe is the default timer), the session is closed (it is not due to an error).
The only solution I've found so far is to notify my socket on the closing event and reopen the connection in a new socket.
However, I've read on Stack Overflow that by setting the idle timeout in the WebSocketPolicy, I could avoid the issue.
I've tried setting it to 3600000, for instance, but the behavior does not change at all.
I also tried to set it to -1, but I get the following error: IdleTimeout [-1] must be a greater than or equal to 0
private ServletContextHandler setupWebsocketContext() {
    ServletContextHandler websocketContext = new AmosContextHandler(ServletContextHandler.SESSIONS | ServletContextHandler.SECURITY);
    WebSocketHandler socketCreator = new WebSocketHandler() {
        @Override
        public void configure(WebSocketServletFactory factory) {
            factory.getPolicy().setIdleTimeout(-1);
            factory.getPolicy().setMaxTextMessageBufferSize(MAX_MESSAGE_SIZE);
            factory.getPolicy().setMaxBinaryMessageBufferSize(MAX_MESSAGE_SIZE);
            factory.getPolicy().setMaxTextMessageSize(MAX_MESSAGE_SIZE);
            factory.getPolicy().setMaxBinaryMessageSize(MAX_MESSAGE_SIZE);
            factory.setCreator(new UpgradedSocketCreator());
        }
    };
    ServletHolder sh = new ServletHolder(new WebsocketChatServlet());
    websocketContext.addServlet(sh, "/*");
    websocketContext.setContextPath("/Chat");
    websocketContext.setHandler(socketCreator);
    websocketContext.getSessionHandler().setMaxInactiveInterval(0);
    return websocketContext;
}
I've also tried to change the policy directly in the OnConnect event, using session.getPolicy().setIdleTimeout(), but I haven't noticed any results.
Is this an expected behavior or am I missing something? Thanks for your help.
EDIT:
Log on the closure:
Client Side:
2017-07-03T12:48:00.552 DEBUG HttpClient@179313750-scheduler Ignored idle endpoint SocketChannelEndPoint@2fb4b627{localhost/127.0.0.1:5080<->/127.0.0.1:53835,OPEN,fill=-,flush=-,to=1/300000}{io=0/0,kio=0,kro=1}->WebSocketClientConnection@e0198ece[ios=IOState@3ac0ec79[CLOSING,in,!out,close=CloseInfo[code=1000,reason=null],clean=false,closeSource=LOCAL],f=Flusher[queueSize=0,aggregateSize=0,failure=null],g=Generator[CLIENT,validating],p=Parser@65c4d838[ExtensionStack,s=START,c=0,len=187,f=null]]
Server side:
2017-07-03T12:48:00.595 DEBUG Idle pool thread onClose WebSocketServerConnection@e0033d54[ios=IOState@10d40dca[CLOSED,!in,!out,finalClose=CloseInfo[code=1000,reason=null],clean=true,closeSource=REMOTE],f=Flusher[queueSize=0,aggregateSize=0,failure=null],g=Generator[SERVER,validating],p=Parser@317213f3[ExtensionStack,s=START,c=0,len=2,f=CLOSE[len=2,fin=true,rsv=...,masked=true]]]<-SocketChannelEndPoint@690dfbfb{/127.0.0.1:53835<->/127.0.0.1:5080,CLOSED,fill=-,flush=-,to=1/360000000}{io=0/0,kio=-1,kro=-1}->WebSocketServerConnection@e0033d54[ios=IOState@10d40dca[CLOSED,!in,!out,finalClose=CloseInfo[code=1000,reason=null],clean=true,closeSource=REMOTE],f=Flusher[queueSize=0,aggregateSize=0,failure=null],g=Generator[SERVER,validating],p=Parser@317213f3[ExtensionStack,s=START,c=0,len=2,f=CLOSE[len=2,fin=true,rsv=...,masked=true]]]
2017-07-03T12:48:00.595 DEBUG Idle pool thread org.eclipse.jetty.util.thread.Invocable$InvocableExecutor@4f13dee2 invoked org.eclipse.jetty.io.ManagedSelector$$Lambda$193/682154970@551e133a
2017-07-03T12:48:00.595 DEBUG Idle pool thread EatWhatYouKill@6ba355e4/org.eclipse.jetty.io.ManagedSelector$SelectorProducer@7b1559f1/PRODUCING/0/1 produce exit
2017-07-03T12:48:00.595 DEBUG Idle pool thread ran EatWhatYouKill@6ba355e4/org.eclipse.jetty.io.ManagedSelector$SelectorProducer@7b1559f1/PRODUCING/0/1
2017-07-03T12:48:00.595 DEBUG Idle pool thread run EatWhatYouKill@6ba355e4/org.eclipse.jetty.io.ManagedSelector$SelectorProducer@7b1559f1/PRODUCING/0/1
2017-07-03T12:48:00.595 DEBUG Idle pool thread EatWhatYouKill@6ba355e4/org.eclipse.jetty.io.ManagedSelector$SelectorProducer@7b1559f1/PRODUCING/0/1 run
2017-07-03T12:48:00.597 DEBUG Idle pool thread 127.0.0.1 has disconnected !
2017-07-03T12:48:00.597 DEBUG Idle pool thread Disconnected: 127.0.0.1 (127.0.0.1) (statusCode= 1,000 , reason=null)
Annotated WebSockets have their own timeout settings in the annotation.
@WebSocket(maxIdleTime=30000)
The annotation @WebSocket has the option:
int maxIdleTime() default -2;
In fact, it's not clear what that means.
If you check the implementation, you can find:
if (anno.maxIdleTime() > 0)
{
    this.policy.setIdleTimeout(anno.maxIdleTime());
}
method implementation:
/**
 * The time in ms (milliseconds) that a websocket may be idle before closing.
 *
 * @param ms
 *            the timeout in milliseconds
 */
public void setIdleTimeout(long ms)
{
    assertGreaterThan("IdleTimeout",ms,0);
    this.idleTimeout = ms;
}
and finally:
/**
* The time in ms (milliseconds) that a websocket may be idle before closing.
* <p>
* Default: 300000 (ms)
*/
private long idleTimeout = 300000;
Conclusion: a negative value applies the default behavior (300000 ms). You need to configure 'idleTimeout' according to your business requirements.
PS: I solved my case with:
@WebSocket(maxIdleTime = Integer.MAX_VALUE)
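To make that concrete, here is a minimal sketch of an annotated socket with the idle timeout raised via the annotation; the class name and handler bodies are illustrative, only the @WebSocket(maxIdleTime = ...) part comes from the answer above:
import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketClose;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketConnect;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketMessage;
import org.eclipse.jetty.websocket.api.annotations.WebSocket;

// Sketch: an annotated chat socket that should no longer be closed after 5 minutes of inactivity.
@WebSocket(maxIdleTime = Integer.MAX_VALUE)
public class LongLivedChatSocket {

    @OnWebSocketConnect
    public void onConnect(Session session) {
        // The session should now report the idle timeout taken from the annotation.
        System.out.println("Connected, idle timeout = " + session.getIdleTimeout() + " ms");
    }

    @OnWebSocketMessage
    public void onMessage(Session session, String message) {
        // Handle the incoming chat message here.
    }

    @OnWebSocketClose
    public void onClose(int statusCode, String reason) {
        // With the higher idle timeout, this should only fire on a real close, not on idleness.
    }
}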

Hazelcast Operation Heartbeat Timeouts appearing sporadically

We have a Hazelcast client (3.7.4):
//Initializes Hazelcast client config
ClientConfig aHazelcastClientConfig = new ClientConfig();
String aHazelcastUrl = this.getHost()+":"+this.getPort().toString();
ClientNetworkConfig aHazelcastNetworkConfig=
aHazelcastClientConfig.getNetworkConfig();
aHazelcastNetworkConfig.addAddress(aHazelcastUrl);
GroupConfig group = new GroupConfig (getGroupName(),getGroupPassword());
aHazelcastClientConfig.setGroupConfig(group);
HazelcastInstance aHazelcastClient=
HazelcastClient.newHazelcastClient(aHazelcastClientConfig);
...
IMap aMonitoredMap = aHazelcastClient.getMap(getMonitoredMap());
that periodically checks one HZ server (3.7.4), and we have observed that exceptions like the following sometimes appear on the client side:
InitializeDistributedObjectOperation invocation failed to complete due to operation-heartbeat-timeout. Current time: 2017-02-07 18:07:30.329. Total elapsed time: 120189 ms. Last operation heartbeat: never. Last operation heartbeat from member: 2017-02-07 18:05:37.489. Invocation{op=com.hazelcast.spi.impl.proxyservice.impl.operations.InitializeDistributedObjectOperation{serviceName='hz:impl:mapService', identityHash=9759664, partitionId=-1, replicaIndex=0, callId=0, invocationTime=1486487130140 (2017-02-07 18:05:30.140), waitTimeout=-1, callTimeout=60000}, tryCount=1, tryPauseMillis=500, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1486487130140, firstInvocationTime='2017-02-07 18:05:30.140', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 01:00:00.000', target=[10.118.152.82]:5720, pendingResponse={VOID}, backupsAcksExpected=0, backupsAcksReceived=0, connection=Connection[id=7, /172.22.191.200:5720->/10.118.152.82:42563, endpoint=[10.118.152.82]:5720, alive=true, type=MEMBER]}
It seems the maximum call waiting timeout (60000 ms by default) is being reached. In the above example, the total elapsed time is more than 2 minutes (120189 ms).
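If the operations are merely slow rather than lost, one client-side knob sometimes raised is the invocation timeout (the roughly 120 s total elapsed time above matches its usual default). A sketch, with the property name and value as assumptions to verify against your Hazelcast 3.7 setup, not a confirmed fix:
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

// Sketch: give client invocations more time before they are failed.
ClientConfig aHazelcastClientConfig = new ClientConfig();
aHazelcastClientConfig.setProperty("hazelcast.client.invocation.timeout.seconds", "240");
HazelcastInstance aHazelcastClient = HazelcastClient.newHazelcastClient(aHazelcastClientConfig);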
This problem is appearing sporadically, without any regular appearance pattern.
The network seems to be working correctly when this has occurred, so we can rule out a network connectivity issue.
Any hint or recommendation about what could be causing it?
Thanks a lot.
Best Regards,
Jorge

Grails app denied connection next day

I wrote a web application using Grails. It runs fine throughout the day; however, when I check it the next day it will not connect to the database (MySQL) properly without me reloading it. I feel as if the connection is being refused.
Here is my stack trace:
2014-07-28 13:28:07,103 [http-bio-8081-exec-93] ERROR StackTrace - Full Stack Trace:
java.net.SocketException: Software caused connection abort: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(Unknown Source)
at java.net.SocketOutputStream.write(Unknown Source)
at java.io.BufferedOutputStream.flushBuffer(Unknown Source)
at java.io.BufferedOutputStream.flush(Unknown Source)
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3345)
Here is my datasource:
dataSource {
    // Production
    driverClassName = "com.mysql.jdbc.Driver"
    username = "ROOT"
    password = "PASS"
    dbCreate = "update"
    url = "jdbc:mysql://172.16.1.3/work_orders_v2"
    dialect = "org.hibernate.dialect.MySQL5InnoDBDialect"
    pooled = true
    properties {
        maxActive = 50
        maxIdle = 25
        minIdle = 5
        initialSize = 5
        minEvictableIdleTimeMillis = 60000
        timeBetweenEvictionRunsMillis = 60000
        maxWait = 10000
    }
}
I've tried a few things that had no effect. Can you spot anything odd?
Thank you.
It sounds like the connections in your pool are being closed by the database server after sitting idle. This is normal behavior that I would expect to happen.
If you add these validation settings the connections in the pool will be tested before your code gets them. Any closed connections will be dropped from the pool.
properties {
    maxActive = 50
    maxIdle = 25
    minIdle = 5
    initialSize = 5
    minEvictableIdleTimeMillis = 60000
    timeBetweenEvictionRunsMillis = 60000
    maxWait = 10000
    // Connection Validation Settings
    testOnBorrow = true
    testWhileIdle = true
    testOnReturn = true
    validationQuery = "SELECT 1"
}
Use the following properties which work fine for me.
properties {
    maxActive = -1
    minEvictableIdleTimeMillis = 1800000
    timeBetweenEvictionRunsMillis = 1800000
    numTestsPerEvictionRun = 3
    testOnBorrow = true
    testWhileIdle = true
    testOnReturn = false
    validationQuery = "SELECT 1"
    jdbcInterceptors = "ConnectionState"
}

Database connection error MySQL,Tomcat

I am using the following DBCP settings for connection pooling:
maxActive = 50
maxIdle = 10
minIdle = 2
initialSize = 2
maxWait = 30000
validationQuery="select 1"
testOnBorrow=true
testWhileIdle=true
timeBetweenEvictionRunsMillis=60000
Also, I have set auto_reconnect to true in the connection URL. I am seeing the following error on a regular basis.
com.mysql.jdbc.CommunicationsException:
The last communications with the server was 28810 seconds ago,
which is longer than the server configured value of 'wait_timeout'.
You should consider either expiring and/or testing connection validity
before use in your application, increasing the server configured values
for client timeouts, or using the Connector/J connection property
'autoReconnect=true' to avoid this problem
Does anybody have any idea what is wrong here?
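For what it's worth, the error text itself suggests expiring or testing connections before use; with commons-dbcp that usually means making sure idle connections are evicted well inside MySQL's wait_timeout (28800 seconds by default, which matches the 28810 seconds in the message). A sketch of the settings above with an eviction threshold added; the 300000 ms value is a placeholder, not a recommendation:
maxActive = 50
maxIdle = 10
minIdle = 2
initialSize = 2
maxWait = 30000
validationQuery = "select 1"
testOnBorrow = true
testWhileIdle = true
timeBetweenEvictionRunsMillis = 60000
minEvictableIdleTimeMillis = 300000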
