I'm currently working on a project involving a distributed Apache Ignite database on a cluster of Raspberry Pis.
I want to have two separate data regions, including one with persistence enabled. Here is my custom config:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="default_region"/>
<!-- Set max region size to 50 MB -->
<property name="maxSize" value="#{50 * 1024 * 1024}"/>
<!-- Specifying an eviction policy that evicts least recently used pages when usage hits 90% of the max storage capacity -->
<property name="pageEvictionMode" value="RANDOM_LRU"/>
<property name="evictionThreshold" value="0.9"/>
</bean>
</property>
<property name="dataRegionConfigurations">
<list>
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="persistence_region"/>
<!-- Set max region size to 100 MB -->
<property name="maxSize" value="#{100 * 1024 * 1024}"/>
<!-- Enable persistent data storage -->
<property name="persistenceEnabled" value="true"/>
</bean>
</list>
</property>
</bean>
</property>
<property name="authenticationEnabled" value="true"/>
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="multicastGroup" value="228.10.10.157"/>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
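For context, a cache would opt into the persistent region by name. A minimal illustrative snippet (the cache name here is made up) would sit next to dataStorageConfiguration in the same IgniteConfiguration bean:
<property name="cacheConfiguration">
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="myPersistentCache"/>
<property name="dataRegionName" value="persistence_region"/>
</bean>
</property>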
I built my own Docker image with my custom config based on the official Dockerfile, except that I changed the base image to: FROM arm32v7/openjdk:8-jre-alpine
The trouble begins when I try to start my image... I get warnings such as:
Nodes started on local machine require more than 80% of physical RAM what can lead to significant slowdown due to swapping (please decrease JVM heap size, data region size or checkpoint buffer size) [required=1300MB, available=924MB]
And then several errors like:
[SEVERE][tcp-disco-msg-worker-[crd]-#2-#46][G] Blocked system-critical thread has been detected. This can lead to cluster-wide undefined behaviour [workerName=disco-notifier-worker, threadName=disco-notifier-worker-#45, blockedFor=11s]
After that, it's impossible for me to activate the cluster (through REST or the control.sh script) or to query it.
If someone has a config file working on a Raspberry Pi, I'm really interested!
EDIT:
I have tried @alamar's suggestion (-Xmx384m and a checkpoint page buffer size of 20M) but I still get these errors when activating:
[SEVERE][rest-#68][GridTcpRestProtocol] Failed to process client request [ses=GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=90 lim=90 cap=8192], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-1, igniteInstanceName=null, finished=false, heartbeatTs=1607710531775, hashCode=9092883, interrupted=false, runner=grid-nio-worker-tcp-rest-1-#39]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, outboundMessagesQueueSizeMetric=null, super=GridNioSessionImpl [locAddr=/127.0.0.1:11211, rmtAddr=/127.0.0.1:59558, createTime=1607710503680, closeTime=1607710511719, bytesSent=2, bytesRcvd=96, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1607710533267, lastSndTime=1607710503680, lastRcvTime=1607710511719, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [marsh=JdkMarshaller [clsFilter=o.a.i.marshaller.MarshallerUtils$1#1eb05], routerClient=false], directMode=false]], accepted=true, markedForClose=true]], msg=GridClientAuthenticationRequest [cred=SecurityCredentials [login=ignite], super=GridClientAbstractMessage [reqId=1, id=b2d7025f-e4a0-4ab7-8c3e-92e2c1c4aea9, destId=null, super=o.a.i.i.processors.rest.client.message.GridClientAuthenticationRequest#18dd064]]]
class org.apache.ignite.IgniteCheckedException: Failed to send message (connection was closed): GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=90 lim=90 cap=8192], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-1, igniteInstanceName=null, finished=false, heartbeatTs=1607710531775, hashCode=9092883, interrupted=false, runner=grid-nio-worker-tcp-rest-1-#39]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, outboundMessagesQueueSizeMetric=null, super=GridNioSessionImpl [locAddr=/127.0.0.1:11211, rmtAddr=/127.0.0.1:59558, createTime=1607710503680, closeTime=1607710511719, bytesSent=2, bytesRcvd=96, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1607710533267, lastSndTime=1607710503680, lastRcvTime=1607710511719, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [marsh=JdkMarshaller [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1#1eb05], routerClient=false], directMode=false]], accepted=true, markedForClose=true]]
at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7589)
at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:260)
at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:172)
at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1$1.apply(GridTcpRestNioListener.java:296)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1$1.apply(GridTcpRestNioListener.java:293)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
at org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:354)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1.apply(GridTcpRestNioListener.java:293)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1.apply(GridTcpRestNioListener.java:261)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
at org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:347)
at org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:335)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:511)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:490)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:467)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2$1.apply(GridRestProcessor.java:187)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2$1.apply(GridRestProcessor.java:184)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
at org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:354)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:184)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Failed to send message (connection was closed): GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=90 lim=90 cap=8192], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-1, igniteInstanceName=null, finished=false, heartbeatTs=1607710531775, hashCode=9092883, interrupted=false, runner=grid-nio-worker-tcp-rest-1-#39]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, outboundMessagesQueueSizeMetric=null, super=GridNioSessionImpl [locAddr=/127.0.0.1:11211, rmtAddr=/127.0.0.1:59558, createTime=1607710503680, closeTime=1607710511719, bytesSent=2, bytesRcvd=96, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1607710533267, lastSndTime=1607710503680, lastRcvTime=1607710511719, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [marsh=JdkMarshaller [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1#1eb05], routerClient=false], directMode=false]], accepted=true, markedForClose=true]]
at org.apache.ignite.internal.util.nio.GridNioServer.send0(GridNioServer.java:642)
at org.apache.ignite.internal.util.nio.GridNioServer.send(GridNioServer.java:583)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onSessionWrite(GridNioServer.java:3693)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionWrite(GridNioFilterAdapter.java:121)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onSessionWrite(GridNioCodecFilter.java:96)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionWrite(GridNioFilterAdapter.java:121)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onSessionWrite(GridNioFilterChain.java:269)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onSessionWrite(GridNioFilterChain.java:192)
at org.apache.ignite.internal.util.nio.GridNioSessionImpl.send(GridNioSessionImpl.java:117)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1.apply(GridTcpRestNioListener.java:290)
... 16 more
Result of "control.sh --set-state ACTIVE":
Control utility [ver. 2.10.0-SNAPSHOT#20201013-sha1:a2fa7ec3]
2020 Copyright(C) Apache Software Foundation
User: root
Time: 2020-12-11T18:19:04.421
This cluster requires authentication.
Connection to cluster failed. Latest topology update failed.
Command [SET-STATE] finished with code: 2
Control utility has completed execution at: 2020-12-11T18:19:56.559
Execution time: 52138 ms
Thank you!
People did start it on a Raspberry Pi occasionally.
In your case, please try decreasing the JVM's Xmx (-Xmx384m will be OK) and also specify the checkpoint page buffer size for the persistent region explicitly (20M should be OK in your case).
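In configuration terms, the second part is one extra property on the persistence_region bean (checkpointPageBufferSize is the DataRegionConfiguration property in Ignite 2.x; the value below mirrors the 20M suggestion):
<property name="checkpointPageBufferSize" value="#{20 * 1024 * 1024}"/>
The heap change goes on the JVM command line, e.g. through the JVM_OPTS environment variable that ignite.sh reads.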
If you still see "threads blocked" exceptions, please share the complete log. You may use the Apache Ignite user list for that. Also, please describe what happens when you try to activate.
There are a couple of things here.
First, I don't think you needed to update the Dockerfile. A Pi 3 has 64-bit cores, so the default should work. I created this one and it works fine on a Pi 4.
Second, between your JVM and your data region, you're allocating more memory than your Pi has:
Nodes started on local machine require more than 80% of physical RAM what can lead to significant slowdown due to swapping (please decrease JVM heap size, data region size or checkpoint buffer size) [required=1300MB, available=924MB]
You only appear to have 150 MB for off-heap, so I wonder how big your Java heap is? 8 GB should be plenty.
If your Pi is swapping to disk, that might be causing the blocked system threads.
Finally, since you enabled authentication, you need to supply a username and password when you try to activate the cluster. The default appears to be ignite/ignite.
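For example, assuming the default credentials haven't been changed:
./control.sh --user ignite --password ignite --set-state ACTIVE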
I am using MyBatis 3.4.6 along with org.xerial:sqlite-jdbc 3.28.0. Below is my configuration for using an in-memory database with shared mode enabled:
db.driver=org.sqlite.JDBC
db.url=jdbc:sqlite:file::memory:?cache=shared
The db.url is correct according to this test class.
I also managed to set up the correct transaction isolation level with the MyBatis configuration below, though there is a typo in the property read_uncommitted according to this issue, which I reported as well:
<environment id="${db.env}">
<transactionManager type="jdbc"/>
<dataSource type="POOLED">
<property name="driver" value="${db.driver}" />
<property name="url" value="${db.url}"/>
<property name="username" value="${db.username}" />
<property name="password" value="${db.password}" />
<property name="defaultTransactionIsolationLevel" value="1" />
<property name="driver.synchronous" value="OFF" />
<property name="driver.transaction_mode" value="IMMEDIATE"/>
<property name="driver.foreign_keys" value="ON"/>
</dataSource>
</environment>
This line of configuration
<property name="defaultTransactionIsolationLevel" value="1" />
does the trick of setting the correct value of PRAGMA read_uncommitted.
I am pretty sure of it, since I debugged the underlying code that initializes the connection and checked that the value had been set correctly.
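For anyone wanting to reproduce that check outside MyBatis, here is a minimal sketch assuming only sqlite-jdbc on the classpath (MyBatis forwards the driver.-prefixed dataSource properties to the driver, so the raw JDBC equivalent looks like this):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class PragmaCheck {
    public static void main(String[] args) throws Exception {
        // Same pragmas the "driver."-prefixed properties hand to sqlite-jdbc
        Properties props = new Properties();
        props.setProperty("transaction_mode", "IMMEDIATE");
        props.setProperty("foreign_keys", "ON");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlite:file::memory:?cache=shared", props)) {
            // Mirrors defaultTransactionIsolationLevel=1 (TRANSACTION_READ_UNCOMMITTED)
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("PRAGMA read_uncommitted;")) {
                if (rs.next()) {
                    System.out.println("read_uncommitted = " + rs.getInt(1)); // expect 1
                }
            }
        }
    }
}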
However, with the above settings, my program still intermittently encounters SQLITE_LOCKED_SHAREDCACHE while reading, which I think shouldn't happen according to the description highlighted in the attached screenshot. I want to know the reason and how to resolve it, even though the probability of this error occurring is low.
Any ideas would be appreciated!!
The debug configuration is below:
===CONFINGURATION==============================================
jdbcDriver org.sqlite.JDBC
jdbcUrl jdbc:sqlite:file::memory:?cache=shared
jdbcUsername
jdbcPassword ************
poolMaxActiveConnections 10
poolMaxIdleConnections 5
poolMaxCheckoutTime 20000
poolTimeToWait 20000
poolPingEnabled false
poolPingQuery NO PING QUERY SET
poolPingConnectionsNotUsedFor 0
---STATUS-----------------------------------------------------
activeConnections 5
idleConnections 5
requestCount 27
averageRequestTime 7941
averageCheckoutTime 4437
claimedOverdue 0
averageOverdueCheckoutTime 0
hadToWait 0
averageWaitTime 0
badConnectionCount 0
===============================================================
The exception is below:
org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: org.apache.ibatis.transaction.TransactionException: Error configuring AutoCommit. Your driver may not support getAutoCommit() or setAutoCommit(). Requested setting: false. Cause: org.sqlite.SQLiteException: [SQLITE_LOCKED_SHAREDCACHE] Contention with a different database connection that shares the cache (database table is locked)
### The error may exist in mapper/MsgRecordDO-sqlmap-mappering.xml
### The error may involve com.super.mock.platform.agent.dal.daointerface.MsgRecordDAO.getRecord
### The error occurred while executing a query
### Cause: org.apache.ibatis.transaction.TransactionException: Error configuring AutoCommit. Your driver may not support getAutoCommit() or setAutoCommit(). Requested setting: false. Cause: org.sqlite.SQLiteException: [SQLITE_LOCKED_SHAREDCACHE] Contention with a different database connection that shares the cache (database table is locked)
I finally resolved this issue by myself and am sharing the workaround below in case someone else encounters a similar issue in the future.
First of all, we were able to get the complete call stack of the exception shown above.
Going through the source code indicated by the call stack, we found the following:
SQLite has auto-commit enabled by default, which contradicts MyBatis, which disables auto-commit by default since we're using SqlSessionManager.
MyBatis overrides the auto-commit property during connection initialization using the method setDesiredAutoCommit, which finally invokes SQLiteConnection#setAutoCommit.
SQLiteConnection#setAutoCommit incurs a "begin immediate" operation against the database, which is actually exclusive, since we configured our transaction mode to be IMMEDIATE:
<property name="driver.transaction_mode" value="IMMEDIATE"/>
So an apparent solution is to change the transaction mode to DEFERRED. We also considered making the auto-commit setting the same between MyBatis and SQLite, but did not adopt that approach: there is no way to set the auto-commit of SQLiteConnection during the initialization stage, so there would always be a switch (from true to false or vice versa), and that switch would probably cause the above error if the transaction mode is not set properly.
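That is, the one-line change in the dataSource block above:
<property name="driver.transaction_mode" value="DEFERRED"/>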
I have a very simple computation which produces letter matrices and finds all the probable words in the matrix. The letters of a word are adjacent cells.
for (int i = 0; i < 500; i++) {
    System.out.println(i);
    Matrix matrix = new Matrix(4);
    matrix.scanWordsRandomly(9);
    matrix.printMatrix();
    System.out.println(matrix.getSollSize());
    matrix.write_to_db();
}
Here is the persisting code.
public void write_to_db() {
    Session session = null;
    try {
        session = HibernateUtil.getSessionFactory().openSession();
        session.beginTransaction();
        Matrixtr onematrixtr = new Matrixtr();
        onematrixtr.setDimension(dimension);
        onematrixtr.setMatrixstr(this.toString());
        onematrixtr.setSolsize(getSollSize());
        session.save(onematrixtr);
        for (Map.Entry<Kelimetr, List<Cell>> sollution : sollutions.entrySet()) {
            Kelimetr kelimetr = sollution.getKey();
            List<Cell> solpath = sollution.getValue();
            Solstr onesol = new Solstr();
            onesol.setKelimetr(kelimetr);
            onesol.setMatrixtr(onematrixtr);
            onesol.setSoltext(solpath.toString().replace("[", "").replace("]", "").replace("true", "").replace("false", ""));
            session.save(onesol);
        }
        session.getTransaction().commit();
    }
    catch (HibernateException he) {
        System.out.println("DB Error : " + he.getMessage());
        // roll back so a failed transaction does not keep holding its connection
        if (session != null && session.getTransaction() != null) {
            session.getTransaction().rollback();
        }
    }
    catch (Exception ex) {
        System.out.println("General Error : " + ex.getMessage());
    }
    finally {
        // always return the connection to the pool, even on failure
        if (session != null) {
            session.close();
        }
    }
}
Here is the hibernate configuration file.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
<session-factory>
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
<property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="hibernate.connection.url">jdbc:mysql://localhost:3306/kelimegame_db_dev?autoReconnect=true&useUnicode=true&characterEncoding=UTF-8</property>
<property name="hibernate.connection.username">root</property>
<property name="hibernate.connection.password">!.Wlu9RrCA</property>
<property name="hibernate.show_sql">false</property>
<property name="hibernate.query.factory_class">org.hibernate.hql.classic.ClassicQueryTranslatorFactory</property>
<property name="hibernate.format_sql">false</property>
<!-- Use the C3P0 connection pool provider -->
<property name="hibernate.c3p0.acquire_increment">50</property>
<property name="hibernate.c3p0.min_size">10</property>
<property name="hibernate.c3p0.max_size">100</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">5</property>
<property name="hibernate.c3p0.idle_test_period">3000</property>
<mapping resource="kelimegame/entity/Progress.hbm.xml"/>
<mapping resource="kelimegame/entity/Solstr.hbm.xml"/>
<mapping resource="kelimegame/entity/Kelimetr.hbm.xml"/>
<mapping resource="kelimegame/entity/User.hbm.xml"/>
<mapping resource="kelimegame/entity/Achievement.hbm.xml"/>
<mapping resource="kelimegame/entity/Matrixtr.hbm.xml"/>
</session-factory>
</hibernate-configuration>
After finding all possible solutions, I persist the matrix and the solutions using Hibernate. I am also using the c3p0 library. I am not spawning any threads; all the work is done in a very simple iterative way. But I am running the jar in separate processes, executing this from different terminals:
java -jar NewDB.jar
I got a deadlock as follows:
Apr 25, 2013 8:38:05 PM com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector run
WARNING: com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@7f0c09f9 -- APPARENT DEADLOCK!!! Creating emergency threads for unassigned pending tasks!
Apr 25, 2013 9:08:23 PM com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector run
WARNING: com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@7f0c09f9 -- APPARENT DEADLOCK!!! Complete Status:
Managed Threads: 3
Active Threads: 3
Active Tasks:
com.mchange.v2.resourcepool.BasicResourcePool$1DestroyResourceTask@2933f261
on thread: C3P0PooledConnectionPoolManager[identityToken->z8kfsx8uibeyqevbbapc|4045cf35]-HelperThread-#1
com.mchange.v2.resourcepool.BasicResourcePool$1DestroyResourceTask@116dd369
on thread: C3P0PooledConnectionPoolManager[identityToken->z8kfsx8uibeyqevbbapc|4045cf35]-HelperThread-#0
com.mchange.v2.resourcepool.BasicResourcePool$1DestroyResourceTask@41529b6f
on thread: C3P0PooledConnectionPoolManager[identityToken->z8kfsx8uibeyqevbbapc|4045cf35]-HelperThread-#2
Pending Tasks:
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@165ab5ea
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@1d5d211d
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@4d2905fa
Pool thread stack traces:
Thread[C3P0PooledConnectionPoolManager[identityToken->z8kfsx8uibeyqevbbapc|4045cf35]-HelperThread-#1,5,main]
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:662)
Thread[C3P0PooledConnectionPoolManager[identityToken->z8kfsx8uibeyqevbbapc|4045cf35]-HelperThread-#0,5,main]
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:662)
Thread[C3P0PooledConnectionPoolManager[identityToken->z8kfsx8uibeyqevbbapc|4045cf35]-HelperThread-#2,5,main]
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:662)
Apr 25, 2013 9:41:29 PM com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector run
WARNING: com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@7f0c09f9 -- APPARENT DEADLOCK!!! Creating emergency threads for unassigned pending tasks!
Apr 25, 2013 9:55:18 PM com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector run
WARNING: com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@7f0c09f9 -- APPARENT DEADLOCK!!! Complete Status:
Managed Threads: 3
Active Threads: 3
Active Tasks:
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@5a337b7d
on thread: C3P0PooledConnectionPoolManager[identityToken->z8kfsx8uibeyqevbbapc|4045cf35]-HelperThread-#0
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@69f079ce
on thread: C3P0PooledConnectionPoolManager[identityToken->z8kfsx8uibeyqevbbapc|4045cf35]-HelperThread-#1
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@2accf9b8
on thread: C3P0PooledConnectionPoolManager[identityToken->z8kfsx8uibeyqevbbapc|4045cf35]-HelperThread-#2
Pending Tasks:
com.mchange.v2.resourcepool.BasicResourcePool$1DestroyResourceTask@771eb4fb
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@fc07d6
com.mchange.v2.resourcepool.BasicResourcePool$1DestroyResourceTask@2266731b
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@740f0341
com.mchange.v2.resourcepool.BasicResourcePool$1DestroyResourceTask@59edbee
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@78e924
com.mchange.v2.resourcepool.BasicResourcePool$1DestroyResourceTask@2123aba
com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@7acd8a65
Pool thread stack traces:
Thread[C3P0PooledConnectionPoolManager[identityToken->z8kfsx8uibeyqevbbapc|4045cf35]-HelperThread-#0,5,main]
java.text.NumberFormat.getInstance(NumberFormat.java:769)
java.text.NumberFormat.getInstance(NumberFormat.java:393)
java.text.MessageFormat.subformat(MessageFormat.java:1262)
java.text.MessageFormat.format(MessageFormat.java:860)
java.text.Format.format(Format.java:157)
java.text.MessageFormat.format(MessageFormat.java:836)
com.mysql.jdbc.Messages.getString(Messages.java:106)
com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:2552)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3002)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2991)
com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3532)
com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:943)
com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:4113)
com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1308)
com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2336)
com.mysql.jdbc.ConnectionImpl.connectWithRetries(ConnectionImpl.java:2176)
com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2158)
com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:792)
com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
sun.reflect.GeneratedConstructorAccessor7.newInstance(Unknown Source)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:525)
com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:381)
com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:305)
com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:134)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:183)
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:172)
com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:188)
Killed
caglar@ubuntu:~/NetBeansProjects/NewDB/dist$
My questions are as follows:
1) Can this deadlock in c3p0 happen because I am running the program in separate processes?
2) Should I use one process with multiple threads inside it instead?
3) How can I trace this deadlock and understand its cause? Is there a way to trace deadlocks across multiple JVMs?
this is an interesting one.
you've published two distinct APPARENT DEADLOCKS. the first one is being caused by c3p0 attempting to close() Connections, and those close() operations are neither succeeding nor failing with an Exception in a timely manner. the second APPARENT DEADLOCK shows problems with Connection acquisition: c3p0 is attempting to acquire new Connections, and those attempts are neither succeeding nor failing with an Exception in a timely manner. the fact that very different operations are freezing suggests that it might be a more general problem with your dbms locking up under the stress of what you are doing or somesuch. it should be no problem to run multiple processes against your database, but you need to stay cognizant of limits.
there are a few interesting things about your configuration:
1) hibernate.c3p0.max_statements=5 is a very bad idea, on almost any pool and particularly on pools this large. you've got up to 100 Connections, and you're only allowing a total of 5 Statements to be cached between all of them. this might stress both the pool and the DBMS, as you will constantly be churning through PreparedStatements and the statement cache does a lot of bookkeeping about that. you may have meant that to be 5 cached statements per connection, but that's not what you have configured. you have set a global maximum for your pool. maybe try hibernate.c3p0.maxStatementsPerConnection=5 instead? or set max_statements to zero to turn statement caching off, at least until you resolve your deadlock. see http://www.mchange.com/projects/c3p0/#configuring_statement_pooling
2) if you are running your computation in multiple processes rather than multiple Threads, do you really need each process to hold 50 - 100 Connections? things may well be freezing up simply because you are stressing the dbms with too many Connections outstanding as each of your multiple processes acquire lots of resource-heavy Connections. you don't need more Connections in any process than you might have client Threads running concurrently within that process. i'd set hibernate.c3p0.acquire_increment and probably hibernate.c3p0.max_size to much smaller values.
3) if you really do need all those Connections running simultaneously, you can reduce the vulnerability of your pools to deadlock by increasing the config parameter numHelperThreads to some value greater than its default of 3. you probably want numHelperThreads to be something like twice the number of cores available on your machine. given that you are running multiple processes though, you might find that you are saturating your CPU, and that is freezing things up. so watch for that.
basically, try updating your configuration so that you are using resources -- file handles, network connections, CPU -- as efficiently as possible and so that you are not unnecessarily stressing the pool / statement cache / dbms more than you need to be.
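for example, in hibernate.cfg.xml terms, something like the sketch below (the exact numbers are guesses to tune against your workload, not prescriptions):
<property name="hibernate.c3p0.acquire_increment">3</property>
<property name="hibernate.c3p0.min_size">3</property>
<property name="hibernate.c3p0.max_size">10</property>
<!-- cache statements per Connection rather than a tiny global cap -->
<property name="hibernate.c3p0.maxStatementsPerConnection">5</property>
<!-- extra helper threads if pools still look starved -->
<property name="hibernate.c3p0.numHelperThreads">8</property>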
if these suggestions don't resolve the problem, please post the full config of your pools. c3p0 dumps its config at INFO level on pool initialization.
good luck!
I'm working on a project combining Hibernate and Spring in a Java web application, using Tomcat under Linux. Due to the MySQL 8-hour timeout problem, we want to use c3p0 to manage a connection pool with our MySQL database.
But when we use it, numerous threads are created. I noticed this because on each request I print all of the threads together with a memory status, which shows the memory steadily increasing and threads of this kind:
name: C3P0PooledConnectionPoolManager[identityToken->1hged7o8r13kpj7n1h3ycia|39c446]-HelperThread-#0 daemon: true group: main groupParent: system alive: true interrupted: false
name: C3P0PooledConnectionPoolManager[identityToken->1hged7o8r13kpj7n1h3ycia|17ec0e8]-AdminTaskTimer daemon: true group: main groupParent: system alive: true interrupted: false
It can produce more than 500 threads like these, given enough time.
Here is my hibernate.cfg.xml:
<property name="connection.provider_class">
org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.idle_test_period">5</property>
<property name="hibernate.c3p0.max_size">100</property>
<property name="hibernate.c3p0.max_statements">100</property>
<property name="hibernate.c3p0.min_size">10</property>
<property name="hibernate.c3p0.timeout">5</property>
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
<property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="hibernate.connection.url">jdbc:mysql://localhost:3306/myBase</property>
<property name="hibernate.connection.username">root</property>
<property name="hibernate.connection.password"></property>
<property name="hibernate.hbm2ddl.auto">update</property>
<property name="hibernate.default_schema">myProject</property>
<property name="hibernate.query.factory_class">org.hibernate.hql.classic.ClassicQueryTranslatorFactory</property>
<property name="show_sql">false</property>
<property name="cache.provider_class">
org.hibernate.cache.NoCacheProvider
</property>
I also tried adding a c3p0 properties file, but apart from reducing the number of helper threads, it doesn't remove the unused threads:
c3p0.maxStatements=5
c3p0.maxIdleTime=10
c3p0.numHelperThreads=1
c3p0.testConnectionOnCheckout=true
c3p0.preferredTestQuery=SELECT 1
c3p0.initialPoolSize=1
c3p0.minPoolSize=1
c3p0.maxPoolSize=10
c3p0.acquireIncrement=1
c3p0.idleConnectionTestPeriod=1
Does anyone have an idea of why this happens and how to solve this problem?
Thanks a lot.
if you are seeing a multiplication of c3p0 helper and timer threads, you are somehow creating a multitude of c3p0 DataSources when you want there to be just one. sometimes this happens if you are hot-reloading your app but forgetting to close() your old c3p0 DataSource when you recycle.
effectively it looks like you are "leaking" DataSources. you need to figure out why/where this is happening. for some clues, check out your logs for c3p0 DataSource initialization messages at INFO level. Search for the string "Initializing c3p0 pool", for example.
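as a sketch, if you build the DataSource yourself in a Tomcat webapp, a ServletContextListener is one (hypothetical) place to hook the cleanup; DataSources.destroy(...) is the c3p0 utility that shuts a pool down, and its HelperThreads and AdminTaskTimer go away with it. if hibernate created the pool for you via C3P0ConnectionProvider, closing the SessionFactory on shutdown should have the same effect.

import com.mchange.v2.c3p0.DataSources;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.sql.DataSource;

public class PoolShutdownListener implements ServletContextListener {
    private DataSource pooledDataSource; // wire this to the DataSource you create

    @Override
    public void contextInitialized(ServletContextEvent sce) { /* no-op */ }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        try {
            // close the pool so its threads are released along with it
            DataSources.destroy(pooledDataSource);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}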
good luck!
OK, I found a combination of properties that solves my problem, keeping in mind that I don't need many connections at a time:
c3p0.maxStatements=5
c3p0.maxIdleTime=10
c3p0.numHelperThreads=3
c3p0.testConnectionOnCheckout=true
c3p0.preferredTestQuery=SELECT 1
c3p0.initialPoolSize=1
c3p0.minPoolSize=1
c3p0.maxPoolSize=1
c3p0.acquireIncrement=1
c3p0.idleConnectionTestPeriod=1
c3p0.maxAdministrativeTaskTime=1
Thanks to everyone
I would expect it to create a number of threads proportional to c3p0.minPoolSize and c3p0.maxPoolSize, and your maximum is 10.
http://www.mchange.com/projects/c3p0/#other_ds_configuration
"numHelperThreads and maxAdministrativeTaskTime help to configure the behavior of DataSource thread pools. By default, each DataSource has only three associated helper threads. If performance seems to drag under heavy load, or if you observe via JMX or direct inspection of a PooledDataSource, that the number of "pending tasks" is usually greater than zero, try increasing numHelperThreads. maxAdministrativeTaskTime may be useful for users experiencing tasks that hang indefinitely and "APPARENT DEADLOCK" messages. (See Appendix A for more.) "
numHelperThreads defines how many threads per DataSource are used, therefore indeed you will have 10 threads with numHelperThreads=1.
The only way to make sure c3p0 consumes only one thread is to set c3p0.minPoolSize and c3p0.maxPoolSize to 1, but this defeats the purpose of connection pooling.
I am using Spring JdbcTemplate to perform SQL operations on an Apache Commons DBCP datasource (org.apache.commons.dbcp.BasicDataSource), and when the service has been up and running too long, I end up getting this exception:
org.springframework.dao.RecoverableDataAccessException: StatementCallback; SQL [SELECT * FROM vendor ORDER BY name]; The last packet successfully received from the server was 64,206,061 milliseconds ago. The last packet sent successfully to the server was 64,206,062 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 64,206,061 milliseconds ago. The last packet sent successfully to the server was 64,206,062 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
at org.springframework.jdbc.support.SQLExceptionSubclassTranslator.doTranslate(SQLExceptionSubclassTranslator.java:98)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:406)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:455)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:463)
at com.cable.comcast.neto.nse.taac.dao.VendorDao.getAllVendor(VendorDao.java:25)
at com.cable.comcast.neto.nse.taac.controller.RemoteVendorAccessController.requestAccess(RemoteVendorAccessController.java:78)
I have tried adding 'autoReconnect=true' to the connection string, but the problem still occurs. Is there another datasource I could use that would manage the reconnecting for me?
BasicDataSource can manage keeping the connections alive for you. You need to set the following properties:
minEvictableIdleTimeMillis = 120000 // Two minutes
testOnBorrow = true
timeBetweenEvictionRunsMillis = 120000 // Two minutes
minIdle = (some acceptable number of idle connections for your server)
These will configure the data source to continually test your connections, and to expire and remove them when they become stale. There are a number of other properties on BasicDataSource that you may want to look into as well to tweak your connection pooling performance. I've run into some strange problems in the past where I was having issues with my database access, and it all came down to how the connection pool was configured.
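For example, a minimal Spring bean sketch with those settings (the values are illustrative; validationQuery is added because testOnBorrow needs a query to run in DBCP):
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="${db.url}"/>
    <property name="username" value="${db.username}"/>
    <property name="password" value="${db.password}"/>
    <!-- test connections on checkout and evict stale ones before MySQL's wait_timeout -->
    <property name="testOnBorrow" value="true"/>
    <property name="validationQuery" value="SELECT 1"/>
    <property name="timeBetweenEvictionRunsMillis" value="120000"/>
    <property name="minEvictableIdleTimeMillis" value="120000"/>
    <property name="minIdle" value="5"/>
</bean>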
You can try c3p0:
http://sourceforge.net/projects/c3p0/
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"destroy-method="close">
<property name="user" value="${db.username}"/>
<property name="password" value="${db.password}"/>
<property name="driverClass" value="${db.driverClassName}"/>
<property name="jdbcUrl" value="${db.url}"/>
<property name="initialPoolSize" value="0"/>
<property name="maxPoolSize" value="1"/>
<property name="minPoolSize" value="1"/>
<property name="acquireIncrement" value="1"/>
<property name="acquireRetryAttempts" value="0"/>
<property name="idleConnectionTestPeriod" value="600"/> <!--in seconds-->
</bean>
greetings
pacovr