Apache Ignite on Raspberry Pi 3 - java
I'm currently working on a project involving a distributed Apache Ignite database on a cluster of Raspberry Pis.
I want two separate data regions, one of them with persistence enabled. Here is my custom config:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
        <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
                <property name="defaultDataRegionConfiguration">
                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                        <property name="name" value="default_region"/>
                        <!-- Set max region size to 50 MB -->
                        <property name="maxSize" value="#{50 * 1024 * 1024}"/>
                        <!-- Specify an eviction policy that evicts least recently used data when usage hits 90% of the max storage capacity -->
                        <property name="pageEvictionMode" value="RANDOM_LRU"/>
                        <property name="evictionThreshold" value="0.9"/>
                    </bean>
                </property>
                <property name="dataRegionConfigurations">
                    <list>
                        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                            <property name="name" value="persistence_region"/>
                            <!-- Set max region size to 100 MB -->
                            <property name="maxSize" value="#{100 * 1024 * 1024}"/>
                            <!-- Enable persistent data storage -->
                            <property name="persistenceEnabled" value="true"/>
                        </bean>
                    </list>
                </property>
            </bean>
        </property>
        <property name="authenticationEnabled" value="true"/>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
                        <property name="multicastGroup" value="228.10.10.157"/>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>
I built my own Docker image with my custom config, based on the official Dockerfile, except that I changed the base image to:

FROM arm32v7/openjdk:8-jre-alpine

The trouble begins when I try to start my image... I get some warnings such as:
Nodes started on local machine require more than 80% of physical RAM what can lead to significant slowdown due to swapping (please decrease JVM heap size, data region size or checkpoint buffer size) [required=1300MB, available=924MB]
And then several errors like:
[SEVERE][tcp-disco-msg-worker-[crd]-#2-#46][G] Blocked system-critical thread has been detected. This can lead to cluster-wide undefined behaviour [workerName=disco-notifier-worker, threadName=disco-notifier-worker-#45, blockedFor=11s]
After that, it's impossible for me to activate the cluster (through REST or the control.sh script) and to query it.
If someone has a config file working on a Raspberry Pi, I'm really interested!
EDIT:
I have tried @alamar's suggestion (-Xmx384m and a checkpoint page buffer size of 20M) but I still have these errors when activating:
[SEVERE][rest-#68][GridTcpRestProtocol] Failed to process client request [ses=GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=90 lim=90 cap=8192], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-1, igniteInstanceName=null, finished=false, heartbeatTs=1607710531775, hashCode=9092883, interrupted=false, runner=grid-nio-worker-tcp-rest-1-#39]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, outboundMessagesQueueSizeMetric=null, super=GridNioSessionImpl [locAddr=/127.0.0.1:11211, rmtAddr=/127.0.0.1:59558, createTime=1607710503680, closeTime=1607710511719, bytesSent=2, bytesRcvd=96, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1607710533267, lastSndTime=1607710503680, lastRcvTime=1607710511719, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [marsh=JdkMarshaller [clsFilter=o.a.i.marshaller.MarshallerUtils$1#1eb05], routerClient=false], directMode=false]], accepted=true, markedForClose=true]], msg=GridClientAuthenticationRequest [cred=SecurityCredentials [login=ignite], super=GridClientAbstractMessage [reqId=1, id=b2d7025f-e4a0-4ab7-8c3e-92e2c1c4aea9, destId=null, super=o.a.i.i.processors.rest.client.message.GridClientAuthenticationRequest#18dd064]]]
class org.apache.ignite.IgniteCheckedException: Failed to send message (connection was closed): GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=90 lim=90 cap=8192], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-1, igniteInstanceName=null, finished=false, heartbeatTs=1607710531775, hashCode=9092883, interrupted=false, runner=grid-nio-worker-tcp-rest-1-#39]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, outboundMessagesQueueSizeMetric=null, super=GridNioSessionImpl [locAddr=/127.0.0.1:11211, rmtAddr=/127.0.0.1:59558, createTime=1607710503680, closeTime=1607710511719, bytesSent=2, bytesRcvd=96, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1607710533267, lastSndTime=1607710503680, lastRcvTime=1607710511719, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [marsh=JdkMarshaller [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1#1eb05], routerClient=false], directMode=false]], accepted=true, markedForClose=true]]
at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7589)
at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:260)
at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:172)
at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1$1.apply(GridTcpRestNioListener.java:296)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1$1.apply(GridTcpRestNioListener.java:293)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
at org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:354)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1.apply(GridTcpRestNioListener.java:293)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1.apply(GridTcpRestNioListener.java:261)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
at org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:347)
at org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:335)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:511)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:490)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:467)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2$1.apply(GridRestProcessor.java:187)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2$1.apply(GridRestProcessor.java:184)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
at org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:354)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:184)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Failed to send message (connection was closed): GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=90 lim=90 cap=8192], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-1, igniteInstanceName=null, finished=false, heartbeatTs=1607710531775, hashCode=9092883, interrupted=false, runner=grid-nio-worker-tcp-rest-1-#39]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, outboundMessagesQueueSizeMetric=null, super=GridNioSessionImpl [locAddr=/127.0.0.1:11211, rmtAddr=/127.0.0.1:59558, createTime=1607710503680, closeTime=1607710511719, bytesSent=2, bytesRcvd=96, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1607710533267, lastSndTime=1607710503680, lastRcvTime=1607710511719, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [marsh=JdkMarshaller [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1#1eb05], routerClient=false], directMode=false]], accepted=true, markedForClose=true]]
at org.apache.ignite.internal.util.nio.GridNioServer.send0(GridNioServer.java:642)
at org.apache.ignite.internal.util.nio.GridNioServer.send(GridNioServer.java:583)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onSessionWrite(GridNioServer.java:3693)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionWrite(GridNioFilterAdapter.java:121)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onSessionWrite(GridNioCodecFilter.java:96)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionWrite(GridNioFilterAdapter.java:121)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onSessionWrite(GridNioFilterChain.java:269)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onSessionWrite(GridNioFilterChain.java:192)
at org.apache.ignite.internal.util.nio.GridNioSessionImpl.send(GridNioSessionImpl.java:117)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1.apply(GridTcpRestNioListener.java:290)
... 16 more
Result of "control.sh --set-state ACTIVE" :
Control utility [ver. 2.10.0-SNAPSHOT#20201013-sha1:a2fa7ec3]
2020 Copyright(C) Apache Software Foundation
User: root
Time: 2020-12-11T18:19:04.421
This cluster requires authentication.
Connection to cluster failed. Latest topology update failed.
Command [SET-STATE] finished with code: 2
Control utility has completed execution at: 2020-12-11T18:19:56.559
Execution time: 52138 ms
Thank you!
People have started it on an Rpi occasionally.
In your case, please try decreasing the JVM's Xmx (-Xmx384m will be OK) and also specify the checkpoint page buffer size for the persistent region explicitly (20M should be OK in your case).
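Applied to the config above, that suggestion could look like the following sketch. checkpointPageBufferSize is the DataRegionConfiguration property for the checkpoint buffer; the heap can usually be passed to the official Docker image via the JVM_OPTS environment variable:

<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="persistence_region"/>
    <property name="maxSize" value="#{100 * 1024 * 1024}"/>
    <property name="persistenceEnabled" value="true"/>
    <!-- Explicit checkpoint page buffer: 20 MB -->
    <property name="checkpointPageBufferSize" value="#{20 * 1024 * 1024}"/>
</bean>

and, for the heap:

docker run -e "JVM_OPTS=-Xms384m -Xmx384m" ...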
If you still see "threads blocked" exceptions, please share the complete log. You may use the Apache Ignite user list for that. Also, please describe what happens when you try to activate.
There are a couple of things here.
First, I don't think you needed to update the Dockerfile. A Pi 3 has 64-bit cores, so the default should work. I created this one and it works fine on a Pi 4.
Second, between your JVM and your data region, you're allocating more memory than your Pi has:
Nodes started on local machine require more than 80% of physical RAM what can lead to significant slowdown due to swapping (please decrease JVM heap size, data region size or checkpoint buffer size) [required=1300MB, available=924MB]
You only appear to have 150 MB for off-heap, so I wonder how big your Java heap is? (The required=1300MB in the warning suggests the ignite.sh default of a 1 GB heap.) 8Gb should be plenty.
If your Pi is swapping to disk, that might be causing the blocked system threads.
Finally, since you enabled authentication, you need to supply a username and password when you try to activate the cluster. The default appears to be ignite/ignite.
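For example, control.sh accepts credentials via its --user and --password options, so activation with the defaults mentioned above would look like:

./control.sh --user ignite --password ignite --set-state ACTIVE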
Related
SQLite in-memory database encounters SQLITE_LOCKED_SHAREDCACHE intermittently
I am using mybatis 3.4.6 along with org.xerial:sqlite-jdbc 3.28.0. Below is my configuration to use an in-memory database with shared mode enabled:

db.driver=org.sqlite.JDBC
db.url=jdbc:sqlite:file::memory:?cache=shared

The db.url is correct according to this test class, and I managed to set up the correct transaction isolation level with the mybatis configuration below (though there is a typo in the property read_uncommitted, according to this issue, which was reported by me as well):

<environment id="${db.env}">
    <transactionManager type="jdbc"/>
    <dataSource type="POOLED">
        <property name="driver" value="${db.driver}" />
        <property name="url" value="${db.url}"/>
        <property name="username" value="${db.username}" />
        <property name="password" value="${db.password}" />
        <property name="defaultTransactionIsolationLevel" value="1" />
        <property name="driver.synchronous" value="OFF" />
        <property name="driver.transaction_mode" value="IMMEDIATE"/>
        <property name="driver.foreign_keys" value="ON"/>
    </dataSource>
</environment>

This line of configuration:

<property name="defaultTransactionIsolationLevel" value="1" />

does the trick to set the correct value of PRAGMA read_uncommitted. I am pretty sure of it, since I debugged the underlying code which initializes the connection and checked that the value has been set correctly.

However, with the above setting, my program still encounters SQLITE_LOCKED_SHAREDCACHE intermittently while reading, which I think shouldn't happen according to the documentation. I want to know the reason and how to resolve it, though the probability of this error occurring is low. Any ideas would be appreciated!

The debug configuration is below:

===CONFINGURATION==============================================
jdbcDriver org.sqlite.JDBC
jdbcUrl jdbc:sqlite:file::memory:?cache=shared
jdbcUsername
jdbcPassword ************
poolMaxActiveConnections 10
poolMaxIdleConnections 5
poolMaxCheckoutTime 20000
poolTimeToWait 20000
poolPingEnabled false
poolPingQuery NO PING QUERY SET
poolPingConnectionsNotUsedFor 0
---STATUS-----------------------------------------------------
activeConnections 5
idleConnections 5
requestCount 27
averageRequestTime 7941
averageCheckoutTime 4437
claimedOverdue 0
averageOverdueCheckoutTime 0
hadToWait 0
averageWaitTime 0
badConnectionCount 0
===============================================================

The exception is below:

org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: org.apache.ibatis.transaction.TransactionException: Error configuring AutoCommit. Your driver may not support getAutoCommit() or setAutoCommit(). Requested setting: false. Cause: org.sqlite.SQLiteException: [SQLITE_LOCKED_SHAREDCACHE] Contention with a different database connection that shares the cache (database table is locked)
### The error may exist in mapper/MsgRecordDO-sqlmap-mappering.xml
### The error may involve com.super.mock.platform.agent.dal.daointerface.MsgRecordDAO.getRecord
### The error occurred while executing a query
### Cause: org.apache.ibatis.transaction.TransactionException: Error configuring AutoCommit. Your driver may not support getAutoCommit() or setAutoCommit(). Requested setting: false. Cause: org.sqlite.SQLiteException: [SQLITE_LOCKED_SHAREDCACHE] Contention with a different database connection that shares the cache (database table is locked)
I finally resolved this issue by myself and share the workaround below in case someone else encounters a similar issue in the future.

First of all, we were able to get the complete call stack of the exception. Going through the source code indicated by the call stack, we found the following:

SQLite has auto-commit enabled by default, which contradicts MyBatis, which disables auto-commit by default (since we're using SqlSessionManager).

MyBatis overrides the auto-commit property during connection initialization using the method setDesiredAutoCommit, which finally invokes SQLiteConnection#setAutoCommit.

SQLiteConnection#setAutoCommit incurs a "begin immediate" operation against the database, which is actually exclusive, since we configured our transaction mode to be IMMEDIATE:

<property name="driver.transaction_mode" value="IMMEDIATE"/>

So, an apparent solution is to change the transaction mode to DEFERRED. Furthermore, making the auto-commit setting the same between MyBatis and SQLite was considered as well; however, it was not adopted, since there is no way to set the auto-commit of SQLiteConnection during the initialization stage. There would always be a switch (from true to false or vice versa), and that switch would probably cause the above error if the transaction mode is not set properly.
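A minimal sketch of the fix described above, assuming the same mybatis dataSource block as in the question (only the transaction_mode line changes):

<dataSource type="POOLED">
    <property name="driver" value="${db.driver}"/>
    <property name="url" value="${db.url}"/>
    <!-- DEFERRED acquires the database lock lazily, avoiding the exclusive "begin immediate" -->
    <property name="driver.transaction_mode" value="DEFERRED"/>
</dataSource>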
Apache Ignite IGFS Not using Non Heap Space
I am using Apache Ignite 2.6. I am using Ignite Filesystem (IGFS), and when I write a specific file, which is about 25 MB, to IGFS over and over, the data is not saved into the non-heap space. Instead, it goes into the heap, which is subject to garbage collection, and it is relatively slow. How do I get IGFS to save a file into the large non-heap space I have allocated for it?

High-level architecture: I have a client Ignite node running inside a Tomcat for now, and a server Ignite node, on which I intend this data to be stored. Scaling can occur once I get this working as expected, but it is very slow because of the aforementioned problem. It also OOMs very quickly when it runs out of heap space. Thing is, I want it to use the 30 GB of NON-HEAP space I have allocated! I intend it to be an in-memory cache. I am allocating 2 GB of heap space and 30 GB of non-heap space to the JVM. The non-heap space never gets used, and it runs out of memory as a result. I have confirmed that the non-heap space is not used using the JMX Console Memory tab: non-heap space stays well below 100 MB, while heap space quickly balloons to 2 GB and then the JVM crashes.

The details. First, my Ignite configuration (Spring XML):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
        <property name="searchSystemEnvironment" value="true"/>
    </bean>
    <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="marshaller">
            <bean class="org.apache.ignite.internal.binary.BinaryMarshaller" />
        </property>
        <property name="fileSystemConfiguration">
            <list>
                <bean class="org.apache.ignite.configuration.FileSystemConfiguration">
                    <property name="name" value="igfs"/>
                    <property name="blockSize" value="#{128 * 1024}"/>
                    <property name="perNodeBatchSize" value="512"/>
                    <property name="perNodeParallelBatchCount" value="16"/>
                    <property name="prefetchBlocks" value="32"/>
                </bean>
            </list>
        </property>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
                        <property name="addresses">
                            <list>
                                <value>127.0.0.1:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
        <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
                <!-- if I don't set this, the system region runs out of memory almost immediately -->
                <property name="systemRegionMaxSize" value="#{6L * 1024 * 1024 * 1024}"/>
                <property name="systemRegionInitialSize" value="#{6L * 1024 * 1024 * 1024}"/>
            </bean>
        </property>
    </bean>
</beans>

Here is the script I use to start up my Ignite server process. It's a shell script running on a Linux machine with 64 GB RAM and 40 GB disk space:

IGNITE_HOME=/data/apache-ignite
export IGNITE_HOME
IGNITE_JMX_PORT=1234
export IGNITE_JMX_PORT
$IGNITE_HOME/bin/ignite.sh $IGNITE_HOME/ignite-media-server.xml -J-Xmx2G -J-Xms2G -J-XX:+HeapDumpOnOutOfMemoryError -J-XX:HeapDumpPath=$IGNITE_HOME -J-XX:+PrintGC -J-XX:+PrintGCTimeStamps -J-XX:+PrintGCDateStamps -J-Xloggc:$IGNITE_HOME/gc.log-$(date +%m%d-%H%M%S) -J-XX:+UseG1GC -J-XX:+DisableExplicitGC -J-XX:MaxDirectMemorySize=30G

This is the code that creates my client igfs object, through which I save files to Ignite. They tend to be on the large side:

public void init() throws Exception {
    igniteInstanceName = "client-name=" + hostInfo.getLocalHost();
    Ignition.setClientMode(true);
    // reading in the same config file as the server uses to start up above.
    // The big difference is the clientMode set to true here.
    try (InputStream configFileInputStream =
             new FileInputStream(ResourceUtils.getFile("ignite-media-server.xml"))) {
        ignite = IgnitionEx.start(configFileInputStream, igniteInstanceName, null, null);
        igfs = ignite.fileSystem("igfs");
    } catch (Throwable t) { /* do log */ }
}

Here is a save method that saves my files to Ignite:

public void saveStream(String cachePath, AudioInputStream toCache) throws IOException {
    OutputStream os = null;
    try {
        IgfsPath cacheFile = new IgfsPath(cachePath);
        os = igfs.create(cacheFile, true);
        AudioSystem.write(toCache, AudioFileFormat.Type.WAVE, os);
    } finally {
        // close streams
    }
}

Why doesn't my data get saved to the speedy off-heap space? What am I missing? My server config comes almost straight from the provided IGFS example.

In other confusion, when I use ignitevisor.cmd to inspect memory usage on the server node before and after a shorter test (that doesn't make it crash), I see the following. Looking at memory allocation while Ignite is empty, ignitevisor.cmd says:

Heap memory initialized: 2gb
Heap memory used: 56mb
Non-heap memory initialized: 2mb
Non-heap memory used: 49mb
Non-heap memory maximum: 744mb

Then I create just shy of 2 GB worth of files saved in IGFS (just short of an OOM, since from bitter experience I know it will blow up shortly) and use ignitevisor.cmd to look at the memory allocation of the nodes again. This is what it shows:

Heap memory initialized: 2gb
Heap memory used: 1gb
Non-heap memory used: 64mb
Non-heap memory maximum: 744mb

Why is there still almost nothing in non-heap? And why does ignitevisor think that the non-heap maximum is 744 MB when it should be 30 GB?

In other points of interest, if I increase my heap size to 6 GB, it runs longer, but the server still crashes with an "OutOfMemoryError: Java heap space". Interestingly, I can reproduce this even when I enable disk persistence. Inspecting the heap dump file reveals a lot of ConcurrentLinkedHashMap entries. The entries themselves are org.apache.ignite.internal.GridTopic objects. Each one has a uuid and most appear to be of type TOPIC_DATASTREAM.
Data is saved to off-heap all right, but you should be aware that a lot of transient objects involved in IGFS operation will still be briefly held on heap (and GCed after that). The JMX Console Memory tab's "non-heap space" is the wrong metric: I don't think there are any JVM metrics for Ignite's off-heap. However, Ignite will print off-heap statistics at regular intervals. Why you would run out of memory is not obvious. Have you tried collecting a heap dump and analyzing it?
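To see the off-heap statistics the answer mentions, a sketch of the relevant Ignite 2.x settings (metricsLogFrequency on IgniteConfiguration, metricsEnabled and maxSize on DataRegionConfiguration; the 30 GB value mirrors the question's intent):

<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Print node metrics, including off-heap usage, every 60 s -->
    <property name="metricsLogFrequency" value="60000"/>
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <!-- Size the region explicitly so page memory can actually use the 30 GB -->
                    <property name="maxSize" value="#{30L * 1024 * 1024 * 1024}"/>
                    <property name="metricsEnabled" value="true"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>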
how to avoid "lock timeout" when updating DB using multiple threads?
I am trying to update a table using multiple threads, but I am not updating the same records/rows at the same time. I am grouping the table into different groups and trying to update them simultaneously. However, I am getting the lock timeout error all the time. I am using Hibernate, Spring MVC, ThreadPoolTaskExecutor, and MySQL. I am getting the data from another DB schema and updating my own database. The data is huge, which is why I want to use multiple threads so it can be done faster. However, it produces a "lock timeout" error. Can anyone help, please? Thanks for your good heart.

I call sessionFactory.getCurrentSession() to update the database table. Here is my config:

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
      p:driverClassName="${jdbc.driverClassName}" p:url="${jdbc.url}"
      p:username="${jdbc.username}" p:password="${jdbc.password}">
</bean>

<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="configLocation">
        <value>classpath:hibernate.cfg.xml</value>
    </property>
    <property name="hibernateProperties">
        <props>
            <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
            <prop key="hibernate.show_sql">true</prop>
        </props>
    </property>
</bean>

<bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory" />
</bean>

<bean id="taskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="5" />
    <property name="maxPoolSize" value="10" />
    <property name="waitForTasksToCompleteOnShutdown" value="true" />
</bean>

Here is my stack trace:

WARN : org.hibernate.engine.jdbc.spi.SqlExceptionHelper - SQL Error: 1205, SQLState: 41000
ERROR: org.hibernate.engine.jdbc.spi.SqlExceptionHelper - Lock wait timeout exceeded; try restarting transaction
Exception in thread "taskExecutor-5" Exception in thread "taskExecutor-4" Exception in thread "taskExecutor-2"
org.hibernate.exception.LockTimeoutException: could not execute statement
    at org.hibernate.dialect.MySQLDialect$1.convert(MySQLDialect.java:407)
    at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:49)
    at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125)
    at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110)
    at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:136)
    at org.hibernate.hql.internal.ast.exec.BasicExecutor.execute(BasicExecutor.java:103)
    at org.hibernate.hql.internal.ast.QueryTranslatorImpl.executeUpdate(QueryTranslatorImpl.java:413)
    at org.hibernate.engine.query.spi.HQLQueryPlan.performExecuteUpdate(HQLQueryPlan.java:282)
    at org.hibernate.internal.SessionImpl.executeUpdate(SessionImpl.java:1289)
    at org.hibernate.internal.QueryImpl.executeUpdate(QueryImpl.java:116)
    org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
    at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:96)
    at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:260)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:94)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1084)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4232)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4164)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2615)
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2776)
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2838)
    at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2082)
    at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2334)
    at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2262)
    at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2246)
    at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
    at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
    at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:133)
    ... 25 more
A "lock wait timeout" can always happen (even with a large amount of inserts in one transaction) and there is no silver bullet to solve it. But I managed to get around it when I was trying to update half of all the records in one (relative small) table while the other half was being modified by another server. Review all SQL statements in the transaction. Use explain to make sure indexes are used where possible. Remove any statements that are not needed as part of the transaction. Optimize the order of the SQL statements in the transaction. This was a bit of trial and error for me, but try to imagine which order of SQL statements coming from multiple threads/connections might be easier to deal with for the database. In my case, just switching the order of two SQL statements made the "lock wait timeout" occur less frequent. Update smaller subsets. This finally solved the "lock wait timeout" for me. In my case there was an indexed column that allowed me to divide the larger update set into smaller subsets. So now one big update transaction was turned into about ten smaller update transactions. Keep in mind though that you need to be able to continue the smaller transactions after a crash (i.e. data must remain consistent in such a way that your application can redo the operation and have the same result). Whether or not multiple threads will improve the throughput (updated rows per second) remains to be seen: it depends on the size of the update sets (network latency) and how efficiently MySQL can handle the locks for the table(s) to update the rows. You might only see a marginal improvement when using two threads/connections instead of one. [Edit] Also watch out for database triggers/procedures: they can impact performance in a bad manner.
Maybe you could try to lower the isolation level. If it helps, you can dig more; it should also speed up execution in a multi-threaded environment. If you are using annotations, you can achieve this with @Transactional(isolation = Isolation.READ_UNCOMMITTED) on top of your transactional class.
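For instance (a sketch; the service class name is made up):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
@Transactional(isolation = Isolation.READ_UNCOMMITTED)
public class BatchUpdateService {
    // All public methods now run with READ_UNCOMMITTED isolation
    // (dirty reads allowed, fewer lock waits).
}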
This appears to be a timeout on the database side. I'd guess that the database is the limiting factor, so adding threads in your application doesn't help. If you want to use threads to speed things up, I'd suggest using only two threads. While one thread reads from the other database, the second thread writes to the MySQL database. Note that if both databases are on the same database server, even that won't help. You would need a faster database or a beefier database machine.
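A sketch of that two-thread split, assuming a BlockingQueue as the hand-off between the reader and the writer (row handling is left as comments; real code would read from the source database and batch-write to MySQL):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class TwoThreadCopy {
    // Poison pill to signal the end of the stream
    private static final String EOF = "\u0000EOF";

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1000);

        Thread reader = new Thread(() -> {
            try {
                // In real code: SELECT rows from the source database here.
                queue.put("row-1");
                queue.put("row-2");
                queue.put(EOF); // signal end of stream
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread writer = new Thread(() -> {
            try {
                for (String row = queue.take(); !EOF.equals(row); row = queue.take()) {
                    // In real code: batch the rows and UPDATE/INSERT into MySQL,
                    // committing every few hundred rows.
                    System.out.println("writing " + row);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        reader.start();
        writer.start();
        reader.join();
        writer.join();
    }
}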
Many threads created using C3P0 with Hibernate/Spring
I'm working on a project merging Hibernate and Spring in a Java web application, using Tomcat in a Linux environment. Because of the MySQL 8-hour timeout problem, we want to use C3P0 to manage a connection pool with our MySQL database. But when we use it, numerous threads are created. I figured this out because on each request I print all of them, with a memory status that shows me the increasing memory and threads of this kind:

name: C3P0PooledConnectionPoolManager[identityToken->1hged7o8r13kpj7n1h3ycia|39c446]-HelperThread-#0
daemon: true
group: main
groupParent: system
alive: true
interrupted: false

name: C3P0PooledConnectionPoolManager[identityToken->1hged7o8r13kpj7n1h3ycia|17ec0e8]-AdminTaskTimer
daemon: true
group: main
groupParent: system
alive: true
interrupted: false

It can produce more than 500 threads like these, after enough time. Here is my hibernate.cfg.xml:

<property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.idle_test_period">5</property>
<property name="hibernate.c3p0.max_size">100</property>
<property name="hibernate.c3p0.max_statements">100</property>
<property name="hibernate.c3p0.min_size">10</property>
<property name="hibernate.c3p0.timeout">5</property>
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
<property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="hibernate.connection.url">jdbc:mysql://localhost:3306/myBase</property>
<property name="hibernate.connection.username">root</property>
<property name="hibernate.connection.password"></property>
<property name="hibernate.hbm2ddl.auto">update</property>
<property name="hibernate.default_schema">myProject</property>
<property name="hibernate.query.factory_class">org.hibernate.hql.classic.ClassicQueryTranslatorFactory</property>
<property name="show_sql">false</property>
<property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>

I also tried to add a C3P0 properties file, but apart from reducing the number of helper threads, it doesn't delete the unused threads:

c3p0.maxStatements=5
c3p0.maxIdleTime=10
c3p0.numHelperThreads=1
c3p0.testConnectionOnCheckout=true
c3p0.preferredTestQuery=SELECT 1
c3p0.initialPoolSize=1
c3p0.minPoolSize=1
c3p0.maxPoolSize=10
c3p0.acquireIncrement=1
c3p0.idleConnectionTestPeriod=1

Does anyone have an idea of why this happens and how to solve this problem? Thanks a lot.
if you are seeing a multiplication of c3p0 helper and timer threads, you are somehow creating a multitude of c3p0 DataSources when you want there to be just one. sometimes this happens if you are hot-reloading your app but forgetting to close() your old c3p0 DataSource when you recycle. effectively it looks like you are "leaking" DataSources. you need to figure out why/where this is happening. for some clues, check out your logs for c3p0 DataSource initialization messages at INFO level. Search for the string "Initializing c3p0 pool", for example. good luck!
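A sketch of the kind of cleanup the answer describes, assuming a ComboPooledDataSource managed by your own code (when Hibernate creates the pool via C3P0ConnectionProvider, closing the SessionFactory should do the equivalent):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolLifecycle {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/myBase");
        ds.setUser("root");

        // ... use the pool ...

        // Close the pool when recycling/redeploying the app;
        // otherwise its helper threads and admin timer leak.
        ds.close();
    }
}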
OK, I found a combination of properties to solve my problem, keeping in mind that I don't need a lot of connections at a time:

c3p0.maxStatements=5
c3p0.maxIdleTime=10
c3p0.numHelperThreads=3
c3p0.testConnectionOnCheckout=true
c3p0.preferredTestQuery=SELECT 1
c3p0.initialPoolSize=1
c3p0.minPoolSize=1
c3p0.maxPoolSize=1
c3p0.acquireIncrement=1
c3p0.idleConnectionTestPeriod=1
c3p0.maxAdministrativeTaskTime=1

Thanks to everyone.
I would expect it to create a number of threads proportional to c3p0.minPoolSize and c3p0.maxPoolSize, and your maximum is 10. From http://www.mchange.com/projects/c3p0/#other_ds_configuration:

"numHelperThreads and maxAdministrativeTaskTime help to configure the behavior of DataSource thread pools. By default, each DataSource has only three associated helper threads. If performance seems to drag under heavy load, or if you observe via JMX or direct inspection of a PooledDataSource that the number of "pending tasks" is usually greater than zero, try increasing numHelperThreads. maxAdministrativeTaskTime may be useful for users experiencing tasks that hang indefinitely and "APPARENT DEADLOCK" messages. (See Appendix A for more.)"

numHelperThreads defines how many threads per DataSource are used, therefore you will indeed have 10 threads with numHelperThreads=1. The only way to make sure C3P0 consumes only one thread is to set c3p0.minPoolSize and c3p0.maxPoolSize to 1, but this defeats the purpose of connection pooling.
Spring JDBCTemplate other MySQL datasource than apache commons?
I am using Spring JdbcTemplate to perform SQL operations on an Apache Commons datasource (org.apache.commons.dbcp.BasicDataSource), and when the service has been up and running too long, I end up getting this exception:

org.springframework.dao.RecoverableDataAccessException: StatementCallback; SQL [SELECT * FROM vendor ORDER BY name]; The last packet successfully received from the server was 64,206,061 milliseconds ago. The last packet sent successfully to the server was 64,206,062 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 64,206,061 milliseconds ago. The last packet sent successfully to the server was 64,206,062 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
    at org.springframework.jdbc.support.SQLExceptionSubclassTranslator.doTranslate(SQLExceptionSubclassTranslator.java:98)
    at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72)
    at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80)
    at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:406)
    at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:455)
    at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:463)
    at com.cable.comcast.neto.nse.taac.dao.VendorDao.getAllVendor(VendorDao.java:25)
    at com.cable.comcast.neto.nse.taac.controller.RemoteVendorAccessController.requestAccess(RemoteVendorAccessController.java:78)

I have tried adding 'autoReconnect=true' to the connection string, but this problem still occurs. Is there another datasource that can be used that will manage the reconnecting for me?
BasicDataSource can manage keeping the connections alive for you. You need to set the following properties:

minEvictableIdleTimeMillis = 120000 (two minutes)
testOnBorrow = true
timeBetweenEvictionRunsMillis = 120000 (two minutes)
minIdle = (some acceptable number of idle connections for your server)

These configure the data source to continually test your connections, and to expire and remove them if they become stale. There are a number of other properties on the basic data source that you may want to look into as well to tweak your connection-pooling performance. I've run into some strange problems in the past where I was having issues with my database access, and it all came down to how the connection pool was configured.
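A sketch of those settings in Spring XML, reusing the bean from the question (note that testOnBorrow also needs a validationQuery to test with; 'SELECT 1' is a common choice for MySQL, and the minIdle value here is an arbitrary example):

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
      p:driverClassName="${jdbc.driverClassName}" p:url="${jdbc.url}"
      p:username="${jdbc.username}" p:password="${jdbc.password}">
    <!-- Evict connections idle for more than two minutes -->
    <property name="minEvictableIdleTimeMillis" value="120000"/>
    <property name="timeBetweenEvictionRunsMillis" value="120000"/>
    <!-- Validate a connection before handing it out -->
    <property name="testOnBorrow" value="true"/>
    <property name="validationQuery" value="SELECT 1"/>
    <property name="minIdle" value="5"/>
</bean>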
You can try C3P0: http://sourceforge.net/projects/c3p0/

<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
    <property name="user" value="${db.username}"/>
    <property name="password" value="${db.password}"/>
    <property name="driverClass" value="${db.driverClassName}"/>
    <property name="jdbcUrl" value="${db.url}"/>
    <property name="initialPoolSize" value="0"/>
    <property name="maxPoolSize" value="1"/>
    <property name="minPoolSize" value="1"/>
    <property name="acquireIncrement" value="1"/>
    <property name="acquireRetryAttempts" value="0"/>
    <property name="idleConnectionTestPeriod" value="600"/> <!-- in seconds -->
</bean>

Greetings, pacovr