I'm currently working on a project involving a distributed Apache Ignite database running on a cluster of Raspberry Pis.
I want to have two separate data regions, one of them with persistence enabled. Here is my custom config:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
        <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
                <property name="defaultDataRegionConfiguration">
                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                        <property name="name" value="default_region"/>
                        <!-- Set max region size to 50 MB -->
                        <property name="maxSize" value="#{50 * 1024 * 1024}"/>
                        <!-- Eviction policy that evicts least recently used pages once usage hits 90% of the max size -->
                        <property name="pageEvictionMode" value="RANDOM_LRU"/>
                        <property name="evictionThreshold" value="0.9"/>
                    </bean>
                </property>
                <property name="dataRegionConfigurations">
                    <list>
                        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                            <property name="name" value="persistence_region"/>
                            <!-- Set max region size to 100 MB -->
                            <property name="maxSize" value="#{100 * 1024 * 1024}"/>
                            <!-- Enable persistent data storage -->
                            <property name="persistenceEnabled" value="true"/>
                        </bean>
                    </list>
                </property>
            </bean>
        </property>
        <property name="authenticationEnabled" value="true"/>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
                        <property name="multicastGroup" value="228.10.10.157"/>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>
I built my own Docker image with my custom config, based on the official Dockerfile, except that I changed the base image to: FROM arm32v7/openjdk:8-jre-alpine
The trouble begins when I try to start my image... I get warnings such as:
Nodes started on local machine require more than 80% of physical RAM what can lead to significant slowdown due to swapping (please decrease JVM heap size, data region size or checkpoint buffer size) [required=1300MB, available=924MB]
And then several errors like:
[SEVERE][tcp-disco-msg-worker-[crd]-#2-#46][G] Blocked system-critical thread has been detected. This can lead to cluster-wide undefined behaviour [workerName=disco-notifier-worker, threadName=disco-notifier-worker-#45, blockedFor=11s]
After that, it's impossible for me to activate the cluster (through REST or the control.sh script) or to query it.
If someone has a config file that works on a Raspberry Pi, I'm really interested!
EDIT:
I have tried @alamar's suggestion (-Xmx384m and a checkpoint page buffer size of 20M) but I still get these errors when activating:
[SEVERE][rest-#68][GridTcpRestProtocol] Failed to process client request [ses=GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=90 lim=90 cap=8192], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-1, igniteInstanceName=null, finished=false, heartbeatTs=1607710531775, hashCode=9092883, interrupted=false, runner=grid-nio-worker-tcp-rest-1-#39]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, outboundMessagesQueueSizeMetric=null, super=GridNioSessionImpl [locAddr=/127.0.0.1:11211, rmtAddr=/127.0.0.1:59558, createTime=1607710503680, closeTime=1607710511719, bytesSent=2, bytesRcvd=96, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1607710533267, lastSndTime=1607710503680, lastRcvTime=1607710511719, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [marsh=JdkMarshaller [clsFilter=o.a.i.marshaller.MarshallerUtils$1#1eb05], routerClient=false], directMode=false]], accepted=true, markedForClose=true]], msg=GridClientAuthenticationRequest [cred=SecurityCredentials [login=ignite], super=GridClientAbstractMessage [reqId=1, id=b2d7025f-e4a0-4ab7-8c3e-92e2c1c4aea9, destId=null, super=o.a.i.i.processors.rest.client.message.GridClientAuthenticationRequest#18dd064]]]
class org.apache.ignite.IgniteCheckedException: Failed to send message (connection was closed): GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=90 lim=90 cap=8192], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-1, igniteInstanceName=null, finished=false, heartbeatTs=1607710531775, hashCode=9092883, interrupted=false, runner=grid-nio-worker-tcp-rest-1-#39]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, outboundMessagesQueueSizeMetric=null, super=GridNioSessionImpl [locAddr=/127.0.0.1:11211, rmtAddr=/127.0.0.1:59558, createTime=1607710503680, closeTime=1607710511719, bytesSent=2, bytesRcvd=96, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1607710533267, lastSndTime=1607710503680, lastRcvTime=1607710511719, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [marsh=JdkMarshaller [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1#1eb05], routerClient=false], directMode=false]], accepted=true, markedForClose=true]]
at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7589)
at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:260)
at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:172)
at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1$1.apply(GridTcpRestNioListener.java:296)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1$1.apply(GridTcpRestNioListener.java:293)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
at org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:354)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1.apply(GridTcpRestNioListener.java:293)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1.apply(GridTcpRestNioListener.java:261)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
at org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:347)
at org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:335)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:511)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:490)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:467)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2$1.apply(GridRestProcessor.java:187)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2$1.apply(GridRestProcessor.java:184)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
at org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:354)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:184)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Failed to send message (connection was closed): GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=90 lim=90 cap=8192], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-1, igniteInstanceName=null, finished=false, heartbeatTs=1607710531775, hashCode=9092883, interrupted=false, runner=grid-nio-worker-tcp-rest-1-#39]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, outboundMessagesQueueSizeMetric=null, super=GridNioSessionImpl [locAddr=/127.0.0.1:11211, rmtAddr=/127.0.0.1:59558, createTime=1607710503680, closeTime=1607710511719, bytesSent=2, bytesRcvd=96, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1607710533267, lastSndTime=1607710503680, lastRcvTime=1607710511719, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [marsh=JdkMarshaller [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1#1eb05], routerClient=false], directMode=false]], accepted=true, markedForClose=true]]
at org.apache.ignite.internal.util.nio.GridNioServer.send0(GridNioServer.java:642)
at org.apache.ignite.internal.util.nio.GridNioServer.send(GridNioServer.java:583)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onSessionWrite(GridNioServer.java:3693)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionWrite(GridNioFilterAdapter.java:121)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onSessionWrite(GridNioCodecFilter.java:96)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionWrite(GridNioFilterAdapter.java:121)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onSessionWrite(GridNioFilterChain.java:269)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onSessionWrite(GridNioFilterChain.java:192)
at org.apache.ignite.internal.util.nio.GridNioSessionImpl.send(GridNioSessionImpl.java:117)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1.apply(GridTcpRestNioListener.java:290)
... 16 more
Result of "control.sh --set-state ACTIVE" :
Control utility [ver. 2.10.0-SNAPSHOT#20201013-sha1:a2fa7ec3]
2020 Copyright(C) Apache Software Foundation
User: root
Time: 2020-12-11T18:19:04.421
This cluster requires authentication.
Connection to cluster failed. Latest topology update failed.
Command [SET-STATE] finished with code: 2
Control utility has completed execution at: 2020-12-11T18:19:56.559
Execution time: 52138 ms
Thank you!
People have occasionally got Ignite running on a Raspberry Pi.
In your case, please try decreasing the JVM's Xmx (-Xmx384m will be OK) and also specify the checkpoint page buffer size for the persistent region explicitly (20M should be OK in your case).
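A rough sketch of that persistent region configured programmatically (in the Spring XML from the question, the same setting would be a checkpointPageBufferSize property on the DataRegionConfiguration bean; the 20 MB value just follows the suggestion here):
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Persistent region capped at 100 MB with an explicit 20 MB checkpoint page buffer
DataRegionConfiguration persistRegion = new DataRegionConfiguration()
        .setName("persistence_region")
        .setMaxSize(100L * 1024 * 1024)
        .setPersistenceEnabled(true)
        .setCheckpointPageBufferSize(20L * 1024 * 1024);

IgniteConfiguration cfg = new IgniteConfiguration()
        .setDataStorageConfiguration(new DataStorageConfiguration()
                .setDataRegionConfigurations(persistRegion));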
If you still see "threads blocked" exceptions, please share the complete log. You may use the Apache Ignite user list for that. Also, please describe what happens when you try to activate.
There are a couple of things here.
First, I don't think you needed to update the Dockerfile. A Pi 3 has 64-bit cores, so the default should work. I created this one and it works fine on a Pi 4.
Second, between your JVM and your data region, you're allocating more memory than your Pi has:
Nodes started on local machine require more than 80% of physical RAM what can lead to significant slowdown due to swapping (please decrease JVM heap size, data region size or checkpoint buffer size) [required=1300MB, available=924MB]
You only appear to have 150MB for off-heap, so I wonder how big your Java heap is? 8GB should be plenty.
If your Pi is swapping to disk, that might be causing the blocked system threads.
Finally, since you enabled authentication, you need to supply a username and password when you try to activate the cluster. The default appears to be ignite/ignite.
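For example, with control.sh (assuming the credentials are still the defaults):
./control.sh --user ignite --password ignite --set-state ACTIVE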
How can I set a Quartz cron trigger to run at one specific date and time, only once?
E.g. run something at 12:30 pm on 2017-06-30 and never run it again.
If you want to achieve this using a CronTrigger, try something like the following:
<bean id="newTrigger" class="org.springframework.scheduling.quartz.CronTriggerFactoryBean">
<property name="jobDetail" ref="oneTimeJob"/>
<property name="cronExpression" value="0 30 12 30 6 ? 2017"/>
</bean>
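The same one-shot schedule with the plain Quartz 2.x builder API would look roughly like this (trigger and job names are placeholders):
import static org.quartz.CronScheduleBuilder.cronSchedule;
import static org.quartz.TriggerBuilder.newTrigger;

import org.quartz.Trigger;

// Fires once: 12:30:00 on 2017-06-30
Trigger oneShotTrigger = newTrigger()
        .withIdentity("oneTimeTrigger", "group1")
        .withSchedule(cronSchedule("0 30 12 30 6 ? 2017"))
        .forJob("oneTimeJob", "group1")
        .build();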
Or, as @Scary Wombat mentioned, use a SimpleTrigger:
// Requires: import static org.quartz.TriggerBuilder.newTrigger;
SimpleTrigger trigger = (SimpleTrigger) newTrigger()
        .withIdentity("trigger1", "group1")
        .startAt(myStartTime)      // some Date, e.g. 30.06.2017 12:30
        .forJob("job1", "group1")  // identify the job by name and group
        .build();
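myStartTime above is just a java.util.Date; one way to build it (a sketch using java.time, assuming the JVM's default time zone is what you want):
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.util.Date;

// 2017-06-30 12:30 in the JVM's default time zone
Date myStartTime = Date.from(
        LocalDateTime.of(2017, 6, 30, 12, 30)
                .atZone(ZoneId.systemDefault())
                .toInstant());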
I am very new to Mule.
I am facing a problem. I have a requirement to read data from a CSV file on the D:\ drive and insert the data into a PostgreSQL database using Mule, every 2 minutes.
So I have chosen Quartz.
Here is my code:
<configuration>
    <expression-language autoResolveVariables="true">
        <import class="org.mule.util.StringUtils" />
        <import class="org.mule.util.ArrayUtils" />
    </expression-language>
</configuration>

<spring:beans>
    <spring:bean id="jdbcDataSource" class=" ... your data source ... " />
</spring:beans>

<jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource">
    <jdbc:query key="insertRow"
        value="insert into my_table(col1, col2) values(#[message.payload[0]],#[message.payload[1]])" />
</jdbc:connector>

<quartz:connector name="myQuartzConnector" validateConnections="true" doc:name="Quartz">
    <receiver-threading-profile maxThreadsActive="1"/>
</quartz:connector>

<flow name="QuartzFlow" processingStrategy="synchronous">
    <quartz:inbound-endpoint doc:name="Quartz"
        jobName="CronJobSchedule" cronExpression="0 0/2 * * * ?"
        connector-ref="myQuartzConnector" repeatCount="1">
        <quartz:event-generator-job>
            <quartz:payload>quartzSchedular started</quartz:payload>
        </quartz:event-generator-job>
    </quartz:inbound-endpoint>
    <flow-ref name="csvFileToDatabase" doc:name="Flow Reference" />
</flow>

<flow name="csvFileToDatabase">
    <file:inbound-endpoint path="/tmp/mule/inbox"
        pollingFrequency="5000" moveToDirectory="/tmp/mule/processed">
        <file:filename-wildcard-filter pattern="*.csv" />
    </file:inbound-endpoint>
    <!-- Load the whole file into RAM - won't work for big files! -->
    <file:file-to-string-transformer />
    <!-- Split each row, dropping the first one (header) -->
    <splitter
        expression="#[rows=StringUtils.split(message.payload, '\n\r');ArrayUtils.subarray(rows,1,rows.size())]" />
    <!-- Transform each CSV row into an array -->
    <expression-transformer expression="#[StringUtils.split(message.payload, ',')]" />
    <jdbc:outbound-endpoint queryKey="insertRow" />
</flow>
This is working fine, but as soon as I place a CSV file in D:\, Mule reads it and writes to the database immediately, without waiting for the 2-minute Quartz schedule.
Even if I set the file connector's polling frequency to pollingFrequency="120000" (2 minutes = 2*60*1000 milliseconds), Quartz still does not wait for 2 minutes: within the 2-minute window, placing a CSV file in D:\ makes Mule read it and write to the database right away.
What I want is that even if a CSV file is already sitting in D:\, Quartz should trigger the processing only every 2 minutes. Which changes do I need to make here?
Could anyone please help me?
It is because the flow csvFileToDatabase has its own inbound-endpoint that will execute regardless of the Quartz flow. You have the file:inbound-endpoint set to poll every 5000 milliseconds.
There's no need to have both a file inbound-endpoint and Quartz scheduling the flow.
Either change the file:inbound-endpoint polling frequency or, if you really want to use Quartz to trigger the flow, take a look at the Mule Requester module, which allows you to use an inbound-endpoint mid-flow: https://github.com/mulesoft/mule-module-requester
I run Tomcat with -Duser.timezone=UTC. However, Quartz scheduler 2.2.1 seems to run in Europe/Prague, which is my OS time zone.
Is there a way to run Quartz in a custom time zone, or to determine which time zone Quartz is using?
If not, is there a way to determine the OS time zone programmatically?
Quartz by default will use the default system locale and time zone, and it is not programmed to pick up the user.timezone property you are providing to your app. Remember also that this only applies to a CronTrigger and not a SimpleTrigger.
If you are using Spring for example:
<bean id="timeZone" class="java.util.TimeZone" factory-method="getTimeZone">
<constructor-arg value="GMT" />
</bean>
<bean id="yourTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean">
<property name="jobDetail" ref="yourJob" />
<property name="cronExpression" value="0 0 0/1 * * ?" />
<property name="timeZone" ref="timeZone" />
</bean>
If you are using plain Java:
Trigger yourTrigger = TriggerBuilder
        .newTrigger()
        .withIdentity("TRIGGER-ID", "TRIGGER-GROUP")
        .withSchedule(CronScheduleBuilder
                .cronSchedule("0 0 0/1 * * ?")
                .inTimeZone(TimeZone.getTimeZone("GMT")))
        .build();
If you are using the XML configuration file, e.g. the quartz-config.xml from mkyong's Example To Run Multiple Jobs In Quartz, you can configure the time zone in the time-zone element:
<schedule>
    <job>
        <name>JobA</name>
        <group>GroupDummy</group>
        <description>This is Job A</description>
        <job-class>com.mkyong.quartz.JobA</job-class>
    </job>
    <trigger>
        <cron>
            <name>dummyTriggerNameA</name>
            <job-name>JobA</job-name>
            <job-group>GroupDummy</job-group>
            <!-- It will run every 5 seconds -->
            <cron-expression>0/5 * * * * ?</cron-expression>
            <time-zone>UTC</time-zone>
        </cron>
    </trigger>
</schedule>
See also Java's java.util.TimeZone for the IDs of the various time zones.
You can call setTimeZone() to set the time zone of your choosing for anything in Quartz that inherits BaseCalendar.
Java's TimeZone class has a getDefault() method which should aid in determining the OS time zone programmatically.
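For instance (a minimal sketch):
import java.util.TimeZone;

import org.quartz.impl.calendar.BaseCalendar;

// Which time zone is the JVM/OS actually using?
TimeZone osDefault = TimeZone.getDefault();
System.out.println(osDefault.getID());   // e.g. "Europe/Prague"

// Pin a Quartz calendar to an explicit zone
BaseCalendar quartzCalendar = new BaseCalendar();
quartzCalendar.setTimeZone(TimeZone.getTimeZone("UTC"));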
I am working on a project combining Hibernate and Spring in a Java web application, running on Tomcat in a Linux environment. Due to the MySQL 8-hour timeout problem, we want to use C3P0 to manage a connection pool for our MySQL database.
But when we use it, numerous threads get created. I noticed this because on each request I print all of the threads together with a memory status, which shows the memory increasing and threads of this kind:
name: C3P0PooledConnectionPoolManager[identityToken->1hged7o8r13kpj7n1h3ycia|39c446]-HelperThread-#0 daemon: true group! main groupParent: system alive: true interrupted: false
name: C3P0PooledConnectionPoolManager[identityToken->1hged7o8r13kpj7n1h3ycia|17ec0e8]-AdminTaskTimer daemon: true group! main groupParent: system alive: true interrupted: false
It can produce more than 500 threads like these after enough time.
Here is my Hibernate.cfg.xml:
<property name="connection.provider_class">
org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.idle_test_period">5</property>
<property name="hibernate.c3p0.max_size">100</property>
<property name="hibernate.c3p0.max_statements">100</property>
<property name="hibernate.c3p0.min_size">10</property>
<property name="hibernate.c3p0.timeout">5</property>
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
<property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="hibernate.connection.url">jdbc:mysql://localhost:3306/myBase</property>
<property name="hibernate.connection.username">root</property>
<property name="hibernate.connection.password"></property>
<property name="hibernate.hbm2ddl.auto">update</property>
<property name="hibernate.default_schema">myProject</property>
<property name="hibernate.query.factory_class">org.hibernate.hql.classic.ClassicQueryTranslatorFactory</property>
<property name="show_sql">false</property>
<property name="cache.provider_class">
org.hibernate.cache.NoCacheProvider
</property>
I also tried to add a C3P0 properties file, but apart from reducing the number of helper threads, it does not get rid of the unused threads:
c3p0.maxStatements=5
c3p0.maxIdleTime=10
c3p0.numHelperThreads=1
c3p0.testConnectionOnCheckout=true
c3p0.preferredTestQuery=SELECT 1
c3p0.initialPoolSize=1
c3p0.minPoolSize=1
c3p0.maxPoolSize=10
c3p0.acquireIncrement=1
c3p0.idleConnectionTestPeriod=1
Does anyone have an idea of why this happens and how to solve this problem?
Thanks a lot.
If you are seeing a multiplication of c3p0 helper and timer threads, you are somehow creating a multitude of c3p0 DataSources when you want there to be just one. Sometimes this happens if you are hot-reloading your app but forgetting to close() your old c3p0 DataSource when you recycle.
Effectively, it looks like you are "leaking" DataSources. You need to figure out why/where this is happening. For some clues, check your logs for c3p0 DataSource initialization messages at INFO level. Search for the string "Initializing c3p0 pool", for example.
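If you do end up owning the pool yourself (rather than letting Hibernate's C3P0ConnectionProvider manage it, as in your config), a minimal sketch of building exactly one pool and closing it on shutdown would look like this (the JDBC URL is just the one from your hibernate.cfg.xml):
import com.mchange.v2.c3p0.ComboPooledDataSource;

// Build exactly one pool for the whole application...
ComboPooledDataSource pool = new ComboPooledDataSource();
pool.setJdbcUrl("jdbc:mysql://localhost:3306/myBase");

// ...and close it when the app is undeployed (e.g. in a
// ServletContextListener.contextDestroyed), which releases the
// HelperThread-s and the AdminTaskTimer you are seeing pile up.
pool.close();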
Good luck!
OK, I found a combination of properties that solves my problem, keeping in mind that I don't need a lot of connections at a time:
c3p0.maxStatements=5
c3p0.maxIdleTime=10
c3p0.numHelperThreads=3
c3p0.testConnectionOnCheckout=true
c3p0.preferredTestQuery=SELECT 1
c3p0.initialPoolSize=1
c3p0.minPoolSize=1
c3p0.maxPoolSize=1
c3p0.acquireIncrement=1
c3p0.idleConnectionTestPeriod=1
c3p0.maxAdministrativeTaskTime=1
Thanks to everyone
I would expect it to create a number of threads proportional to c3p0.minPoolSize and c3p0.maxPoolSize, and your maximum is 10.
http://www.mchange.com/projects/c3p0/#other_ds_configuration
"numHelperThreads and maxAdministrativeTaskTime help to configure the behavior of DataSource thread pools. By default, each DataSource has only three associated helper threads. If performance seems to drag under heavy load, or if you observe via JMX or direct inspection of a PooledDataSource, that the number of "pending tasks" is usually greater than zero, try increasing numHelperThreads. maxAdministrativeTaskTime may be useful for users experiencing tasks that hang indefinitely and "APPARENT DEADLOCK" messages. (See Appendix A for more.) "
numHelperThreads defines how many threads per DataSource are used, therefore indeed you will have 10 threads with numHelperThreads=1.
The only way to make sure C3P0 consumes only one thread is to set c3p0.minPoolSize and c3p0.maxPoolSize to 1, but this defeats the purpose of connection pooling.