I'm currently working on a project involving a distributed Apache Ignite database running on a cluster of Raspberry Pis.
I want to have two separate data regions, one of them with persistence enabled. Here is my custom config:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration" id="ignite.cfg">
        <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
                <property name="defaultDataRegionConfiguration">
                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                        <property name="name" value="default_region"/>
                        <!-- Set the region's max size to 50 MB -->
                        <property name="maxSize" value="#{50 * 1024 * 1024}"/>
                        <!-- Specify an eviction policy (Random-LRU) that starts evicting pages
                             once the region reaches 90% of its max size -->
                        <property name="pageEvictionMode" value="RANDOM_LRU"/>
                        <property name="evictionThreshold" value="0.9"/>
                    </bean>
                </property>
                <property name="dataRegionConfigurations">
                    <list>
                        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                            <property name="name" value="persistence_region"/>
                            <!-- Set the region's max size to 100 MB -->
                            <property name="maxSize" value="#{100 * 1024 * 1024}"/>
                            <!-- Enable persistent data storage -->
                            <property name="persistenceEnabled" value="true"/>
                        </bean>
                    </list>
                </property>
            </bean>
        </property>
        <property name="authenticationEnabled" value="true"/>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
                        <property name="multicastGroup" value="228.10.10.157"/>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>
I built my own Docker image with my custom configuration, based on the official Dockerfile, except that I changed the base image to: FROM arm32v7/openjdk:8-jre-alpine
The trouble begins when I try to start my image... I get warnings such as:
Nodes started on local machine require more than 80% of physical RAM what can lead to significant slowdown due to swapping (please decrease JVM heap size, data region size or checkpoint buffer size) [required=1300MB, available=924MB]
and then several errors like:
[SEVERE][tcp-disco-msg-worker-[crd]-#2-#46][G] Blocked system-critical thread has been detected. This can lead to cluster-wide undefined behaviour [workerName=disco-notifier-worker, threadName=disco-notifier-worker-#45, blockedFor=11s]
After that, it's impossible for me to activate the cluster (through REST or the control.sh script) or to query it.
If someone has a config file that works on a Raspberry Pi, I'm really interested!
EDIT:
I have tried @alamar's suggestion (-Xmx384m and a checkpoint page buffer size of 20 MB), but I still get these errors when activating:
[SEVERE][rest-#68][GridTcpRestProtocol] Failed to process client request [ses=GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=90 lim=90 cap=8192], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-1, igniteInstanceName=null, finished=false, heartbeatTs=1607710531775, hashCode=9092883, interrupted=false, runner=grid-nio-worker-tcp-rest-1-#39]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, outboundMessagesQueueSizeMetric=null, super=GridNioSessionImpl [locAddr=/127.0.0.1:11211, rmtAddr=/127.0.0.1:59558, createTime=1607710503680, closeTime=1607710511719, bytesSent=2, bytesRcvd=96, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1607710533267, lastSndTime=1607710503680, lastRcvTime=1607710511719, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [marsh=JdkMarshaller [clsFilter=o.a.i.marshaller.MarshallerUtils$1#1eb05], routerClient=false], directMode=false]], accepted=true, markedForClose=true]], msg=GridClientAuthenticationRequest [cred=SecurityCredentials [login=ignite], super=GridClientAbstractMessage [reqId=1, id=b2d7025f-e4a0-4ab7-8c3e-92e2c1c4aea9, destId=null, super=o.a.i.i.processors.rest.client.message.GridClientAuthenticationRequest#18dd064]]]
class org.apache.ignite.IgniteCheckedException: Failed to send message (connection was closed): GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=90 lim=90 cap=8192], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-1, igniteInstanceName=null, finished=false, heartbeatTs=1607710531775, hashCode=9092883, interrupted=false, runner=grid-nio-worker-tcp-rest-1-#39]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, outboundMessagesQueueSizeMetric=null, super=GridNioSessionImpl [locAddr=/127.0.0.1:11211, rmtAddr=/127.0.0.1:59558, createTime=1607710503680, closeTime=1607710511719, bytesSent=2, bytesRcvd=96, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1607710533267, lastSndTime=1607710503680, lastRcvTime=1607710511719, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [marsh=JdkMarshaller [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1#1eb05], routerClient=false], directMode=false]], accepted=true, markedForClose=true]]
at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7589)
at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:260)
at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:172)
at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1$1.apply(GridTcpRestNioListener.java:296)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1$1.apply(GridTcpRestNioListener.java:293)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
at org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:354)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1.apply(GridTcpRestNioListener.java:293)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1.apply(GridTcpRestNioListener.java:261)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
at org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:347)
at org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:335)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:511)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:490)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:467)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2$1.apply(GridRestProcessor.java:187)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2$1.apply(GridRestProcessor.java:184)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
at org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:354)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2.body(GridRestProcessor.java:184)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Failed to send message (connection was closed): GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=90 lim=90 cap=8192], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-1, igniteInstanceName=null, finished=false, heartbeatTs=1607710531775, hashCode=9092883, interrupted=false, runner=grid-nio-worker-tcp-rest-1-#39]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, closeSocket=true, outboundMessagesQueueSizeMetric=null, super=GridNioSessionImpl [locAddr=/127.0.0.1:11211, rmtAddr=/127.0.0.1:59558, createTime=1607710503680, closeTime=1607710511719, bytesSent=2, bytesRcvd=96, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1607710533267, lastSndTime=1607710503680, lastRcvTime=1607710511719, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [marsh=JdkMarshaller [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1#1eb05], routerClient=false], directMode=false]], accepted=true, markedForClose=true]]
at org.apache.ignite.internal.util.nio.GridNioServer.send0(GridNioServer.java:642)
at org.apache.ignite.internal.util.nio.GridNioServer.send(GridNioServer.java:583)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onSessionWrite(GridNioServer.java:3693)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionWrite(GridNioFilterAdapter.java:121)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onSessionWrite(GridNioCodecFilter.java:96)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionWrite(GridNioFilterAdapter.java:121)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onSessionWrite(GridNioFilterChain.java:269)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onSessionWrite(GridNioFilterChain.java:192)
at org.apache.ignite.internal.util.nio.GridNioSessionImpl.send(GridNioSessionImpl.java:117)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1.apply(GridTcpRestNioListener.java:290)
... 16 more
Result of "control.sh --set-state ACTIVE":
Control utility [ver. 2.10.0-SNAPSHOT#20201013-sha1:a2fa7ec3]
2020 Copyright(C) Apache Software Foundation
User: root
Time: 2020-12-11T18:19:04.421
This cluster requires authentication.
Connection to cluster failed. Latest topology update failed.
Command [SET-STATE] finished with code: 2
Control utility has completed execution at: 2020-12-11T18:19:56.559
Execution time: 52138 ms
Thank you!
People have occasionally started Ignite on a Raspberry Pi.
In your case, please try decreasing the JVM heap (-Xmx384m should be OK) and also specify the checkpoint page buffer size for the persistent region explicitly (20 MB should be OK in your case).
If you still see "blocked system-critical thread" exceptions, please share the complete log; you can use the Apache Ignite user list for that. Also, please describe what happens when you try to activate.
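For reference, applied to the persistence_region bean from your config, that suggestion would look something like this (checkpointPageBufferSize is the relevant DataRegionConfiguration property; the 20 MB value is just the one suggested above, not a tuned figure):

<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="persistence_region"/>
    <property name="maxSize" value="#{100 * 1024 * 1024}"/>
    <property name="persistenceEnabled" value="true"/>
    <!-- Explicit checkpoint page buffer size of 20 MB, as suggested above -->
    <property name="checkpointPageBufferSize" value="#{20 * 1024 * 1024}"/>
</bean>

Combine this with -Xmx384m on the JVM so that heap + data regions + checkpoint buffer stay well under the 924 MB your Pi reports as available.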
There are a couple of things here.
First, I don't think you needed to update the Dockerfile. A Pi 3 has 64-bit cores, so the default should work. I created this one and it works fine on a Pi 4.
Second, between your JVM heap and your data regions, you're allocating more memory than your Pi has:
Nodes started on local machine require more than 80% of physical RAM what can lead to significant slowdown due to swapping (please decrease JVM heap size, data region size or checkpoint buffer size) [required=1300MB, available=924MB]
You only appear to have 150 MB for off-heap, so I wonder how big your Java heap is? 8 GB should be plenty.
If your Pi is swapping to disk, that might be causing the blocked system threads.
Finally, since you enabled authentication, you need to supply a username and password when you try to activate the cluster. The default appears to be ignite/ignite.
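For example, assuming the default credentials have not been changed, activation with the control script would look like this (control.sh accepts --user and --password for authenticated clusters):

control.sh --set-state ACTIVE --user ignite --password ignite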
How can I set a Quartz cron trigger to run at one specific date and time, only once?
E.g.: run something at 12:30 PM on 2017-06-30 and never run it again.
If you want to achieve this using a CronTrigger, try something like the following:

<bean id="newTrigger" class="org.springframework.scheduling.quartz.CronTriggerFactoryBean">
    <property name="jobDetail" ref="oneTimeJob"/>
    <!-- 12:30 PM on 30 June 2017; the trailing year field (2017) limits it to a single firing -->
    <property name="cronExpression" value="0 30 12 30 6 ? 2017"/>
</bean>
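The trigger above references a jobDetail bean named oneTimeJob; if you don't already define one, a minimal sketch could look like this (JobDetailFactoryBean is Spring's Quartz job factory; com.example.YourJob is a placeholder for your own org.quartz.Job implementation):

<bean id="oneTimeJob" class="org.springframework.scheduling.quartz.JobDetailFactoryBean">
    <property name="jobClass" value="com.example.YourJob"/>
    <!-- keep the job registered even when no trigger currently points at it -->
    <property name="durability" value="true"/>
</bean>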
Or, as @Scary Wombat mentioned, use a SimpleTrigger:

// requires: import static org.quartz.TriggerBuilder.newTrigger;
SimpleTrigger trigger = (SimpleTrigger) newTrigger()
        .withIdentity("trigger1", "group1")
        .startAt(myStartTime)        // some java.util.Date, e.g. 30.06.2017 12:30
        .forJob("job1", "group1")    // identify the job by name and group
        .build();                    // no schedule set, so it fires exactly once at startAt
I am very new to Mule.
I am facing a problem: I need to read data from a CSV file on the D:\ drive and insert it into a PostgreSQL database using Mule, every 2 minutes.
So I chose Quartz.
Here is my code:
<configuration>
    <expression-language autoResolveVariables="true">
        <import class="org.mule.util.StringUtils" />
        <import class="org.mule.util.ArrayUtils" />
    </expression-language>
</configuration>

<spring:beans>
    <spring:bean id="jdbcDataSource" class=" ... your data source ... " />
</spring:beans>

<jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource">
    <jdbc:query key="insertRow"
        value="insert into my_table(col1, col2) values(#[message.payload[0]],#[message.payload[1]])" />
</jdbc:connector>

<quartz:connector name="myQuartzConnector" validateConnections="true" doc:name="Quartz">
    <receiver-threading-profile maxThreadsActive="1"/>
</quartz:connector>

<flow name="QuartzFlow" processingStrategy="synchronous">
    <quartz:inbound-endpoint doc:name="Quartz"
        jobName="CronJobSchedule" cronExpression="0 0/2 * * * ?"
        connector-ref="myQuartzConnector" repeatCount="1">
        <quartz:event-generator-job>
            <quartz:payload>quartzSchedular started</quartz:payload>
        </quartz:event-generator-job>
    </quartz:inbound-endpoint>
    <flow-ref name="csvFileToDatabase" doc:name="Flow Reference" />
</flow>

<flow name="csvFileToDatabase">
    <file:inbound-endpoint path="/tmp/mule/inbox"
        pollingFrequency="5000" moveToDirectory="/tmp/mule/processed">
        <file:filename-wildcard-filter pattern="*.csv" />
    </file:inbound-endpoint>
    <!-- Load the whole file into RAM - won't work for big files! -->
    <file:file-to-string-transformer />
    <!-- Split each row, dropping the first one (header) -->
    <splitter
        expression="#[rows=StringUtils.split(message.payload, '\n\r');ArrayUtils.subarray(rows,1,rows.size())]" />
    <!-- Transform each CSV row into an array -->
    <expression-transformer expression="#[StringUtils.split(message.payload, ',')]" />
    <jdbc:outbound-endpoint queryKey="insertRow" />
</flow>
This works fine, but if I put a CSV file in D:\, Mule reads it and writes it to the database immediately, without waiting for the 2-minute Quartz schedule.
Even if I set the file connector's polling frequency to pollingFrequency="120000" (2 min = 2*60*1000 ms), Quartz still does not gate the flow: as soon as I place a CSV file in D:\ within the 2-minute window, Mule reads it and writes it to the database.
What I want is for Mule to process the CSV file only on the 2-minute Quartz schedule, even if the file is already sitting in D:\. What changes do I need to make here?
Could anyone please help me?
It is because the flow csvFileToDatabase has its own inbound-endpoint that executes regardless of the Quartz flow. You have the file:inbound-endpoint set to poll every 5000 milliseconds.
There's no need to have both an inbound-endpoint and Quartz scheduling the flow.
Either change the file:inbound-endpoint polling frequency (see the sketch below), or, if you really want to use Quartz to trigger the flow, take a look at the Mule requester module, which allows you to use an inbound-endpoint mid-flow: https://github.com/mulesoft/mule-module-requester
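If the 2-minute requirement is all you need, the simpler option is to drop the Quartz flow entirely and let the file connector itself poll every 120000 ms. A minimal sketch reusing the path and names from the question (the transformers and JDBC endpoint stay exactly as you already have them):

<flow name="csvFileToDatabase" processingStrategy="synchronous">
    <file:inbound-endpoint path="/tmp/mule/inbox"
        pollingFrequency="120000" moveToDirectory="/tmp/mule/processed">
        <file:filename-wildcard-filter pattern="*.csv" />
    </file:inbound-endpoint>
    <!-- same file-to-string transformer, splitter, expression-transformer
         and jdbc:outbound-endpoint as in your existing flow -->
</flow>

With this, a file dropped into the directory mid-interval is only picked up at the next poll, which is the behaviour you describe wanting.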
I need to schedule a task that will run every day at 7:00 PM in Java using Quartz. Can someone point me to a good tutorial on how to use the Quartz scheduler in Java?
Take a look at the Quartz API documentation.
Edit:
Working URL for the Quartz API documentation
To run every day at 7 PM, configure the following in your quartz.xml. Here "name" can be whatever you want it to be, and "job-name" in the trigger must be the same as the name you gave between the job tags.
<job>
    <name>ScheduleTracer</name>
    <!-- fully qualified class name of your org.quartz.Job implementation -->
    <job-class>your.package.name.YourJobClass</job-class>
</job>

<trigger>
    <cron>
        <name>server1</name>
        <job-name>ScheduleTracer</job-name>
        <cron-expression>0 0 19 * * ?</cron-expression>
    </cron>
</trigger>
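If you go the XML route, note that these <job> and <trigger> elements have to sit inside a job-scheduling-data file that Quartz loads through its XML scheduling-data plugin. A minimal sketch of the wrapper, assuming Quartz 2.x (the plugin itself is enabled in quartz.properties, and the file name is whatever you configure there):

<?xml version="1.0" encoding="UTF-8"?>
<job-scheduling-data xmlns="http://www.quartz-scheduler.org/xml/JobSchedulingData"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.quartz-scheduler.org/xml/JobSchedulingData
            http://www.quartz-scheduler.org/xml/job_scheduling_data_2_0.xsd"
        version="2.0">
    <schedule>
        <!-- the <job> and <trigger> elements shown above go here -->
    </schedule>
</job-scheduling-data>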
I have a simple quartz trigger running in Spring 2.5.6-SEC01.
The trigger definition looks like this:
<bean id="AdvicesCronTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean">
    <property name="jobDetail" ref="AdvicesQuartzJob"/>
    <property name="cronExpression" value="0 20/15 * * * ?"/>
</bean>
This is my scheduler factory:
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
    <property name="triggers">
        <list>
            <ref bean="AdvicesCronTrigger"/>
        </list>
    </property>
</bean>
I've read this documentation about firing CRON triggers from Quartz. This is an excerpt:
CronTrigger Example 1 - an expression to create a trigger that simply fires every 5 minutes:
"0 0/5 * * * ?"
Today I fired my program at 9:40. This is my execution output:
Edit: Bobby is right in his observation. I've updated my execution log:
2010-02-11 09:50:00,000 INFO - START
2010-02-11 10:20:00,000 INFO - START
2010-02-11 10:35:00,000 INFO - START
2010-02-11 10:50:00,000 INFO - START
2010-02-11 11:20:00,000 INFO - START
2010-02-11 11:35:00,000 INFO - START
I expected this trigger to fire at:
9:50
10:05
10:20
10:35
...
How can I accomplish this? Which cron expression should I use?
The 20/15 part of the cron expression means "every 15 minutes, starting at the 20th minute of the hour", so it always fires at minutes 20, 35 and 50 of each hour, which matches your log.
I have never tested it, but maybe an expression like this one is what you are looking for:
0 */15 * * * ?
(This fires at minutes 0, 15, 30 and 45 of every hour.)
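Alternatively, if what you really want is "every 15 minutes counted from when the trigger first fires" rather than fixed wall-clock minutes, a SimpleTrigger fits better than a cron expression. A minimal sketch against the same job bean from the question (SimpleTriggerBean is the Spring 2.5-era factory; the startDelay value is an assumption, adjust it to taste):

<bean id="AdvicesSimpleTrigger" class="org.springframework.scheduling.quartz.SimpleTriggerBean">
    <property name="jobDetail" ref="AdvicesQuartzJob"/>
    <!-- first run 10 minutes after startup (assumed value) -->
    <property name="startDelay" value="600000"/>
    <!-- then repeat every 15 minutes -->
    <property name="repeatInterval" value="900000"/>
</bean>

Reference this bean from the SchedulerFactoryBean's triggers list in place of (or alongside) AdvicesCronTrigger.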
Not to give you an unrelated answer, but sometimes it makes sense to use a hosted service instead of doing it yourself :) Take a look at http://www.cronservice.co.uk/new/, http://scheduler.codeeffects.com, or http://www.webbasedcron.com/