Quartz Scheduler NOT STARTED - java

I am creating a Spring Boot application that uses Quartz. This is my quartz.yaml file:
org:
  quartz:
    dataSource:
      mySql:
        maxIdleTime: '60'
        idleConnectionValidationSeconds: '50'
        password: test2
        user: test2
        URL: jdbc:mysql://localhost:3306/schedules?useSSL=false
        driver: com.mysql.jdbc.Driver
        maxConnections: '10'
        validationQuery: select 0 from dual
    plugin:
      jobHistory:
        class: org.quartz.plugins.history.LoggingJobHistoryPlugin
        jobToBeFiredMessage: 'Job [{1}.{0}] to be fired by trigger [{4}.{3}], re-fire: {7}'
        jobFailedMessage: 'Job [{1}.{0}] execution failed with exception: {8}'
        jobWasVetoedMessage: 'Job [{1}.{0}] was vetoed. It was to be fired by trigger [{4}.{3}] at: {2, date, dd-MM-yyyy HH:mm:ss.SSS}'
        jobSuccessMessage: 'Job [{1}.{0}] execution complete and reports: {8}'
      triggerHistory:
        class: org.quartz.plugins.history.LoggingTriggerHistoryPlugin
        triggerFiredMessage: 'Trigger [{1}.{0}] fired job [{6}.{5}] scheduled at: {2, date, dd-MM-yyyy HH:mm:ss.SSS}, next scheduled at: {3, date, dd-MM-yyyy HH:mm:ss.SSS}'
        triggerCompleteMessage: 'Trigger [{1}.{0}] completed firing job [{6}.{5}] with resulting trigger instruction code: {9}. Next scheduled at: {3, date, dd-MM-yyyy HH:mm:ss.SSS}'
        triggerMisfiredMessage: 'Trigger [{1}.{0}] misfired job [{6}.{5}]. Should have fired at: {3, date, dd-MM-yyyy HH:mm:ss.SSS}'
    jobStore:
      maxMisfiresToHandleAtATime: '10'
      dataSource: mySql
      isClustered: 'false'
      class: org.quartz.impl.jdbcjobstore.JobStoreTX
      useProperties: 'true'
      misfireThreshold: '60000'
      driverDelegateClass: org.quartz.impl.jdbcjobstore.StdJDBCDelegate
      tablePrefix: QRTZ_
    threadPool:
      threadPriority: '5'
      class: org.quartz.simpl.SimpleThreadPool
      threadCount: '4'
    scheduler:
      instanceId: AUTO
      instanceName: SampleJobScheduler
      idleWaitTime: '10000'
I am trying to use a SQL database, but Quartz keeps using the RAM job store. This is the output I am getting:
Scheduler class: 'org.quartz.core.QuartzScheduler' - running
locally. NOT STARTED. Currently in standby mode. Number of jobs
executed: 0 Using thread pool 'org.quartz.simpl.SimpleThreadPool' -
with 10 threads. Using job-store 'org.quartz.simpl.RAMJobStore' -
which does not support persistence. and is not clustered.
This is my configuration:
@Configuration
public class Config {

    @Value("${library.file-path.quartz}")
    Resource quartsPath;

    @Autowired
    private ApplicationContext applicationContext;

    @Bean
    public SchedulerFactoryBean scheduler(JobFactory factory) throws IOException {
        SchedulerFactoryBean schedulerFactory = new SchedulerFactoryBean();
        schedulerFactory.setQuartzProperties(quartzProperties());
        schedulerFactory.setJobFactory(factory);
        return schedulerFactory;
    }

    @Bean
    public SpringBeanJobFactory springBeanJobFactory() {
        AutoWiringSpringBeanJobFactory jobFactory = new AutoWiringSpringBeanJobFactory();
        jobFactory.setApplicationContext(applicationContext);
        return jobFactory;
    }

    public Properties quartzProperties() throws IOException {
        PropertiesFactoryBean propertiesFactoryBean = new PropertiesFactoryBean();
        propertiesFactoryBean.setLocation(quartsPath);
        propertiesFactoryBean.afterPropertiesSet();
        return propertiesFactoryBean.getObject();
    }
}
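Worth noting: quartz.yaml is a YAML file, but PropertiesFactoryBean only reads .properties (or .xml) files, so the nested keys above most likely never reach Quartz as org.quartz.jobStore.class and friends, which would explain the fallback to RAMJobStore with the default 10 threads. If the YAML format should stay, a sketch of quartzProperties() using Spring's YamlPropertiesFactoryBean (which flattens YAML into dotted keys) could look like this, reusing the quartsPath resource from the class above:
import java.util.Properties;
import org.springframework.beans.factory.config.YamlPropertiesFactoryBean;

// Flattens quartz.yaml into the dotted keys Quartz expects,
// e.g. org.quartz.jobStore.class, org.quartz.dataSource.mySql.URL, ...
public Properties quartzProperties() {
    YamlPropertiesFactoryBean yaml = new YamlPropertiesFactoryBean();
    yaml.setResources(quartsPath); // the Resource injected via @Value above
    return yaml.getObject();
}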

I got it working by setting the data source on the SchedulerFactoryBean.
Here, dataSource is the DataSource used by the application, injected by Spring into this configuration class:
@Bean
public SchedulerFactoryBean schedulerFactoryBean() throws IOException {
    SchedulerFactoryBean factory = new SchedulerFactoryBean();
    factory.setJobFactory(springBeanJobFactory());
    factory.setQuartzProperties(quartzProperties());
    factory.setDataSource(dataSource);
    return factory;
}
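For completeness, dataSource is assumed to be the application's own DataSource injected into the same configuration class, along these lines:
// The application's primary DataSource (for Spring Boot, auto-configured
// from spring.datasource.*), reused by Quartz via setDataSource(...).
@Autowired
private DataSource dataSource;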

Related

Spring Cloud Data Flow datasources overrides spring batch app datasource

I'm setting up an instance of Spring Cloud Data Flow. I've run the following commands:
java -jar spring-cloud-dataflow-server-2.9.2.jar \
--spring.cloud.dataflow.features.streams-enabled=false \
--spring.cloud.dataflow.features.schedules-enabled=true \
--spring.datasource.url=jdbc:postgresql://localhost:5432/batch \
--spring.datasource.username=postgres \
--spring.datasource.password=postgres \
--spring.datasource.driver-class-name=org.postgresql.Driver \
--spring.datasource.initialization_mode=always
I've developed a batch job using Spring Batch to be deployed on this platform. The job uses two data sources: batch for Spring Batch and Task metadata, and app_db for my business logic. When I run the app locally, it persists metadata in batch and my business data in app_db, as expected. The problem is when I try to execute the job inside Spring Cloud Data Flow. The platform overrides my configured business-logic database and uses only the batch database, which is supposed to store metadata only.
application.yaml
spring:
  batch:
    datasource:
      url: jdbc:postgresql://localhost:5432/batch
      username: postgres
      password: postgres
  datasource:
    url: jdbc:postgresql://localhost:5432/app_db
    username: postgres
    password: postgres
DatasourceConfiguration
public class DatasourceConfiguration {

    @Bean
    @Primary
    @ConfigurationProperties("spring.datasource")
    public DataSourceProperties dataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    @Primary
    public DataSource dataSource(DataSourceProperties dataSourceProperties) {
        return dataSourceProperties.initializeDataSourceBuilder().build();
    }

    @Bean(name = "batchDataSourceProperties")
    @ConfigurationProperties("spring.batch.datasource")
    public DataSourceProperties batchDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean(name = "batchDataSource")
    public DataSource batchDataSource(
            @Qualifier("batchDataSourceProperties") DataSourceProperties batchDataSourceProperties) {
        return batchDataSourceProperties.initializeDataSourceBuilder().build();
    }
}
@SpringBootApplication
@EnableTask
@EnableBatchProcessing
public class BatchApplication {

    @Bean
    public TaskConfigurer taskConfigurer(@Qualifier("batchDataSource") DataSource dataSource) {
        return new DefaultTaskConfigurer(dataSource);
    }

    @Bean
    public BatchConfigurer batchConfigurer(@Qualifier("batchDataSource") DataSource dataSource) {
        return new DefaultBatchConfigurer(dataSource);
    }

    public static void main(String[] args) {
        SpringApplication.run(BatchApplication.class, args);
    }
}
Job
@Bean
public Job startJob(JobBuilderFactory jobBuilderFactory, DataSource dataSource) {
    try {
        // Prints which database the injected (primary) DataSource points at.
        System.out.println(dataSource.getConnection().getMetaData().getURL());
    } catch (Exception e) {
        // TODO: handle exception
    }
    // ... rest of the job definition omitted in the original snippet ...
}
When I look at the data source, jdbc:postgresql://localhost:5432/app_db is printed when the batch runs locally, and jdbc:postgresql://localhost:5432/batch is printed when the batch (task) is executed from SCDF.
I want to know how Data Flow is overriding the application's spring.datasource even though I am not passing any arguments while executing the task. Please suggest a solution to avoid the data source being overridden.
One solution I am thinking of is creating an AppDatasourceConfiguration bound to app.datasource and using that (see the sketch below). But is there a way to keep using spring.datasource without it getting overridden by SCDF?
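A sketch of that AppDatasourceConfiguration idea, binding the business DataSource to a custom app.datasource prefix so that SCDF's spring.datasource overrides no longer touch it (class, bean, and property names here are assumptions, not taken from the post):
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppDatasourceConfiguration {

    // Binds app.datasource.* (url, username, password, driver-class-name)
    // to its own properties object, outside the spring.datasource namespace.
    @Bean
    @ConfigurationProperties("app.datasource")
    public DataSourceProperties appDataSourceProperties() {
        return new DataSourceProperties();
    }

    // Business-logic DataSource built from the custom prefix; inject it
    // where needed with @Qualifier("appDataSource").
    @Bean(name = "appDataSource")
    public DataSource appDataSource(
            @Qualifier("appDataSourceProperties") DataSourceProperties properties) {
        return properties.initializeDataSourceBuilder().build();
    }
}
The corresponding application.yaml entry would then use app.datasource instead of spring.datasource for the app_db connection.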

Connecting quartz datasource to spring boot bean

I have set up a connection to our database, using a bean in spring boot. This all works correctly in our normal application.
@Bean(name = "MoliDBConfig")
@Primary
public DataSource dataSource() throws SQLException {
I would like to connect to the same datasource from quartz, but am getting a JNDI error. (As an aside it is worth noting that I have managed to connect to a datasource from quartz by manually providing the config details. See the commented out code in quartz.properties below.)
2019-03-19T10:51:52.342+00:00 [APP/PROC/WEB/0] [OUT] ERROR 2019-03-19 10:51:52.333 - o.q.u.JNDIConnectionProvider 126 Error looking up datasource: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial
javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial
    at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:662) ~[?:1.8.0_202]
    at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:313) ~[?:1.8.0_202]
    at javax.naming.InitialContext.getURLOrDefaultInitCtx(InitialContext.java:350) ~[?:1.8.0_202]
    at javax.naming.InitialContext.lookup(InitialContext.java:417) ~[?:1.8.0_202]
    at org.quartz.utils.JNDIConnectionProvider.init(JNDIConnectionProvider.java:124) [quartz-2.3.0.jar!/:?]
    at org.quartz.utils.JNDIConnectionProvider.<init>(JNDIConnectionProvider.java:102) [quartz-2.3.0.jar!/:?]
    at org.quartz.impl.StdSchedulerFactory.instantiate(StdSchedulerFactory.java:995) [quartz-2.3.0.jar!/:?]
    at org.quartz.impl.StdSchedulerFactory.getScheduler(StdSchedulerFactory.java:1559) [quartz-2.3.0.jar!/:?]
    at com.xxx.d3.moli.schedule.QrtzScheduler.scheduler(QrtzScheduler.java:52) [classes/:?]
    at com.xxx.d3.moli.schedule.QrtzScheduler$$EnhancerBySpringCGLIB$$aa50aa7b.CGLIB$scheduler$1() [classes/:?]
    at com.xxx.d3.moli.schedule.QrtzScheduler$$EnhancerBySpringCGLIB$$aa50aa7b$$FastClassBySpringCGLIB$$374ea1c1.invoke() [classes/:?]
    at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228) [spring-core-4.3.22.RELEASE.jar!/:4.3.22.RELEASE]
    at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:358) [spring-context-4.3.22.RELEASE.jar!/:4.3.22.RELEASE]
    at com.xxx.d3.moli.schedule.QrtzScheduler$$EnhancerBySpringCGLIB$$aa50aa7b.scheduler() [classes/:?]
quartz.properties
# Configure Main Scheduler Properties
org.quartz.scheduler.instanceName = MyClusteredScheduler
org.quartz.scheduler.instanceId = AUTO
# thread-pool
org.quartz.threadPool.class=org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount=2
org.quartz.threadPool.threadPriority = 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread=true
# job-store
#org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.class = org.springframework.scheduling.quartz.LocalDataSourceJobStore
#org.quartz.jobStore.dataSource = myDS
org.quartz.jobStore.dataSource = managedTXDS
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
# Configure Datasources
org.quartz.dataSource.managedTXDS.jndiURL=java:comp/env/jdbc/MoliDBConfig
org.quartz.dataSource.myDS.driver = oracle.jdbc.driver.OracleDriver
org.quartz.dataSource.myDS.URL = jdbc:oracle:thin:@ldap://oid.xxx.com:389/odsod012,cn=OracleContext,dc=xxx,dc=com
org.quartz.dataSource.myDS.user = MOLI_QRTZ_SCHED
org.quartz.dataSource.myDS.password = MOLI_QRTZ_SCHED
org.quartz.dataSource.myDS.maxConnections = 5
org.quartz.dataSource.myDS.validationQuery=select 0 from dual
# A different classloader is needed to work with Spring Boot dev mode,
# see https://docs.spring.io/spring-boot/docs/current/reference/html/using-boot-devtools.html#using-boot-devtools-known-restart-limitations
# and https://github.com/quartz-scheduler/quartz/issues/221
org.quartz.scheduler.classLoadHelper.class=org.quartz.simpl.ThreadContextClassLoadHelper
And my Quartz configuration class:
@Configuration
@Profile({"oracle-cloud", "mysql-cloud"})
public class QrtzScheduler {

    private static final Logger LOGGER = LogManager.getLogger(QrtzScheduler.class);

    @Autowired
    private ApplicationContext applicationContext;

    @PostConstruct
    public void init() {
        LOGGER.info("Hello world from Quartz...");
    }

    @Bean
    public SpringBeanJobFactory springBeanJobFactory() {
        AutoWiringSpringBeanJobFactory jobFactory = new AutoWiringSpringBeanJobFactory();
        LOGGER.debug("Configuring Job factory");
        jobFactory.setApplicationContext(applicationContext);
        return jobFactory;
    }

    @Bean
    public Scheduler scheduler(Trigger trigger, JobDetail job) throws SchedulerException, IOException {
        StdSchedulerFactory factory = new StdSchedulerFactory();
        factory.initialize(new ClassPathResource("quartz/quartz.properties").getInputStream());
        LOGGER.debug("Getting a handle to the Scheduler");
        Scheduler scheduler = factory.getScheduler();
        scheduler.setJobFactory(springBeanJobFactory());
        if (scheduler.checkExists(job.getKey())) {
            scheduler.deleteJob(job.getKey());
        }
        scheduler.scheduleJob(job, trigger);
        LOGGER.debug("Starting Scheduler threads");
        scheduler.start();
        return scheduler;
    }

    @Bean
    public JobDetail jobDetail() {
        return JobBuilder.newJob()
                .ofType(ScheduledJob.class)
                .storeDurably()
                .withIdentity(JobKey.jobKey("Qrtz_Job_Detail"))
                .withDescription("Invoke Sample Job service...")
                .build();
    }

    @Bean
    public Trigger trigger(JobDetail job) {
        int frequencyInMin = 5;
        LOGGER.info("Configuring trigger to fire every {} minutes", frequencyInMin);
        return TriggerBuilder.newTrigger().forJob(job)
                .withIdentity(TriggerKey.triggerKey("Qrtz_Trigger"))
                .withDescription("Sample trigger")
                .withSchedule(simpleSchedule().withIntervalInMinutes(frequencyInMin).repeatForever())
                .build();
    }
}
What is wrong with my approach? (The documentation at quartz-scheduler.org all appears to be down) :-(
So I changed over to this:
@Configuration
@Profile({"oracle-cloud", "mysql-cloud"})
public class QrtzScheduler {

    private static final Logger LOGGER = LogManager.getLogger(QrtzScheduler.class);

    @Autowired
    private ApplicationContext applicationContext;

    @Autowired
    @Qualifier("MoliDBConfig")
    private DataSource dataSource;

    @Value("${app.repeatInterval}")
    private int repeatInterval;

    @Value("${org.quartz.scheduler.instanceName}")
    private String instanceName;
    @Value("${org.quartz.scheduler.instanceId}")
    private String instanceId;
    @Value("${org.quartz.threadPool.threadCount}")
    private String threadCount;
    @Value("${org.quartz.threadPool.class}")
    private String threadClass;
    @Value("${org.quartz.threadPool.threadPriority}")
    private String threadPriority;
    @Value("${org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread}")
    private String threadsInheritContextClassLoaderOfInitializingThread;
    @Value("${org.quartz.jobStore.class}")
    private String jobStoreClass;
    @Value("${org.quartz.jobStore.driverDelegateClass}")
    private String jobStoreDriverDelegateClass;
    @Value("${org.quartz.jobStore.useProperties}")
    private String jobStoreUseProperties;
    @Value("${org.quartz.jobStore.tablePrefix}")
    private String jobStoreTablePrefix;
    @Value("${org.quartz.jobStore.misfireThreshold}")
    private String jobStoreMisfireThreshold;
    @Value("${org.quartz.jobStore.isClustered}")
    private String jobStoreIsClustered;
    @Value("${org.quartz.jobStore.clusterCheckinInterval}")
    private String jobStoreClusterCheckinInterval;
    @Value("${org.quartz.scheduler.classLoadHelper.class}")
    private String schedulerClassLoadHelperClass;

    @PostConstruct
    public void init() {
        LOGGER.info("Hello world from Quartz...");
    }

    @Bean
    public SpringBeanJobFactory jobFactory() {
        AutoWiringSpringBeanJobFactory jobFactory = new AutoWiringSpringBeanJobFactory();
        LOGGER.debug("Configuring Job factory");
        jobFactory.setApplicationContext(applicationContext);
        return jobFactory;
    }

    @Bean
    public SchedulerFactoryBean schedulerFactoryBean() {
        LOGGER.debug("Configuring schedulerFactoryBean");
        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        factory.setOverwriteExistingJobs(true);
        factory.setJobFactory(jobFactory());
        Properties quartzProperties = new Properties();
        quartzProperties.setProperty("org.quartz.scheduler.instanceName", instanceName);
        quartzProperties.setProperty("org.quartz.scheduler.instanceId", instanceId);
        quartzProperties.setProperty("org.quartz.threadPool.threadCount", threadCount);
        quartzProperties.setProperty("org.quartz.threadPool.class", threadClass);
        quartzProperties.setProperty("org.quartz.threadPool.threadPriority", threadPriority);
        quartzProperties.setProperty("org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread", threadsInheritContextClassLoaderOfInitializingThread);
        quartzProperties.setProperty("org.quartz.jobStore.class", jobStoreClass);
        quartzProperties.setProperty("org.quartz.jobStore.driverDelegateClass", jobStoreDriverDelegateClass);
        quartzProperties.setProperty("org.quartz.jobStore.useProperties", jobStoreUseProperties);
        quartzProperties.setProperty("org.quartz.jobStore.tablePrefix", jobStoreTablePrefix);
        quartzProperties.setProperty("org.quartz.jobStore.misfireThreshold", jobStoreMisfireThreshold);
        quartzProperties.setProperty("org.quartz.jobStore.isClustered", jobStoreIsClustered);
        quartzProperties.setProperty("org.quartz.jobStore.clusterCheckinInterval", jobStoreClusterCheckinInterval);
        quartzProperties.setProperty("org.quartz.scheduler.classLoadHelper.class", schedulerClassLoadHelperClass);
        factory.setDataSource(dataSource);
        factory.setQuartzProperties(quartzProperties);
        factory.setTriggers(moliJobTrigger().getObject());
        return factory;
    }

    @Bean(name = "moliJobTrigger")
    public SimpleTriggerFactoryBean moliJobTrigger() {
        long minute = 60000;
        long repeatIntervalInMin = repeatInterval * minute;
        LOGGER.debug("Configuring jobTrigger");
        SimpleTriggerFactoryBean factoryBean = new SimpleTriggerFactoryBean();
        factoryBean.setJobDetail(moliJobDetails().getObject());
        factoryBean.setStartDelay(minute);
        factoryBean.setRepeatInterval(repeatIntervalInMin);
        factoryBean.setRepeatCount(SimpleTrigger.REPEAT_INDEFINITELY);
        factoryBean.setMisfireInstruction(SimpleTrigger.MISFIRE_INSTRUCTION_RESCHEDULE_NEXT_WITH_REMAINING_COUNT);
        return factoryBean;
    }

    @Bean(name = "moliJobDetails")
    public JobDetailFactoryBean moliJobDetails() {
        LOGGER.debug("Configuring jobDetails");
        JobDetailFactoryBean jobDetailFactoryBean = new JobDetailFactoryBean();
        jobDetailFactoryBean.setJobClass(ScheduledAutomaticMonitoringJob.class);
        jobDetailFactoryBean.setDescription("Moli_Quartz_Description");
        jobDetailFactoryBean.setDurability(true);
        jobDetailFactoryBean.setName("Moli_Quartz_Name");
        return jobDetailFactoryBean;
    }
}
application.yml
org:
  quartz:
    scheduler:
      instanceName: MyClusteredScheduler
      instanceId: AUTO
      classLoadHelper:
        class: org.quartz.simpl.ThreadContextClassLoadHelper
    threadPool:
      class: org.quartz.simpl.SimpleThreadPool
      threadCount: 10
      threadPriority: 5
      threadsInheritContextClassLoaderOfInitializingThread: true
    jobStore:
      class: org.springframework.scheduling.quartz.LocalDataSourceJobStore
      dataSource: app.dataSource
      driverDelegateClass: org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
      useProperties: false
      tablePrefix: COT_QRTZ_
      misfireThreshold: 60000
      isClustered: true
      clusterCheckinInterval: 20000
I also needed this class:
import org.quartz.spi.TriggerFiredBundle;
import org.springframework.beans.factory.config.AutowireCapableBeanFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.scheduling.quartz.SpringBeanJobFactory;

/**
 * Adds auto-wiring support to Quartz jobs.
 */
public final class AutoWiringSpringBeanJobFactory extends SpringBeanJobFactory implements ApplicationContextAware {

    private AutowireCapableBeanFactory beanFactory;

    public void setApplicationContext(ApplicationContext applicationContext) {
        beanFactory = applicationContext.getAutowireCapableBeanFactory();
    }

    @Override
    protected Object createJobInstance(final TriggerFiredBundle bundle) throws Exception {
        final Object job = super.createJobInstance(bundle);
        beanFactory.autowireBean(job);
        return job;
    }
}

Problem upgrading Spring Batch from version 3.0.7 to 3.0.9

I am working on a Java project that uses the Spring Batch framework, and I have to upgrade it from version 3.0.7 to 3.0.9, but I have a problem:
@Bean
public Step bonjourRetourJpaToX(StepBuilderFactory stepBuilderFactory, TaskExecutor taskExecutor,
        ItemProcessor<BonjourRetourGroup, BonjourElementBlocAgent> compBonjourRetourBonjourProcessor,
        @Qualifier("promotionListenerBonjourRetour") ExecutionContextPromotionListener promotionListenerBonjourRetour) {
    return stepBuilderFactory.get("bonjourRetourJpaToX").<BonjourRetourGroup, BonjourElementBlocAgent>chunk(batchSizeLoadXml)
            .reader(bonjourRetourJpaReader)
            .processor(compBonjourRetourBonjourProcessor)
            .writer(bonjourRetourXmlWriter)
            .taskExecutor(taskExecutor)
            .listener(promotionListenerBonjourRetour)
            .listener(dsBonjourRetourFinalProcessor())
            .listener(dsBonjourRetourTemporaryProcessor)
            .listener(bonjourRetourBonjourBoucleStepListener())
            .throttleLimit(bonjourRetourJobThrottleLimit)
            .build();
}
With the Maven compilation error:
[ERROR] src/main/java/source/bonjourRetour/batch/BonjourRetourJobConfig.java:[1025,33] cannot find symbol
symbol: method throttleLimit(int)
This step is multi-threaded.
For Mahmoud Ben Hassine:
The composite processor:
@Scope(value = "step", proxyMode = ScopedProxyMode.NO)
@Bean(name = "compBonjourRetourBonjourProcessorX")
public ItemProcessor<BonjourRetourGroup, XElementBlocAgent> compBonjourRetourBonjourProcessorX(@Value("#{stepExecution}") final StepExecution stepExecution) {
    CompositeItemProcessor<BonjourRetourGroup, XElementBlocAgent> compositeProcessor = new CompositeItemProcessor<>();
    compositeProcessor
            .setDelegates(Arrays.asList(dsBonjourRetourXTemporaryProcessor, dsBonjourRetourXinalProcessor()));
    return compositeProcessor;
}
I noticed that when I moved the listener bonjourRetourBonjourBoucleStepListener() up like this, it compiles:
@Bean
public Step bonjourRetourJpaToX(StepBuilderFactory stepBuilderFactory, TaskExecutor taskExecutor,
        ItemProcessor<BonjourRetourGroup, BonjourElementBlocAgent> compBonjourRetourBonjourProcessor,
        @Qualifier("promotionListenerBonjourRetour") ExecutionContextPromotionListener promotionListenerBonjourRetour) {
    return stepBuilderFactory.get("bonjourRetourJpaToX").<BonjourRetourGroup, BonjourElementBlocAgent>chunk(batchSizeLoadXml)
            .reader(bonjourRetourJpaReader)
            .processor(compBonjourRetourBonjourProcessor)
            .writer(bonjourRetourXmlWriter)
            .taskExecutor(taskExecutor)
            .listener(promotionListenerBonjourRetour)
            .listener(dsBonjourRetourFinalProcessor())
            .listener(bonjourRetourBonjourBoucleStepListener())
            // SWITCHED
            .listener(dsBonjourRetourTemporaryProcessor)
            .throttleLimit(bonjourRetourJobThrottleLimit)
            .build();
}
I know that dsBonjourRetourTemporaryProcessor returns an ItemProcessor and bonjourRetourBonjourBoucleStepListener returns a StepExecutionListener.
In the other parts of my project with this problem, moving the StepExecutionListener call in the same way also made it compile.
Maybe the problem is there?

Quartz Scheduler create schedulerFactoryBean Beans without quartz.properties

I have a Quartz scheduler running inside my Spring app.
Right now I'm using quartz.properties to hold the property values and using it to create the schedulerFactoryBean bean, and it works fine.
This is my QuartzConfiguration:
@Configuration
public class QuartzConfiguration {

    public static final String CONTEXT_KEY = "applicationContext";

    //@Autowired
    //private DataSource dataSource;

    @Bean
    public SchedulerFactoryBean schedulerFactoryBean() {
        SchedulerFactoryBean scheduler = new SchedulerFactoryBean();
        scheduler.setApplicationContextSchedulerContextKey(CONTEXT_KEY);
        scheduler.setConfigLocation(new ClassPathResource("config/quartz.properties"));
        //scheduler.setDataSource(dataSource);
        //scheduler.setAutoStartup(true);
        scheduler.setWaitForJobsToCompleteOnShutdown(true);
        return scheduler;
    }
}
My quartz.properties :
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.useProperties=false
org.quartz.jobStore.dataSource=myDS
org.quartz.dataSource.myDS.driver =oracle.jdbc.OracleDriver
org.quartz.dataSource.myDS.URL = jdbc:oracle:thin:@example:1521:db
org.quartz.dataSource.myDS.user = user
org.quartz.dataSource.myDS.password = password
org.quartz.dataSource.myDS.maxConnections = 5
org.quartz.dataSource.myDS.validationQuery = select 1 from dual
org.quartz.jobStore.isClustered=false
org.quartz.jobStore.tablePrefix = DPPA.QUARTZ_
org.quartz.threadPool.threadCount=1
org.quartz.scheduler.skipUpdateCheck=true
org.quartz.plugin.jobHistory.class=id.co.fifgroup.dpa.batch.BatchHistoryListener
I want to create the schedulerFactoryBean without any quartz.properties file, because my client doesn't want the database connection details to live inside the WAR archive.
Is it possible to create the schedulerFactoryBean without any quartz.properties?
You can configure it without a properties file in this way:
Properties p = new Properties();
p.put("org.quartz.scheduler.instanceName", "Scheduler_test");
p.put("org.quartz.threadPool.threadCount", 2);
...
StdSchedulerFactory factory = new StdSchedulerFactory(p);
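Carrying the same idea over to the SchedulerFactoryBean from the question, the properties can be built in code and the data source handed over as a Spring bean, so nothing connection-related needs to ship inside the WAR (a sketch; the instance name and pool size are placeholder values):
import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

@Configuration
public class QuartzConfiguration {

    public static final String CONTEXT_KEY = "applicationContext";

    // The application's own DataSource, configured outside the WAR
    // (e.g. via spring.datasource.* or a container-provided binding).
    @Autowired
    private DataSource dataSource;

    @Bean
    public SchedulerFactoryBean schedulerFactoryBean() {
        Properties p = new Properties();
        p.put("org.quartz.scheduler.instanceName", "Scheduler_test");
        p.put("org.quartz.threadPool.threadCount", "1");
        p.put("org.quartz.jobStore.driverDelegateClass",
                "org.quartz.impl.jdbcjobstore.oracle.OracleDelegate");
        p.put("org.quartz.jobStore.tablePrefix", "DPPA.QUARTZ_");
        p.put("org.quartz.jobStore.isClustered", "false");

        SchedulerFactoryBean scheduler = new SchedulerFactoryBean();
        scheduler.setApplicationContextSchedulerContextKey(CONTEXT_KEY);
        scheduler.setQuartzProperties(p);
        // With setDataSource(...), Spring switches Quartz to its
        // LocalDataSourceJobStore, so no org.quartz.dataSource.* entries
        // (and no quartz.properties file) are required.
        scheduler.setDataSource(dataSource);
        scheduler.setWaitForJobsToCompleteOnShutdown(true);
        return scheduler;
    }
}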

Quartz scheduler with JDBC: job not saved in database

I have used the JDBC job store technique for saving the jobs.
My quartz.properties file is here:
#============================================================================
# Configure Main Scheduler Properties
#============================================================================
org.quartz.scheduler.instanceName = ScheduleReport
# a method elsewhere calls scheduler.scheduleJob(jobDetail, simpleTrigger)
org.quartz.scheduler.instanceId = AUTO
#============================================================================
# Configure ThreadPool
#============================================================================
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 25
org.quartz.threadPool.threadPriority = 5
#============================================================================
# Configure JobStore
#============================================================================
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.useProperties = true
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
org.quartz.jobStore.dataSource = qzDS
org.quartz.dataSource.qzDS.driver = oracle.jdbc.driver.OracleDriver
org.quartz.dataSource.qzDS.URL = jdbc:oracle:thin:@localhost:1521:abc
org.quartz.dataSource.qzDS.user = system
org.quartz.dataSource.qzDS.password = abc
org.quartz.dataSource.qzDS.maxConnections = 30
The Java class that implements Job:
public class ScheduleReport implements Job {
    public void execute(JobExecutionContext context) throws JobExecutionException {
        JobDataMap dataMap = context.getJobDetail().getJobDataMap();
        exportReportAsType();
    }
}
Another Java file that schedules the job:
JobDetail jobweek = JobBuilder.newJob(ScheduleReport.class)
        .withIdentity(jobname + "_" + jobid, "week")
        .build();
jobweek.getJobDataMap().put("reportid", reportid);
jobweek.getJobDataMap().put("jobid", jobid);
jobweek.getJobDataMap().put("reportname", reportname);
jobweek.getJobDataMap().put("dateTime", dateTime);
jobweek.getJobDataMap().put("dateTimeType", sdf1.parse(dateTime));
jobweek.getJobDataMap().put("schedulerService", schedulerService);

CronTrigger trigger3 = TriggerBuilder
        .newTrigger()
        .withIdentity(jobname + "_" + jobid, "week")
        .startAt(sdf1.parse(dateTime))
        .withSchedule(CronScheduleBuilder.cronSchedule(weekSch))
        .endAt(sdf1.parse(enddateTime))
        .build();

// scheduler.start();
scheduler.scheduleJob(jobweek, trigger3);
This is my configuration, but I'm not able to store the jobs in the DB. When I look at the tables in the database, only QRTZ_LOCKS and QRTZ_SCHEDULER_STATE contain data; the other tables are blank.
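One way to narrow this down is to ask the scheduler itself what it has stored right after scheduleJob() returns, and which job store it is actually running on; if it reports RAMJobStore, the quartz.properties above is not the configuration being picked up (a small check, reusing the scheduler instance and the "week" group from the snippet above):
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.matchers.GroupMatcher;

public final class JobStoreCheck {

    // With JobStoreTX the returned keys come from QRTZ_JOB_DETAILS/QRTZ_TRIGGERS;
    // with RAMJobStore they only reflect in-memory state that is lost on restart.
    public static void dump(Scheduler scheduler) throws SchedulerException {
        System.out.println("Job store: " + scheduler.getMetaData().getJobStoreClass().getName());
        for (JobKey key : scheduler.getJobKeys(GroupMatcher.jobGroupEquals("week"))) {
            System.out.println("Stored job: " + key);
        }
    }
}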
