Our use of Quartz so far has been to configure the database-backed scheduler and any jobs/triggers in the Spring config, which is then loaded when the app is run on the cluster. Each server in the cluster then shares the triggers, so that each trigger is only fired by one of the servers at a time.
I now want to dynamically create new triggers for existing jobDetail beans (which are managed by Spring) on any one of the servers, but I need all of the servers in the cluster to be aware of this new Trigger. I also need them to be aware of the trigger being removed by one of the servers.
Using the current setup, will this just work? Does Quartz periodically check the database for new triggers?
If not, what other approaches might solve this problem?
I'm fairly new to Quartz, so apologies if I've missed something fundamental.
Thanks for your help.
Quartz always performs a check against the database when looking for triggers that need to be executed, so if one server deletes or adds a trigger, the other server(s) will automatically see it.
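For example, here is a minimal sketch of adding and removing a trigger for an existing Spring-managed JobDetail at runtime, assuming the Quartz 2.x API; the job and trigger names are placeholders, and the Scheduler passed in is the clustered one your Spring config exposes:

import org.quartz.CronScheduleBuilder;
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.TriggerKey;

public class DynamicTriggerExample {

    // Schedule a new trigger against an existing JobDetail. Because the job store
    // is the shared database, the other nodes pick it up on their next acquisition cycle.
    public static void addTrigger(Scheduler scheduler) throws SchedulerException {
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("dynamicTrigger", "dynamicGroup")          // placeholder names
                .forJob(JobKey.jobKey("existingJob", "existingGroup"))   // key of the Spring-managed JobDetail
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0/5 * * * ?"))
                .build();
        scheduler.scheduleJob(trigger);
    }

    // Removing the trigger on any node removes it from the shared store for all nodes.
    public static void removeTrigger(Scheduler scheduler) throws SchedulerException {
        scheduler.unscheduleJob(TriggerKey.triggerKey("dynamicTrigger", "dynamicGroup"));
    }
}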
I have decided to go with Quartz using the database (JDBC job store) option. I configured a simple job and deployed my changes, and they are reflected in the Oracle Quartz tables: the QRTZ_TRIGGERS table recorded the trigger name with a trigger type of SIMPLE. I then wanted to change the simple job to a cron job, so I made those changes in the Spring configuration. Locally this works as expected, but when I deployed the changes to dev, the build succeeded and yet QRTZ_TRIGGERS did not reflect them; the trigger type still shows as SIMPLE. I was expecting Spring to update this on load, create a record in QRTZ_CRON_TRIGGERS, and delete the entries in QRTZ_SIMPLE_TRIGGERS.
This is not happening. Is there a property I can add to the Spring configuration so that these changes are picked up when the server starts (for the first time)?
Set the overwriteExistingJobs property to true on the SchedulerFactoryBean:
setOverwriteExistingJobs
public void setOverwriteExistingJobs(boolean overwriteExistingJobs)
Set whether any jobs defined on this SchedulerFactoryBean should overwrite existing job definitions. Default is "false", to not overwrite already registered jobs that have been read in from a persistent job store.
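A minimal sketch of where that goes, assuming Java-based Spring configuration (with XML config the equivalent is a <property name="overwriteExistingJobs" value="true"/> on the SchedulerFactoryBean bean); the class name here is a placeholder:

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

@Configuration
public class QuartzConfig {

    @Bean
    public SchedulerFactoryBean schedulerFactoryBean(DataSource dataSource) {
        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        factory.setDataSource(dataSource);
        // Overwrite whatever is already in the persistent job store with the
        // jobs/triggers defined here, so the SIMPLE -> CRON change is applied on startup.
        factory.setOverwriteExistingJobs(true);
        // ... register the cron trigger and job details here as in the existing config ...
        return factory;
    }
}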
I have Quartz Scheduler running within WebLogic 12.1.3 and backed by a JobStoreCMT, but its behavior doesn't match the configuration (see below). What am I doing wrong?
Background
Quartz has jobs that are loaded from an XML file on startup and run periodically. Some of those jobs spawn one-time jobs. Also, there are one-time jobs that are manually triggered by users. The user-initiated jobs are done from EJBs that have container-managed transactions.
Questions
The transactions in the job classes are not active in the job's execute() method; I have to call begin/commit/rollback myself (a sketch of what I'm currently doing is below, after the questions). Shouldn't that be taken care of, since wrapJobExecutionInUserTransaction is set to true? That's what the documentation says.
The documentation also says that when using XMLSchedulingDataProcessorPlugin with JobStoreCMT, org.quartz.plugin.jobInitializer.wrapInUserTransaction must be set to true. However, when I do that I get a duplicate transaction exception. What's going on?
WebLogic takes forever to shut down whenever Quartz is enabled even though all of the jobs run quickly. From the logs it looks like WebLogic is waiting for the transactions to time out. Is something in the config contributing to this?
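For reference, this is roughly the manual transaction handling I currently have inside the jobs (a sketch; the job class name is a placeholder, and the JNDI name matches the userTransactionURL in the configuration below):

import javax.naming.InitialContext;
import javax.transaction.UserTransaction;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class MyTaskJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        UserTransaction tx = null;
        try {
            tx = (UserTransaction) new InitialContext().lookup("javax.transaction.UserTransaction");
            tx.begin();
            // ... actual job work ...
            tx.commit();
        } catch (Exception e) {
            try {
                if (tx != null) {
                    tx.rollback();
                }
            } catch (Exception rollbackFailure) {
                // ignore; the original exception is rethrown below
            }
            throw new JobExecutionException(e);
        }
    }
}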
Configuration
org.quartz.scheduler.skipUpdateCheck=true
org.quartz.scheduler.instanceName=MyTaskScheduler
org.quartz.scheduler.threadsInheritContextClassLoaderOfInitializer=true
org.quartz.scheduler.instanceId=AUTO
org.quartz.scheduler.wrapJobExecutionInUserTransaction=true
org.quartz.scheduler.userTransactionURL=javax.transaction.UserTransaction
org.quartz.scheduler.idleWaitTime=30000
org.quartz.scheduler.dbFailureRetryInterval=15000
org.quartz.scheduler.batchTriggerAcquisitionMaxCount=1
org.quartz.scheduler.batchTriggerAcquisitionFireAheadTimeWindow=0
org.quartz.scheduler.makeSchedulerThreadDaemon=false
org.quartz.threadPool.class=org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount=20
org.quartz.threadPool.threadPriority=5
org.quartz.threadPool.makeThreadsDaemons=false
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.oracle.weblogic.WebLogicOracleDelegate
org.quartz.jobStore.dataSource=MyDataSource
org.quartz.jobStore.nonManagedTXDataSource=MyDataSourceNonXA
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.useProperties=false
org.quartz.jobStore.misfireThreshold=60000
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.clusterCheckinInterval=15000
org.quartz.jobStore.maxMisfiresToHandleAtATime=20
org.quartz.jobStore.txIsolationLevelSerializable=false
org.quartz.jobStore.txIsolationLevelReadCommitted=false
org.quartz.dataSource.MyDataSource.jndiURL=MyDataSource
org.quartz.dataSource.MyDataSource.java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
org.quartz.dataSource.MyDataSource.java.naming.provider.url=t3://localhost:7003
org.quartz.dataSource.MyDataSourceNonXA.jndiURL=MyDataSourceNonXA
org.quartz.dataSource.MyDataSourceNonXA.java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
org.quartz.dataSource.MyDataSourceNonXA.java.naming.provider.url=t3://localhost:7003
org.quartz.plugin.jobInitializer.class=org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin
org.quartz.plugin.jobInitializer.fileNames=E:/tasks.xml
org.quartz.plugin.jobInitializer.failOnFileNotFound=true
org.quartz.plugin.jobInitializer.scanInterval=0
org.quartz.plugin.jobInitializer.wrapInUserTransaction=false
Any help is appreciated.
I am using the Quartz scheduler in my application, and I am also using master-slave replication for my other DB queries. I want to use master-slave replication for the Quartz scheduler as well, so is there a way to split the read/write queries that Quartz makes, sending writes to the master and reads to the slave?
I tried changing quartz.properties as follows, but all the calls still go to the master node only:
org.quartz.dataSource.quartzDS.driver=com.mysql.jdbc.ReplicationDriver
org.quartz.dataSource.quartzDS.URL=jdbc:mysql:replication://localhost:3306,localhost:3307/quartz?useUnicode=true&characterEncoding=utf8&useTimezone=true&serverTimezone=UTC&useLegacyDatetimeCode=false
org.quartz.dataSource.quartzDS.user=root
org.quartz.dataSource.quartzDS.password=root
org.quartz.dataSource.quartzDS.maxConnections=10
org.quartz.dataSource.quartzDS.validationQuery=select 1
You can define multiple datasources in quartz.properties.
quartzDS is a specific datasource. You can add more with different names by tagging the properties with the name you want (like org.quartz.dataSource.quartzDSTheSecond).
See: http://www.quartz-scheduler.org/documentation/quartz-1.x/configuration/ConfigDataSources
You can then get a connection using DBConnectionManager.getInstance().getConnection("quartzDSTheSecond");
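For example, the extra datasource could be tagged like this in quartz.properties (a sketch; the replica URL and credentials are placeholders):

org.quartz.dataSource.quartzDSTheSecond.driver=com.mysql.jdbc.Driver
org.quartz.dataSource.quartzDSTheSecond.URL=jdbc:mysql://localhost:3307/quartz
org.quartz.dataSource.quartzDSTheSecond.user=root
org.quartz.dataSource.quartzDSTheSecond.password=root
org.quartz.dataSource.quartzDSTheSecond.maxConnections=5
org.quartz.dataSource.quartzDSTheSecond.validationQuery=select 1

Note that Quartz itself keeps using the datasource named in org.quartz.jobStore.dataSource for its own queries; the second datasource is only used for connections you obtain yourself through DBConnectionManager.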
I'm trying to implement the failover strategy when executing jbpm6 processes. My setup is the following:
I'm using jbpm6.2.0-Final (latest stable release) with persistence enabled
I'm constructing an instance of org.kie.spring.factorybeans.RuntimeManagerFactoryBean with type SINGLETON to get KSession to start/abort processes and complete/abort work items
all beans are wired by Spring 3.2
DB2 is used as the database engine
I use Tomcat 7.0.27
In the positive scenario everything works as I expect. But I would like to know how to resume the process in the case of a server crash. To reproduce it I started my process (described as a BPMN2 file), got to some middle step, and killed the Tomcat process. After that I see an uncompleted process instance in the PROCESS_INSTANCE_INFO table and an uncompleted work item in the WORK_ITEM_INFO table. There is also a session in the SESSION_INFO table.
My question is: could you show me an example of code that would take that remaining process and resume it from the last node (if that is possible)?
Update
I forgot to mention that I'm not using jbpm-console; I'm embedding jBPM into my Java EE application.
If you initialize your RuntimeManager when your application server starts, it will take care of reloading and resuming the processes.
You don't need to reload anything yourself.
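A minimal sketch of that startup initialization using the plain jBPM 6 API (the persistence unit, process file, and manager identifier are placeholders; with your Spring setup the RuntimeManagerFactoryBean plays the same role):

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.manager.RuntimeEnvironment;
import org.kie.api.runtime.manager.RuntimeEnvironmentBuilder;
import org.kie.api.runtime.manager.RuntimeManager;
import org.kie.api.runtime.manager.RuntimeManagerFactory;
import org.kie.internal.io.ResourceFactory;
import org.kie.internal.runtime.manager.context.EmptyContext;

public class JbpmStartup {

    public void init() {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");

        RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
                .newDefaultBuilder()
                .entityManagerFactory(emf)
                .addAsset(ResourceFactory.newClassPathResource("myProcess.bpmn2"), ResourceType.BPMN2)
                .get();

        // Re-creating the singleton RuntimeManager on startup restores the persisted
        // session, so pending process instances and work items can continue.
        RuntimeManager manager = RuntimeManagerFactory.Factory.get()
                .newSingletonRuntimeManager(environment, "my-runtime-manager");

        KieSession ksession = manager.getRuntimeEngine(EmptyContext.get()).getKieSession();
        // Complete or abort the pending work items via ksession.getWorkItemManager() as usual.
    }
}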
We are using Quartz 2.1.6 for job scheduling on a cluster, and storing the job data in a JDBC jobstore in our MySQL database (MySQL 5.1).
All our Quartz configuration (scheduler, jobs, triggers) is done at startup through Spring. We store the data in the database for clustering purposes.
Problem: We have several jobs that were added and then deleted from the Quartz configuration. They are no longer in the config, but they are still present in the tables. How do we get rid of them? Reading the Quartz documentation, it appears that doing manual edits of the tables is a Very Bad Thing.
We do not appear to be explicitly setting JobDetail.setDurability(true), so I'm not sure why these jobs and triggers are hanging around, but they are.
Anyone have an answer?
Yep, it's not the best idea to do this directly in the database unless you're very clear on what you're doing. So, try doing it programmatically (Quartz 2.x API):
SchedulerFactory factory = new StdSchedulerFactory();
Scheduler scheduler = factory.getScheduler();
scheduler.deleteJob(new JobKey(jobName, jobGroup)); // also removes the job's triggers from the store
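Since your scheduler is set up through Spring, a variant (assuming the Scheduler created by the SchedulerFactoryBean can be injected; the job name and group below are placeholders) is to reuse the existing clustered scheduler instead of building a new factory:

import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.springframework.beans.factory.annotation.Autowired;

public class ObsoleteJobCleaner {

    @Autowired
    private Scheduler scheduler;   // the Spring-managed, clustered scheduler

    public void removeObsoleteJob() throws SchedulerException {
        // Deletes the job and unschedules all of its triggers in the shared job store.
        scheduler.deleteJob(new JobKey("obsoleteJob", "DEFAULT"));
    }
}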