We are using Quartz 2.1.6 for job scheduling on a cluster, and storing the job data in a JDBC jobstore in our MySQL database (MySQL 5.1).
All our Quartz configuration (scheduler, jobs, triggers) is done at startup through Spring. We store the data in the database for clustering purposes.
Problem: We have several jobs that were added and then deleted from the Quartz configuration. They are no longer in the config, but they are still present in the tables. How do we get rid of them? Reading the Quartz documentation, it appears that doing manual edits of the tables is a Very Bad Thing.
We do not appear to be explicitly setting JobDetail.setDurability(true), so I'm not sure why these jobs and triggers are hanging around, but they are.
Anyone have an answer?
Yep, it's not the best idea to do this directly in the database unless you're very clear on what you're doing. So, try doing it programmatically:
// Quartz 2.x (Java): deleteJob() removes the stored job and all of its triggers
SchedulerFactory factory = new StdSchedulerFactory();
Scheduler scheduler = factory.getScheduler();
scheduler.deleteJob(new JobKey(jobName, jobGroup));
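If there are several leftovers, you can also sweep the job store programmatically and delete anything that is no longer part of your Spring configuration. This is only a minimal sketch against the Quartz 2.x API; how you obtain the Scheduler (e.g. from Spring's SchedulerFactoryBean) and how you build keepJobKeys from your config are assumptions you would adapt:

import java.util.Set;
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.matchers.GroupMatcher;

public class OrphanedJobCleaner {
    // Deletes every stored job whose key is not in keepJobKeys.
    // keepJobKeys would be built from the jobs still declared in the Spring config.
    public static void deleteOrphanedJobs(Scheduler scheduler, Set<JobKey> keepJobKeys)
            throws SchedulerException {
        for (String group : scheduler.getJobGroupNames()) {
            for (JobKey key : scheduler.getJobKeys(GroupMatcher.jobGroupEquals(group))) {
                if (!keepJobKeys.contains(key)) {
                    // deleteJob() also removes all triggers pointing at the job
                    scheduler.deleteJob(key);
                }
            }
        }
    }
}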
I have a backend Spring Boot application with a Quartz scheduler and multiple job triggers, and everything works fine. Now we have a new client who wants the exact same solution. To support this we chose multi-tenancy with a separate-schema approach, so each client/tenant will have his own schema, but I am not sure how to have Quartz running in each schema individually.
Any help is greatly appreciated.
Did you explore the use of a custom ConnectionProvider implementation in Quartz? You can supply a connection provider that looks up your application's database to identify the tenant-specific connection string and then hands the matching connection back to Quartz.
This allows you to use multiple connections based on tenancy data.
More here
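A minimal sketch of such a provider, assuming a recent Quartz 2.x (where ConnectionProvider has an initialize() method) and that your code already resolves a per-tenant DataSource; the class and field names here are illustrative:

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.quartz.utils.ConnectionProvider;

public class TenantConnectionProvider implements ConnectionProvider {

    private final DataSource tenantDataSource;

    // tenantDataSource points at one tenant's schema, resolved by your own tenant registry
    public TenantConnectionProvider(DataSource tenantDataSource) {
        this.tenantDataSource = tenantDataSource;
    }

    @Override
    public Connection getConnection() throws SQLException {
        return tenantDataSource.getConnection();
    }

    @Override
    public void initialize() throws SQLException {
        // nothing to do; the DataSource is managed outside Quartz
    }

    @Override
    public void shutdown() throws SQLException {
        // nothing to do; the DataSource is managed outside Quartz
    }
}

Each per-tenant scheduler can then register its provider with DBConnectionManager.getInstance().addConnectionProvider("tenantDS", provider) and reference that name via org.quartz.jobStore.dataSource=tenantDS in its quartz.properties (the datasource name tenantDS is a placeholder).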
This question might look too broad, but I could not find a solution or any idea in existing sources on the web. There may not be an exact answer for this; what I need are your suggestions on how to get this implemented.
My application has its own datasource for CRUD operations, and I have a new microservice to execute scheduled jobs. The idea is to maintain only a single database for both services.
The existing service has been configured to use a connection pool for its DB transactions. Properties like the maximum number of active connections and the maximum number of idle connections at a given time have been configured for that service.
Now, my question is about maintaining the pool when Quartz connects to the same database to query its own tables. In the first place, is this possible, or is it always better to keep a separate database for the Quartz tables? Or can I configure these two services to access the same database flawlessly through configuration? (Remember that these two sets of DB operations happen in two separate microservices.)
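For what it's worth, the Quartz JDBC job store manages its own connection pool, separate from the application's pool, so both services can point at the same database and each pool is sized independently. A minimal sketch of the Quartz side, with the datasource name, driver/URL and credentials as placeholder assumptions:

org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.dataSource=myDS
org.quartz.dataSource.myDS.driver=com.mysql.jdbc.Driver
org.quartz.dataSource.myDS.URL=jdbc:mysql://localhost:3306/shared_db
org.quartz.dataSource.myDS.user=quartz
org.quartz.dataSource.myDS.password=quartz
org.quartz.dataSource.myDS.maxConnections=5

Here maxConnections sizes Quartz's pool on its own, independent of whatever max-active/max-idle settings the other service uses for its pool.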
I have Quartz Scheduler running within WebLogic 12.1.3 and backed by a JobStoreCMT, but its behavior doesn't match the configuration (see below). What am I doing wrong?
Background
Quartz has jobs that are loaded from an XML file on startup and run periodically. Some of those jobs spawn one-time jobs. Also, there are one-time jobs that are manually triggered by users. The user-initiated jobs are done from EJBs that have container-managed transactions.
Questions
The transactions in the job classes are not active in the job's execute() method; I have to call begin/commit/rollback myself (a sketch of that manual handling is shown after these questions). Shouldn't that be taken care of automatically, since wrapJobExecutionInUserTransaction is set to true? That's what the documentation says.
The documentation also says that when using XMLSchedulingDataProcessorPlugin with JobStoreCMT, org.quartz.plugin.jobInitializer.wrapInUserTransaction must be set to true. However, when I do that I get a duplicate transaction exception. What's going on?
WebLogic takes forever to shut down whenever Quartz is enabled even though all of the jobs run quickly. From the logs it looks like WebLogic is waiting for the transactions to time out. Is something in the config contributing to this?
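For reference, the manual transaction handling mentioned in the first question looks roughly like this inside execute(); the JNDI lookup uses the standard javax.transaction.UserTransaction name and the error handling is simplified:

import javax.naming.InitialContext;
import javax.transaction.UserTransaction;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class ManualTxJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            // Look up the container's UserTransaction under the standard JNDI name
            UserTransaction ut = (UserTransaction)
                    new InitialContext().lookup("javax.transaction.UserTransaction");
            ut.begin();
            try {
                // ... the actual work of the job ...
                ut.commit();
            } catch (Exception e) {
                ut.rollback();
                throw e;
            }
        } catch (Exception e) {
            throw new JobExecutionException(e);
        }
    }
}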
Configuration
org.quartz.scheduler.skipUpdateCheck=true
org.quartz.scheduler.instanceName=MyTaskScheduler
org.quartz.scheduler.threadsInheritContextClassLoaderOfInitializer=true
org.quartz.scheduler.instanceId=AUTO
org.quartz.scheduler.wrapJobExecutionInUserTransaction=true
org.quartz.scheduler.userTransactionURL=javax.transaction.UserTransaction
org.quartz.scheduler.idleWaitTime=30000
org.quartz.scheduler.dbFailureRetryInterval=15000
org.quartz.scheduler.batchTriggerAcquisitionMaxCount=1
org.quartz.scheduler.batchTriggerAcquisitionFireAheadTimeWindow=0
org.quartz.scheduler.makeSchedulerThreadDaemon=false
org.quartz.threadPool.class=org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount=20
org.quartz.threadPool.threadPriority=5
org.quartz.threadPool.makeThreadsDaemons=false
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.oracle.weblogic.WebLogicOracleDelegate
org.quartz.jobStore.dataSource=MyDataSource
org.quartz.jobStore.nonManagedTXDataSource=MyDataSourceNonXA
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.useProperties=false
org.quartz.jobStore.misfireThreshold=60000
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.clusterCheckinInterval=15000
org.quartz.jobStore.maxMisfiresToHandleAtATime=20
org.quartz.jobStore.txIsolationLevelSerializable=false
org.quartz.jobStore.txIsolationLevelReadCommitted=false
org.quartz.dataSource.MyDataSource.jndiURL=MyDataSource
org.quartz.dataSource.MyDataSource.java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
org.quartz.dataSource.MyDataSource.java.naming.provider.url=t3://localhost:7003
org.quartz.dataSource.MyDataSourceNonXA.jndiURL=MyDataSourceNonXA
org.quartz.dataSource.MyDataSourceNonXA.java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
org.quartz.dataSource.MyDataSourceNonXA.java.naming.provider.url=t3://localhost:7003
org.quartz.plugin.jobInitializer.class=org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin
org.quartz.plugin.jobInitializer.fileNames=E:/tasks.xml
org.quartz.plugin.jobInitializer.failOnFileNotFound=true
org.quartz.plugin.jobInitializer.scanInterval=0
org.quartz.plugin.jobInitializer.wrapInUserTransaction=false
Any help is appreciated.
I am using the Quartz scheduler in my application, and I also use master-slave replication for my other DB queries. I want to use master-slave replication for the Quartz scheduler as well, so I would like to know whether there is a way to split the write/read queries that Quartz makes between the master and the slave respectively.
I tried changing quartz.properties as below, but all the calls still go to the master node only:
org.quartz.dataSource.quartzDS.driver=com.mysql.jdbc.ReplicationDriver
org.quartz.dataSource.quartzDS.URL=jdbc:mysql:replication://localhost:3306,localhost:3307/quartz?useUnicode=true&characterEncoding=utf8&useTimezone=true&serverTimezone=UTC&useLegacyDatetimeCode=false
org.quartz.dataSource.quartzDS.user=root
org.quartz.dataSource.quartzDS.password=root
org.quartz.dataSource.quartzDS.maxConnections=10
org.quartz.dataSource.quartzDS.validationQuery=select 1
You can define multiple datasources in quartz.properties.
quartzDS is a specific datasource. You can add more with different names by tagging the properties with the name you want (like org.quartz.dataSource.quartzDSTheSecond).
See: http://www.quartz-scheduler.org/documentation/quartz-1.x/configuration/ConfigDataSources
You can then get a connection from it using DBConnectionManager.getInstance().getConnection("quartzDSTheSecond");
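A minimal sketch of what that could look like, with the second datasource pointed at the slave (driver, URL and credentials are placeholder assumptions). Note that Quartz's own internal queries still go through the datasource named in org.quartz.jobStore.dataSource; the extra datasource is mainly useful for your own read-only access:

org.quartz.dataSource.quartzDSTheSecond.driver=com.mysql.jdbc.Driver
org.quartz.dataSource.quartzDSTheSecond.URL=jdbc:mysql://localhost:3307/quartz
org.quartz.dataSource.quartzDSTheSecond.user=readonly
org.quartz.dataSource.quartzDSTheSecond.password=readonly
org.quartz.dataSource.quartzDSTheSecond.maxConnections=5

// Borrow a connection from the second pool for your own read queries
// (uses java.sql.Connection and org.quartz.utils.DBConnectionManager)
Connection conn = DBConnectionManager.getInstance().getConnection("quartzDSTheSecond");
try {
    // ... run read-only queries against the slave ...
} finally {
    conn.close();
}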
Our use of Quartz so far has been to configure the database-backed scheduler and any jobs/triggers in the Spring config, which is then loaded when the app is run on the cluster. The servers in the cluster then share the triggers so that each trigger is only fired by one of the servers at a time.
I now want to dynamically create new triggers for existing jobDetail beans (which are managed by Spring) on any one of the servers, but I need all of the servers in the cluster to be aware of this new Trigger. I also need them to be aware of the trigger being removed by one of the servers.
Using the current set up, will this just work? Does quartz periodically check the database for new triggers?
If not, what other approaches might solve this problem?
I'm fairly new to Quartz, so apologies if I've missed something fundamental.
Thanks for your help.
Quartz always performs a check against the database when looking for triggers that need to be executed, so if one server deletes or adds a trigger, the other server(s) will automatically see it.
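So, with a clustered JDBC job store, it is enough to schedule (or unschedule) the trigger through the Scheduler API on any node and it is persisted where the other nodes can pick it up. A minimal sketch against the Quartz 2.x API; the job, trigger and group names and the cron expression are placeholders:

import org.quartz.CronScheduleBuilder;
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.TriggerKey;

public class DynamicTriggerExample {
    // Adds a new trigger for an already-registered (Spring-managed) JobDetail.
    public static void addTrigger(Scheduler scheduler) throws SchedulerException {
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("dynamicTrigger", "dynamicGroup")          // placeholder names
                .forJob(JobKey.jobKey("existingJob", "existingGroup"))   // existing stored job
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0/5 * * * ?"))
                .build();
        scheduler.scheduleJob(trigger);  // persisted to the JDBC store, visible to all nodes
    }

    // Removes the trigger again; the other nodes see the removal the same way.
    public static void removeTrigger(Scheduler scheduler) throws SchedulerException {
        scheduler.unscheduleJob(TriggerKey.triggerKey("dynamicTrigger", "dynamicGroup"));
    }
}

The only requirement is that the JobDetail the trigger points at is already in the job store (for example registered as durable by Spring), so that forJob() can resolve it.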