I have an application running on 4 nodes. Sometimes all my jobs get stuck waiting for the lock taken by: Select * from QRTZ_LOCKS where lock_name='TRIGGER_ACCESS' for update
While reading some articles, I saw a suggestion to turn off the global lock using this property:
org.quartz.jobStore.lockOnInsert=false
Has anyone tried to run Quartz in cluster mode with lockOnInsert=false?
I am planning to use the following configuration:
#============================================================================
# Configure Main Scheduler Properties
#============================================================================
org.quartz.scheduler.instanceName = StandardScheduler
org.quartz.scheduler.instanceId = AUTO
#============================================================================
# Configure ThreadPool
#============================================================================
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 3
org.quartz.threadPool.threadPriority = 5
#============================================================================
# Configure JobStore
#============================================================================
org.quartz.jobStore.misfireThreshold = 300000
org.quartz.jobStore.class=org.springframework.scheduling.quartz.LocalDataSourceJobStore
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.useProperties=false
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.lockOnInsert=false
org.quartz.jobStore.acquireTriggersWithinLock=true
org.quartz.jobStore.lockHandler.class=org.quartz.impl.jdbcjobstore.UpdateLockRowSemaphore
org.quartz.scheduler.jmx.export=true
I found this comment in the 1.8.6 source:
public boolean isLockOnInsert() {
    return lockOnInsert;
}

/**
 * Whether or not to obtain locks when inserting new jobs/triggers.
 * Defaults to <code>true</code>, which is safest - some db's (such as
 * MS SQLServer) seem to require this to avoid deadlocks under high load,
 * while others seem to do fine without.
 *
 * <p>Setting this property to <code>false</code> will provide a
 * significant performance increase during the addition of new jobs
 * and triggers.</p>
 *
 * @param lockOnInsert
 */
public void setLockOnInsert(boolean lockOnInsert) {
    this.lockOnInsert = lockOnInsert;
}
Related
I defined a job with Quartz that calls a web service. I have a trigger with the cron expression 0 0/50 * ? * * *, which runs every 50 minutes.
I have a requirement to execute the job at application startup and then every 50 minutes after that.
The job factory is:
Trigger trigger = newTrigger().withIdentity(name, "our.trigger")
        .withSchedule(CronScheduleBuilder.cronSchedule("0 0/50 * ? * * *"))
        .startNow()
        .build();
JobDetail jobDetail = newJob(jobClass).withIdentity(name, "our.job").build();
Set<Trigger> triggers = new HashSet<>();
triggers.add(trigger);
stdScheduler.scheduleJob(jobDetail, triggers, true);
stdScheduler.start();
How do I solve this issue?
I solved the problem with the following code:
Trigger startupTrigger = newTrigger().withIdentity(name + ".startup", "trigger")
        .withSchedule(SimpleScheduleBuilder.repeatSecondlyForTotalCount(1))
        .startNow()
        .build();
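For this to take effect, the one-shot startup trigger presumably has to be registered for the same job as the cron trigger; a minimal sketch, reusing the jobDetail and triggers set from the question:
triggers.add(startupTrigger); // in addition to the 50-minute cron trigger
stdScheduler.scheduleJob(jobDetail, triggers, true);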
Thanks to @biiyamn
I've been working on annotated WebSockets lately with the Jetty API (9.4.5 release), and made a chat with it.
However I have an issue: after 5 minutes (which I believe is the default timer), the session is closed (it is not due to an error).
The only solution I've found so far is to notify my socket on the close event and reopen the connection in a new socket.
However, I've read on Stack Overflow that by setting the idle timeout in the WebSocketPolicy, I could avoid the issue.
I've tried setting it to 3600000, for instance, but the behavior does not change at all.
I also tried to set it to -1, but I get the following error: IdleTimeout [-1] must be a greater than or equal to 0
private ServletContextHandler setupWebsocketContext() {
    ServletContextHandler websocketContext = new AmosContextHandler(ServletContextHandler.SESSIONS | ServletContextHandler.SECURITY);

    WebSocketHandler socketCreator = new WebSocketHandler() {
        @Override
        public void configure(WebSocketServletFactory factory) {
            factory.getPolicy().setIdleTimeout(-1);
            factory.getPolicy().setMaxTextMessageBufferSize(MAX_MESSAGE_SIZE);
            factory.getPolicy().setMaxBinaryMessageBufferSize(MAX_MESSAGE_SIZE);
            factory.getPolicy().setMaxTextMessageSize(MAX_MESSAGE_SIZE);
            factory.getPolicy().setMaxBinaryMessageSize(MAX_MESSAGE_SIZE);
            factory.setCreator(new UpgradedSocketCreator());
        }
    };

    ServletHolder sh = new ServletHolder(new WebsocketChatServlet());
    websocketContext.addServlet(sh, "/*");
    websocketContext.setContextPath("/Chat");
    websocketContext.setHandler(socketCreator);
    websocketContext.getSessionHandler().setMaxInactiveInterval(0);
    return websocketContext;
}
I've also tried to change the policy directly in the OnConnect event, by calling session.getPolicy().setIdleTimeout(), but I haven't noticed any change.
Is this an expected behavior or am I missing something? Thanks for your help.
EDIT:
Logs at the time of closure:
Client Side:
2017-07-03T12:48:00.552 DEBUG HttpClient@179313750-scheduler Ignored idle endpoint SocketChannelEndPoint@2fb4b627{localhost/127.0.0.1:5080<->/127.0.0.1:53835,OPEN,fill=-,flush=-,to=1/300000}{io=0/0,kio=0,kro=1}->WebSocketClientConnection@e0198ece[ios=IOState@3ac0ec79[CLOSING,in,!out,close=CloseInfo[code=1000,reason=null],clean=false,closeSource=LOCAL],f=Flusher[queueSize=0,aggregateSize=0,failure=null],g=Generator[CLIENT,validating],p=Parser@65c4d838[ExtensionStack,s=START,c=0,len=187,f=null]]
Server side:
2017-07-03T12:48:00.595 DEBUG Idle pool thread onClose WebSocketServerConnection@e0033d54[ios=IOState@10d40dca[CLOSED,!in,!out,finalClose=CloseInfo[code=1000,reason=null],clean=true,closeSource=REMOTE],f=Flusher[queueSize=0,aggregateSize=0,failure=null],g=Generator[SERVER,validating],p=Parser@317213f3[ExtensionStack,s=START,c=0,len=2,f=CLOSE[len=2,fin=true,rsv=...,masked=true]]]<-SocketChannelEndPoint@690dfbfb{/127.0.0.1:53835<->/127.0.0.1:5080,CLOSED,fill=-,flush=-,to=1/360000000}{io=0/0,kio=-1,kro=-1}->WebSocketServerConnection@e0033d54[ios=IOState@10d40dca[CLOSED,!in,!out,finalClose=CloseInfo[code=1000,reason=null],clean=true,closeSource=REMOTE],f=Flusher[queueSize=0,aggregateSize=0,failure=null],g=Generator[SERVER,validating],p=Parser@317213f3[ExtensionStack,s=START,c=0,len=2,f=CLOSE[len=2,fin=true,rsv=...,masked=true]]]
2017-07-03T12:48:00.595 DEBUG Idle pool thread org.eclipse.jetty.util.thread.Invocable$InvocableExecutor@4f13dee2 invoked org.eclipse.jetty.io.ManagedSelector$$Lambda$193/682154970@551e133a
2017-07-03T12:48:00.595 DEBUG Idle pool thread EatWhatYouKill@6ba355e4/org.eclipse.jetty.io.ManagedSelector$SelectorProducer@7b1559f1/PRODUCING/0/1 produce exit
2017-07-03T12:48:00.595 DEBUG Idle pool thread ran EatWhatYouKill@6ba355e4/org.eclipse.jetty.io.ManagedSelector$SelectorProducer@7b1559f1/PRODUCING/0/1
2017-07-03T12:48:00.595 DEBUG Idle pool thread run EatWhatYouKill@6ba355e4/org.eclipse.jetty.io.ManagedSelector$SelectorProducer@7b1559f1/PRODUCING/0/1
2017-07-03T12:48:00.595 DEBUG Idle pool thread EatWhatYouKill@6ba355e4/org.eclipse.jetty.io.ManagedSelector$SelectorProducer@7b1559f1/PRODUCING/0/1 run
2017-07-03T12:48:00.597 DEBUG Idle pool thread 127.0.0.1 has disconnected !
2017-07-03T12:48:00.597 DEBUG Idle pool thread Disconnected: 127.0.0.1 (127.0.0.1) (statusCode=1000, reason=null)
Annotated WebSockets have their own timeout setting in the annotation:
@WebSocket(maxIdleTime=30000)
The @WebSocket annotation has this option:
int maxIdleTime() default -2;
In fact, it's not clear what that means.
If you check the implementation, you can find:
if (anno.maxIdleTime() > 0)
{
    this.policy.setIdleTimeout(anno.maxIdleTime());
}
The setIdleTimeout implementation:
/**
 * The time in ms (milliseconds) that a websocket may be idle before closing.
 *
 * @param ms
 *            the timeout in milliseconds
 */
public void setIdleTimeout(long ms)
{
    assertGreaterThan("IdleTimeout", ms, 0);
    this.idleTimeout = ms;
}
and finally:
/**
* The time in ms (milliseconds) that a websocket may be idle before closing.
* <p>
* Default: 300000 (ms)
*/
private long idleTimeout = 300000;
Conclusion: a negative value applies the default behavior (300000 ms). You need to configure idleTimeout according to your business requirements.
PS: I solved my case with:
@WebSocket(maxIdleTime = Integer.MAX_VALUE)
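For reference, a minimal sketch of an annotated Jetty 9.4 socket with an explicit idle timeout; the class name and timeout value here are illustrative, not from the original post:
import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketConnect;
import org.eclipse.jetty.websocket.api.annotations.WebSocket;

// maxIdleTime must be positive; values <= 0 fall back to the 300000 ms default
@WebSocket(maxIdleTime = 3600000) // one hour
public class ChatSocket {
    @OnWebSocketConnect
    public void onConnect(Session session) {
        // the annotation's timeout is already applied to this session's policy
    }
}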
I have set up a trigger using a cron schedule with a misfire instruction, as follows:
trigger = newTrigger().withIdentity("autoLockTrigger", "autoLockGroup")
        .startNow()
        .withSchedule(cronSchedule(croneExpression).withMisfireHandlingInstructionFireAndProceed())
        .forJob("autoLockJob", "autoLockGroup")
        .build();
My quartz.properties is as follows:
org.quartz.scheduler.instanceName =MyScheduler
# Configuring ThreadPool
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 1
org.quartz.threadPool.threadPriority = 9
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource = myDS
org.quartz.jobStore.tablePrefix = QRTZ_
#org.quartz.dataSource.myDS.jndiURL = jdbc/vikas
org.quartz.dataSource.myDS.driver = com.mysql.jdbc.Driver
org.quartz.dataSource.myDS.URL = jdbc:mysql://staging:3307/facao
org.quartz.dataSource.myDS.user = root
org.quartz.dataSource.myDS.password = toor
org.quartz.dataSource.myDS.maxConnections = 30
#org.quartz.jobStore.nonManagedTXDataSource = myDS
#to store data in string format (name-value pair)
org.quartz.jobStore.useProperties=true
org.quartz.jobStore.misfireThreshold = 60000
In my code, if I set a trigger for a particular time and the server is running, the scheduler fires properly. But if the server is down at the time the trigger was supposed to fire and is started again some time later, the scheduler should run the misfired trigger. In my case the misfired trigger runs only some of the time, not always, so my purpose is not fulfilled. Please suggest a solution. Thank you in advance.
I am not sure about cron triggers, but for simple triggers, yes: if the trigger's end time has passed, some of the provided misfire instructions will not work. See the javadoc for more info. I guess the same would be the case with cron triggers too, so it totally depends on what cron expression you use.
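As an illustration (my sketch, not from the original answer, assuming the usual static imports as in the question): a simple trigger with no end time, whose misfire instruction fires the job as soon as the scheduler notices the missed run, reusing the identities from the question:
trigger = newTrigger()
        .withIdentity("autoLockTrigger", "autoLockGroup")
        .withSchedule(simpleSchedule()
                .withIntervalInMinutes(50)
                .repeatForever()                          // no end time, so misfire instructions stay applicable
                .withMisfireHandlingInstructionFireNow()) // run once immediately after a misfire is detected
        .forJob("autoLockJob", "autoLockGroup")
        .build();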
I am porting our code from IAS to JBoss AS.
There is strange behavior where Quartz does not fire any events at all, and no errors appear in the Quartz logs. I have also noticed that the Quartz tables are not populated (QRTZ_JOB_DETAILS, QRTZ_TRIGGERS, etc.).
I am using JobStoreCMT with Quartz version 1.5.2. The datasource is properly declared. Jobs and triggers worked well in IAS and are declared within the code.
quartz properties:
#============================================================================
# Configure Main Scheduler Properties
#============================================================================
org.quartz.scheduler.instanceName = bitbandScheduler
org.quartz.scheduler.instanceId = AUTO
#============================================================================
# Configure ThreadPool
#============================================================================
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 15
org.quartz.threadPool.threadPriority = 5
#============================================================================
# Configure JobStore
#============================================================================
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = bitband_pluginDS
org.quartz.jobStore.nonManagedTXDataSource = bitband_pluginDSTX
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = false
org.quartz.jobStore.clusterCheckinInterval = 20000
#============================================================================
# Configure Datasources
#============================================================================
org.quartz.dataSource.bitband_pluginDS.jndiURL=java:bitband_pluginDS
org.quartz.dataSource.bitband_pluginDSTX.jndiURL=java:bitband_pluginDS
oracle-ds.xml:
<xa-datasource>
<jndi-name>bitband_pluginDS</jndi-name>
<!-- uncomment to enable interleaving <interleaving/> -->
<isSameRM-override-value>false</isSameRM-override-value>
<xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class>
<xa-datasource-property name="URL">jdbc:oracle:thin:#ord-rtv063.orca.ent:1521:DB11g</xa-datasource-property>
<xa-datasource-property name="User">RIGHTV7_VS</xa-datasource-property>
<xa-datasource-property name="Password">RIGHTV7_VS</xa-datasource-property>
<max-pool-size>100</max-pool-size>
<min-pool-size>20</min-pool-size>
<valid-connection-checker-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker</valid-connection-checker-class-name>
<check-valid-connection-sql>select 1 from dual</check-valid-connection-sql>
<!-- Uses the pingDatabase method to check a connection is still valid before handing it out from the pool -->
<!--valid-connection-checker-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker</valid-connection-checker-class-name-->
<!-- Checks the Oracle error codes and messages for fatal errors -->
<exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
<!-- Oracles XA datasource cannot reuse a connection outside a transaction once enlisted in a global transaction and vice-versa -->
<no-tx-separate-pools/>
<!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml (optional) -->
<metadata>
<type-mapping>Oracle9i</type-mapping>
</metadata>
</xa-datasource>
What am I missing?
PS: when using JobStoreTX, everything works well, so I guess it is something related to the container transaction manager.
After wrestling with the problem for the last couple of days, I found a solution: adding the property below to the quartz.properties file. As simple as that.
org.quartz.jobStore.dontSetAutoCommitFalse=false
Setting this parameter to true tells Quartz not to call setAutoCommit(false) on connections obtained from the DataSource(s). This can be helpful in a few situations, such as if you have a driver that complains if it is called when autocommit is already off. This property defaults to false, because most drivers require that setAutoCommit(false) is called.
For some reason JBoss overrides the default value, so I had to set it explicitly.
The credit goes to an unknown user at:
http://osdir.com/ml/java.quartz.user/2007-10/msg00123.html
I am using Quartz Enterprise Job Scheduler (1.8.3). The job configuration comes from several xml files and we have a special job that detects changes in these xml files and re-schedules jobs. This works dandy, but the problem is that I also need this "scheduler job" to re-schedule itself. Once this job re-schedules itself, for some reason, I see that it gets executed many times. I don't see any exceptions, though.
I have replicated and isolated the problem. This would be the entry-point:
import java.text.ParseException;

import org.quartz.CronTrigger;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.impl.StdSchedulerFactory;

public class App {

    public static void main(final String[] args) throws ParseException, SchedulerException {
        // get the scheduler from the factory
        final Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        // start the scheduler
        scheduler.start();
        // schedule the job to run every 20 seconds
        final JobDetail jobDetail = new JobDetail("jobname", "groupname", TestJob.class);
        final Trigger trigger = new CronTrigger("triggername", "groupname", "*/20 * * * * ?");
        // set the scheduler in the job data map, so the job can re-configure itself
        jobDetail.getJobDataMap().put("scheduler", scheduler);
        // schedule the job
        scheduler.scheduleJob(jobDetail, trigger);
    }
}
And this would be the job class:
import java.text.ParseException;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.log4j.Logger;
import org.quartz.CronTrigger;
import org.quartz.Job;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;

public class TestJob implements Job {

    private final static Logger LOG = Logger.getLogger(TestJob.class);
    private final static AtomicInteger jobExecutionCount = new AtomicInteger(0);

    public void execute(final JobExecutionContext context) throws JobExecutionException {
        // get the scheduler from the data map
        final Scheduler scheduler = (Scheduler) context.getJobDetail().getJobDataMap().get("scheduler");
        LOG.info("running job! " + jobExecutionCount.incrementAndGet());
        // build the job detail and trigger
        final JobDetail jobDetail = new JobDetail("jobname", "groupname", TestJob.class);
        // this time, re-schedule it with a */50 cron expression
        final Trigger trigger;
        try {
            trigger = new CronTrigger("triggername", "groupname", "*/50 * * * * ?");
        } catch (final ParseException e) {
            throw new JobExecutionException(e);
        }
        trigger.setJobName("jobname");
        trigger.setJobGroup("groupname");
        // set the scheduler in the job data map, so this job can re-configure itself
        jobDetail.getJobDataMap().put("scheduler", scheduler);
        try {
            scheduler.rescheduleJob(trigger.getName(), jobDetail.getGroup(), trigger);
        } catch (final SchedulerException e) {
            throw new JobExecutionException(e);
        }
    }
}
I've tried both scheduler.rescheduleJob and scheduler.deleteJob followed by scheduler.scheduleJob. No matter what I do, this is the output I get (I'm using log4j):
23:22:15,874 INFO SchedulerSignalerImpl:60 - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
23:22:15,878 INFO QuartzScheduler:219 - Quartz Scheduler v.1.8.3 created.
23:22:15,883 INFO RAMJobStore:139 - RAMJobStore initialized.
23:22:15,885 INFO QuartzScheduler:241 - Scheduler meta-data: Quartz Scheduler (v1.8.3)
'MyScheduler' with instanceId '1'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 3 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
23:22:15,885 INFO StdSchedulerFactory:1275 - Quartz scheduler 'MyScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
23:22:15,886 INFO StdSchedulerFactory:1279 - Quartz scheduler version: 1.8.3
23:22:15,886 INFO QuartzScheduler:497 - Scheduler MyScheduler_$_1 started.
23:22:20,018 INFO TestJob:26 - running job! 1
23:22:50,004 INFO TestJob:26 - running job! 2
23:22:50,010 INFO TestJob:26 - running job! 3
23:22:50,014 INFO TestJob:26 - running job! 4
23:22:50,016 INFO TestJob:26 - running job! 5
...
23:22:50,999 INFO TestJob:26 - running job! 672
23:22:51,000 INFO TestJob:26 - running job! 673
Notice how at 23:22:20,018, the job runs fine. At this point, the job re-schedules itself to run every 50 seconds. The next time it runs (at 23:22:50,004), it gets scheduled hundreds of times.
Any ideas on how to configure a job while executing that job? What am I doing wrong?
Thanks!
Easy.
First off, you have a couple of misunderstandings about cron expressions. "*/20 * * * * ?" is every twenty seconds, as the comment implies, but only because 60 is evenly divisible by 20. "*/50 ..." is not every fifty seconds; it is seconds 0 and 50 of every minute. As another example, "*/13 ..." is seconds 0, 13, 26, 39, and 52 of every minute - so between second 52 and the next minute's second 0 there are only 8 seconds, not 13. So with */50 you'll get 50 seconds between every other firing, and 10 seconds between the others.
That, however, is not the cause of the rapid firing of your job. The problem is that the current second is 50 and you are scheduling the new trigger to fire on second 50, so it immediately fires. And then it is still second 50, and the job executes again, and it schedules another trigger to fire on second 50, and so on, as many times as it can during the 50th second.
You need to set the trigger's start time at least one second into the future, or it will fire during the same second you are scheduling it whenever the schedule matches the current second.
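A minimal sketch of that fix against the question's Quartz 1.8 API (the one-second offset is an assumption; any future start time works, and java.util.Date is assumed imported):
final CronTrigger trigger = new CronTrigger("triggername", "groupname", "*/50 * * * * ?");
// push the start time into the future so the trigger cannot fire
// within the very second it is being scheduled
trigger.setStartTime(new Date(System.currentTimeMillis() + 1000L));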
Also, if you really need an "every N seconds" type of schedule, I suggest SimpleTrigger rather than CronTrigger. SimpleTrigger can do "every 35 seconds" or "every 50 seconds" no problem. CronTrigger is meant for expressions like "on seconds 0, 15, 40 and 43 of minutes 15 and 45 of the 10 o'clock hour on every Monday of January".
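For example, a SimpleTrigger that truly fires every 50 seconds, starting one second from now (a sketch against the Quartz 1.8 constructor, reusing the question's names):
final SimpleTrigger trigger = new SimpleTrigger("triggername", "groupname",
        new Date(System.currentTimeMillis() + 1000L), // start one second in the future
        null,                                         // no end time
        SimpleTrigger.REPEAT_INDEFINITELY,            // repeat forever
        50L * 1000L);                                 // 50-second interval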