There are a few tables that Quartz Scheduler uses for scheduling jobs and for identifying which job is currently running. It uses the following tables:
qrtz_fired_triggers
qrtz_simple_triggers
qrtz_simprop_triggers
qrtz_cron_triggers
qrtz_blob_triggers
qrtz_triggers
qrtz_job_details
qrtz_calendars
qrtz_paused_trigger_grps
qrtz_locks
qrtz_scheduler_state
So what is the purpose of each of these tables, and what does each one signify?
Thanks in advance.
I had the chance to work with Quartz recently. I'm not 100% clear on this topic myself, but I'll try my best to answer your question from personal experience.
You must remember this basic flow:
1. Create a job.
2. Create a Trigger.
3. Schedule the job with the trigger: scheduler.scheduleJob(job, trigger)
All the above tables are based on the above 3 steps.
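In code, the three steps look roughly like this (a minimal sketch against the Quartz 2.x API; MyJob, the group names, and the 5-minute interval are just placeholders):

```java
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class QuartzFlowExample {

    // 1. The job: the task to be executed (stored in qrtz_job_details).
    public static class MyJob implements Job {
        @Override
        public void execute(JobExecutionContext context) throws JobExecutionException {
            System.out.println("Job executed");
        }
    }

    public static void main(String[] args) throws SchedulerException {
        JobDetail job = JobBuilder.newJob(MyJob.class)
                .withIdentity("myJob", "myGroup")
                .build();

        // 2. The trigger: when and how often to fire (stored in qrtz_triggers
        //    plus the type-specific table, here qrtz_simple_triggers).
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("myTrigger", "myGroup")
                .startNow()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInMinutes(5)
                        .repeatForever())
                .build();

        // 3. Hand both to the scheduler.
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();
        scheduler.scheduleJob(job, trigger);
    }
}
```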
qrtz_triggers is where the general information of a trigger is saved.
qrtz_simple_triggers, qrtz_simprop_triggers, qrtz_cron_triggers, and qrtz_blob_triggers have a foreign key relation to qrtz_triggers and store the type-specific details. For example, a cron trigger has a cron expression that is unique to it.
qrtz_job_details is simply the task (job) to be executed.
qrtz_fired_triggers keeps track of triggers that have fired and whose jobs are currently being executed (rows are removed once execution completes).
qrtz_paused_trigger_grps saves information about trigger groups that have been paused, i.e. are not active.
Calendars are useful for excluding blocks of time from the trigger's firing schedule. For instance, you could create a trigger that fires a job every weekday at 9:30 am, but then add a calendar that excludes all of the business's holidays. (Taken from the website; I haven't worked with it.)
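For completeness, here is a rough sketch of how such a calendar could be attached to a trigger (also based on the docs rather than hands-on use; the calendar name "holidays" and the excluded date are arbitrary):

```java
import java.util.Calendar;
import org.quartz.CronScheduleBuilder;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.calendar.HolidayCalendar;

public class CalendarExample {

    static Trigger buildWeekdayTrigger(Scheduler scheduler) throws Exception {
        // Exclude one holiday (here Christmas) from the firing schedule.
        HolidayCalendar holidays = new HolidayCalendar();
        Calendar christmas = Calendar.getInstance();
        christmas.set(2024, Calendar.DECEMBER, 25);
        holidays.addExcludedDate(christmas.getTime());
        scheduler.addCalendar("holidays", holidays, false, false);

        // Fire every weekday at 9:30 am, but skip any date excluded by the calendar.
        return TriggerBuilder.newTrigger()
                .withIdentity("weekdayTrigger")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 30 9 ? * MON-FRI"))
                .modifiedByCalendar("holidays")
                .build();
    }
}
```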
I honestly haven't worked with the qrtz_locks and qrtz_scheduler_state tables.
Check out this image, which I reverse engineered using MySQL Workbench.
I can provide some input on the qrtz_locks and qrtz_scheduler_state tables:
qrtz_locks stores the name of the instance currently holding a lock, to avoid the scenario of multiple nodes executing the same job.
qrtz_scheduler_state captures the state of each node, so that if one node goes down or fails to execute a job, another instance running in clustered mode can pick up the misfired job.
Related
I have a job built with the Quartz plugin, but I would like to run it only if my User table contains users with an active status.
Thank you
I have a job built with the Quartz plugin, but I would like to run it
only if my User table contains users with an active status.
There isn't enough information here to know for sure what the right thing to do is, but one thing to consider is that you can schedule the Quartz job to run at whatever frequency you like and have the job query the database for the users' active status; the job can then react accordingly.
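As a rough sketch of that idea (the user table, status column, and the way the DataSource is looked up are all assumptions), the job itself can decide whether there is anything to do:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class ActiveUserAwareJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            // Assumed: the DataSource was placed in the scheduler context at startup.
            DataSource dataSource =
                    (DataSource) context.getScheduler().getContext().get("dataSource");
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT COUNT(*) FROM user WHERE status = 'ACTIVE'");
                 ResultSet rs = ps.executeQuery()) {
                rs.next();
                if (rs.getLong(1) == 0) {
                    return; // no active users, nothing to do on this run
                }
                // ... do the actual work here ...
            }
        } catch (Exception e) {
            throw new JobExecutionException(e);
        }
    }
}
```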
I think what you want is a webhook: after a PUT/POST/DELETE (i.e. a state-changing call), you call another endpoint.
I am working on a Spring Boot project. The task is: I should lock the editing capability of a product for 15 minutes after creation, so basically if the user creates a product, this product will be locked for editing for 15 minutes; after that it can be changed or deleted from the DB.
My question is: what is the best approach to achieve that?
1. Should I add a field called lastUpdate to the DB table and then check whether the 15 minutes have passed?
2. Should I save all the newly created products in an array and clear this array every 15 minutes?
Or is there a better way in terms of performance and best practice?
I am using Spring Boot with JPA and MySQL.
Thanks.
You should not use the locking available in InnoDB.
Instead, you should have some column in some table that controls the lock. It should probably be a TIMESTAMP so you can decide whether the 15 minutes have been used up.
If the 'expiration' and 'deletion' are triggered by some db action (an attempt to use the item, etc.), check it as part of that db action. The expiration check (and delete) should be part of the transaction that includes that action; this will use InnoDB locking, but only briefly.
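A minimal sketch of that with Spring's JdbcTemplate (the product table and created_at column are assumed names); the 15-minute check is part of the UPDATE itself, so it runs inside the same transaction and the row lock is held only for that statement:

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ProductEditService {

    private final JdbcTemplate jdbcTemplate;

    public ProductEditService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Transactional
    public boolean rename(long productId, String newName) {
        // Only rows older than 15 minutes are touched; 0 affected rows means still locked.
        int updated = jdbcTemplate.update(
                "UPDATE product SET name = ? " +
                "WHERE id = ? AND created_at < NOW() - INTERVAL 15 MINUTE",
                newName, productId);
        return updated == 1;
    }
}
```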
If there is no such action, then use either a MySQL EVENT or an OS "cron job" that runs every few minutes to purge anything older than 15 minutes. (There will be a slight delay in purging, but that should not matter.)
If you provide the possible SQL statements that might occur during the lifetime of the items, I may be able to be more specific.
You can add a check in your update method and delete method. If there are many such methods, you can use AOP.
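For example, a sketch of such an aspect (the package, ProductService, and Product#getCreatedAt are assumptions) that rejects updates and deletes while the product is still inside the 15-minute window:

```java
import java.time.LocalDateTime;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.stereotype.Component;
import com.example.Product; // hypothetical entity with a getCreatedAt() accessor

@Aspect
@Component
public class EditLockAspect {

    // Intercept every update*/delete* method on the service that takes a Product argument.
    @Before("(execution(* com.example.ProductService.update*(com.example.Product)) || "
            + "execution(* com.example.ProductService.delete*(com.example.Product))) "
            + "&& args(product)")
    public void rejectIfLocked(Product product) {
        if (product.getCreatedAt().isAfter(LocalDateTime.now().minusMinutes(15))) {
            throw new IllegalStateException(
                    "Product is locked for editing for 15 minutes after creation");
        }
    }
}
```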
You can make use of both of the approaches you have mentioned.
First, it's good to have a lastUpdated field in your tables, which will also help you in the future with other functionality.
Then you can have an internal cache (a map holding the creation time and an object reference), store newly created objects in it, and restrict editing for them. You can run a scheduler that checks every minute, clears expired objects from your map, and makes them available for updating again.
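A small sketch of that cache, assuming a Spring Boot application with @EnableScheduling; the class and method names are made up:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class RecentProductCache {

    private final Map<Long, Instant> recentlyCreated = new ConcurrentHashMap<>();

    // Call this right after a product is persisted.
    public void register(Long productId) {
        recentlyCreated.put(productId, Instant.now());
    }

    // Update/delete methods call this first and refuse to proceed if it returns true.
    public boolean isLocked(Long productId) {
        return recentlyCreated.containsKey(productId);
    }

    // Every minute, drop entries older than 15 minutes so the products become editable again.
    @Scheduled(fixedRate = 60_000)
    public void evictExpired() {
        Instant cutoff = Instant.now().minus(Duration.ofMinutes(15));
        recentlyCreated.values().removeIf(createdAt -> createdAt.isBefore(cutoff));
    }
}
```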
You could put your new products in an "incoming_products" table and give that table a timestamp column that you set to DATE_ADD(NOW(), INTERVAL 15 MINUTE).
Then have a @Scheduled method in your Boot application run every minute to check whether there are incoming products whose timestamp column is < NOW(), insert them as products, and delete the corresponding incoming_products records.
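A sketch of that @Scheduled method using JdbcTemplate; the incoming_products/products tables and the release_at column follow the idea above but are otherwise assumptions:

```java
import java.sql.Timestamp;
import java.time.Instant;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
public class IncomingProductPublisher {

    private final JdbcTemplate jdbcTemplate;

    public IncomingProductPublisher(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Every minute, promote incoming products whose release time has passed.
    @Scheduled(fixedRate = 60_000)
    @Transactional
    public void publishDueProducts() {
        // Use one cutoff for both statements so the same rows are inserted and deleted.
        Timestamp cutoff = Timestamp.from(Instant.now());
        jdbcTemplate.update(
                "INSERT INTO products (name, price) " +
                "SELECT name, price FROM incoming_products WHERE release_at <= ?", cutoff);
        jdbcTemplate.update(
                "DELETE FROM incoming_products WHERE release_at <= ?", cutoff);
    }
}
```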
I have several datetime columns in my MySQL DB. I want to trigger a Java function when the date is reached. At worst, triggering a MySQL function could do the job as well. How can I have a datetime-based trigger in MySQL without running a cron job every minute?
Even a trigger wouldn't do the job; there must be a process that checks (in your case, whether the date has been reached).
As Thomas said, you need a job or a task (cron) that performs the check, or an application that does what you wish with the database.
It is not ideal to do this in the database, but if there is no better choice, you can achieve it by creating a MySQL event, which is a scheduled task.
You need to add an insert and/or update trigger to the database table and create the event based on the datetime value of the column.
You can create the event in such a way that it drops itself after it has executed at the specified time.
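A rough sketch of that idea driven from the Java side (the table, column, and event names are assumptions; the MySQL user needs the EVENT privilege and the event scheduler must be enabled): when the row is saved, create a one-shot event for its datetime and let ON COMPLETION NOT PRESERVE, the default, drop the event after it fires.

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;
import org.springframework.jdbc.core.JdbcTemplate;

public class RowDueEventCreator {

    private final JdbcTemplate jdbcTemplate;

    public RowDueEventCreator(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Creates an event that marks the row as processed when its due date is reached.
    public void scheduleRowEvent(long rowId, LocalDateTime dueAt) {
        String sql = String.format(
                "CREATE EVENT IF NOT EXISTS row_due_%d " +
                "ON SCHEDULE AT '%s' " +
                "ON COMPLETION NOT PRESERVE " +   // the event drops itself after running once
                "DO UPDATE my_table SET processed = 1 WHERE id = %d",
                rowId, Timestamp.valueOf(dueAt), rowId);
        jdbcTemplate.execute(sql);
    }
}
```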
One of my Java application's functions is to read and parse an XML file very frequently (about every 5 minutes) and populate a database table. I have created a cron job to do that. Most of the columns' values remain the same, but for certain columns there may be a frequent update of the value. I was wondering what the most efficient way of doing this is:
1) Delete the table every time and re-create it, or
2) Update the table data, specifically the columns where a change has appeared in the source file.
The number of rows parsed and persisted every time is about 40000-50000.
I would assume that around 2000-3000 rows need to update on every cron job run.
I am using JPA to persist data to a mysql server and I have gone for the first option so far.
Obviously for both options the job would execute as a single transaction.
Any ideas which one is better and possibly any optimization suggestions?
I would suggest scheduling your jobs using something more sophisticated than cron. For instance, Quartz.
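For instance, a minimal Quartz 2.x sketch that fires a hypothetical XmlImportJob (which would wrap your existing parse-and-persist code) every five minutes:

```java
import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class XmlImportScheduling {

    public static class XmlImportJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // ... read and parse the XML file, then persist/update the rows ...
        }
    }

    public static void main(String[] args) throws SchedulerException {
        JobDetail job = JobBuilder.newJob(XmlImportJob.class)
                .withIdentity("xmlImport")
                .build();

        // Cron expression: at second 0 of every 5th minute.
        Trigger every5Minutes = TriggerBuilder.newTrigger()
                .withIdentity("xmlImportTrigger")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0/5 * * * ?"))
                .build();

        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();
        scheduler.scheduleJob(job, every5Minutes);
    }
}
```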
Can anyone please suggest the best approach for my requirement? I need to automatically update a table value after some specified time, using Java and MySQL as the database.
You can achieve this using the Quartz scheduler. You need to create a job and run it at the required time so that it fetches the data from the database, and based on that you do what's needed.
Quartz Scheduler Tutorial
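As a small sketch of that (TableUpdateJob is a hypothetical job holding the fetch-and-update logic), a trigger with only a start time fires exactly once at the required moment:

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.util.Date;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class OneShotUpdateScheduling {

    public static class TableUpdateJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // ... fetch the data from the database and update the table value ...
        }
    }

    public static void main(String[] args) throws SchedulerException {
        // The required time (example value only).
        Date runAt = Date.from(LocalDateTime.of(2024, 1, 1, 2, 0)
                .atZone(ZoneId.systemDefault()).toInstant());

        Trigger oneShot = TriggerBuilder.newTrigger()
                .withIdentity("tableUpdateTrigger")
                .startAt(runAt)   // a trigger with no schedule fires exactly once
                .build();

        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();
        scheduler.scheduleJob(JobBuilder.newJob(TableUpdateJob.class)
                .withIdentity("tableUpdateJob").build(), oneShot);
    }
}
```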
Create a timer in Java.
Add a task to the timer that updates the value in MySQL.
Start the timer.
Example of Java Timer API: here
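A minimal sketch of those three steps (the JDBC URL, credentials, table, and SQL statement are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Timer;
import java.util.TimerTask;

public class ValueUpdateTimer {

    public static void main(String[] args) {
        // 1. Create a timer.
        Timer timer = new Timer();

        // 2. Add a task that updates the value in MySQL.
        TimerTask updateTask = new TimerTask() {
            @Override
            public void run() {
                try (Connection con = DriverManager.getConnection(
                            "jdbc:mysql://localhost:3306/mydb", "user", "password");
                     PreparedStatement ps = con.prepareStatement(
                            "UPDATE my_table SET my_value = my_value + 1 WHERE id = 1")) {
                    ps.executeUpdate();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        };

        // 3. Start the timer: first run after 1 minute, then every 15 minutes.
        timer.schedule(updateTask, 60_000L, 15L * 60 * 1000);
    }
}
```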