How to set expiry for form data in Java?

I want to run a timer operation after submitting a form: if the data has been approved within 6 hours, only then will it be updated in the database; otherwise, delete the record. How do I do this?

Maybe you can add a column recording the submission time. Before someone approves a submission, you can verify that the time column has not expired. In addition, you can add a column indicating which rows are approved.

I would store a timestamp with each record, and write code to purge all the records older than six hours. I would run that code regularly (say, every 15 minutes). At record creation, I could also schedule a task for 6 hours later that would run the code.
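As a minimal sketch of this idea, assuming an in-memory store keyed by record id (in a real application the records would live in a database table with a created-at TIMESTAMP column, and the purge would be a DELETE statement):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical class and method names, for illustration only.
public class SubmissionStore {
    private static final Duration EXPIRY = Duration.ofHours(6);

    private final Map<Long, Instant> submittedAt = new ConcurrentHashMap<>();

    public void submit(long id, Instant when) {
        submittedAt.put(id, when);
    }

    public void approve(long id) {
        submittedAt.remove(id); // approved records are no longer purge candidates
    }

    /** Removes every record older than six hours; returns how many were purged. */
    public int purgeExpired(Instant now) {
        int before = submittedAt.size();
        submittedAt.values().removeIf(t -> Duration.between(t, now).compareTo(EXPIRY) > 0);
        return before - submittedAt.size();
    }
}
```

`purgeExpired` could then be driven every 15 minutes by a `ScheduledExecutorService` or a Spring `@Scheduled` method. Passing `now` as a parameter (instead of calling `Instant.now()` inside) keeps the expiry logic easy to test.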

Related

Best approach to lock editing certain record in DB

I am working on a Spring Boot project. The task is: I should lock the editing capability of a product for 15 minutes after creation. So basically, if the user creates a product, this product will be locked for editing for 15 minutes; after that, it can be changed or deleted from the DB.
My question is: what is the best approach to achieve that?
1- Should I add a field to the DB table called lastUpdate and then check whether 15 minutes have elapsed?
2- Should I save all the newly created products in an array and clear this array every 15 minutes?
Or is there any better way with regard to performance and best practice?
I am using Spring Boot with JPA & MySQL.
Thanks.
You should not use the locking available in InnoDB.
Instead, you should have some column in some table that controls the lock. It should probably be a TIMESTAMP so you can decide whether the 15 minutes has been used up.
If the 'expiration' and 'deletion' are triggered by some db action (an attempt to use the item, etc.), check it as part of that db action. The expiration check (and delete) should be part of the transaction that includes that action; this will use InnoDB locking, but only briefly.
If there is no such action, then use either a MySQL EVENT or an OS "cron job" that runs every few minutes to purge anything older than 15 minutes. (There will be a slight delay in purging, but that should not matter.)
If you provide the possible SQL statements that might occur during the lifetime of the items, I may be able to be more specific.
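The timestamp comparison behind the lock column could look like the following hedged sketch; the class and method names are illustrative, not from the original question:

```java
import java.time.Duration;
import java.time.Instant;

// Pure helper deciding whether a row created at `createdAt` is still
// inside its 15-minute edit lock. In SQL this would be the condition
// created_at > NOW() - INTERVAL 15 MINUTE.
public final class EditLock {
    private static final Duration LOCK_WINDOW = Duration.ofMinutes(15);

    private EditLock() {}

    /** True while the record may not yet be edited or deleted. */
    public static boolean isLocked(Instant createdAt, Instant now) {
        return Duration.between(createdAt, now).compareTo(LOCK_WINDOW) < 0;
    }
}
```

The update/delete service method would call `isLocked` (or run the equivalent SQL condition inside the same transaction) and reject the operation while it returns true.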
You can add a check to your update and delete methods. If there are many such methods, you can use AOP.
You can make use of both the functionalities you have mentioned.
First, it's good to have a lastUpdated field in your tables, which would also help you with other functionality in the future.
Then you can have an internal cache (a map holding a creation time and an object reference), store objects in it, and restrict editing for them. You can run a scheduler that checks every minute, clears expired objects from your map, and makes them available for updating.
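A minimal sketch of that cache idea, with illustrative names (note that an in-process map only works if the application runs as a single instance):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Newly created product ids go into the map with their creation time,
// a background task evicts entries older than 15 minutes, and the
// update path refuses edits while the id is still present.
public class RecentProductCache {
    private static final Duration LOCK_WINDOW = Duration.ofMinutes(15);

    private final Map<Long, Instant> recentlyCreated = new ConcurrentHashMap<>();
    private final ScheduledExecutorService cleaner =
            Executors.newSingleThreadScheduledExecutor();

    public void start() {
        cleaner.scheduleAtFixedRate(() -> evictExpired(Instant.now()), 1, 1, TimeUnit.MINUTES);
    }

    public void onProductCreated(long id, Instant when) {
        recentlyCreated.put(id, when);
    }

    public boolean isEditable(long id) {
        return !recentlyCreated.containsKey(id);
    }

    // Package-visible so the eviction can also be driven directly in tests.
    void evictExpired(Instant now) {
        recentlyCreated.values().removeIf(
                t -> Duration.between(t, now).compareTo(LOCK_WINDOW) >= 0);
    }
}
```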
You could put your new products in an "incoming_products" table and put a timestamp column in that table that you set to date_add(now(), INTERVAL 15 MINUTE).
Then have a @Scheduled method in your Boot application run every minute to check whether there are incoming products whose timestamp column is < now(); insert them as products and delete the corresponding incoming-products records.
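The selection step such a @Scheduled method would perform can be sketched as follows. In the real application this would be a SQL query against the incoming_products table; here the rows are modeled as plain objects so the cut-off logic is easy to see, and all names are hypothetical:

```java
import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;

public class IncomingProducts {
    // Mirrors an incoming_products row: id plus its release timestamp,
    // i.e. the value stored as date_add(now(), INTERVAL 15 MINUTE).
    public record IncomingProduct(long id, Instant releaseAt) {}

    /** Returns the rows whose release timestamp has passed and should be promoted. */
    public static List<IncomingProduct> dueForPromotion(List<IncomingProduct> incoming,
                                                        Instant now) {
        return incoming.stream()
                .filter(p -> p.releaseAt().isBefore(now))
                .collect(Collectors.toList());
    }
}
```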

How to efficiently track unprocessed records in database?

I have a table in a database that is continuously being populated with new records that have to be simply sent to Elasticsearch.
Every 15 minutes the table accrues about 15000 records. My assignment is to create a @Scheduled job that every 15 minutes gathers the unprocessed records and posts them to Elasticsearch.
My question is what is the most efficient way to do it? How to track unprocessed records efficiently?
My suggestion is to use a column INSERTED_DATE that is already in this table, and each time persist the last processed INSERTED_DATE in an auxiliary table. Nevertheless, it can happen that two or more records were inserted simultaneously but only one of them was processed. Surely there are other corner cases that invalidate my approach.
Could you share any thoughts about it? To me it looks like a typical problem for a data-intensive application, but I am facing it for the first time in real life.
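One hedged sketch of how the watermark idea from the question can be made safe against rows sharing the same INSERTED_DATE: order by (insertedDate, id) and persist both values as the watermark, so ties on the timestamp are broken by the primary key. Names are illustrative:

```java
import java.time.Instant;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class WatermarkTracker {
    public record Row(long id, Instant insertedDate) {}

    /** Rows strictly after the (lastDate, lastId) watermark, in processing order. */
    public static List<Row> unprocessed(List<Row> all, Instant lastDate, long lastId) {
        return all.stream()
                .filter(r -> r.insertedDate().isAfter(lastDate)
                        || (r.insertedDate().equals(lastDate) && r.id() > lastId))
                .sorted(Comparator.comparing(Row::insertedDate)
                        .thenComparingLong(Row::id))
                .collect(Collectors.toList());
    }
}
```

Note this still assumes rows become visible in (insertedDate, id) order; a dedicated processed flag on each row is more robust when inserts can commit out of order.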

firebase real time database validating rules

In the Realtime Database I have a child bus_id whose nodes are bus_no, bus_id, bus_time, and creationDate as a timestamp. I am already sorting data using orderByChild(timestamp). I have also implemented create/add of bus_id when bus_id does not exist, using the rule !data.exists() && newData.exists().
Now I want to implement this: update the child bus_id only if it was created 10 minutes before, i.e. if its timestamp is more than 10 minutes old.
And I will be updating data with ref.child(bus_id).setValue(busInfo);
So is there a rule expression for validating this, using a ternary operator?
What I can make out from your question is that you want to allow updating a bus_id child only if it was created more than 10 minutes ago.
I can suggest you two ways to do so, first is by adding a new timestamp with name timeCreated for every bus_id, then you can retrieve their value, and check if it is 10 mins old or not.
This can let you update the bus_id which is 10 minutes older.
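The client-side check from that first suggestion amounts to comparing the stored timeCreated value (epoch milliseconds) against the current time before attempting the setValue call. A minimal sketch, with hypothetical class and method names:

```java
import java.util.concurrent.TimeUnit;

public final class BusUpdateGuard {
    private static final long TEN_MINUTES_MS = TimeUnit.MINUTES.toMillis(10);

    private BusUpdateGuard() {}

    /** True once the bus_id entry is at least 10 minutes old and may be updated. */
    public static boolean canUpdate(long timeCreatedMillis, long nowMillis) {
        return nowMillis - timeCreatedMillis >= TEN_MINUTES_MS;
    }
}
```

In the app you would read timeCreated from the snapshot, call `canUpdate(timeCreated, System.currentTimeMillis())`, and only then run `ref.child(bus_id).setValue(busInfo)`. Keep in mind a purely client-side check can be bypassed, which is why the rules-based approach below is also worth considering.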
Another way is by altering the Firebase rules according to your need, as @JeremyW said in the comments.
Some useful resources for that: this video (skip to 22:55), the Firebase docs on this topic, and this Stack Overflow question.

Populate database table on a frequent basis using JPA

One of my Java application's functionality is to read and parse very frequently (almost every 5 minutes) an xml file and populate a database table. I have created a cron job to do that. Most of the columns' values remain the same but for certain columns there may be a frequent update on the value. I was wondering what is the most efficient way of doing that:
1) Delete the table every time and re-create it or
2) Update the table data and specifically the column where a change in the source file has appeared.
The number of rows parsed and persisted every time is about 40000-50000.
I would assume that around 2000-3000 rows need to update on every cron job run.
I am using JPA to persist data to a MySQL server, and I have gone with the first option so far.
Obviously for both options the job would execute as a single transaction.
Any ideas which one is better and possibly any optimization suggestions?
I would suggest scheduling your jobs using something more sophisticated than cron. For instance, Quartz.
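On the delete-vs-update question itself, option 2 can be sketched as a diff step: instead of dropping and re-inserting all 40000-50000 rows, compare the freshly parsed rows against the current table contents and touch only the 2000-3000 that actually changed. Types and names below are illustrative; with JPA the "update" list would become merge() calls inside the transaction:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class RowDiff {
    public record Row(long id, String value) {}

    public record Diff(List<Row> toInsert, List<Row> toUpdate) {}

    public static Diff diff(Map<Long, Row> existingById, List<Row> parsed) {
        List<Row> inserts = new ArrayList<>();
        List<Row> updates = new ArrayList<>();
        for (Row r : parsed) {
            Row current = existingById.get(r.id());
            if (current == null) {
                inserts.add(r);
            } else if (!Objects.equals(current.value(), r.value())) {
                updates.add(r);
            } // unchanged rows are left alone
        }
        return new Diff(inserts, updates);
    }
}
```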

Incremental update of millions of records

I have a table that contains approx 10 million rows. This table is periodically updated (few times a day) by an external process. The table contains information that, if not in the update, should be deleted. Of course, you don't know if its in the update until the update has finished.
Right now, we take the timestamp of when the update began. When the update finishes, anything that has an "updated" value less than the start timestamp is wiped. This works for now, but is problematic when the updater process crashes for whatever reason: we have to start again with a new timestamp value.
It seems that there must be something more robust, as this is a common problem. Any advice?
Instead of a time stamp, use an integer revision number. Increment it ONLY when you have a complete update, and then delete the elements with out of date revisions.
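A minimal in-memory illustration of that revision-number idea (names are hypothetical): every row touched by an update run is stamped with the run's revision, and rows are deleted only after the run completes, by comparing revisions. If the updater crashes mid-run, nothing is deleted and the next run simply uses a higher revision:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RevisionedStore {
    private final Map<String, Long> revisionByKey = new ConcurrentHashMap<>();
    private long currentRevision = 0; // single updater assumed, as in the question

    /** Called at the start of an update run. */
    public long beginRun() {
        return ++currentRevision;
    }

    /** Called for every row present in the update. */
    public void upsert(String key, long revision) {
        revisionByKey.put(key, revision);
    }

    /** Called only after the run finished successfully; returns rows removed. */
    public int finishRun(long revision) {
        int before = revisionByKey.size();
        revisionByKey.values().removeIf(rev -> rev < revision);
        return before - revisionByKey.size();
    }

    public boolean contains(String key) {
        return revisionByKey.containsKey(key);
    }
}
```

In the database version, `revisionByKey` is simply an integer revision column on the 10-million-row table, and `finishRun` is a single `DELETE ... WHERE revision < ?`.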
If you use a storage engine that supports transactions, like InnoDB (you're using MySQL, right?), you can consider using transactions, so that if the update process crashes, the modifications are not committed.
Here is the official documentation.
We don't know anything about your architecture, and how you do this update (pure SQL, webservice?), but you might already have a transaction management layer.
