In the Realtime Database I have a child bus_id whose nodes are bus_no, bus_id, bus_time, and creationDate as a timestamp. I am already sorting the data using orderByChild(timestamp). I have also implemented create/add bus_id when bus_id does not exist, using the rule !data.exists() && newData.exists().
Now I want to update the child bus_id only if it was created 10 minutes before, i.e. only if its timestamp is at least 10 minutes old.
I will be updating the data with ref.child(bus_id).setValue(busInfo);
So is there a query to solve the above problem, for example using a ternary operator?
What I can make out from your question is that you want to update the bus_id child only if it was created 10 minutes ago.
I can suggest two ways to do this. The first is to add a timestamp named timeCreated for every bus_id; you can then retrieve its value and check whether it is 10 minutes old. This lets you update only a bus_id that is at least 10 minutes old.
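The first approach can be sketched in Java. The timeCreated child name and the class below are assumptions of this sketch, and the Firebase SDK calls shown in the comment are only illustrative of where the check would go:

```java
// Sketch: only update a bus entry when its timeCreated child (a
// millisecond timestamp) is at least 10 minutes old. The names
// timeCreated, BusUpdateCheck, and busInfo are illustrative.
public class BusUpdateCheck {

    static final long TEN_MINUTES_MS = 10 * 60 * 1000L;

    // Pure helper: is the stored creation time at least 10 minutes
    // before "now"? Kept separate so it is easy to test.
    public static boolean isOlderThanTenMinutes(long timeCreatedMs, long nowMs) {
        return nowMs - timeCreatedMs >= TEN_MINUTES_MS;
    }

    /*
     * With the Firebase Android SDK you would read the child once and
     * gate the write on the helper above, roughly:
     *
     * ref.child(busId).child("timeCreated")
     *    .addListenerForSingleValueEvent(new ValueEventListener() {
     *        @Override public void onDataChange(DataSnapshot snap) {
     *            Long created = snap.getValue(Long.class);
     *            if (created != null
     *                    && isOlderThanTenMinutes(created, System.currentTimeMillis())) {
     *                ref.child(busId).setValue(busInfo);
     *            }
     *        }
     *        @Override public void onCancelled(DatabaseError e) { }
     *    });
     */
}
```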
The other way is to alter the Firebase security rules according to your need, as @JeremyW said in the comments.
Some useful resources for that: this video (skip to 22:55), the Firebase docs regarding this, and this Stack Overflow question.
I created an Android application that shows my Firestore database, but my problem is that I want to show the data by date in a specific way.
For example: for today (26-12-2021) when I open the app I want to see ONLY the data created today in an activity.
And I want to show the old data (25-12-2021) and before that in another activity.
I looked over some solutions but didn't find exactly what I wanted; any help would be appreciated. Thank you all in advance.
Since you mentioned that your data is in the Firestore database, you can filter it with queries. If your documents have a date field such as "created_on", you can filter on it. You can do something like this:
private final FirebaseFirestore db = FirebaseFirestore.getInstance();
db.collection(collection).whereEqualTo(field, value).get();
field can be "created_on" and value can be today's date. Note that whereEqualTo only matches exact values, so if created_on is a timestamp that includes a time of day you should use a range comparison instead: greater than or equal to the start of today, and less than the start of tomorrow. The link below explains how to structure the queries:
Perform simple and compound queries in Cloud Firestore
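A range query for "today" can be sketched as follows; the collection name is illustrative, and the Firestore calls in the comment assume created_on is stored as a Firestore timestamp:

```java
import java.time.LocalDate;
import java.time.ZoneId;

// Sketch: to show only documents created today, compute the start of
// today and the start of tomorrow, then query created_on as a range.
public class TodayRange {

    // Start of a given day in epoch milliseconds for a time zone.
    public static long startOfDayMillis(LocalDate day, ZoneId zone) {
        return day.atStartOfDay(zone).toInstant().toEpochMilli();
    }

    /*
     * With the Firestore SDK the query would look roughly like:
     *
     * long start = startOfDayMillis(LocalDate.now(), ZoneId.systemDefault());
     * long end   = startOfDayMillis(LocalDate.now().plusDays(1), ZoneId.systemDefault());
     * db.collection("items")                                  // name assumed
     *   .whereGreaterThanOrEqualTo("created_on", new java.util.Date(start))
     *   .whereLessThan("created_on", new java.util.Date(end))
     *   .get();
     *
     * The "old data" activity would use whereLessThan("created_on", start).
     */
}
```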
Please note that query usage is subject to the quotas of the Firebase plan you are on: the free plan (Spark) has daily read limits, while the paid plan (Blaze) is billed per use. The queries themselves behave the same on both plans.
I am working on a Spring Boot project. The task is: I should lock the editing capability of a product for 15 minutes after creation, so if the user creates a product, that product will be locked for editing for 15 minutes; after that it can be changed or deleted from the DB.
My question is: what is the best approach to achieve that:
1- Should I add a field to the DB table called lastUpdate and then check whether 15 minutes have passed?
2- Should I save all the newly created products in an array and clear this array every 15 minutes?
Or is there a better way in terms of performance and best practice?
I am using Spring Boot with JPA and MySQL.
Thanks.
You should not use the locking available in InnoDB.
Instead, you should have some column in some table that controls the lock. It should probably be a TIMESTAMP so you can decide whether the 15 minutes has been used up.
If the 'expiration' and 'deletion' are triggered by some DB action (an attempt to use the item, etc.), check it as part of that action. The expiration check (and delete) should be part of the transaction that includes the action; this will use InnoDB locking, but only briefly.
If there is no such action, then use either a MySQL EVENT or an OS "cron job" to run every few minutes and purge anything older than 15 minutes. (There will be a slight delay in purging, but that should not matter.)
If you provide the possible SQL statements that might occur during the lifetime of the items, I may be able to be more specific.
You can add a check in your update and delete methods. If there are many such methods, you can use AOP.
You can make use of both the functionalities you have mentioned.
First, it is good to have a lastUpdated field in your tables, which will also help with other functionality in the future.
Then you can keep an internal cache (a map of creation time to object reference), store new objects in it, and restrict editing for them. A scheduler can run every minute to clear expired objects from the map, making them available for updating again.
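The in-memory variant can be sketched like this; the class and method names are illustrative, and in a real service the check would sit in the update/delete methods (or an AOP aspect):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: remember each product's creation time and refuse edits
// inside the 15-minute window. Names are illustrative.
public class EditLock {

    static final long LOCK_MS = 15 * 60 * 1000L;
    private final Map<Long, Long> createdAt = new ConcurrentHashMap<>();

    // Called when a product is created.
    public void onCreated(long productId, long nowMs) {
        createdAt.put(productId, nowMs);
    }

    // True if the product may be edited (unknown products are editable).
    public boolean canEdit(long productId, long nowMs) {
        Long created = createdAt.get(productId);
        return created == null || nowMs - created >= LOCK_MS;
    }

    // A scheduler can call this periodically to drop expired entries.
    public void purgeExpired(long nowMs) {
        createdAt.entrySet().removeIf(e -> nowMs - e.getValue() >= LOCK_MS);
    }
}
```

Note that this cache is lost on restart, which is why keeping the lastUpdated column in the database as well is the safer source of truth.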
You could put your new products in an "incoming_products" table and put a timestamp column in that table that you set to date_add(now(), INTERVAL 15 MINUTE).
Then have a @Scheduled method in your Boot application run every minute to check whether there are incoming products whose timestamp column is < now(), insert them as products, and delete the corresponding incoming_products record.
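The staging-table variant can be sketched as follows; the table, class, and repository names are assumptions, and the Spring wiring in the comment is only illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: incoming products carry a releaseAt timestamp (creation time
// + 15 minutes); a scheduled job promotes the ones whose timestamp has
// passed. Names are illustrative.
public class IncomingPromoter {

    public static class Incoming {
        public final long id;
        public final long releaseAtMs;
        public Incoming(long id, long releaseAtMs) {
            this.id = id;
            this.releaseAtMs = releaseAtMs;
        }
    }

    // Select the rows the scheduled job should promote.
    public static List<Incoming> due(List<Incoming> incoming, long nowMs) {
        List<Incoming> result = new ArrayList<>();
        for (Incoming i : incoming) {
            if (i.releaseAtMs < nowMs) {
                result.add(i);
            }
        }
        return result;
    }

    /*
     * Spring wiring, roughly:
     *
     * @Scheduled(fixedRate = 60_000)   // every minute
     * public void promote() {
     *     for (Incoming i : due(incomingRepo.findAll(), System.currentTimeMillis())) {
     *         productRepo.save(toProduct(i));     // toProduct is assumed
     *         incomingRepo.deleteById(i.id);
     *     }
     * }
     */
}
```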
I want to perform a timer operation: after a form is submitted, the record should only be kept in the database if it is approved within 6 hours; otherwise the record should be deleted. How do I do it?
Maybe you can add a column for the submission time. Before someone approves a submission, you can verify that the time column has not expired. In addition, you can add a column describing which rows are approved.
I would store a timestamp with each record and write code to purge all records older than six hours. I would run that code regularly (say, every 15 minutes). At record creation, I could also schedule a task for 6 hours later that runs the same code.
I'm new to MongoDB and trying to figure out some solutions for basic requirements.
Here is the scenario:
I have an application which saves info to MongoDB through a scheduler. Each time the scheduler runs, I need to fetch the last inserted document from the DB to get the last info id, which I should send with my request to get the next set of info. Also, when I save an object, I need to check whether a specific value for a field already exists in the DB; if it does, I just need to update a count in that document, otherwise I save a new one.
My Questions are:
I'm using the MongoDB Java driver. Is it better to generate the object id myself, or to use the one generated by the driver or MongoDB itself?
What is the best way to retrieve the last inserted document to find the last processed info id?
If I do a find for the specific value in a field before each insert, I'm worried about the performance of the application, since it is supposed to have thousands of records. What is the best way to do this validation?
I saw in some places where they talk about two writes when doing inserts: one to write the document to the collection, and another to write it to a second collection to keep track of the last inserted entry. Is this a good way? We don't normally do this with relational databases.
Can I expect the generated object ids to be unique for every record inserted in the future too? (I have a doubt whether they can repeat.)
Thanks.
Can i expect the generated object ids to be unique for every record inserted in the future too? (Have a doubt whether it can repeat or not)
I'm using MongoDB java driver. Is it good to generate the object id myself, or is it good to use what is generated from the driver or MongoDB itself?
If you don't provide an object id for the document, Mongo will automatically generate one for you. All documents must have an _id. Here is the relevant reference.
The relevant part is
ObjectId is a 12-byte BSON type, constructed using:
a 4-byte value representing the seconds since the Unix epoch,
a 3-byte machine identifier,
a 2-byte process id, and
a 3-byte counter, starting with a random value.
http://docs.mongodb.org/manual/reference/object-id/
I guess this is more than enough randomization (although a poor hash choice, since dates/times increase monotonically).
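A consequence of the layout quoted above is that the creation time can be read straight out of an ObjectId's hex string without any driver support; a small sketch (the class name and the sample id are illustrative):

```java
// Sketch: the first 4 bytes of an ObjectId hold its creation time in
// seconds since the Unix epoch, so the first 8 hex characters of the
// 24-character hex string decode directly to that timestamp.
public class ObjectIdTime {

    public static long secondsSinceEpoch(String hexObjectId) {
        // first 8 hex chars = 4-byte big-endian seconds value
        return Long.parseLong(hexObjectId.substring(0, 8), 16);
    }
}
```

This is also why sorting on _id approximates insertion order, as the answer below relies on.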
If i do a find for the specific value in a column before each insert, i'm worried about the performance of the application since it is supposed to have thousands of records. What is the best way to do this validation?
Index the field first (there are no columns in MongoDB; two documents can have different fields):
db.collection_name.ensureIndex({fieldName : 1}) // createIndex() in newer MongoDB versions
What is the best way to retrieve the last inserted document to find the last processed info id?
Fun fact: we don't actually need an info id field if we use it once and then delete the document, because the _id field is already date-ordered. But if the document is regularly updated, then we need to modify it atomically with the findAndModify operation.
http://api.mongodb.org/java/2.6/com/mongodb/DBCollection.html#findAndModify%28com.mongodb.DBObject,%20com.mongodb.DBObject,%20com.mongodb.DBObject,%20boolean,%20com.mongodb.DBObject,%20boolean,%20boolean%29
This keeps the date of the last inserted/modified document up to date. Make sure this field is indexed. Then, using the above link, look for the sort parameter and populate it with
new BasicDBObject("infoId", -1); // -1 is for descending order
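In effect, "last inserted" is just a descending sort on the id/timestamp field with a limit of 1. The pure helper below mirrors what the query does; the driver call it corresponds to is shown in the comment (collection and field names are illustrative):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Sketch: pick the entry with the largest key, exactly as
// sort({_id: -1}).limit(1) would on the server.
public class LastInserted {

    public static Optional<Long> lastKey(List<Long> keys) {
        return keys.stream().max(Comparator.naturalOrder());
    }

    /*
     * With the 2.x Java driver:
     *
     * DBObject last = collection.find()
     *                           .sort(new BasicDBObject("_id", -1))
     *                           .limit(1)
     *                           .next();
     */
}
```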
I saw in some places where they are talking about two writes when doing inserts, one to write the document to the collection, other one to write it to another collection as last updated to keep track of the last inserted entry. Is this a good way? We don't normally to this with relational databases.
Terrible idea! Welcome to Mongo; you can do better than this. Since _id already encodes the creation time, sorting on it gives you the last inserted entry, so a second bookkeeping collection (and the extra write) is unnecessary.
I have a table that contains approx 10 million rows. This table is periodically updated (few times a day) by an external process. The table contains information that, if not in the update, should be deleted. Of course, you don't know if its in the update until the update has finished.
Right now, we take the timestamp of when the update began. When the update finishes, anything that has an "updated" value less than the start timestamp is wiped. This works for now, but is problematic when the updater process crashes for whatever reason: we have to start again with a new timestamp value.
It seems to be that there must be something more robust as this is a common problem. Any advice?
Instead of a timestamp, use an integer revision number. Increment it ONLY when you have a complete update, and then delete the rows with out-of-date revisions.
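The revision-number approach can be sketched as follows; the class, table, and column names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: every row written by an update run carries that run's
// revision number; once a run completes, rows with an older revision
// are stale and can be deleted. A crashed run never increments the
// current revision, so it never triggers a purge.
public class RevisionPurge {

    public static class Row {
        public final long id;
        public final int revision;
        public Row(long id, int revision) {
            this.id = id;
            this.revision = revision;
        }
    }

    // Rows that survive after revision `current` completed successfully.
    public static List<Row> survivors(List<Row> rows, int current) {
        List<Row> result = new ArrayList<>();
        for (Row r : rows) {
            if (r.revision >= current) {
                result.add(r);
            }
        }
        return result;
    }

    /*
     * In SQL the purge is a single statement, e.g.:
     *   DELETE FROM items WHERE revision < :currentRevision;
     * where :currentRevision is only advanced after the run finishes.
     */
}
```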
If you use a storage engine that supports transactions, like InnoDB (you're using MySQL, right?), you can consider wrapping the update in a transaction, so that if the update process crashes, the modifications are not committed.
Here is the official documentation.
We don't know anything about your architecture or how you perform this update (pure SQL? a web service?), but you might already have a transaction-management layer.