Delete firestore document based on rule - java

I created an app where people can upload images of themselves.
Now, in order to deal with cases where people upload inappropriate images, I created a reporting system. Basically, every time someone reports an image, the ID of the reporting person is added to an array in Firestore like this:
db
    .collection( "Reports" )
    .document( ImageID )
    .update( "Reports", FieldValue.arrayUnion( UID ) );
Now I want to set a rule so that if, for example, the size of the array reaches 5 (5 different people have reported the image), the image is automatically deleted from the cloud.
Is there any way to do this without reading the array and checking its size every time?
Thank you

You can create a trigger on your Reports collection.
export const updateReportTrigger = functions.firestore
  .document('Reports/{ImageID}')
  .onUpdate(onUpdate)

async function onUpdate({ before, after }, context) {
  const newData = after.data()
  // fires once 5 different users have reported the image
  if (newData && newData.Reports && newData.Reports.length >= 5) {
    // put your delete logic here, e.g. delete this document and the
    // image it references from storage
    // you can access the document id through context.params.ImageID
  }
}
https://firebase.google.com/docs/functions/firestore-events

The most common way to do that would be through Cloud Functions, which are server-side code that is automatically triggered when (for this use-case) something is written to Firestore. See the documentation on Cloud Functions and the page on Firestore triggers.
An alternative (but more involved) approach would be to secure access through a query and security rules. In this scenario you'd:
Add a reportCount field to the document, and ensure it gets updated in sync with the reports.
From the application code, use a query to only request images with a reportCount less than 5 (see the sketch after this list).
Use security rules to only allow queries with that clause in them.
As said, this is more involved in terms of the amount of code you write, but the advantage is that no server-side code is involved, and that documents are blocked immediately when too many reports come in for them.
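A minimal sketch of the query from step 2, using the Android Firestore SDK; the Images collection name and the numeric reportCount field are assumptions carried over from the steps above:

import com.google.firebase.firestore.DocumentSnapshot;
import com.google.firebase.firestore.FirebaseFirestore;

// Only ask for images with fewer than 5 reports; combined with a matching
// security rule, over-reported documents can then no longer be read at all.
FirebaseFirestore db = FirebaseFirestore.getInstance();
db.collection("Images")
        .whereLessThan("reportCount", 5)
        .get()
        .addOnSuccessListener(snapshot -> {
            for (DocumentSnapshot doc : snapshot.getDocuments()) {
                // render the image referenced by this document
            }
        });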

Related

Android: Best way to store large amounts of sensor data over a long time

I'm fairly new to Android development and I have a general how-to question:
My app gets sensor data from the step detector (detected steps get added up).
Now I need to store those steps (which will be a lot of data).
The steps should be stored like this:
If the data is from today,
steps are stored on a per-hour basis.
Else
steps are stored on a per-day basis.
SharedPreferences is ruled out, as it only stores key-value pairs.
But can SQLite handle this? Or is there any other way?
A future feature could be to sync this data with a server.
I mean, this could end up as thousands of entries, and the app will also support other large data sets which need to be stored in a similar way.
Try using the Realm NoSQL database for this. The point is, you can save the entire database on the SD card as a separate file for each day and process it later. It is native and works very fast with large amounts of data. You can process all your readings later on: open the database, transform the readings (perhaps interpolate older values to shrink the data in size), then upload it to the cloud and delete the database file.
But anyway, the database is just an implementation detail; consider abstracting out all your operations so you can replace the DB later on.
As far as I know, SQLite stores all tables in a single file, so you would need a column for the date and all records would be stored in a single table. Realm is more flexible for this task.
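For the abstraction point, a minimal sketch of what such a boundary could look like; all names here are hypothetical:

// Hypothetical storage boundary: the rest of the app only talks to this
// interface, so Realm, SQLite or a cloud-synced backend can be swapped in later.
public interface StepStore {
    // record steps for the hour (today) or day (history) containing epochMillis
    void addSteps(long epochMillis, int steps);

    // sum of steps in [fromMillis, toMillis), regardless of storage granularity
    int stepsBetween(long fromMillis, long toMillis);
}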
SQLite can be used; it will be there as long as your application exists on the device. However, if you want, you can use a cloud service: Azure provides a simple and easy-to-use App Service with Easy Tables, in which you can directly call the APIs while it internally takes care of making the connection and inserting the data into the table. You can use the Free Tier of App Service to test the concept.

Split a big Jira REST request

I'm looking for a way to split a big request like:
rest/api/2/search?jql=(project in (project1, project2, project3....project10)) AND issuetype = Bug AND (component not in (projectA, projectB) OR component = EMPTY). The result will contain > 500 bugs, so it's very, very slow. I want to fetch them with several requests (the method performing the request will be annotated with @Asynchronous), but the JQL needs to stay the same. I don't want to search separately for project1, project2... project10. It would be nice if someone had an idea to solve my problem.
Thank you :)
You need to calculate pagination. First, get the metadata:
rest/api/2/search?jql=[complete search query]&fields=*none&maxResults=0
you should get something like this:
{"startAt":0,"maxResults":0,"total":100,"issues":[]}
so completely without fields, just pagination metadata.
Then create the search URIs like this:
rest/api/2/search?jql=[complete search query]&startAt=0&maxResults=10
rest/api/2/search?jql=[complete search query]&startAt=10&maxResults=10
..etc
Beware: the data may change between requests, so be prepared that you won't receive all of it; also, the pagination metadata (especially "total") may not be present if its calculation is expensive. More: Paged API
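A rough sketch of that loop in Java (java.net.http, JDK 11+); the base URL, the JQL and the JSON handling are placeholders, and the stop condition simply watches for a short page:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class JiraPager {
    static final String BASE = "https://jira.example.com/rest/api/2/search"; // placeholder host
    static final int PAGE_SIZE = 100;

    public static void main(String[] args) throws Exception {
        String jql = URLEncoder.encode("project in (P1, P2) AND issuetype = Bug",
                StandardCharsets.UTF_8); // your complete search query
        HttpClient client = HttpClient.newHttpClient();
        int startAt = 0;
        while (true) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(BASE + "?jql=" + jql
                            + "&startAt=" + startAt + "&maxResults=" + PAGE_SIZE))
                    .header("Accept", "application/json")
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            int issuesOnPage = parseAndProcess(response.body());
            if (issuesOnPage < PAGE_SIZE) {
                break; // short (or empty) page: no more issues to fetch
            }
            startAt += PAGE_SIZE;
        }
    }

    // placeholder: parse the JSON payload with your JSON library of choice,
    // process the issues of this page and return how many it contained
    static int parseAndProcess(String json) {
        return 0;
    }
}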
Can you not break it into two parts? If you are displaying in a web page, display what you can without a performance hit. If it's a report, then get all objects gradually and show it once completed.
Get the total count for the JQL, fetching just the minimum information needed for step 2 (assume it's 900).
Use the pagination feature (maxResults=100) to make multiple calls.
Work on each request.
If you don't want to run the two requests at once and need paging of bugs on user request, you can:
Make a request with the 'maxResults' property set to how many you need.
On the next request, keep the same 'maxResults' and set 'startAt' to that value.
If you need to fetch more data, make a new request with the same 'maxResults' but update 'startAt' to be the count of bugs you fetched in the previous requests.

How do I make sure my N1QL query considers recent changes?

My situation is that, given the 3 following methods (I use couchbase-java-client 2.2 in Scala, and the Couchbase Server version is 4.1):
def findAll() = {
  bucket.query(N1qlQuery.simple(select("*").from(i(DatabaseBucket.USER))))
    .allRows().toList
}

def findById(id: UUID) = {
  Option(bucket.get(id.toString, classOf[RawJsonDocument])).map(i => read[User](i.content()))
}

def upsert(i: User) = {
  bucket.async().upsert(RawJsonDocument.create(i.id.toString, write(i)))
}
Basically, they are insert, find one by ID, and find all. I did an experiment where:
I insert a User, then fetch it with findById right after that; I get the user I inserted, correctly.
I insert and then use findAll right after that; it returns empty.
I insert, add a 3-second delay, and then use findAll; I can find the one I inserted.
From that, I suspect that the N1qlQuery only searches over the cached layer rather than the "persisted" layer. So how can I force it to search the "persisted" layer?
In Couchbase 4.0 with N1QL, there are different consistency levels you can specify when querying, which correspond to different costs for updates/changes to propagate through index recalculation. These aren't tied to whether or not data is persisted; rather, it's an option when you issue the query. The default is "not bounded", and to make sure that your upsert request is taken into consideration, you'll want to issue this query as "request plus".
To get the effect you're looking for, you'll want to add N1qlParams on your creation of the N1qlQuery by using another form of the simple() method. Add a N1qlParams with ScanConsistency.REQUEST_PLUS. You can read more about this in Couchbase's Developer Guide; there's a Java API example of it. With that change, you won't need to have a sleep() in there; the system will automatically service the query request once the index recalculation has gotten to your specified level.
Depending on how you're using this elsewhere in your application, there are times you may want either consistency level.
You need stronger scan consistency. Add a N1qlParam to the query, using consistency(ScanConsistency.REQUEST_PLUS)
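A minimal sketch with the Java SDK 2.x (the statement and bucket name are placeholders):

import com.couchbase.client.java.query.N1qlParams;
import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.consistency.ScanConsistency;

// REQUEST_PLUS blocks the query until the index has caught up with the
// mutations made before the request, so the fresh upsert becomes visible.
N1qlParams params = N1qlParams.build().consistency(ScanConsistency.REQUEST_PLUS);
bucket.query(N1qlQuery.simple("SELECT * FROM `users`", params));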

Reading all the documents from a bucket

Is there a way to read all the documents from a bucket? It is an active bucket and I want to access newly created documents as well.
A few people suggested using a view to query against the bucket. How can I create a view which will be updated with new or updated documents?
Newly created view's map function:
function (doc, meta) {
  emit(doc);
}
The reduce function is empty. When I query the view like this: bucket.query(ViewQuery.from("test1", "all")).totalRows(), it returns 0 results.
For the zero-results issue: did you promote the view to a production view? This is a common mistake. Development views only look at a small subset of the data so as not to overwhelm the server. Try this first.
Also, never emit the entire document if you can help it, especially if you are looking over all documents in a bucket. You want to emit the IDs of the documents and then, if you need the content of those objects, do a get operation or a bulk operation. I would give you a direct link for the bulk operations, but you have not said which SDK you are using and those are SDK specific. Here is the one for Java, for example.
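For reference, the usual bulk pattern with the Java SDK 2.x and RxJava 1.x looks roughly like this; ids (the document IDs emitted by the view) and an open bucket are assumed:

import com.couchbase.client.java.document.JsonDocument;
import rx.Observable;
import java.util.List;

// Fetch all documents for the given IDs in parallel and collect the results.
List<JsonDocument> docs = Observable
    .from(ids)
    .flatMap(id -> bucket.async().get(id))
    .toList()
    .toBlocking()
    .single();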
All that being said, I have questions about why you are doing the equivalent of select * from bucket. What are you planning to do with this data once you have it? What are you really trying to do? There are lots of options on how to solve this, of course.
A view is just a predefined query over a bucket. New or changed documents will be shown in the view.
You can check the results of your View when you create it by clicking the Show Results button in the Web UI, so if 0 documents show up there, it should be no surprise you get 0 from the SDK.
If you are running Couchbase Server 4+ and the latest SDK, you could use N1QL: create a primary index on your bucket, then do a regular SELECT * FROM bucket to get all the documents.
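A short sketch of that with the Java SDK 2.x (the bucket name is a placeholder):

import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.N1qlQueryResult;

// One-time setup: a primary index makes unqualified SELECTs possible.
bucket.query(N1qlQuery.simple("CREATE PRIMARY INDEX ON `myBucket`"));

// Then every document in the bucket can be read, including new ones.
N1qlQueryResult all = bucket.query(N1qlQuery.simple("SELECT * FROM `myBucket`"));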

XPages: Navigating around a document collection

I create a document collection and am able to put the doc ID of the second doc in the first doc, the third in the second, and so on until the last document, which enables me to navigate from the first to the second document when the user approves a job, and so on. But I also want to be able to go from the second back to the first when the user rejects the task, and I have not been able to store the doc ID of the first document in the second one. Below is the code I am currently using:
Document nextJob = null;
Document thisJob = null;
DocumentCollection col = lookup.getAllDocumentsByKey(ID, true);
if (col != null) {
    Document job = col.getFirstDocument();
    while (job != null) {
        thisJob = job;
        thisJob.replaceItemValue("DocID", thisJob.getUniversalID());
        thisJob.save(true);
        if (nextJob != null) {
            nextJob.replaceItemValue("TaskSuccessor", thisJob.getUniversalID());
            nextJob.save(true);
        }
        nextJob = thisJob;
        job = col.getNextDocument(job);
    }
}
To echo Frantisek and others, updating the documents is not best practice. The key to how to achieve this is to consider a number of questions:
What do you mean by first, next and previous job?
What is the number of jobs involved?
How are save conflicts going to be minimised / resolved by you / users?
How are deletions being handled, to ensure referential integrity?
What happens when you need to archive data?
If it's for all users and "next" is based on date created, create a view sorted by date created. It will be quicker to create, completely negate the issue of save conflicts or deletes, and not have a significant performance hit unless you're dealing with very large numbers of jobs (in which case you should be considering archiving).
If it's a small number of jobs, store them in a Java Map. But you need to handle deletions. Because you'll be loading the map when the app loads, archiving is not a problem.
If it's next / previous per user, a better method would be storing the order in a document per person in the database. If replicas are not involved, note IDs can be used and will be shorter. It will negate save conflicts. But it may cause problems with large numbers of jobs: you will probably need to create new fields programmatically and also handle deletions.
DonMaro's suggestion fits with a graph database approach of edges (the third documents) between the vertices (the jobs).
In most cases, views will be the easiest and most recommended approach. IBM has included view index enhancements in 9.0.1 FP3 and will allow view indexes to be stored outside the NSF in the next point release.
Even if you're confident that you can build a better indexing system than what is already included in Domino, there are other aspects like save conflicts that need to be handled, and your decision may not allow for future functional requirements like security, deletion, archiving etc.
Well, despite pointing out that you should really consider Frantisek Kossuth's comment (as UNIDs get changed in case you might have to copy/paste a document back into the database, e.g. for backup; consider generating unique values by using @Unique):
just create a third document object "prevJob" and store the previous document there when/before changing to the next one.
Then you can access the UNID just as you already do by "prevJob.getUniversalID()" and store it in the document you're currently processing.
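A minimal sketch of that change, based on the loop from the question; the "TaskPredecessor" field name is a made-up example:

Document prevJob = null;
Document job = col.getFirstDocument();
while (job != null) {
    if (prevJob != null) {
        // back-link, so the user can navigate from this job to the previous one
        job.replaceItemValue("TaskPredecessor", prevJob.getUniversalID()); // hypothetical field name
        // forward-link, as in the original code
        prevJob.replaceItemValue("TaskSuccessor", job.getUniversalID());
        prevJob.save(true);
    }
    job.save(true);
    prevJob = job;
    job = col.getNextDocument(job);
}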
