In the Firestore security rules you can set conditions for reading and writing data.
Currently I have this:
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read: if request.auth != null && request.time <
                     resource.data.timeCreated + duration.value(1, 'h');
      allow write: if request.auth != null;
    }
  }
}
Now I want to limit writes: a user should only be able to send data every 5 minutes. How can I achieve this?
There is no native way to do this.
You cannot do it for non-signed-in users in any way that is not trivially bypassed. You could use Cloud Functions to achieve it for signed-in users, though:
Each user has a profile document storing the next time they are allowed to write, along with the ID of the document to write next.
Use rules on writes to check that the document ID doesn't already exist and that request.time is at or after the allowed time.
Use a Cloud Function triggered on write to update the user's profile document with a new allowed time and a unique ID for the next write.
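A hedged sketch of the rule side, assuming the scheme above: the profile lives at /users/{uid} with fields nextAllowedWrite (a timestamp) and nextDocId (a string), both maintained by the Cloud Function, and writes go to a posts collection (all of these names are illustrative, not from the question). Nested under the usual match /databases/{database}/documents block:
match /posts/{docId} {
  // Only the pre-assigned document ID may be created, and only once the
  // allowed time has passed.
  allow create: if request.auth != null
    && docId == get(/databases/$(database)/documents/users/$(request.auth.uid)).data.nextDocId
    && request.time >= get(/databases/$(database)/documents/users/$(request.auth.uid)).data.nextAllowedWrite;
}
On each successful create, the Cloud Function would then bump nextAllowedWrite by five minutes and store a fresh random nextDocId.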
I'm writing an Android app that supports exporting the app database to various formats. I don't want to run out of memory, but I do want to page through the results easily without receiving updates when the data changes. So I put the export in a service and came up with the following method of paging.
I use a LIMIT clause in my query to cap the number of results returned, and I sort on the primary key, so it should be fast. I use a pair of nested for loops to run the series of queries until no results are returned and to walk through each batch, so the whole thing is linear. It runs in a service, so it doesn't matter that the queries are synchronous.
I feel like I might be doing something bad here. Am I?
// page through all results
for (List<CountedEventType> typeEvents = dao.getEventTypesPaged2(0);
        typeEvents.size() > 0;
        typeEvents = dao.getEventTypesPaged2(typeEvents.get(typeEvents.size() - 1).uid)) {
    for (CountedEventType type : typeEvents) {
        // Do something for every result.
    }
}
Here's my DAO method.
@Dao
interface ExportDao {
    @Query("SELECT * FROM CountedEventType WHERE uid > :lastUid ORDER BY uid ASC LIMIT 4")
    List<CountedEventType> getEventTypesPaged2(int lastUid);
}
I have a function that uses Lettuce to talk to a Redis cluster.
In this function, I insert data into a stream data structure.
import io.lettuce.core.cluster.SlotHash;
...
public void addData(Map<String, String> dataMap) {
    var sync = SlotHash.getSlot(key).sync();
    sync.xadd(key, dataMap);
}
I also want to set the TTL when I insert a record for the first time, because part of the requirement is to expire the structure after a fixed length of time; in this case, 10 hours.
Unfortunately, the XADD command does not accept an extra parameter to set the TTL the way SET does.
So for now I am setting the TTL this way:
public void addData(Map<String, String> dataMap) {
    var sync = SlotHash.getSlot(key).sync();
    sync.xadd(key, dataMap);
    sync.expire(key, 36000 /* 10 hours in seconds */);
}
What is the best way to ensure that I set the expiry time only once (i.e. when the stream structure is first created)? I should not set the TTL on every call, because if every XADD is followed by an EXPIRE, the expiry is effectively postponed indefinitely.
I could always check the number of items in the stream first, but that is extra overhead. I also don't want to keep flags on the Java application side, because the app could be restarted and that information would be lost from memory.
You may want to try a Lua script. The sample script below sets the expiry only if one is not already set for the key, and it works with any type of key in Redis.
eval "local ttl = redis.call('ttl', KEYS[1]); if ttl == -1 then redis.call('expire', KEYS[1], ARGV[1]); end; return ttl;" 1 mykey 12
The script also returns the remaining TTL of the key, in seconds.
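For the Lettuce side, here is a minimal sketch of invoking that script from the question's addData method (assuming, as in the question, that sync is the synchronous commands object and key is in scope; 36000 is 10 hours in seconds):
import io.lettuce.core.ScriptOutputType;

public void addData(Map<String, String> dataMap) {
    sync.xadd(key, dataMap);
    // Runs atomically on the Redis server: sets the TTL only when the key
    // has none yet (TTL == -1), so repeated XADD calls no longer postpone
    // the expiry.
    Long previousTtl = sync.eval(
        "local ttl = redis.call('ttl', KEYS[1]); "
            + "if ttl == -1 then redis.call('expire', KEYS[1], ARGV[1]); end; "
            + "return ttl;",
        ScriptOutputType.INTEGER,
        new String[] { key },
        "36000");
}
Because the TTL check and the EXPIRE happen inside one script, there is also no race between concurrent writers.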
I have the following data structure in Firebase Firestore to represent a many-to-many relationship between clients and users:
Clients
  clientId1 {
    users (object): {
      userId1: true
      userId2: true
    }
  }
  clientId2 {
    users (object): {
      userId1: true
    }
  }
I query it on Android using the following query:
db.collection("clients").whereEqualTo("users."+uid, true);
For userId2, the query should only return clientId1.
If I set the rule to (allow read: if true;) and execute the query above, I get the correct clients returned.
I would also like to set up a security rule that prevents userId2 from seeing clientId2.
I tried this rule but I get no results returned:
match /clients/{clientId} {
  // Allow read if the user exists in the user collection for this client
  allow read: if users[request.auth.uid] == true;
}
I also tried:
match /clients/{clientId} {
  // Allow read if the user exists in the user collection for this client
  allow read: if resource.data.users[request.auth.uid] == true;
}
But neither of the above rules returns any clients.
How do I write the rule?
I am going to answer my own question, as I was just doing something silly.
My data structure is fine and the correct syntax for my rule is this one:
match /clients/{clientId} {
  // Allow read if the user exists in the user collection for this client
  allow read: if resource.data.users[request.auth.uid] == true;
}
Given this:
Cloud Firestore evaluates a query against its potential result set instead of the actual field values for all of your documents. If a query could potentially return documents that the client does not have permission to read, the entire request fails.
This Android query correctly implements the matching filter for the rule:
db.collection("clients").whereEqualTo("users."+uid, true);
I had yet to implement my adapter properly; I wanted to see if I could get the correct data structure, rules, and query working first. I was calling the query from another listener that was listening on the entire clients collection (which fails the rule), so this query was never actually executed. Earlier, when I set the rule to (allow read: if true;), the initial listener succeeded and my query returned the correct results. This led me to believe my rule was incorrect, when it wasn't.
As per the official documentation regarding Firestore Security Rules:
When writing queries to retrieve documents, keep in mind that security rules are not filters—queries are all or nothing. To save you time and resources, Cloud Firestore evaluates a query against its potential result set instead of the actual field values for all of your documents. If a query could potentially return documents that the client does not have permission to read, the entire request fails.
So you cannot filter the documents that exist in your database using security rules.
I'm playing with the Firebase Realtime Database, and after a while I started wondering whether there are best practices for structuring the database for privacy.
I mean, I see best practices for performance, like database fan-out:
Map<String, Object> updatedUser = new HashMap<>();
updatedUser.put("name", "Shannon");
updatedUser.put("username", "shannonrules");

Firebase ref = new Firebase("https://<YOUR-FIREBASE-APP>.firebaseio.com/");

Map<String, Object> fanoutObject = new HashMap<>();
fanoutObject.put("/users/1", updatedUser);
fanoutObject.put("/usersWhoAreCool/1", updatedUser);
fanoutObject.put("/usersToGiveFreeStuffTo/1", updatedUser);

ref.updateChildren(fanoutObject); // atomic updating goodness
But I found nothing about privacy policies.
I know there are database ACLs that I can use, for example, to restrict access for unauthenticated users or for users who are not the "owner" of a particular node... but for the nodes that are readable, anyone could, if they wanted, read the entire subtree beneath them.
Suggestions?
EDIT: Database read rules cascade, so if I let users read a node, they can always read every node below it:
{
  "rules": {
    "foo": {
      // allows read to /foo/*
      ".read": "data.child('baz').val() === true",
      "bar": {
        /* ignored, since read was allowed already */
        ".read": false
      }
    }
  }
}
You can secure your database using Firebase Realtime Database Rules.
Firebase Realtime Database Rules determine who has read and write access to your database, how your data is structured, and what indexes exist. These rules live on the Firebase servers and are enforced automatically at all times. Every read and write request will only be completed if your rules allow it. By default, your rules are set to allow only authenticated users full read and write access to your database. This is to protect your database from abuse until you have time to customize your rules or set up authentication.
All your requirements can be met using security rules.
If you need ACL-style security, take a look at custom auth claims: using Cloud Functions (or the Admin SDK) you can add your own properties to a user's JWT auth token, e.g. to say which groups they belong to or which products they have purchased. Your security rules can then look at those properties on the user and decide whether they can access a particular node.
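A minimal sketch of setting such a claim with the Firebase Admin SDK for Java, run in a trusted environment such as a Cloud Function (the moderator claim name is illustrative):
import com.google.firebase.auth.FirebaseAuth;
import com.google.firebase.auth.FirebaseAuthException;

import java.util.HashMap;
import java.util.Map;

public void grantModerator(String uid) throws FirebaseAuthException {
    // Custom claims are merged into the user's ID token on its next refresh
    // and become visible to security rules under auth.token.
    Map<String, Object> claims = new HashMap<>();
    claims.put("moderator", true); // illustrative claim name
    FirebaseAuth.getInstance().setCustomUserClaims(uid, claims);
}
A Realtime Database rule could then check it with something like ".read": "auth.token.moderator === true".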
I am using DynamoDBMapper for a class, let's say User (username being the hash key), which has a status field on it. It is a hash + range key table, and every time a user's status changes (changes are extremely infrequent), we add a new entry to the table along with the timestamp (which is the range key). To fetch the current status, this is what I am doing:
DynamoDBQueryExpression expr =
    new DynamoDBQueryExpression(new AttributeValue().withS(userName))
        .withScanIndexForward(false).withLimit(1);
PaginatedQueryList<User> result =
    this.getMapper().query(User.class, expr);
if (result == null || result.size() == 0) {
    return null;
}
for (final User user : result) {
    System.out.println(user.getStatus());
}
For some reason, this prints every status the user has ever had. I set scanIndexForward to false so the results come back in descending order, and I set a limit of 1, so I am expecting it to return only the latest entry in the table for that username.
However, when I look at the wire logs, I see a huge number of entries being returned, far more than 1. For now, I am using:
final String currentStatus = result.get(0).getStatus();
What I am trying to understand is: what is the point of the withLimit clause in this case, and am I doing something wrong?
In March 2013, a user on the AWS forums complained about the same problem, and a representative from Amazon directed him to the queryPage function.
It seems the limit is not a limit on the total number of elements returned, but rather a limit on the chunk of elements retrieved in a single API call; the PaginatedQueryList returned by query() transparently issues further calls as you iterate over it. queryPage should help.
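A hedged sketch of the queryPage variant, reusing the User class and expression from the question; queryPage issues a single request, so withLimit(1) genuinely caps the result at one item:
DynamoDBQueryExpression expr =
    new DynamoDBQueryExpression(new AttributeValue().withS(userName))
        .withScanIndexForward(false)
        .withLimit(1);

// Unlike query(), queryPage() does not lazily fetch further pages,
// so at most one item comes back here.
QueryResultPage<User> page = this.getMapper().queryPage(User.class, expr);
List<User> results = page.getResults();
return results.isEmpty() ? null : results.get(0).getStatus();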
You could also look into the pagination loading strategy configuration.
Also, you can always open a GitHub issue for the team.