I always get the whole document whenever any change happens on Firestore. How can I get only the newly updated data?
This is my data:
I need msg and sendby, in the order of the object keys (e.g. 2018_09_17_30_40) inside chat, on first load, and then only the new msg and sendby when the data is updated.
With get() I am getting the whole document, without any order.
Note: the code is for an Android app.
You can store the last fetched date and only get the objects whose date is greater than that last fetched date:
db.collection('groups')
    .where('participants', 'array-contains', 'user123')
    .where('lastUpdated', '>', lastFetchTimestamp)
    .orderBy('lastUpdated', 'desc')
    .get();
Always getting whole document while any changes found on Firestore.
The new Cloud Firestore database has different concepts than the Firebase Realtime Database and the two should not be confused. There is no field-level access to a document: you get the entire document, or nothing. Cloud Firestore listeners fire at the document level, so there is no way to be notified with only the newly updated fields of a document.
Looking at your database, I can say that you haven't chosen the right schema by nesting those messages under the chat property.
According to the official documentation:
Cloud Firestore is optimized for storing large collections of small documents.
You should consider creating a new collection named messages and storing each message as a separate document. The link above has examples of how to achieve that.
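A message document in such a messages collection can stay very small. Here is a minimal sketch of one message as its own document; the field names msg/sendby come from the question, while the collection path in the comment is an assumption:

```java
import java.util.HashMap;
import java.util.Map;

// One chat message as its own small document (field names taken from the question).
Map<String, Object> message = new HashMap<>();
message.put("msg", "hello");
message.put("sendby", "user123");
message.put("key", "2018_09_17_30_40");  // the question's key format, kept as-is
// In Firestore this could then be added with something like:
// db.collection("chats").document(chatId).collection("messages").add(message);
```

With messages as separate documents, a query ordered by key (or a timestamp field) returns them in order, and a listener only delivers the documents that actually changed.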
Related
In my Android app I'm using the Firebase database. For offline use, the following code is used:
FirebaseDatabase.getInstance().setPersistenceEnabled(true);
The data at a reference remains the same. While offline, the data at that reference loads without any delay, but while online the same data takes time (around 8 seconds) to load.
As there is no change in the data at that reference, I want it to load without delay (perhaps from the cache). How can I do this?
What I want is for it to load data from the cache, and only hit the network when there is actually an update at the database reference.
This is what Firebase does: the whole idea is that you are synchronized with the Firebase servers, and once new data is added your onDataChange() is triggered with a DataSnapshot object that contains the new data. If you don't want this behaviour, you can explicitly tell Firebase to goOffline():
Manually disconnect the Firebase Database client from the server and disable automatic reconnection.
So you can use this method if that is what you need.
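The cache-first behaviour the question describes boils down to a cheap version check before a full fetch. Here is a plain-Java sketch of that idea; it is a simulation, not Firebase API calls (Firebase's disk persistence keeps a local cache in a similar spirit):

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of cache-first loading: serve from the local cache and only
// hit the network when a cheap version check says the data actually changed.
Map<String, String> cache = new HashMap<>();
cache.put("ref", "cached data");
long cachedVersion = 3;

long remoteVersion = 3;                   // in a real app: a small metadata fetch
String data;
if (remoteVersion == cachedVersion && cache.containsKey("ref")) {
    data = cache.get("ref");              // unchanged: no full download needed
} else {
    data = "fresh data from server";      // changed: full fetch, refresh the cache
    cache.put("ref", data);
    cachedVersion = remoteVersion;
}
```

The trade-off is that you must maintain the version marker yourself on every write, which is exactly the bookkeeping Firebase's synchronization normally does for you.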
I have a document that has a data model corresponding to a user.
The user has an adresses array, a phone array and an email array.
I perform CRUD operations on this data using the Java SDK for Couchbase.
I have a constraint: I need to fetch the whole document in order to display the data associated with the user. In the UI I can modify everything except the data contained in the arrays (phone, email and adresses).
How can I update the document without overwriting the data in those arrays?
When I tried using the JsonIgnore annotation on the array getters when serializing the user object, the arrays were removed from the document when the Couchbase Java replace method ran.
Is there a way to partially update documents with the Couchbase Java SDK?
Is there a way to read all the documents from a bucket? It is an active bucket, and I want to access newly created documents as well.
A few people suggested using a view to query against the bucket. How can I create a view that will be updated with new or updated documents?
The newly created view's map function:
function (doc, meta) {
emit(doc);
}
The reduce function is empty. When I query the view like this: bucket.query(ViewQuery.from("test1", "all")).totalRows(), it returns 0 results.
For the zero-results issue: did you promote the view to a production view? This is a common mistake. Development views only look at a small subset of the data, so as not to overwhelm the server. Try this first.
Also, never emit the entire document if you can help it, especially if you are looking over all documents in a bucket. Emit the IDs of the documents, and if you then need the content of those documents, do a get or a bulk operation. I would give you a direct link for the bulk operations, but you have not said which SDK you are using and those are SDK-specific; here is the one for Java, for example.
All that being said, I have questions about why you are doing the equivalent of SELECT * FROM bucket. What are you planning to do with this data once you have it? What are you really trying to do? There are lots of options for solving this, of course.
A view is just a predefined query over a bucket. New or changed documents will be shown in the view.
You can check the results of your view when you create it by clicking the Show Results button in the Web UI; if 0 documents show up there, it should be no surprise that you get 0 from the SDK.
If you are running Couchbase Server 4+ and the latest SDK, you could use N1QL: create a primary index on your bucket, then do a regular SELECT * FROM bucket to get all the documents.
I've never used CouchDB/MongoDB/Couchbase before and am evaluating them for my application. Generally speaking, they seem to be a very interesting technology that I would like to use. However, coming from an RDBMS background, I am hung up on the lack of transactions, even though I know there will be much less of a need for transactions than in an RDBMS, given the way the data is organized.
That being said, I have the following requirement and not sure if/how I can use a NoSQL DB.
I have a list of clients
Each client can have multiple files
Each file must be sequentially numbered for that specific client
Given an RDBMS this would be fairly simple: one table for clients, one (or more) for files. In the client table, keep a counter of the last file number, and increment it by one when inserting a new record into the file table. Wrap everything in a transaction and you are assured that there are no inconsistencies. Heck, just to be safe, I could even put a unique constraint on a (clientId, filenumber) index to ensure that the same filenumber is never used twice for a client.
How can I accomplish something similar in MongoDB or CouchDB/Couchbase? Is it even feasible? I keep reading about two-phase commits, but I can't wrap my head around how that works in this kind of case. Is there anything in Spring/Java that provides a two-phase commit that would work with these DBs, or does it need to be custom code?
CouchDB is transactional at the single-document level by default. Every document in CouchDB contains a _rev key, and all updates to a document are performed against this _rev:
1. Get the document.
2. Send it for update, using the _rev property.
3. If the update succeeds, you have updated the latest _rev of the document.
4. If the update fails, your copy of the document was not the most recent. Repeat steps 1-3.
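The four steps above can be sketched in plain Java. This is a simulation of the _rev check with an integer counter standing in for the revision string; it makes no real CouchDB calls:

```java
import java.util.HashMap;
import java.util.Map;

// Simulated store: one _rev counter and one body per document id.
Map<String, Integer> revs = new HashMap<>();
Map<String, String> docs = new HashMap<>();

// An update succeeds only when the caller presents the latest _rev
// (real CouchDB answers a 409 Conflict instead of returning false).
boolean update(Map<String, Integer> revs, Map<String, String> docs,
               String id, int rev, String body) {
    if (rev != revs.getOrDefault(id, 0)) return false;  // stale _rev: conflict
    docs.put(id, body);
    revs.put(id, rev + 1);
    return true;
}

int rev = revs.getOrDefault("doc1", 0);                // step 1: read doc and its _rev
boolean first = update(revs, docs, "doc1", rev, "v1"); // step 2: update with that _rev
boolean retry = update(revs, docs, "doc1", rev, "v2"); // same _rev again: rejected
if (!retry) {                                          // step 4: re-read and retry
    rev = revs.getOrDefault("doc1", 0);
    retry = update(revs, docs, "doc1", rev, "v2");
}
```

The loser of a concurrent race never silently overwrites data; it is forced back to step 1, which is the whole point of the pattern.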
Check out this answer by MrKurt for a more detailed explanation.
The CouchDB recipes book has a banking example that shows how transactions are done in CouchDB.
And there is also this atomic bank transfers article that illustrates transactions in CouchDB.
Anyway, the common theme in all of these links is that if you follow the CouchDB pattern of updating against a _rev, you can't end up with an inconsistent state in your database.
Heck, just to be safe, I could even put a unique constraint on a (clientId, filenumber) index to ensure that there is never the same filenumber used twice for a client.
All CouchDB documents are unique, since the _id fields of two documents can't be the same. Check out the view cookbook:
This is an easy one: within a CouchDB database, each document must have a unique _id field. If you require unique values in a database, just assign them to a document’s _id field and CouchDB will enforce uniqueness for you.
There’s one caveat, though: in the distributed case, when you are running more than one CouchDB node that accepts write requests, uniqueness can be guaranteed only per node or outside of CouchDB. CouchDB will allow two identical IDs to be written to two different nodes. On replication, CouchDB will detect a conflict and flag the document accordingly.
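Applying that to the question, the (clientId, filenumber) pair itself can become the _id, so a duplicate number is rejected at insert time. A plain-Java sketch of that uniqueness check (a map standing in for the CouchDB insert; the id format is an assumption):

```java
import java.util.HashMap;
import java.util.Map;

// Build the _id from the client id and the file number; inserting the same
// _id twice fails, just as CouchDB rejects a duplicate document _id.
Map<String, String> docsById = new HashMap<>();
String id = "client123/file/7";                                    // assumed _id format
boolean firstInsert  = docsById.putIfAbsent(id, "{...}") == null;  // accepted
boolean secondInsert = docsById.putIfAbsent(id, "{...}") == null;  // rejected: _id taken
```

As the caveat above notes, this guarantee holds per node; across replicating nodes a duplicate surfaces later as a conflict rather than an immediate failure.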
Edit based on comment
In a case where you want to increment a field in one document based on the successful insert of another document
You could use separate documents in this case: insert a document and wait for the success response, then add another document like
{"_id": "some_id", "count": 1}
With this you can set up a map-reduce view that simply counts these documents, and you have an update counter. Instead of updating a single document for every change, you insert a new document to record each successful insert.
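Counting those marker documents is all the view has to do. A plain-Java stand-in for the map-reduce count (a list playing the role of the bucket of marker documents):

```java
import java.util.ArrayList;
import java.util.List;

// Each successful file insert also inserts a {"count": 1} marker document;
// the "view" is then just a count over them (simulated here with a list).
List<String> markerDocs = new ArrayList<>();
markerDocs.add("{\"_id\":\"file1_marker\",\"count\":1}");
markerDocs.add("{\"_id\":\"file2_marker\",\"count\":1}");
int updateCounter = markerDocs.size();  // what a _count reduce would return
```

Because each event is its own document, no two writers ever contend for the same counter document, which is why this sidesteps the update conflicts described above.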
I always end up with the case where a failed file insert would leave the DB in an inconsistent state especially with another client successfully inserting a file at the same time.
Okay, so I already described how you can do updates over separate documents, but even when updating a single document you can avoid inconsistency if you:
1. Insert the new file.
2. When CouchDB gives a success message, attempt to update the counter.
Why does this work? Because when you try to update a document you must supply its _rev string. You can think of _rev as the local state of your document. Consider this scenario:
1. You read the document that is to be updated.
2. You change some fields.
3. Meanwhile another request has already changed the original document, so the document now has a new _rev.
4. You ask CouchDB to update the document with the now-stale _rev that you read in step 1.
5. CouchDB rejects the update with a conflict error.
6. You read the document again, get the latest _rev, and attempt the update again.
So if you do this, you will always be updating against the latest revision of the document. I hope this makes things a bit clearer.
Note:
As pointed out by Daniel the _rev rules don't apply to bulk updates.
Yes, you can do the same with MongoDB, and with Couchbase/CouchDB, using the proper approach.
First of all, MongoDB has unique indexes, which will solve part of the problem:
- http://docs.mongodb.org/manual/tutorial/create-a-unique-index/
There is also a documented pattern for implementing a sequence properly:
- http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/
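That auto-incrementing pattern boils down to an atomic increment-and-return on a counters document. Sketched here in plain Java, with a ConcurrentHashMap standing in for MongoDB's counters collection and findAndModify with $inc (a simulation, not driver calls):

```java
import java.util.concurrent.ConcurrentHashMap;

// "counters" collection simulated as a map from sequence name to last value.
ConcurrentHashMap<String, Long> counters = new ConcurrentHashMap<>();

// Atomically increment and return the next file number for a client, the way
// findAndModify({$inc: {seq: 1}}, {new: true}) does on a counters document.
long n1 = counters.merge("client123.fileNumber", 1L, Long::sum);
long n2 = counters.merge("client123.fileNumber", 1L, Long::sum);
```

Combined with a unique index on (clientId, filenumber), a duplicate number produced by any bug is rejected at insert time rather than silently stored.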
You have many options for implementing cross-document/collection transactions; you can find some good information in this blog post:
http://edgystuff.tumblr.com/post/93523827905/how-to-implement-robust-and-scalable-transactions (the 2 phase commit is documented in detail here: http://docs.mongodb.org/manual/tutorial/perform-two-phase-commits/ )
Since you are also talking about Couchbase, you can find a similar pattern there too:
http://docs.couchbase.com/couchbase-devguide-2.5/#providing-transactional-logic
My app uses a SQLite database for its information. I have a function that checks whether the folder and database are already present; if they aren't, it goes on the internet (currently I am using Dropbox to store the .db file), downloads the database, stores it on the SD card, and then opens it. The database is writable, as it lets the user rate an object. I have two questions.
1.) I would love to provide updates to the database and have my app replace the existing one when the version number is higher. From the research I have done, it seems possible to store an XML or JSON file with the version number, parse it, and download the new database if the version number is higher.
Can someone provide an example of how this is accomplished, and say whether XML or JSON is better suited for the task?
2.) Is there a way to carry the user's ratings over when the new version of the database is downloaded?
Thanks
Two nights ago I wrote something like that:
1. Pack your database structure into an array in a web service method by reading the field names and field types; the structure of the array is up to you.
2. Call the web service method; you should receive a string representing a JSONArray object, assuming the server sent it as JSON with PHP's json_encode().
3. Read the structure and build the CREATE TABLE query strings with for loops.
4. Execute the queries, and you have your database.
You can also send a lot of other information with arrays. Explaining each part in detail is hard, so google each step as needed.
Don't forget to convert the field types to match SQLite types, such as VARCHAR => TEXT, SMALLINT => INTEGER, and so on.
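For question 1, the version check itself is small. Here is a sketch that pulls the version number out of a JSON string and decides whether to download; a regex is used only to keep the sketch dependency-free (on Android you would normally use JSONObject), and the "version" field name is an assumption:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extract {"version": N} from the server's version file (assumed field name).
int parseVersion(String json) {
    Matcher m = Pattern.compile("\"version\"\\s*:\\s*(\\d+)").matcher(json);
    return m.find() ? Integer.parseInt(m.group(1)) : -1;
}

int localVersion = 2;                                   // stored with the current .db
int remoteVersion = parseVersion("{\"version\": 3}");   // fetched from the server
boolean downloadNewDb = remoteVersion > localVersion;   // fetch the new .db file
```

JSON is the lighter choice here: Android ships an org.json parser, and a one-field version file needs nothing more. The ratings from question 2 would be read out before the swap and re-inserted into the new database afterwards.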