I'm trying to use the Increment function to keep track of how many alerts for a specific container are stored in Firestore. To do that, on each API call that contains an alert I update this counter like so:
db.collection("container")
.document(entryDataDto.getCollectionId())
.update("alarmCount", FieldValue.increment(1));
I have also tried it like this:
Map<String, Object> update = new HashMap<>();
update.put("alarmCount", FieldValue.increment(1));
db.collection("container")
.document(entryDataDto.getCollectionId())
.set(update, SetOptions.merge());
These two snippets seemingly behave the same, but they also run into the same problem: they're quite unpredictable.
From what I've tested, four things may occur:
1. If the field doesn't already exist, it is created with value 1;
2. If the field already exists, it can be replaced by the same field with value 1 (by replaced I literally mean the old field is deleted and a new one takes its place);
3. If the field already exists, it can be incremented, but the field is deleted right after the value is incremented;
4. And finally, the rarest of them: if the field already exists, it is incremented by 1.
These four behaviors seem random and I couldn't figure out a pattern among them. I've only seen number 4 happen once or twice.
Some pictures of the Firestore data that may help visualize the issue (sorry, I wasn't able to screenshot the events):
Behavior 2
Behavior 3
Behavior 4
Posting this as a Community Wiki, as the issue has already been fixed by the OP himself.
Since you already know that this field is going to be used anyway, initializing the document with alarmCount = 0 is good practice: that way you can check against a value instead of checking for null, and it may be a good way to mitigate these unexpected behaviors.
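For example, a minimal sketch of that approach (the helper methods and the containerId parameter are placeholders for illustration, not part of the original code):

import com.google.cloud.firestore.FieldValue;
import com.google.cloud.firestore.Firestore;
import com.google.cloud.firestore.SetOptions;
import java.util.HashMap;
import java.util.Map;

// Create the container document with the counter already present,
// so later increments always find an existing field.
void initAlarmCounter(Firestore db, String containerId) {
    Map<String, Object> initialData = new HashMap<>();
    initialData.put("alarmCount", 0);
    // merge() preserves any fields the document already has
    db.collection("container").document(containerId).set(initialData, SetOptions.merge());
}

// Each alert then performs a plain atomic increment on the existing field.
void recordAlert(Firestore db, String containerId) {
    db.collection("container").document(containerId).update("alarmCount", FieldValue.increment(1));
}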
That being said, most of the behaviours you experienced are indeed a bit strange, and you could open a case in Google's Issue Tracker to figure out why this is happening.
I am stuck. I had this working last week; now I have changed something and it will not work!
I have a simple flow service as follows:
pub.file.getFile
pub.flatFile.convertToValues
pub.document.sortDocuments
But the sortDocuments stage is not doing anything.
The recordWithNoID document list is perfect and all the fields are correct (so the schema and dictionary are working as intended), but when I try to sort it on the key "Field1", the sort does nothing: the documents do not change order at all.
See two attached screenshots:
Screenshot 1 shows the pipeline during pub.document.sortDocuments step
key variable is: Field1
order variable is: ascending
Screenshot 2 shows the recordWithNoID after running the flow service. As you can see, the Field1 column has not been ordered correctly (it's still in the original document order). I have also tried mapping the results to other document types, with the same result.
As I said above I had this working last week and now cannot seem to get it to work. I have even started the whole process from scratch and it still will not work. Any help would be very much appreciated!
Screenshot 1
Screenshot 2
EDIT:
I resolved this issue by mapping to the Document Type created from the Schema.
It appears that you map the ffValues document (IData) and not the recordWithNoID document list (IData array) inside it, which would be the wrong level.
Please map the recordWithNoID instead and let us know if that solves the issue.
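Purely as an illustration of the levels involved, here is a rough Java-service sketch of the same sort, assuming the standard com.wm.data API and that Field1 is a string field (the variable names mirror your pipeline, but treat this as a sketch, not the built-in service's implementation):

import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;
import java.util.Arrays;
import java.util.Comparator;

// Sorts the recordWithNoID list (the IData array) inside ffValues by "Field1".
// Note that it is the array, not the surrounding ffValues document, that gets sorted.
public static void sortByField1(IData pipeline) {
    IDataCursor pc = pipeline.getCursor();
    IData ffValues = IDataUtil.getIData(pc, "ffValues");
    IDataCursor fc = ffValues.getCursor();
    IData[] records = IDataUtil.getIDataArray(fc, "recordWithNoID");

    Arrays.sort(records, Comparator.comparing(rec -> {
        IDataCursor rc = rec.getCursor();
        String value = IDataUtil.getString(rc, "Field1");
        rc.destroy();
        return value == null ? "" : value;
    }));

    IDataUtil.put(fc, "recordWithNoID", records);
    fc.destroy();
    pc.destroy();
}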
While not related to the question, it seems that there is some "clutter" on the pipeline. I always recommend dropping variables as early as possible, mostly to improve readability but also for performance.
I am not sure, but maybe this is the problem: in screenshot 1 we can see that you sort ffValues, but you are mapping it to the document (because you are using an invoke, this mapping is done automatically).
Is screenshot 2 showing the ffValues variable or the document variable?
Maybe you are checking the wrong, unsorted variable?
I would also suggest using a Map step with a transformer rather than an invoke, because a map gives you the power to control the pipeline.
With an invoke, every output variable is saved to the pipeline, and a variable on the pipeline with the same name as one on the service's output will be overwritten.
Doubtless this question has been asked already (maybe many times), but I could not find the correct keywords to find the answers.
Basically, my question is about object references. What I know is that an object reference points to the object's physical location in memory. However, every time I debug my code I get a different object reference for the same object.
For example, when I first debugged my code, the reference of a button looked like
INFO [sysout] [AWT-EventQueue-0]
[Ljava.awt.event.ComponentListener;#28be012c
and the second time it was
INFO [sysout] [AWT-EventQueue-0]
[Ljava.awt.event.ComponentListener;#31a056d8
My related questions are:
1. Is the part after the # symbol (e.g. #28be012c) the reference to the object? If yes, is it something like an IP address, which changes continuously?
2. Is there a way to obtain an address which does not change over time (like a MAC address)?
Any answer or link related to these questions will be highly appreciated.
Edit
I am debugging in this scenario: there is a button, and every time this button is clicked, the debugger stops at this point. That is to say, the program is not restarted from the beginning.
Is the part after the # symbol (e.g. #28be012c) the reference to the object? If yes, is it something like an IP address, which changes
continuously?
The part after the # is Integer.toHexString(hashCode()). The hashCode method is not designed to return the same value on every run (even if the object being created has the same state). It is also not required that the returned value be related to a memory address. The Object.hashCode contract only asks that, as far as is reasonably practical, distinct objects return distinct integers; it doesn't specify how that value is computed.
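You can see this for yourself with a throwaway snippet like the following (the hex digits printed will almost certainly differ between runs):

// The default toString() is getClass().getName() + "@" + Integer.toHexString(hashCode())
public class HashCodeDemo {
    public static void main(String[] args) {
        Object o = new Object();
        System.out.println(o);                                 // e.g. java.lang.Object@28be012c
        System.out.println(Integer.toHexString(o.hashCode())); // the same hex digits as above
        // identityHashCode is stable for this object's lifetime within this run,
        // but there is no guarantee it repeats on the next run
        System.out.println(Integer.toHexString(System.identityHashCode(o)));
    }
}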
Is there a way to obtain an address which does not change over time
(like a MAC address)?
No. Each run of the JVM will almost always give different hash codes (unless you override the hashCode method to return something else).
I have a problem debugging someone else's Java code in Eclipse, and I have narrowed the problem down to a certain entry in a java.util.Map. At some stage, a certain key is put into the map, which is causing the problem. I have already checked all put() and putAll() calls on this map object, but haven't found the location at which the erroneous entry is created.
So, the question is: How can I monitor this Map object for insertions of a certain key? Basically, I would like the code execution to stop whenever key x is inserted or updated on this map. Is this possible?
Cheers,
Martin
In Eclipse you can create a conditional breakpoint. This breakpoint will only trigger when your specified condition holds. This will allow you to monitor the put methods on the map (see the example condition after the steps).
Step 1: Select the breakpoint properties after right clicking on your breakpoint:
Step 2: Select the conditional checkbox and enter your condition:
Step 3: Run your app in debug mode.
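As a concrete example, a breakpoint placed inside the map's put method could use a condition like the following, where "badKey" stands in for the key you are hunting and key is assumed to be the parameter name at the breakpoint's location:

// Entered in the Conditional field of the breakpoint properties;
// the expression must evaluate to a boolean
"badKey".equals(key)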
Who instantiates the map? If you control that, then provide a custom implementation that throws an exception when the suspect key is passed. The stack trace will then show you who inserts this value, and when.
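A minimal sketch of that idea, assuming you can swap in the map instance and that the key type is String ("badKey" and the class name are made up for illustration):

import java.util.HashMap;
import java.util.Map;

// Behaves like a normal HashMap, but fails loudly (with a full stack
// trace) the moment the suspect key is inserted.
public class TrapMap<V> extends HashMap<String, V> {
    private final String suspectKey;

    public TrapMap(String suspectKey) {
        this.suspectKey = suspectKey;
    }

    @Override
    public V put(String key, V value) {
        if (suspectKey.equals(key)) {
            throw new IllegalStateException("Suspect key inserted: " + key);
        }
        return super.put(key, value);
    }

    // HashMap.putAll does not route through put, so it needs its own check
    @Override
    public void putAll(Map<? extends String, ? extends V> m) {
        for (String k : m.keySet()) {
            if (suspectKey.equals(k)) {
                throw new IllegalStateException("Suspect key inserted via putAll: " + k);
            }
        }
        super.putAll(m);
    }
}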
After speaking to one of our senior guys, it turns out the problem was not so much the Map, but rather an underlying XML structure that is used to populate the map. Looks like I was chasing a red herring. Nevertheless, thanks a lot to everyone who replied!
I'm trying to remove a set of child elements from a parent element using VTD-XML.
Unfortunately, removing an element leaves behind the newline that the removed element previously occupied. This behaviour was also observed by a reader of an article on VTD-XML by the VTD-XML author here. I'm trying to work out how to remove this newline.
I managed to achieve a modicum of success by manipulating the length value stored in the underlying 64-bit VTD token so that it also covers the newline character (an additional 2 bytes). Code snippet is as follows:
// XMLModifier modifier; vn is a VTDNav positioned on the element.
// getElementFragment() packs the offset in the lower 32 bits and the length
// in the upper 32, so adding 0x200000000L extends the removed region by 2 bytes.
modifier.remove(vn.getElementFragment()+0x200000000L);
I've tested that this works well on the old_cd.xml provided in ex_16 of the VTD-XML Examples.
However, when I try this same approach on my working file, a ModifyException is thrown when I call modifier.output(); specifically, it is thrown by modifier.check2().
Questions
1. Why would the above approach cause check2() to fail? I don't think I'm overflowing the bits on the VTD token; the file is < 2MB. See Update.
2. Is there a better approach to removing the leftover newline?
I'm still fairly new to VTD-XML, so I would greatly appreciate any advice and insight from more experienced users.
Thanks for your help.
Update
Wow, in the process of writing this question I realised that I had forgotten to consider the different character encodings: updating the adjusting long value to 1 byte fixed the check2() problem! (Another reason to take the time to pause and rethink/write out the problem.)
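For reference, the adjustment that worked on this file looks like this (still assuming, as in the snippet above, that the fragment long packs the offset in the lower 32 bits and the length in the upper 32):

// vn is a VTDNav positioned on the element to remove.
// Adding 0x100000000L grows the length by 1, so the removed region also
// swallows the single-byte line terminator that follows the element.
modifier.remove(vn.getElementFragment() + 0x100000000L);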
I'd still like to know from more experienced users if there are better approaches to this.
To answer your question, I think this needs to be done at the API level, and it needs to take care of a few extra details, such as the option to remove all surrounding white space or none of it. That will have to be done in the next release...
I'm querying data in the Facebook Graph API explorer:
access_token="SECRET"
GET https://graph.facebook.com/me/home?limit=20&until=1334555920&fields=id
result:
{
"data": [
]
}
I was shocked, since there are many posts in my home feed.
Then I tried setting the limit to 100, and I got a list of posts.
What's going on here? Does the "limit" parameter affect the Graph API's result?
I also tried increasing the limit to 25 and querying again; that returned one post.
So what's the relationship between "limit" and "until"?
Facebook's API can be a little weird sometimes because of the data you're trying to access, and there are a few parts to this question.
Limits
The limits are applied when the data is fetched, before permission and visibility checks remove items you're not allowed to see, which is why a query can return fewer results than the limit you asked for (or none at all). This is explained in this blog post from last year: Limits in the Graph API.
Permissions
More importantly, even if you give yourself a token with every FB permission possible, you still won't be able to access everything you created. Say you post something on a friend's feed, but their feed's privacy is not set to Public: any queries against that friend's feed with your token will never return that data (or at least that was the case around a year ago).
API Itself
One of the most awesome bugs I found in the Graph API when I was working with it last year is in the way it handles paging. The Graph API allows three paging filters: limit, offset, and since/until. Somewhere Facebook recommends (and rightly so) that you use the since/until dates exclusively for paging whenever possible. Ignoring debates as to why you would do that vs. offsets on a theoretical basis, on a practical one the following query used to degrade over time:
// This obviously isn't valid as written, but the params change as described
limit=fixed-value&offset=programmatic-increase&since=some-fixed-date-here
The reason: Date ranges and offsets don't behave well with each other. As an example, say I made the following initial query:
// My example query
limit=20&since=1334555920
--> {#1,#2, ... #20}
Naturally you would want to page through more data. The result would be something like this (I can't remember the exact pattern, but the top n results would be repeats and the list would be truncated by about n/2, or something similar):
// My example query
limit=20&since=1334555920&offset=20
--> {#10, #11 ... #25}
I never figured out why it happened, but eventually the query would taper off to return nothing, and you would only ever get around 50-100 unique values. If you paged using dates exclusively, however, you could go on for as long as the data would let you.
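Concretely, date-only paging looks something like this; treat the values as placeholders, with the cursor taken from the timestamp of the last post in the previous page:
// First page
limit=20&until=1334555920
--> {#1, #2, ... #20}
// Next page: move "until" to the created_time of post #20 instead of adding an offset
limit=20&until=<created_time of #20>
--> {#21, #22, ... #40}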
The caveat is that this was a bug, and it was a while ago. The main lesson here is that I never would have found it without modifying my query so that two requests that should have returned exactly the same thing (a particular date range covering posts #10-30, compared with limit=20&offset=10) produced quite different results.