I have a bunch of sensors scattered around.
These sensors transmit their status whenever they detect a change in their environment.
The data goes to a server (built in Java), which processes the information and inserts it into MongoDB.
My Meteor app is essentially a dashboard for this information. I would like to do further processing (analytics) on those entries as soon as they come in.
I started using Collection-Hooks, which works really nicely when the Meteor app makes changes to the database, but not when the MongoDB Java driver does.
I need to detect new documents added to MongoDB by the Java driver. I'm also not married to collection-hooks; any other suggestions are welcome.
What you want to use is an Observer on the cursor returned from a query:
https://docs.meteor.com/api/collections.html#Mongo-Cursor-observe
myCollection.find().observe({
  added(document) {
    // Do something with the new document
  },
  changed(newDocument, oldDocument) {
    // Update analytics in response to the change
  },
  removed(oldDocument) {
    // Update analytics in response to the removal
  }
});
This fires based on the actual contents of the database, regardless of which client made the change, unlike collection hooks, which only run when the change goes through Meteor code. Note that added is also called once for every document that already matches the query when observe is first set up.
It's also worth noting that these callbacks track the specific query passed to find(). So if you only want them to fire for a subset of the data, pass in a selector, as in this example from @scriptkid:
var date = moment().utc().format("YYYY-MM-DD HH:mm:ss.SSS");
log.find({ createdAt: { $gte: date } }).observe({
  added(document) {
    console.log("new document added!");
  },
});
I want to know if there is a built-in solution for something I need in Elasticsearch.
Every time a document is replaced by a newer version of itself (with the same ID), I want the older version to be moved to a history index instead of being deleted.
In that history index, I don't want older versions to be replaced, but accumulated.
Do you know if there is a built-in solution for this, or will I need to implement it myself in my API?
Thank you.
As there is no built-in method for your use case, you will need to implement it yourself in your application. I don't think Elasticsearch is well suited for keeping the history of a document: as soon as you update the document in the history index, you lose its previous state, and if I understand correctly you want the complete history of each document.
I think it's best to use an RDBMS or another NoSQL store where you create a new history entry per version of a document (the Elasticsearch document_id and its version number will let you reconstruct the complete history of your Elasticsearch document).
You can update that database whenever the Elasticsearch document is updated.
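As a minimal sketch of that accumulation logic, in JavaScript, with an in-memory Map standing in for the external database (all names here are illustrative, not a specific store's API):

```javascript
// In-memory stand-in for a history table keyed by (documentId, version).
// In practice this would be a table/collection in your RDBMS or NoSQL store.
const history = new Map();

// Record a snapshot of the document every time Elasticsearch reports an update.
// Re-recording an already-seen version is a no-op, so versions accumulate
// instead of replacing one another.
function recordVersion(documentId, version, body) {
  const key = `${documentId}:${version}`;
  if (!history.has(key)) {
    history.set(key, { documentId, version, body, recordedAt: Date.now() });
  }
}

// Reconstruct the full history of one document, ordered by version.
function historyOf(documentId) {
  return [...history.values()]
    .filter((entry) => entry.documentId === documentId)
    .sort((a, b) => a.version - b.version);
}

recordVersion("doc-1", 1, { title: "first draft" });
recordVersion("doc-1", 2, { title: "second draft" });
console.log(historyOf("doc-1").length); // 2
```

Because entries are keyed by (documentId, version), every distinct version accumulates, which is exactly the behavior the history index can't give you when documents share an ID.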
There does not appear to be any built-in functionality for this. The easiest approach might be to copy the old version to the history index with the _reindex API, then write the new version:
POST /_reindex
{
  "source": {
    "index": "your_index",
    "query": {
      "ids": {
        "values": ["<id>"]
      }
    }
  },
  "dest": {
    "index": "your_history_index"
  },
  "script": {
    "source": "ctx.remove(\"_id\")"
  }
}
PUT /your_index/_doc/<id>
{
...
}
Note the script ctx.remove("_id") done as part of the _reindex operation, which ensures Elasticsearch will generate a new ID for the document instead of reusing the existing ID. This way, your_history_index will have one copy for each version of the document. Without this script, _reindex would preserve the ID and overwrite older copies.
I assume that the documents contain an identifier that can be used to search your_history_index for all versions of a document, even though _id is reset.
I have an iOS app that is attempting to access data from Cloud Firestore. I have been successful in retrieving full documents and querying for documents. However, I need to access specific fields from specific documents. How would I make a call that retrieves just the value of one field from Firestore in Swift? Any help would be appreciated.
There is no API that fetches just a single field from a document with any of the web or mobile client SDKs. Entire documents are always fetched when you use getDocument(). This implies that there is also no way to use security rules to protect a single field in a document differently than the others.
If you are trying to minimize the amount of data that comes across the wire, you can put that lone field in its own document in a subcollection of the main doc, and you can request that one document individually.
See also this thread of discussion.
It is possible with server SDKs using methods like select(), but you would obviously need to be writing code on a backend and calling that from your client app.
You can read a single field from a fetched document quite simply with the built-in Firebase API (note that the full document is still downloaded):
let docRef = db.collection("users").document(name)
docRef.getDocument(source: .cache) { (document, error) in
    if let document = document {
        // "field" is a placeholder for the name of the field you want.
        let property = document.get(field)
    } else {
        print("Document does not exist in cache")
    }
}
There is actually a way. Use this sample code, adapted from Firebase's own documentation:
let docRef = db.collection("cities").document("SF")
docRef.getDocument { (document, error) in
    if let document = document, document.exists {
        let property = document.get("fieldname")
        print("Field value: \(String(describing: property))")
    } else {
        print("Document does not exist")
    }
}
I guess I'm late, but after some extensive research: there is a way to fetch specific fields from Firestore, using the select() method, which is available in the server (Admin) SDKs. Your query would be something like this (using a collection for a generalized approach; this is Node.js with the Admin SDK):
const result = await admin.firestore().collection('Users').select('name').get();
We can then access result.docs to retrieve the filtered results. Thanks!
// This is the JavaScript (web SDK) version:
var docRef = db.collection("users").doc("ID");
docRef.get().then(function(doc) {
  if (doc.exists) {
    // gives the full user object
    console.log("Document data:", doc.data());
    // gives a specific field
    var name = doc.get('name');
    console.log(name);
  } else {
    // doc.data() will be undefined in this case
    console.log("No such document!");
  }
}).catch(function(error) {
  console.log("Error getting document:", error);
});
I am currently taking a course in app development and I am trying to use Facebook's API for GET requests on certain events. My goal is to get a JSON file containing all comments made on a certain event.
However, some events return only an "id" key with an id number, such as this:
{
"id": "116445769058883"
}
That happens with this event:
https://www.facebook.com/events/116445769058883/
However, other events, such as (https://www.facebook.com/events/1964003870536124/), return only the latest comment for some reason.
I am experimenting with the Facebook Graph API Explorer:
https://developers.facebook.com/tools/explorer/
This is the GET request that I have been using in the explorer:
GET -> /v.10/facebook-id/?fields=comments
Any ideas? It's really tricky to understand the responses, since both events have their privacy set to OPEN.
Starting from v2.4 of the API, the API is declarative, which means you need to specify which fields you want the API to return.
For example, if you want the user's first name and last name, make a GET request to /me?fields=first_name,last_name; otherwise you will only get back the default fields, which are id and name.
If you want to see what fields are available for a given endpoint, use metadata field. e.g. GET /me?metadata=true
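Concretely, fields is just a comma-separated list in the URL's query string. A tiny JavaScript helper to illustrate the shape of such requests (the version, endpoint, and field names here are only examples):

```javascript
// Build a Graph API URL with an explicit, comma-separated fields list,
// as the declarative API (v2.4+) requires.
function graphUrl(version, path, fields) {
  return `https://graph.facebook.com/${version}/${path}?fields=${fields.join(",")}`;
}

console.log(graphUrl("v2.10", "me", ["first_name", "last_name"]));
// https://graph.facebook.com/v2.10/me?fields=first_name,last_name
```

The same pattern applies to the event comments case: request the comments field explicitly rather than relying on defaults.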
I am trying to create a framework using Selenium and TestNG. As part of the framework, I am trying to implement data parameterization, but I am confused about the optimal way to do it. Here are the approaches I have tried:
With data providers (reading from Excel and storing in an Object[][])
With testng.xml
Issues with data providers:
Let's say my test needs to handle a large volume of data, say 15 different values; then I need to pass 15 parameters to it. Alternatively, if I create a TestData class to hold and maintain these parameters, every test will need a different data set, so my TestData class will end up with more than 40 different parameters.
For example, on an e-commerce website there are many different parameters: accounts, cards, products, rewards, history, store locations, etc. For these we might need at least 40 different parameters declared in TestData, which I don't think is a sensible solution. Some tests may need 10 different test data values, some may need 12. Sometimes even within a single test, one iteration needs only 7 parameters while another needs 12.
How do I manage this effectively?
Issues with testng.xml:
Maintaining 20 different accounts, 40 different product details, cards, history, and so on in a single XML file, while also configuring the test suite there (parallel execution, selecting particular classes to run, etc.), will make a mess of the testng.xml file.
So can you please suggest an optimized way to handle data in a testing framework?
How are data parameterization and iterations with different test data handled in real-world projects?
Assuming that every test knows what sort of test data it is going to receive, here's what I would suggest you do:
Have your TestNG suite XML file pass the name of the file from which data is to be read to the data provider.
Build your data provider so that it receives the file name via TestNG parameters, builds a generic map per test-data iteration (every test receives its parameters as a key/value map), and then works with the passed-in map.
This way you will have just one data provider, which can handle literally anything. You can make your data provider a bit more sophisticated by having it inspect the test method and provide values accordingly.
Here's a skeleton implementation of what I am talking about.
import com.google.common.collect.Lists;
import com.google.common.collect.Maps;
import org.testng.ITestContext;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

import java.util.List;
import java.util.Map;

public class DataProviderExample {
    @Test(dataProvider = "dp")
    public void testMethod(Map<String, String> testdata) {
        System.err.println("****" + testdata);
    }

    @DataProvider(name = "dp")
    public Object[][] getData(ITestContext ctx) {
        // Retrieves the value of <parameter name="fileName" value="..."/> from
        // within the <test> tag of the suite xml file.
        String fileName = ctx.getCurrentXmlTest().getParameter("fileName");
        List<Map<String, String>> maps = extractDataFrom(fileName);
        Object[][] testData = new Object[maps.size()][1];
        for (int i = 0; i < maps.size(); i++) {
            testData[i][0] = maps.get(i);
        }
        return testData;
    }

    // Stubbed out: in a real implementation this would parse the data file.
    private static List<Map<String, String>> extractDataFrom(String file) {
        List<Map<String, String>> maps = Lists.newArrayList();
        maps.add(Maps.newHashMap());
        maps.add(Maps.newHashMap());
        maps.add(Maps.newHashMap());
        return maps;
    }
}
I'm actually currently trying to do the same (or similar) thing. I write automation to validate product data on several eComm sites.
My old method
The data comes in Excel format, which I process slightly to get it into the shape I want. The automation reads from Excel and executes the runs sequentially.
My new method (so far, WIP)
My company recently started using SauceLabs so I started prototyping ways to take advantage of X # of VMs in parallel and see the same issues as you. This isn't a polished or even a finished solution. It's something I'm currently working on but I thought I would share some of what I'm doing to see if it will help you.
I started reading SauceLabs docs and ran across the sample code below which started me down the path.
https://github.com/saucelabs-sample-scripts/C-Sharp-Selenium/blob/master/SaucePNUnit_Test.cs
I'm using NUnit and I found in their docs a way to pass data into the test that allows parallel execution and allows me to store it all neatly in another class.
https://github.com/nunit/docs/wiki/TestFixtureSource-Attribute
This keeps me from having a bunch of [TestFixture] tags stacked on top of my script class (as in the demo code above). Right now I have
[TestFixtureSource(typeof(Configs), "StandardBrowsers")]
[Parallelizable]
public class ProductSetupUnitTest
where the Configs class contains an object[] called StandardBrowsers like
public class Configs
{
    static object[] StandardBrowsers =
    {
        new object[] { "chrome", "latest", "windows 10", "Product Name1", "Product ID1" },
        new object[] { "chrome", "latest", "windows 10", "Product Name2", "Product ID2" },
        new object[] { "chrome", "latest", "windows 10", "Product Name3", "Product ID3" },
        new object[] { "chrome", "latest", "windows 10", "Product Name4", "Product ID4" },
    };
}
I actually got this working this morning so I know now the approach will work and I'm working on ways to further tweak and improve it.
So, in your case you would just load up the object[] with all the data you want to pass. You will probably have to declare a string for each field you might want to pass; if you don't need a particular field in a given run, pass an empty string.
My next step is to load the object[] from Excel. The pain point for me is logging. I have a pretty mature logging system in my existing sequential-execution script, and it's going to be hard to give that up or settle for something with reduced functionality. Currently I write everything to a CSV, load that into Excel, and can then quickly process failures using Excel filtering, etc. My current thought is to have each script write its own CSV and then pull them all together after all the runs are complete. That part is still theoretical right now, though.
Hope this helps. Feel free to ask me questions if something isn't clear. I'll answer what I can.
I am using MongoDB with Clojure and CongoMongo, and I am trying to execute some basic JavaScript while inserting documents.
There are two use cases where I want to execute some JavaScript during an insert:
a) writing a last-modified timestamp
b) creating a version tag as an ObjectId
Both are very similar; here are some MongoDB shell examples.
db.test.update({_id:ObjectId("4fe3304fc2e61906ccdd7364")}, {$set: {created:Date()}}, false, false)
or
db.test.insert({version:ObjectId(), foo:"Bar"})
Does anyone have an idea how I can do this using CongoMongo or the plain Java driver?
I tried
org.bson.types.Code
org.bson.types.CodeWScope
and got something like:
{ "_id" : ObjectId("4fe32998c2e61906ccdd735f"), "version" : function cf__14_anon() { return ObjectId(); } }
which is interesting but not helpful. Unfortunately it's not possible to create the timestamps/versions/ObjectIds on the client, because I can't make sure that the clients' clocks are synchronized. We use the version to synchronize/replicate data between the server and the clients, and creating versions/timestamps in the past would jeopardize this process.
Thanks in advance for your help.
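One piece of background relevant to the version/timestamp concern above: a MongoDB ObjectId already embeds a creation timestamp in its first four bytes, which can be decoded anywhere without storing a separate field. A JavaScript sketch (the hex string is the ObjectId from the shell output above):

```javascript
// Decode the creation time embedded in a MongoDB ObjectId.
// The first 4 bytes (8 hex characters) are a big-endian Unix timestamp in seconds.
function objectIdTimestamp(objectIdHex) {
  const seconds = parseInt(objectIdHex.substring(0, 8), 16);
  return new Date(seconds * 1000);
}

console.log(objectIdTimestamp("4fe3304fc2e61906ccdd7364").toISOString());
// 2012-06-21T14:31:43.000Z
```

Note that this does not by itself solve the clock-skew problem: an ObjectId generated by a driver still uses that client's clock, so only ObjectIds minted on the server side are authoritative for ordering.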