I have a Salesforce app that allows me to execute REST API calls, and I need to retrieve orders (/services/data/v47.0/sobjects/Order) by status.
I've found documentation that describes similar filtering on another entity (https://developer.salesforce.com/docs/atlas.en-us.api_placeorder.meta/api_placeorder/sforce_placeorder_rest_api_standalone.htm).
However, when I try to execute the following request, it seems that orders with all statuses are returned:
GET /services/data/v47.0/sobjects/Order?order.status='ddd'
I also tried some variations of the query params. Is this functionality supported?
The /sobjects resource lets you discover dynamically what fields (standard and custom) exist on Order (or any other table, really), what types they are, their picklist values...
To retrieve actual data you can use the query resource. (Salesforce uses a dialect of SQL called SOQL. If you've never used it before, it'll look a bit weird the moment you want to do any JOINs; it would be nice if an SF developer could fill you in.)
This might be a good start
/services/data/v47.0/query/?q=SELECT Id, Name, OrderNumber FROM Order WHERE Status = 'Draft' LIMIT 10
I've never seen the API you've linked to, interesting stuff. But I don't see anything obvious there that would let you filter by status, so the more generic "query anything you wish" approach might work better for you. Play around a bit; perhaps https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_query.htm will suit your needs better?
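One practical detail worth knowing: the SOQL statement goes in the q parameter of the query resource and must be URL-encoded. A minimal sketch of building that URL in Java (endpoint and query taken from the example above, everything else illustrative):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SoqlUrlExample {
    public static void main(String[] args) {
        // The SOQL statement goes in the "q" parameter and must be URL-encoded.
        String soql = "SELECT Id, Name, OrderNumber FROM Order WHERE Status = 'Draft' LIMIT 10";
        String url = "/services/data/v47.0/query/?q="
                + URLEncoder.encode(soql, StandardCharsets.UTF_8);
        System.out.println(url);
    }
}
```

Whatever HTTP client your app uses, this is the path (relative to your instance URL) it should GET, with the usual OAuth bearer token in the Authorization header.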
I have a Quarkus application where we use Hibernate ORM with Panache to build and query the database. In some situations we want to use a List, or rather a Set, to filter a table of "requests". The Request entity (which has a different name in practice) has a status property, an enum with three values: PENDING, APPROVED or DENIED. In the web front-end we want a checkbox-style filter, which the HTTP request sends to the Quarkus application as an array; we then want to pass it to Hibernate somehow, preferably as a Set to easily filter out duplicates.
I've done something extremely similar within the NodeJS/MongoDB ecosystem in the past, which looks like this, as a step of a bigger aggregate pipeline:
aggregatePipeline.push({
  $match: {
    status: {
      $ne: status // array of strings
    }
  }
});
How would something like this be done with Hibernate? I've tried some googling, but the results are largely cluttered by people asking how to get an ArrayList out of the cursor from a standard find query.
Thanks in advance.
Edit: Trying this line
List<Publisher> publishers = Publisher.find("name", Arrays.asList("Books", "Publishing")).list();
Gives this error:
org.postgresql.util.PSQLException: ERROR: operator does not exist: character varying = record
Hint: No operator matches the given name and argument types. You might need to add explicit type casts.
Never mind. I tried to think of another search term that wouldn't produce the flood of irrelevant results I mentioned. "Hibernate find in set" led me to another post, which led me to try this line (with the Set values hardcoded for FAAFO purposes):
List<Publisher> publishers = Publisher.find("name IN ?1", new HashSet<>(Arrays.asList("Indiana University Press", "Harvard University Press"))).list();
No errors, and it returns the two entries that were imported into the dev DB through the import.sql file. It's the "IN ?1" part that does it, not that I used a Set instead of a List this time. It works just as well with a List, as long as the "IN ?1" stays there.
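For what it's worth, the two attempts differ only in the generated SQL: "name = ?1" binds the whole collection as a single (record-typed) parameter, which is exactly the PostgreSQL "character varying = record" error above, while "name IN ?1" expands the collection into one placeholder per element. A plain-Java sketch of that expansion (hypothetical helper, just to illustrate what the ORM does under the hood):

```java
import java.util.List;
import java.util.stream.Collectors;

public class InClauseSketch {
    // Expand a collection into "col IN (?, ?, ...)" the way an ORM does internally.
    static String inClause(String column, List<?> values) {
        String placeholders = values.stream()
                .map(v -> "?")
                .collect(Collectors.joining(", "));
        return column + " IN (" + placeholders + ")";
    }

    public static void main(String[] args) {
        System.out.println(inClause("name",
                List.of("Indiana University Press", "Harvard University Press")));
        // With "col = ?" the driver would try to bind the whole list as one value,
        // which is what produced the "character varying = record" error above.
    }
}
```

The same pattern should apply directly to the original question: Request.find("status IN ?1", statusSet).list() with the Set of enum values.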
I have a dropdown that lets you select the name of a country; a second dropdown below it is then populated with the provinces/states of the selected country.
When the user selects a country, a query is made and the province list is updated accordingly.
I am running into a race condition between query responses arriving out of order, which ends up displaying incorrect data. How should I handle this?
E.g.: the user selects country A and a query is fired; the network is slow, so meanwhile the user changes to country B and another request is fired. The response for B comes back quickly, but the response for A arrives after it. Now the screen shows country B but the provinces of A.
Note: I don't want to block the country selector while the query response is being awaited.
Any suggestions on resolving this?
You may want to add a simple filter on your query response, such that the display code only runs if the response matches the current query. Configure your callback carefully and only the latest response will be displayed, since you know which country is currently selected. (Admittedly, this might not be the best way to do it, but it should work.)
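A minimal sketch of that "tag each request and drop stale responses" idea, with a monotonically increasing sequence number standing in for "the currently selected country" (all names are illustrative, and the delays simulate network latency):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReference;

public class LatestResponseWins {
    private static final AtomicLong requestSeq = new AtomicLong();
    private static final AtomicReference<String> displayed = new AtomicReference<>("");

    // Simulated async provinces lookup; delayMs stands in for network latency.
    static void loadProvinces(String country, long delayMs) {
        final long myTicket = requestSeq.incrementAndGet(); // tag this request
        CompletableFuture
            .supplyAsync(() -> {
                sleep(delayMs);
                return "provinces of " + country;
            })
            .thenAccept(result -> {
                // Only display if no newer request has been issued meanwhile.
                if (myTicket == requestSeq.get()) {
                    displayed.set(result);
                }
            });
    }

    static void sleep(long ms) {
        try {
            TimeUnit.MILLISECONDS.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        loadProvinces("A", 300); // slow response
        sleep(50);
        loadProvinces("B", 50);  // issued later, but returns sooner
        sleep(500);              // let both callbacks run
        System.out.println(displayed.get()); // the stale "A" response was dropped
    }
}
```

The country selector is never blocked; stale responses are simply ignored when they arrive.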
More generally, you would have to develop this with some combination of network request(s) and caching/local database.
If the size of your dataset is not too large, or if it is not going to be updated too often, you might simply want to put it into a local database, e.g. Room DB.
If your data doesn't match this specification and it is subject to future updates in the back-end, but the overall dataset is pretty small, try caching the dataset (countries and provinces) at app start. This way your network calls are light, you always have the latest data, your UI is no longer dependent on asynchronous operations, and your code is simpler. You could also use data binding here, if appropriate.
If your dataset is very large such that fetching it all at app start is an unacceptable overhead, only then should you work purely with network request(s) each time the user changes a filter option. One option is that your callback code is configured to ignore any response that doesn't relate to the currently selected option. Another option is that your HTTP-related schema, for request and/or response, can contain some information about which "country" generated the HTTP request so that you can check for it in the response object. (The latter option can be considered overkill in most cases, but sometimes in complex scenarios, you might have to do such stuff for one or more reasons, such as functional definition, simplicity, neatness, etc.)
P.S.: I assume you are already using appropriate libraries for HTTP calls.
My situation is this: I have the 3 following methods (using couchbase-java-client 2.2 from Scala, against Couchbase Server 4.1):
def findAll() = {
  bucket.query(N1qlQuery.simple(select("*").from(i(DatabaseBucket.USER))))
    .allRows().toList
}

def findById(id: UUID) = {
  Option(bucket.get(id.toString, classOf[RawJsonDocument])).map(i => read[User](i.content()))
}

def upsert(i: User) = {
  bucket.async().upsert(RawJsonDocument.create(i.id.toString, write(i)))
}
Basically, they are upsert, find one by id, and find all. I ran an experiment:
I insert a User, then call findById right away: I get back the user I just inserted, correctly.
I insert and then call findAll right away: it returns empty.
I insert, wait 3 seconds, and then call findAll: I can find the one I inserted.
From that, I suspect that N1qlQuery only searches a cached layer rather than the "persisted" layer. How can I force it to search the persisted layer?
In Couchbase 4.0 with N1QL, there are different consistency levels you can specify when querying, which correspond to different costs for updates/changes to propagate through index recalculation. These aren't tied to whether or not data is persisted; rather, it's an option you set when you issue the query. The default is "not bounded", and to make sure your upsert is taken into consideration, you'll want to issue the query as "request plus".
To get the effect you're looking for, you'll want to add N1qlPararms on your creation of the N1qlQuery by using another form of the simple() method. Add a N1qlParams with ScanConsistency.REQUEST_PLUS. You can read more about this in Couchbase's Developer Guide. There's a Java API example of this. With that change, you won't need to have a sleep() in there, the system will automatically service the query request once the index recalculation has gotten to your specified level.
Depending on how you're using this elsewhere in your application, there are times you may want either consistency level.
You need stronger scan consistency. Add a N1qlParam to the query, using consistency(ScanConsistency.REQUEST_PLUS)
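Concretely, against the couchbase-java-client 2.x API, the findAll from the question would become something like the following sketch (mirroring the question's select/i static imports; this is an illustration, not a drop-in replacement):

```java
import com.couchbase.client.java.query.N1qlParams;
import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.consistency.ScanConsistency;

// Wait for the index to catch up with our own mutations before answering.
N1qlParams params = N1qlParams.build().consistency(ScanConsistency.REQUEST_PLUS);
N1qlQuery query = N1qlQuery.simple(
        select("*").from(i(DatabaseBucket.USER)),  // as in the question's findAll()
        params);
bucket.query(query).allRows();
```

With REQUEST_PLUS the query blocks until the index reflects all mutations made before the request, which removes the need for the 3-second sleep.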
I have found the jQuery DataTables plugin extremely useful for simple, read-only applications where I'd like to give the user pagination, sorting and searching of very large sets of data (millions of rows using server-side processing).
I have a system for reusing this code, but I end up doing the same thing over and over a lot. I'd like to write a very generalized API where I essentially just need to configure the SQL used to retrieve the data for the table. I am looking for a good design pattern/approach for this. I've seen articles like http://www.codeproject.com/Articles/359750/jQuery-DataTables-in-Java-Web-Applications and have a complete understanding of how server-side processing works (I've done it in Java and ASP.NET many times). To answer, you will probably need a deep understanding of how server-side processing works in Java, but here are some issues that come up when attempting this:
I generally run three separate queries: a count without the search clause, a count with the clause included, and the query for the actual data. I haven't found an efficient way to do all 3 at once, and doing so would require a lot of extra data to come back from the DB (i.e. counts over and over). The API needs to support behavior based on these three different queries, and complex queries at that. I generally use row_number() over an index for the pagination to be relatively speedy with large data.
*The where clause changes dynamically (the user can search over a variable number of rows).
*The order by clause changes for the same reason.
Overall, each case is often pretty specific to the data we need. Is there a good way to abstract this so that I do minimal work each time I want to use the plugin with server-side processing?
So, the steps are as follows in most projects:
*extract the params the plugin sends to the server (a lot of the time my own are added, mostly date ranges)
*build the unfiltered count query (this is rarely dynamic)
*build the filtered count query (this is dynamic)
*build the data query
*construct a model object of the table and return it as JSON
A lot of the issues occur when setting up prepared statements with a variable number of parameters. Dynamically generating the SQL in a fully general way (say, based on just column names) seems unlikely. I am wondering if someone else has created something they use for this, or if a specific pattern applies. It has just occurred to me that creating a reusable filter may be helpful in Java. Any advice would be greatly appreciated. Feel free to be language-agnostic, as it's the architecture I'm trying to figure out.
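The steps above can be sketched with one shared dynamic WHERE builder feeding all three statements, which also solves the variable-parameter-count problem by collecting bind values alongside the SQL (table and column names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class DataTablesSqlBuilder {
    // Hypothetical minimal builder: derives the three statements from one base query.
    static String baseFrom = "FROM orders o";

    // Builds "WHERE col1 LIKE ? OR col2 LIKE ?" and appends one bind value per column.
    static String where(List<String> searchableCols, String term, List<Object> paramsOut) {
        if (term == null || term.isEmpty()) return "";
        List<String> predicates = new ArrayList<>();
        for (String col : searchableCols) {
            predicates.add(col + " LIKE ?");
            paramsOut.add("%" + term + "%");
        }
        return " WHERE " + String.join(" OR ", predicates);
    }

    public static void main(String[] args) {
        List<Object> params = new ArrayList<>();
        String filter = where(List.of("o.name", "o.status"), "draft", params);

        String totalCount    = "SELECT COUNT(*) " + baseFrom;          // unfiltered count
        String filteredCount = "SELECT COUNT(*) " + baseFrom + filter; // filtered count
        String data          = "SELECT o.* " + baseFrom + filter
                + " ORDER BY o.id LIMIT ? OFFSET ?";                   // page of data

        System.out.println(filteredCount);
        System.out.println(params.size()); // one bound value per searchable column
    }
}
```

The same paramsOut list is then applied, in order, to the PreparedStatement for the filtered count and (with limit/offset appended) for the data query, so the variable number of parameters is handled in one place.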
We have a base search criteria class where all request parameters relevant to DataTables are mapped onto class properties (fields), and a custom search criteria class that extends the base and contains fields specific to the business logic for custom search. Also, on the server side we have a repository class that takes the custom search criteria as an argument and queries the database.
If you are familiar with C#, you could check out custom binding code and example of usage.
You could do such custom binding in your Java code as well.
I have a simple data model that includes
USERS: store basic information (key, name, phone # etc)
RELATIONS: describe, e.g. a friendship between two users (supplying a relationship_type + two user keys)
COMMENTS: posted by users (key, comment text, user_id)
I'm getting very poor performance, for instance, if I try to print the first names of all of a user's friends. Say the user has 500 friends: I can fetch the list of friend user_ids very easily in a single query. But then, to pull out first names, I have to do 500 back-and-forth trips to the Datastore, each of which seems to take on the order of 30 ms. If this were SQL, I'd just do a JOIN and get the answer out fast.
I understand there are rudimentary facilities for performing two-way joins across un-owned relations in a relaxed implementation of JDO (as described at http://gae-java-persistence.blogspot.com) but they sound experimental and non-standard (e.g. my code won't work in any other JDO implementation).
Worse yet, what if I want to pull out all the comments posted by a user's friends. Then I need to get from User --> Relation --> Comments, i.e. a three-way join, which isn't even supported experimentally. The overhead of 500 back-and-forths to get a friend list + another 500 trips to see if there are any comments from a user's friends is already enough to push runtime >30 seconds.
How do people deal with these problems in real-world datastore-backed JDO applications? (Or do they?)
Has anyone managed to extract satisfactory performance from JDO/Datastore in this kind of (very common) situation?
-Bosh
First of all, for objects that are frequently accessed (like users), I rely on memcache. This should speed up your application quite a bit.
If you have to go to the datastore, the right way to do this should be getObjectsById(). Unfortunately, it looks like GAE doesn't optimize this call. However, a contains() query on keys is optimized to fetch all the objects in one trip to the datastore, so that's what you should use:
List myFriendKeys = fetchFriendKeys();
Query query = pm.newQuery(User.class, ":p.contains(key)");
query.execute(myFriendKeys);
You could also rely on the low-level API get() that accepts multiple keys, or do like me and use Objectify.
A totally different approach would be to use an equality filter on a list property. This will match if any item in the list matches. So if you have a friendOf list property in your User entity, you can issue a single query: friendOf == theUser. You might want to check this: http://www.scribd.com/doc/16952419/Building-scalable-complex-apps-on-App-Engine
You have to minimize DB reads. That must be a huge focus for any GAE project - anything else will cost you. To do that, pre-calculate as much as you can, especially oft-read information. To solve the issue of reading 500 friends' names, consider that you'll likely be changing the friend list far less than reading it, so on each change, store all names in a structure you can read with one get.
If you absolutely cannot then you have to tweak each case by hand, e.g. use the low-level API to do a batch get.
Also, rather optimize for speed and not data size. Use extra structures as indexes, save objects in multiple ways so you can read it as quickly as possible. Data is cheap, CPU time is not.
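The "precompute on write, read with one get" advice can be sketched like this, with a plain Map standing in for the datastore (entity keys and property names are illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class DenormalizedFriendNames {
    // Stand-in for the datastore: one entry per entity property.
    static Map<String, Object> store = new HashMap<>();

    // On every friend-list change, precompute the display names once...
    static void onFriendsChanged(String userKey, List<String> friendKeys) {
        List<String> names = friendKeys.stream()
                .map(k -> (String) store.get(k + "/name"))
                .collect(Collectors.toList());
        store.put(userKey + "/friendNames", names); // one entity, read with one get
    }

    public static void main(String[] args) {
        store.put("u2/name", "Alice");
        store.put("u3/name", "Bob");
        onFriendsChanged("u1", List.of("u2", "u3"));

        // ...so rendering the friend list costs a single datastore get, not 500.
        System.out.println(store.get("u1/friendNames"));
    }
}
```

The trade-off is exactly the one stated above: writes get more expensive (and you must handle a friend renaming themselves), but the hot read path collapses to one get.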
Unfortunately Phillipe's suggestion
Query query = pm.newQuery(User.class, ":p.contains(key)");
is only optimized to make a single query when searching by primary key. Passing in a list of ten non-primary-key values, for instance, gives the following trace
(screenshot of the slow-query trace: http://img293.imageshack.us/img293/7227/slowquery.png)
I'd like to be able to bulk-fetch comments, for example, from all of a user's friends. If I do store a List on each user, this list can't be longer than 1000 elements (if it's an indexed property of the user), as described at http://code.google.com/appengine/docs/java/datastore/overview.html .
Seems increasingly like I'm using the wrong toolset here.
-B
Facebook has 28 terabytes of memory cache... However, making 500 trips to memcached isn't very cheap either, and it can't be used to store a gazillion small items. "Denormalization" is the key. Such applications do not need to support ad-hoc queries; compute and store the results directly for the few supported queries.
In your case, you probably have just one type of query: return the data of this, that and the other that should be displayed on a user page. You can precompute this big ball of mess, so that later a single query by userId can fetch it all.
When userA makes a comment to userB, you retrieve userB's big ball of mess, insert userA's comment into it, and save it.
Of course, there are a lot of problems with this approach. For giant internet companies, they probably don't have a choice, generic query engines just don't cut it. But for others? Wouldn't you be happier if you can just use the good old RDBMS?
If it is a frequently used query, you can consider building indexes for it.
http://code.google.com/appengine/articles/index_building.html
The indexed property limit is now raised to 5000.
However you can go even higher than that by using the method described in http://www.scribd.com/doc/16952419/Building-scalable-complex-apps-on-App-Engine
Basically just have a bunch of child entities for the User called UserFriends, thus splitting the big list and raising the limit to n*5000, where n is the number of UserFriends entities.
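The splitting itself is mechanical; a sketch of partitioning one big friend list into per-child-entity chunks (the 5000 limit is the indexed-property limit mentioned above, the rest is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class FriendListSharding {
    static final int LIMIT = 5000; // indexed-property limit per entity

    // Split one big friend list into UserFriends-sized chunks, one per child entity.
    static List<List<Long>> shard(List<Long> friendIds) {
        List<List<Long>> shards = new ArrayList<>();
        for (int i = 0; i < friendIds.size(); i += LIMIT) {
            shards.add(friendIds.subList(i, Math.min(i + LIMIT, friendIds.size())));
        }
        return shards;
    }

    public static void main(String[] args) {
        List<Long> ids = new ArrayList<>();
        for (long i = 0; i < 12000; i++) ids.add(i);

        List<List<Long>> shards = shard(ids);
        System.out.println(shards.size() + " entities, last holds " + shards.get(2).size());
    }
}
```

Each chunk would then be persisted as its own UserFriends child entity keyed to the parent User, and a query for the user's friends fetches all chunks by ancestor.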