I'm getting pretty familiar with using AsyncTasks to fetch data from API endpoints now. I can easily hit a URL and parse the JSON data that comes back.
However I've run into a problem in which this API has a lot of pages to it.
What's the best way to deal with an API that has a lot of pages, and has no option to change the results per page?
My particular endpoint has 40+ pages of data (12 results per page). I feel as if spinning up a new AsyncTask for each page's endpoint is a bit ridiculous.
Any ideas?
Unfortunately, as everyone has suggested, there is no way around the API if it does not support a results-per-page argument. You could prefetch one or two pages and join them in one AsyncTask; that way you minimize the number of AsyncTasks that fork from the main thread, and you have a strategy for when you need to load more pages.
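Roughly, that idea could look like the sketch below: one AsyncTask fetches a small range of pages and returns them joined. The ?page= query parameter, the Item type and the parsePage() helper are placeholders for whatever your API and JSON parsing already look like, not part of any real API.

import android.os.AsyncTask;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

// One background task that fetches a range of pages and joins the results.
// Item and parsePage() stand in for your own model class and JSON parsing.
class FetchPagesTask extends AsyncTask<Integer, Void, List<Item>> {

    private static final String BASE_URL = "https://api.example.com/items?page="; // placeholder

    @Override
    protected List<Item> doInBackground(Integer... range) {
        List<Item> all = new ArrayList<>();
        for (int page = range[0]; page <= range[1]; page++) {
            try {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(BASE_URL + page).openConnection();
                try (InputStream in = conn.getInputStream()) {
                    all.addAll(parsePage(in)); // your existing JSON parsing
                } finally {
                    conn.disconnect();
                }
            } catch (IOException e) {
                break; // keep whatever pages were fetched so far
            }
        }
        return all;
    }

    @Override
    protected void onPostExecute(List<Item> items) {
        // update the adapter / UI here
    }
}

// Usage: prefetch pages 1-3 in one task now, start another task later when the user scrolls:
// new FetchPagesTask().execute(1, 3);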
I would definitely suggest you use the Retrofit HTTP client. I had the same issue with almost 260+ calls, and Retrofit worked fine for me. Check it here.
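For reference, a bare-bones Retrofit 2 sketch of a paged endpoint; the base URL, the "items" path, the "page" query parameter and the Item model class are made-up placeholders for your own API:

import retrofit2.Call;
import retrofit2.Callback;
import retrofit2.Response;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;
import retrofit2.http.GET;
import retrofit2.http.Query;
import java.util.List;

public interface ItemsApi {
    @GET("items")                                        // placeholder path
    Call<List<Item>> getPage(@Query("page") int page);
}

public class ItemsClient {
    public static void fetchFirstPage() {
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("https://api.example.com/")     // placeholder base URL
                .addConverterFactory(GsonConverterFactory.create())
                .build();

        ItemsApi api = retrofit.create(ItemsApi.class);
        api.getPage(1).enqueue(new Callback<List<Item>>() {
            @Override public void onResponse(Call<List<Item>> call, Response<List<Item>> resp) {
                // resp.body() is the parsed page; enqueue the next page here if needed
            }
            @Override public void onFailure(Call<List<Item>> call, Throwable t) {
                // handle the error
            }
        });
    }
}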
I am calling a REST API endpoint which hosts a very large amount of data. There is so much data that my Chrome tab crashes as well (it displays the data for a short time, then keeps loading even more until the tab crashes). Even Postman fails to get the data and only returns a 200 OK code without displaying any response body.
I'm trying to write a Java program to consume the response from the API. Is there a way to consume the response without using a lot of memory?
Please let me know if the question is not clear. Thank you !!
A possibility is to use a JSON streaming parser like the Jackson Streaming API (https://github.com/FasterXML/jackson-docs/wiki/JacksonStreamingApi). For example code, see https://javarevisited.blogspot.com/2015/03/parsing-large-json-files-using-jackson.html
For JS there is https://github.com/DonutEspresso/big-json
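To make the Jackson suggestion concrete, here is a rough sketch; the URL and the "id" field are made up, and it assumes a top-level JSON array of objects with jackson-core on the classpath. Only one token is held in memory at a time instead of the whole response:

import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import java.io.InputStream;
import java.net.URL;

public class StreamingConsumer {
    public static void main(String[] args) throws Exception {
        JsonFactory factory = new JsonFactory();
        try (InputStream in = new URL("https://api.example.com/huge").openStream(); // placeholder URL
             JsonParser parser = factory.createParser(in)) {

            if (parser.nextToken() != JsonToken.START_ARRAY) {
                throw new IllegalStateException("Expected a JSON array");
            }
            // walk the array one object at a time
            while (parser.nextToken() == JsonToken.START_OBJECT) {
                while (parser.nextToken() != JsonToken.END_OBJECT) {
                    String field = parser.getCurrentName();
                    JsonToken value = parser.nextToken();       // move to the value
                    if (value == JsonToken.START_OBJECT || value == JsonToken.START_ARRAY) {
                        parser.skipChildren();                  // skip nested structures in this sketch
                    } else if ("id".equals(field)) {            // "id" is just an example field
                        System.out.println(parser.getText());
                    }
                }
            }
        }
    }
}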
If the data is really that large, then it is better to split the task:
Download the full data to disk via an ordinary HTTP client.
Do the bulk processing with a streaming approach, similar to SAX parsing for XML:
JAVA - Best approach to parse huge (extra large) JSON file
With such a split, you will not have to deal with possible network errors during processing, and you will keep the data consistent.
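A rough sketch of step 1 with the JDK 11 HttpClient, streaming the body straight to a file so nothing large sits in memory (the URL is a placeholder); step 2 would then run a streaming parser, such as the Jackson example above, over the local file:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class DownloadThenParse {
    public static void main(String[] args) throws Exception {
        Path file = Path.of("response.json");

        // Step 1: stream the response body straight to disk
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request =
                HttpRequest.newBuilder(URI.create("https://api.example.com/huge")).build();
        client.send(request, HttpResponse.BodyHandlers.ofFile(file));

        // Step 2: process the local file with a streaming parser; if parsing fails,
        // you can rerun it without re-downloading and without risking inconsistent data
    }
}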
I have an API that returns a list of ~30,000 SKUs. I then need to insert each SKU into the query parameter URL of another API to validate the response of this second API.
I know that something like this is possible with JMeter, where you could possibly do this via a CSV file. How can I accomplish this via REST Assured? An example/sample would be greatly appreciated!
Similar question also applies to using outputs from an API to use as input in body content...
Thanks.
Short answer: you can't, as far as query parameters go. You can have no more than about 2,000 characters in your URL. Explanation here: What is the maximum length of a URL in different browsers?
With the POST method you don't have that constraint.
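If it helps, here is a rough REST-assured sketch of the loop you would end up with: fetch the SKU list from the first API, then hit the second API once per SKU. The host, the /skus and /validate paths, the "skus" JSON path and the "sku" query parameter name are all assumptions; in a real suite you would likely feed the list into a JUnit/TestNG data provider instead of a plain loop.

import io.restassured.RestAssured;
import java.util.List;
import static io.restassured.RestAssured.given;

public class SkuValidation {
    public static void main(String[] args) {
        RestAssured.baseURI = "https://api.example.com";      // placeholder host

        // First API: pull the ~30,000 SKUs out of the response
        List<String> skus = given()
                .when().get("/skus")                          // placeholder path
                .then().statusCode(200)
                .extract().jsonPath().getList("skus", String.class);

        // Second API: one request per SKU, passed as a query parameter
        for (String sku : skus) {
            given().queryParam("sku", sku)
                    .when().get("/validate")                  // placeholder path
                    .then().statusCode(200);
        }
    }
}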
If you are open to evaluating alternatives to REST-assured, Karate allows you to easily achieve such a data-driven test and it is based on Cucumber as well.
Disclaimer: I am the dev.
In the demos you will find a number of examples that use dynamic JSON data to drive a loop making an HTTP call. And yes, you can also dynamically use data from HTTP responses in later steps.
I'm working on a project which uses the Facebook Graph API to download information about users of our application and their Facebook friends. There is a need to update this data, but as I understand it, Real-Time Updates are not an option. For example, I would like to get updates to the profile feeds of friends of our app users, and I don't see a way to do this with Real-Time Updates.
Could someone give me some advice on this update mechanism? I need to update app users, their friend connections, and the profile feeds of users and their friends. I understand I'll have to poll Facebook's servers to retrieve this data. What I'm trying to find out are some good practices for doing this. What update frequency? How do I recognize that data has changed? If anyone has experience with this kind of thing, any advice would mean a lot.
Thanks.
You can use the since= query string parameter of the Graph API call. Here's some pseudocode to help you along:
var usersLastPostDate = GetLastPostDateFromDataStore(userId);
if (usersLastPostDate not populated) {
    streamItems = GraphApiGet(userId, "me/feed")
    lastStreamItemDate = GetNewestStreamItemDate(streamItems)
    StoreLastPostDateIntoDataStore(userId, lastStreamItemDate)
}
else {
    streamItems = GraphApiGet(userId, "me/feed?since=" + usersLastPostDate)
}
Not massively useful for your use case (as you're wanting to get data which changes frequently), but worth pointing out that the Graph API now supports ETags - https://developers.facebook.com/blog/post/627/.
ETags will tell you if the data has changed since the last time you requested it. This won't stop you from hitting Facebook's API throttling limits, but it is a quick and easy way to tell whether the data has changed since you last asked for it.
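A small sketch of what that round-trip could look like with plain HttpURLConnection; the Graph API URL, the access token and the way the ETag is persisted are placeholders:

import java.net.HttpURLConnection;
import java.net.URL;

public class EtagPoll {
    public static void main(String[] args) throws Exception {
        String url = "https://graph.facebook.com/me/feed?access_token=TOKEN"; // placeholder
        String cachedEtag = loadEtag();                       // whatever you stored last time

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        if (cachedEtag != null) {
            conn.setRequestProperty("If-None-Match", cachedEtag);
        }

        if (conn.getResponseCode() == HttpURLConnection.HTTP_NOT_MODIFIED) {
            // 304: nothing changed since the last poll, so skip downloading and parsing the body
        } else {
            String newEtag = conn.getHeaderField("ETag");
            // read conn.getInputStream(), process the feed, then persist newEtag for next time
        }
    }

    private static String loadEtag() { return null; }         // stub for the sketch
}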
There is no one answer to your question, as it depends on what your application is doing. How often do you need to get the updated information? If your data is stale for 5 minutes, is that really a problem? Can you grab the data from Facebook lazily, when some user action requires that you have it?
If you do need to do a lot of polling, try to use non-blocking IO, especially if you're expecting to have a lot of open HTTP requests to Facebook whilst you're polling. Build a reliable queueing mechanism and HTTP poller to ensure requests are being made as expected. Without any idea of what technology stack you're using, it's hard to be more specific than that.
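As one concrete possibility on the JVM, here is a sketch using the JDK HttpClient's sendAsync, which keeps many requests in flight without tying up a thread per request; the feed URLs, token and process() are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class AsyncPoller {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        List<String> feedUrls = List.of(                      // placeholder URLs
                "https://graph.facebook.com/USER_1/feed?access_token=TOKEN",
                "https://graph.facebook.com/USER_2/feed?access_token=TOKEN");

        // fire off all requests without blocking, then wait for the batch to complete
        List<CompletableFuture<Void>> inFlight = feedUrls.stream()
                .map(url -> client.sendAsync(
                                HttpRequest.newBuilder(URI.create(url)).build(),
                                HttpResponse.BodyHandlers.ofString())
                        .thenAccept(resp -> process(resp.body())))
                .collect(Collectors.toList());

        CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
    }

    private static void process(String json) {
        // parse and store the feed items here
    }
}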
HTH
What about this: the Open Graph Subscription system?
What I am trying to do is read the latest statuses from all my Facebook friends using the Graph API, but it takes too long. I am getting all my friends from Facebook as JSON and reading the latest statuses from them, and what I get is a timeout. I know it will take a long time, but what is an efficient way to handle such a thing?
Break that into batches and probably do it in separate threads (since most of this will be IO work).
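A rough sketch of that with an ExecutorService; fetchLatestStatus() stands in for your actual Graph API call:

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;

public class FriendStatusFetcher {
    public static void main(String[] args) throws Exception {
        List<String> friendIds = List.of("id1", "id2", "id3");  // from your friends JSON
        ExecutorService pool = Executors.newFixedThreadPool(8); // IO-bound, a handful of threads is enough

        List<Callable<String>> tasks = friendIds.stream()
                .map(id -> (Callable<String>) () -> fetchLatestStatus(id))
                .collect(Collectors.toList());

        // invokeAll runs the whole batch concurrently and waits for it to finish
        for (Future<String> result : pool.invokeAll(tasks)) {
            System.out.println(result.get());
        }
        pool.shutdown();
    }

    private static String fetchLatestStatus(String friendId) {
        // placeholder for the Graph API request for this friend's newest status
        return friendId + ": ...";
    }
}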
I have a small application in Java which searches for images using Bing image search. The problem I am facing is that it only gets the first 20 images. Maybe that is because when we search on bing.com it populates the first 20 images first, and then it's an infinite-scrolling feature.
Is there any way to search more than 20 images using bing?
Cheers :)
I'm guessing this is because the site uses Ajax to populate the "infinite" scrolling list, as you call it.
You probably send an HTTP request and get the initial page (btw, in my browser I got 6 images across x 4 down, i.e. 24 not 20; thinking about it, maybe my client also got only 20 at first and fetched the last 4 with Ajax...), and you'd need to do the paging through by way of Ajax requests.
At a glance, the XHTML and associated JavaScript of the page are very dense and somewhat obfuscated; it would take a while to get oriented... An alternative to analyzing this page is to instead use a packet sniffer (such as Wireshark) and capture the requests which take place when you scroll down.
Essentially this will likely expose some form of Ajax request, which you can then easily emulate with Java. Typically the Ajax response is easy to parse whatever its nature (XML, JSON, gzip...).
A possible snag to this well-laid-out plan is if the returned data in the Ajax response is encrypted, for example where the extra images are bundled in some sort of envelope for which you'll then need to discover the format.
Depending on the actual task at hand, you may try alternatives such as automations within GreaseMonkey (on Firefox) or similar tools.
What about the Bing API?
Note that all the above approaches are akin to screen-scraping and hence quite sensitive to even minute changes in the Bing application, and, depending on effective usage and context, this could put the project in a legal grey area... A better approach may be to register and obtain a proper application ID with MS/Bing and to use the Bing API.
You are simulating a browser? Doesn't the Bing engine have an entry point for programs instead - a web service or so - which would make your task much easier?
EDIT: SDK appears to be here: http://msdn.microsoft.com/en-us/library/cc980922.aspx
Just wanted to post a direct answer to the question:
Bing uses Ajax (of course) for the infinite scroll. Each "tick" is based on a simple Ajax GET request, which acquires new images.
For instance, this URL returns 30 results (121-151) in "htmlraw" format based on the query "max payne":
http://www.bing.com/images/async?q=max+payne&format=htmlraw&first=121
Edit:
It works with the original URL too; just add &first=NUMBER to the query string. Example:
www.bing.com/images/search?q=payne&go=&form=QBLH&scope=images&filt=all&first=10
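For example, a quick Java sketch that walks through pages by bumping the first= offset; since this is effectively screen-scraping, the URL format can change at any time, and the parsing of the returned fragment is left out:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

public class BingScrollFetcher {
    public static void main(String[] args) throws Exception {
        String query = URLEncoder.encode("max payne", StandardCharsets.UTF_8);

        // each "scroll tick" is the same URL with a higher &first= offset
        for (int first = 1; first <= 91; first += 30) {
            URL url = new URL("http://www.bing.com/images/async?q=" + query
                    + "&format=htmlraw&first=" + first);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String html = reader.lines().collect(Collectors.joining("\n"));
                // parse the returned fragment (e.g. with jsoup) to pull out the image URLs
                System.out.println("offset " + first + ": " + html.length() + " chars");
            } finally {
                conn.disconnect();
            }
        }
    }
}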
I am building my own bulk image collector (for a "learning project" for myself) and I found out that it is paginated like this.
FYI, Google and Bing are easy; Yahoo and AltaVista (redundant, since its results come from Yahoo) are far more problematic - they don't post a direct link to the original image.
Have fun! :)
This can be done by using the count parameter. For example, I tried a GET call to "https://api.cognitive.microsoft.com/bing/v7.0/images/search?q=shoes&mkt=en-us&count=30" and it returned 30 images.
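For completeness, a minimal Java sketch of that v7 call; the Ocp-Apim-Subscription-Key header is the usual Cognitive Services API key header, and YOUR_KEY is obviously a placeholder:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BingImageSearch {
    public static void main(String[] args) throws Exception {
        String url = "https://api.cognitive.microsoft.com/bing/v7.0/images/search"
                + "?q=shoes&mkt=en-us&count=30";                 // count = results per call

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Ocp-Apim-Subscription-Key", "YOUR_KEY") // your Cognitive Services key
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON whose "value" array holds the image results
    }
}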