I want to read a JSON list from a webservice with Java. The webservice returns a list of authors from Luxembourg, sorted by year for example. This is the website:
http://www.autorenlexikon.lu/page/periods/1919-1945/1/1/DEU/index.html
So far, I know that I can receive a JSON document with a request like this:
http://www.autorenlexikon.lu/mmp/json.document_list/DEU/0?search_since=1919&search_until=1945
But I only get the first 20 entries. How can I get the next 20? I think the solution is in the JavaScript code of the website, but I am pretty new to JavaScript (and to JSON).
EDIT:
There isn't any official API.
I have already tried:
http://www.autorenlexikon.lu/mmp/json.document_list/DEU/0?pageSize=1000&search_since=1919&search_until=1945
http://www.autorenlexikon.lu/mmp/json.document_list/DEU/0?page_Size=1000&search_since=1919&search_until=1945
...and many more. How does the JavaScript code receive all entries? Couldn't I copy that mechanism?
You should check their API and look for a parameter that lets you define the page or the range of results you want to get.
Edit: It seems you'd have to make a POST request and add the start index as well as the page size as POST parameters. For more information see #matthijs koevoets' answer.
It depends on how the webservice has been coded; this has nothing to do with JSON specifically. From the results you can see it says
"pageSize":20,
You just have to figure out how to call the web service with a page size. It may not allow you to query it with a different page size; that's up to the web service API coded by their developers.
Their service seems to accept POST parameters only: sort=year&dir=asc&startIndex=0&results=100
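If that's the case, a rough Java sketch of such a call (unverified; the parameter names are simply the ones the site's own requests appear to use, and you would parse the returned JSON afterwards with Gson or similar):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class AuthorListFetcher {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.autorenlexikon.lu/mmp/json.document_list/DEU/0"
                + "?search_since=1919&search_until=1945");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        // Paging parameters observed in the site's own requests (assumption, not an official API).
        String body = "sort=year&dir=asc&startIndex=0&results=100";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // Dump the JSON response; feed it to Gson/Jackson to get the author entries.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```

Increasing results or incrementing startIndex in a loop should then give you the remaining pages, assuming the service really honours those parameters.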
Is there a specific scenario where we use a POST instead of a GET to implement the functionality of a GET operation?
GET is supposed to get :) and POST is mainly used to add something new, though it is often used for updates as well (although PUT is recommended in such scenarios). There is no specific scenario where we use a POST instead of a GET; if we need to, it probably means we are doing something wrong. Nothing stops you from doing it, but it is bad design, and you should take a step back and plan your API carefully.
Two points are often made in favour of POST, namely that POST is more secure than GET and that POST can send a large amount of data, but even so I would not recommend using POST to simulate GET behaviour.
Let's understand the usage of GET and POST:
What is GET Method?
It appends form data to the URL in name/value pairs. The length of the URL is limited (commonly around 2048 characters, depending on the browser). This method must not be used if you have a password or other sensitive information to send to the server. It is used for submitting forms where the user may want to bookmark the result. It is better suited to data that is not sensitive. It cannot be used for sending binary data such as images or Word documents. In PHP it also provides the $_GET associative array to access all the information sent with the GET method.
What is POST Method?
It appends form data to the body of the HTTP request, so the data is not shown in the URL. This method has no restrictions on the size of the data to be sent. Form submissions made with POST cannot be bookmarked. It can be used to send ASCII as well as binary data such as images and Word documents. Data sent with POST travels in the body of the HTTP request, so its security depends on the protocol: you have to use HTTPS to know that your information is secure. POST is a little safer than GET in the sense that the parameters are not stored in browser history or in web server logs. In PHP it also provides the $_POST associative array to access all the information sent with the POST method.
Source: https://www.edureka.co/blog/get-and-post-method/
So both the methods have their specific usage.
POST method is used to send data to a server to create or update a resource.
GET method is used to request data from a specified resource.
If you want to fetch some data, use the GET method. But if you want to update an existing resource or create a new one, you should use POST. GET will not help you create or update resources, so the API you expose should be specific to your needs.
UPDATE
So your main question is: in what scenario can we use POST to implement the functionality of GET?
To answer that: as you now understand what GET and POST do, a GET request only fetches a resource, whereas a POST request creates or updates a resource and also returns a response body in the same request/response cycle. So suppose you create a new resource and then want to see that same resource: making a POST call first and a GET call afterwards to fetch the same resource costs extra overhead. You can skip the GET call and read the data you need from the POST response itself. That is the scenario where you can use POST instead of making an extra GET call.
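As a rough client-side illustration (the endpoint and payload here are hypothetical), the POST response body already carries the resource you just created, so no follow-up GET is needed:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CreateAndRead {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint that creates a user and echoes the created resource back.
        HttpURLConnection conn = (HttpURLConnection) new URL(
                "https://example.com/api/users").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");

        // Create the resource...
        try (OutputStream os = conn.getOutputStream()) {
            os.write("{\"name\":\"Alice\"}".getBytes(StandardCharsets.UTF_8));
        }

        // ...and read it straight from the POST response instead of issuing a second GET.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```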
I have an API that returns a list of ~30,000 SKUs. I then need to insert each SKU into the query parameters of another API's URL to validate the response of this second API.
I know that something like this is possible with JMeter, where you could do it via a CSV file. How can I accomplish this via REST Assured? An example/sample would be greatly appreciated!
A similar question applies to using the output of one API as input in the body content of another...
Thanks.
Short answer: you can't, as far as query parameters go. You can have no more than about 2000 characters in a URL. Explanation here: What is the maximum length of a URL in different browsers?
With the POST method you don't have such constraints.
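If you stay with REST-assured, the simplest approach is to loop over the SKU list and fire one request per SKU. A minimal sketch, assuming the first API returns something like {"skus": ["A1", "B2", ...]} and the second API takes a single SKU as a query parameter (both endpoints here are made up):

```java
import static io.restassured.RestAssured.given;

import io.restassured.RestAssured;
import java.util.List;

public class SkuValidationTest {
    public static void main(String[] args) {
        // Fetch the SKU list from the first API (hypothetical endpoint and JSON path).
        List<String> skus = RestAssured
                .get("https://example.com/api/skus")
                .jsonPath()
                .getList("skus", String.class);

        // Feed each SKU into the second API as a query parameter and validate the response.
        for (String sku : skus) {
            given()
                .queryParam("sku", sku)
            .when()
                .get("https://example.com/api/validate")
            .then()
                .statusCode(200);
        }
    }
}
```

Bear in mind that ~30,000 sequential requests will take a while; you may want to parallelize the calls or sample the list.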
If you are open to evaluating alternatives to REST-assured, Karate allows you to easily achieve such a data-driven test and it is based on Cucumber as well.
Disclaimer: I am the dev.
In the demos you will find a number of examples that use dynamic JSON data to drive a loop making an HTTP call. Yes, you can dynamically use the data from HTTP responses in later steps as well.
According to the answer here, using Gson we can programmatically retrieve the results that Google returns for a query. Nonetheless, two questions remain in my mind:
How can we do a similar thing for Bing?
How can we get more than 4 results based on the referenced answer? Because results.getResponseData().getResults().get(n).getUrl() throws an exception for n > 4.
As #Niklas noted, the Google Search API is deprecated, so you should not use it for your project. Currently the only solution is to fetch the HTML search results with an HTTP request and then parse them yourself.
In the case of Bing there is a search API, but it allows only a limited number of calls for free users. If you need to make a lot of requests, you will have to pay for it. https://datamarket.azure.com/dataset/5BA839F1-12CE-4CCE-BF57-A49D98D29A44
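For the scraping route, here is a bare-bones Java sketch of just the fetch step (you still have to parse the returned markup yourself, e.g. with an HTML parser such as jsoup, and the result page structure can change at any time):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class HtmlSearchFetcher {
    public static void main(String[] args) throws Exception {
        String query = URLEncoder.encode("gson tutorial", "UTF-8");
        URL url = new URL("https://www.bing.com/search?q=" + query);

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Search engines often refuse requests without a browser-like User-Agent.
        conn.setRequestProperty("User-Agent", "Mozilla/5.0");

        StringBuilder html = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                html.append(line).append('\n');
            }
        }
        // Extract the result links from html here (no stable API, so expect breakage).
        System.out.println(html.length() + " characters of HTML fetched");
    }
}
```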
I have my client's e-shop, which was created by another company. I want to parse all the products and put them in an XML file. I know how to get to the first page of each brand, but I am having difficulty passing the argument that changes the page in the paginated results.
This is the e-shop "http://www.gialia.net.gr/ProductCatalog/20/CAR.aspx" that points to one brand.
When I use Tamper Data on Firefox, I see that pressing the second page of the results posts the following:
"__EVENTTARGET=ctl00%24wpmMain%24wp131820866%24wp512420601%24dpgTop%24ctl01%24ctl01"
The last string, "ctl01", means go to page 2; if I change it to "ctl02" it goes to page 3, etc.
BUT I'm trying to build it as a GET request so I can create these parameters dynamically in my Java code and parse each response. But when I create the URL as:
http://www.gialia.net.gr/ProductCatalog/20/CAR.aspx?__EVENTTARGET=ctl00$wpmMain$wp131820866$wp512420601$dpgTop$ctl01$ctl02
I get no results.
Can someone please take a look and give me some suggestions?
The site you give us here is very poorly designed with respect to search engines (SEO), so parsing the pages one by one is quite difficult.
Changing the page is done with a postback, triggered by JavaScript only. So you must do the same to move to the next page of the catalog: you need to make a full postback of the page with all its parameters.
Moreover, the page is so badly designed that the programmer has disabled __EVENTVALIDATION on the controls, probably because it stopped him from doing things he shouldn't. That is why you can tamper with the data, but you still need to make a postback; by simply putting a single parameter on the URL, the code-behind does not recognize the request as a postback. You need to send at least the ViewState and the rest of the hidden parameters.
But wouldn't it be easier to get direct access to the database from your client and read the products from there?
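If direct database access is not an option, a rough sketch of emulating that WebForms postback from Java could look like the following. The hidden field values (__VIEWSTATE and whatever else the page renders) must first be scraped from the HTML of the current page, and the exact field names on that site may differ:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class CatalogPager {

    /**
     * Emulates the pager postback. pagerControl is e.g. "ctl01" for page 2, "ctl02" for page 3.
     * hiddenFields must contain every hidden input scraped from the current page
     * (__VIEWSTATE and any other __* fields the server rendered).
     */
    static HttpURLConnection postBack(String pagerControl, Map<String, String> hiddenFields)
            throws Exception {
        StringBuilder body = new StringBuilder();
        body.append("__EVENTTARGET=").append(URLEncoder.encode(
                "ctl00$wpmMain$wp131820866$wp512420601$dpgTop$ctl01$" + pagerControl, "UTF-8"));
        body.append("&__EVENTARGUMENT=");
        for (Map.Entry<String, String> field : hiddenFields.entrySet()) {
            body.append('&').append(URLEncoder.encode(field.getKey(), "UTF-8"))
                .append('=').append(URLEncoder.encode(field.getValue(), "UTF-8"));
        }

        HttpURLConnection conn = (HttpURLConnection) new URL(
                "http://www.gialia.net.gr/ProductCatalog/20/CAR.aspx").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.toString().getBytes(StandardCharsets.UTF_8));
        }
        return conn;   // read conn.getInputStream() to get the next page's HTML
    }
}
```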
I have a small application in Java which searches for images using Bing image search. The problem I am facing is that it only gets the first 20 images, maybe because when we search on bing.com it populates the first 20 images and then relies on an infinite-scrolling feature.
Is there any way to retrieve more than 20 images from Bing?
Cheers :)
I'm guessing this is because this site uses ajax to populate the "infinite" scrolling list as you call it.
You probably send an HTTP request and get the initial page (by the way, in my browser I got 6 images across x 4 down, i.e. 24, not 20; thinking about it, maybe my client also got only 20 at first and fetched the last 4 with ajax...), and you'd need to do the paging through ajax requests.
At a glance, the XHTML and associated JavaScript of the page are very dense and somewhat obfuscated; it would take a while to get oriented... An alternative to analyzing the page is to use a packet sniffer (such as Wireshark) and capture the requests that take place when you scroll down.
Essentially this will likely expose some form of ajax request, which you can then easily emulate with Java. Typically the ajax response is easy to parse whatever its nature (XML, JSON, gzip...).
A possible snag in this well-laid-out plan is if the data returned in the ajax response is encrypted, for example if the extra images are bundled in some sort of envelope whose format you would then need to discover.
Depending on the actual task at hand, you may try alternatives such as automations within GreaseMonkey (on Firefox) or similar tools.
What about the Bing API?
Note that all the approaches above are akin to screen scraping and hence quite sensitive to even minute changes in the Bing application; depending on usage and context, this could also put the project in a legal grey area... A better approach may be to register for a proper application ID with MS/Bing and use the Bing API.
Are you simulating a browser? Doesn't the Bing engine have an entry point for programs instead, a web service or the like, which would make your task much easier?
EDIT: SDK appears to be here: http://msdn.microsoft.com/en-us/library/cc980922.aspx
Just wanted to post a direct answer to the question:
Bing uses Ajax (of course) for the infinite scroll. Each "tick" is a simple Ajax GET request, which acquires new images.
For instance, this URL returns 30 results (121-150) in the "htmlraw" format for the query "max payne":
http://www.bing.com/images/async?q=max+payne&format=htmlraw&first=121
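From Java you can page through these results by incrementing the first parameter in a loop. A sketch (the endpoint is undocumented, so treat it as best-effort and expect it to change):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class BingImagePager {
    public static void main(String[] args) throws Exception {
        String query = "max+payne";
        for (int first = 1; first <= 91; first += 30) {   // roughly 30 results per page
            URL url = new URL("http://www.bing.com/images/async?q=" + query
                    + "&format=htmlraw&first=" + first);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("User-Agent", "Mozilla/5.0");

            StringBuilder page = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    page.append(line).append('\n');
                }
            }
            // Pull the image URLs out of the returned markup here.
            System.out.println("first=" + first + ": " + page.length() + " chars");
        }
    }
}
```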
Edit:
It works with the original URL too; just add &first=NUMBER to the query string. Example:
www.bing.com/images/search?q=payne&go=&form=QBLH&scope=images&filt=all&first=10
I am building my own bulk image collector (as a learning project for myself) and I found out that it is paginated like this.
FYI, Google and Bing are easy; Yahoo and AltaVista (redundant, since their results come from Yahoo) are far more problematic, as they don't expose the direct link to the original image.
Have fun! :)
This can be done using the count parameter. For example, I tried a GET to "https://api.cognitive.microsoft.com/bing/v7.0/images/search?q=shoes&mkt=en-us&count=30" and it returned 30 images.
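A small Java sketch of that call, assuming you have an Azure Cognitive Services subscription key (the Ocp-Apim-Subscription-Key header is the standard way to authenticate against these endpoints):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class BingImageSearchV7 {
    public static void main(String[] args) throws Exception {
        String key = "YOUR_SUBSCRIPTION_KEY";   // from your Azure portal
        URL url = new URL("https://api.cognitive.microsoft.com/bing/v7.0/images/search"
                + "?q=shoes&mkt=en-us&count=30&offset=0");   // bump offset to page further

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", key);

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // JSON whose "value" array holds the image results
            }
        }
    }
}
```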