I am currently developing some API tests using rest-assured. More specifically, I am trying to test Stripe's payment gateway (3DS). The way it works is that I subscribe a user through our API endpoint, and the response includes an external link to Stripe's website where the user fills out a form with their payment details.
This part is fine and dandy.
The problem lies ahead. I need to send a POST call to this form with the form's payload. However, the POST payload that Stripe expects includes elements that are not part of the form itself, such as muid, guid, sid, key, and payment-user-agent. After doing some research, I see that these elements are provided when the user first loads the form. However, they are returned by an intermediate API call made while the page is loading; if I simply GET the form page, I will not retrieve them. The page load calls other endpoints sequentially, and at some point it hits an endpoint with an encrypted request payload, which I cannot reproduce in my automated tests, and it is that call which returns the muid, guid, and the rest.
I was wondering if it were possible to get the response of the intermediate API call somehow?
I have researched methods that might achieve this, but I have yet to come up with anything API-testing related.
I cannot use a Selenium script to submit the form for me, as I need this to stay API-testing related, with no GUI browser involved. I could potentially run Selenium headless, but I was wondering whether there is anything more lightweight, or a simpler way to retrieve the response of the intermediate API call, before I fall back to that solution.
Related
I know what I am asking is somewhat weird. There is a web application (whose source code we don't have access to), and we want to expose a few of its features as web services.
I was thinking of using something like Selenium WebDriver, so that I can simulate web clicks on the application according to the web service request.
I want to know whether this is a good approach, or whether there is a better solution or pattern for doing this.
I should mention that the application is written in Java with Spring MVC (it is not an SPA) and Spring Security, and there is a CAS server providing SSO.
There are multiple ways to implement this. In my opinion Selenium/PhantomJS is not the best option: if the web app is properly designed, you can interact with it using just the HTML it serves, or even an existing API, rather than needing all the CSS and executing the asynchronous JavaScript requests. Since your page is not an SPA, it is quite likely that an "API" already exists in the form of GET/POST requests, and you might be lucky enough that there is no CSRF protection.
First of all, you need to solve the authentication against CAS. There are multiple grant types in OAuth, but the goal is to obtain an API token that gives you access to the application. This token should be added as an HTTP header or a cookie on every single request. Ideally this token shouldn't expire; otherwise you'll need to implement re-authentication logic in your app.
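A minimal sketch of that idea with Spring's RestTemplate, assuming you already obtained the token string from the CAS/OAuth exchange (the token value and header name below are placeholders, not real credentials):

```java
import java.util.Collections;

import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.web.client.RestTemplate;

public class AuthenticatedClient {

    // Placeholder: obtain this from your CAS/OAuth flow; it is not a real value.
    private static final String API_TOKEN = "replace-with-token-from-cas";

    public static RestTemplate restTemplate() {
        RestTemplate template = new RestTemplate();

        // Attach the token to every outgoing call; a Cookie header works the same way.
        ClientHttpRequestInterceptor addAuth = (HttpRequest request, byte[] body,
                                                ClientHttpRequestExecution execution) -> {
            request.getHeaders().add("Authorization", "Bearer " + API_TOKEN);
            return execution.execute(request, body);
        };

        template.setInterceptors(Collections.singletonList(addAuth));
        return template;
    }
}
```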
Once the authentication part is resolved, you'll need quite a lot of patience: open the target website with the web inspector of your preferred browser, go to the Network panel, and execute the actions that you want to run programmatically. There you'll find each request with all its headers and content, and the corresponding response.
That's what you need to code. There are plenty of libraries to achieve this in Java. You can have a look at Jsoup if you need to parse HTML, but for plain GET/POST requests, go for RestTemplate (in Spring) or the JAX-RS/Jersey 2 client.
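For example, replaying a captured form POST with RestTemplate and picking the interesting bits out of the returned HTML with Jsoup could look roughly like this; the URL, field names, cookie and CSS selector are placeholders for whatever your inspector shows:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
import org.springframework.web.client.RestTemplate;

public class FormReplay {

    public static void main(String[] args) {
        RestTemplate rest = new RestTemplate();

        // Reproduce the form fields exactly as captured in the browser's Network panel.
        MultiValueMap<String, String> form = new LinkedMultiValueMap<>();
        form.add("username", "jdoe");
        form.add("comment", "hello");

        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_FORM_URLENCODED);
        headers.add(HttpHeaders.COOKIE, "SESSION=copied-from-the-inspector");

        ResponseEntity<String> response = rest.postForEntity(
                "https://target-app.example.com/comments",
                new HttpEntity<>(form, headers),
                String.class);

        // If the app answers with HTML rather than JSON, Jsoup can pull out the bits you need.
        Document doc = Jsoup.parse(response.getBody());
        System.out.println(doc.select("div.confirmation").text());
    }
}
```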
You might consider adding a cache layer to improve performance if the result of a query stays the same over time, or if you can assume that for, say, 5 minutes the response to the same query will not change.
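A deliberately tiny sketch of such a cache, using nothing but the JDK and assuming a 5-minute freshness window:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

/** Tiny illustrative cache: a scraped response is reused for 5 minutes per query key. */
public class ScrapeCache {

    private static final class Entry {
        final String value;
        final Instant fetchedAt;
        Entry(String value, Instant fetchedAt) { this.value = value; this.fetchedAt = fetchedAt; }
    }

    private final Map<String, Entry> entries = new ConcurrentHashMap<>();
    private final Duration ttl = Duration.ofMinutes(5);

    public String getOrFetch(String queryKey, Supplier<String> fetcher) {
        Entry cached = entries.get(queryKey);
        if (cached != null && cached.fetchedAt.plus(ttl).isAfter(Instant.now())) {
            return cached.value;                        // still fresh, reuse it
        }
        String fresh = fetcher.get();                   // e.g. the form replay shown above
        entries.put(queryKey, new Entry(fresh, Instant.now()));
        return fresh;
    }
}
```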
You can create your app in your favourite language/framework. I'd recommend starting with Spring Boot + MVC + DevTools. That contains all you need, plus Jsoup if you need to parse some HTML. Later on you can add a cache provider if needed.
We do something similar to access web banking on behalf of a user, scrape his account data and obtain a credit score. In most cases, we have managed to reverse-engineer mobile apps and sniff traffic to use undocumented APIs. In others, we have to fall back to web scraping.
Broadly, there are two types of applications you might have to scrape:
Data is essentially the same for any user, like product listings in Amazon
Data is specific to each user, like in a banking app.
In the first case, you could have your scraper running and populating a local database, and use your local data to provide the web service. In the latter case, you cannot do that, and you need to scrape the site on the user's request.
I understand from your explanation that you are in this latter case.
When web scraping you can find really difficult web apps:
Some may require you to send data from previous requests to the next
Others render most data on the client with JavaScript
If either of these is your case, Selenium will make your implementation easier, though not performant.
Implementing the first without Selenium will require lots of trial and error to get things working, because you will be simulating the requests yourself and will need to know exactly what data the server expects from the client. If you use Selenium, on the other hand, you will be executing the same interactions you perform in the browser, and hence sending the expected data.
Implementing the second case requires your scraper to support JavaScript. AFAIK the best support is provided by Selenium. HtmlUnit claims to provide fair support, and I think Jsoup provides no JavaScript support at all.
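If you do end up in that second case and want something lighter than a full browser, HtmlUnit can be driven roughly like this. A minimal sketch: the URL is a placeholder, and the package name varies between HtmlUnit versions (older releases use com.gargoylesoftware.htmlunit, newer ones org.htmlunit):

```java
import com.gargoylesoftware.htmlunit.BrowserVersion;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class JsScrape {

    public static void main(String[] args) throws Exception {
        try (WebClient webClient = new WebClient(BrowserVersion.CHROME)) {
            webClient.getOptions().setCssEnabled(false);                  // CSS is not needed for scraping
            webClient.getOptions().setThrowExceptionOnScriptError(false); // tolerate broken site scripts

            HtmlPage page = webClient.getPage("https://target-app.example.com/page");
            webClient.waitForBackgroundJavaScript(10_000);                // let async requests finish

            System.out.println(page.asXml());                             // DOM after JavaScript ran
        }
    }
}
```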
Finally, if your solution takes too much time, you can mitigate the problem by providing your web service with a notification mechanism, similar to Webhooks or Resthooks (there is a sketch of this flow after the list):
A client of your web service would make a request for data, providing a URI at which they would like to be notified when the results are ready.
Your service would respond immediately with an id for the request and start scraping the necessary info in the background.
If you use a skinny-payload model, then when the scraping is done you store the response in your data store with an id identifying the original request. This response will be exposed as a resource.
You would then execute an HTTP POST to the URI provided by the client, adding the URI of the response resource in the body of the request.
The client can now GET the response resource, and because the request and response share the same id, the client can correlate both.
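A rough Spring MVC sketch of that notification flow; the endpoint paths, the in-memory store and the scrape(...) stub are all illustrative assumptions, not a real implementation:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class ScrapeRequestController {

    private final Map<String, String> responsesById = new ConcurrentHashMap<>();
    private final RestTemplate rest = new RestTemplate();

    /** Client asks for data and tells us where to notify; we answer immediately with an id. */
    @PostMapping("/scrape-requests")
    public ResponseEntity<String> submit(@RequestParam String query,
                                         @RequestParam String callbackUri) {
        String id = UUID.randomUUID().toString();
        CompletableFuture.runAsync(() -> {
            String result = scrape(query);                                  // long-running scraping work
            responsesById.put(id, result);                                  // expose result as a resource
            rest.postForLocation(callbackUri, "/scrape-responses/" + id);   // notify with a skinny payload
        });
        return ResponseEntity.accepted().body(id);
    }

    /** Client follows up here once notified; the shared id correlates request and response. */
    @GetMapping("/scrape-responses/{id}")
    public ResponseEntity<String> result(@PathVariable String id) {
        String body = responsesById.get(id);
        return body == null ? ResponseEntity.notFound().build() : ResponseEntity.ok(body);
    }

    private String scrape(String query) {
        return "scraped data for " + query;                                 // placeholder for the real scraper
    }
}
```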
Selenium isn't the best way to consume web services. Selenium is primarily an automation tool, largely used for testing applications.
Assuming the services are already developed, the first thing we need to do is authenticate the user's request.
This can be done by adding an HTTP header with the key "Authorization" and the value "Basic " + Base64Encode(username + ":" + password).
If the user is valid (the login credentials match the credentials on the server), then generate a unique token, store it on the server mapped to the user id, and set the same token in the response header or in a cookie containing the token.
By doing this we can avoid validating credentials on subsequent requests from the same user, simply by looking for the token in the request header or cookie.
If the services are designed to check the login every time, the "Authorization" header needs to be set on every request that is made.
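A small illustration of both sides of that flow in plain Java; the header names and the in-memory token store are assumptions for the sake of the example:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class TokenAuthExample {

    // Server-side: token issued after the Basic credentials check out, keyed by user id.
    private static final Map<String, String> TOKENS_BY_USER = new ConcurrentHashMap<>();

    /** Client side: what goes into the "Authorization" header on the first request. */
    static String basicAuthHeader(String username, String password) {
        String raw = username + ":" + password;
        return "Basic " + Base64.getEncoder().encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    /** Server side: after validating the credentials, issue a token and remember it. */
    static String issueToken(String userId) {
        String token = UUID.randomUUID().toString();
        TOKENS_BY_USER.put(userId, token);
        return token;   // send back in a response header or cookie, e.g. "X-Auth-Token"
    }

    /** Later requests carry only the token; no need to re-check the password each time. */
    static boolean isValid(String userId, String presentedToken) {
        return presentedToken != null && presentedToken.equals(TOKENS_BY_USER.get(userId));
    }
}
```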
I think using a WebDriver is a lot of overhead, but it depends on what you really want to achieve. With the info you provided, I would rather go with a RestTemplate implementation that sends the appropriate HTTP messages to the existing web app, wrap it in a nice @Service layer, and build your web service (REST or SOAP) on top of it.
The authentication is a matter of configuration: you can pack this in a microservice with @EnableOAuth2Sso, and your RestTemplate bean, thanks to Spring Boot, will handle the underlying auth part for you.
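For reference, that wiring could look roughly like the sketch below, assuming the (now-deprecated) Spring Security OAuth autoconfiguration; exact package names differ between Boot versions:

```java
import org.springframework.boot.autoconfigure.security.oauth2.client.EnableOAuth2Sso;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.OAuth2ClientContext;
import org.springframework.security.oauth2.client.OAuth2RestTemplate;
import org.springframework.security.oauth2.client.resource.OAuth2ProtectedResourceDetails;

@Configuration
@EnableOAuth2Sso   // picks up the security.oauth2.* client properties from application.yml
public class SsoClientConfig {

    /** RestTemplate that transparently attaches the user's OAuth2 access token to each call. */
    @Bean
    public OAuth2RestTemplate oauth2RestTemplate(OAuth2ProtectedResourceDetails resource,
                                                 OAuth2ClientContext context) {
        return new OAuth2RestTemplate(resource, context);
    }
}
```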
Maybe overkill... but what about RPA? http://windowsitpro.com/scripting/review-automation-anywhere-enterprise
I've been working with Pub/Sub, and I can successfully pull data from a particular topic under a particular project in Java. If I have to show this data in HTML, I first have to call a servlet method, the servlet then calls the Pub/Sub API to get the data, and I include that data in the response.
Since this involves an additional layer (Java) to access the data, is it possible to fetch the data directly from a JavaScript call, skipping the Java call? Are there any APIs available in Google Pub/Sub that serve this purpose?
The API can be used from JavaScript (with AJAX), since it is just HTTPS calls:
https://cloud.google.com/pubsub/reference/rest/
However, it's a bad idea, because you would need to expose your server account's access token, and a client could then abuse it.
So, I'm currently developing an app for a service which has a JSON-based (unfortunately) read-only API. Retrieving content is no problem at all; however, the only way to post content is through a form on their site whose action is a PHP script. The service is open source, so I know which fields the form expects, but whatever I send, it always results in a BAD REQUEST.
I captured the network traffic in my browser, and as far as I can see the browser constructs a multipart form request; however, when I copy the request and invoke it again using a REST client, a BAD REQUEST is returned.
Is there a way to construct an HTTP request in Android that simulates a form post?
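For reference, this is roughly what I am trying to reproduce, rebuilt with OkHttp; the URL and field names below are placeholders, not the service's real ones:

```java
import java.io.IOException;

import okhttp3.MultipartBody;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

public class FormPoster {

    // Note: on Android this must run off the main thread (e.g. in an AsyncTask or executor).
    public String postForm() throws IOException {
        OkHttpClient client = new OkHttpClient();

        // Field names are placeholders; copy the real ones from the captured multipart request.
        RequestBody body = new MultipartBody.Builder()
                .setType(MultipartBody.FORM)
                .addFormDataPart("title", "my post")
                .addFormDataPart("content", "hello world")
                .build();

        Request request = new Request.Builder()
                .url("https://the-service.example.com/submit.php")   // the form's action URL
                .header("User-Agent", "Mozilla/5.0")                  // some scripts reject non-browser agents
                .post(body)
                .build();

        try (Response response = client.newCall(request).execute()) {
            return response.body().string();
        }
    }
}
```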
If it's read-only, I think you wouldn't be able to make POST requests (those are presumably for editing or adding things).
If I may offer a piece of advice, I recommend using this project as a library:
https://github.com/matessoftwaresolutions/AndroidHttpRestService
It makes it easy to deal with APIs, handle network problems, etc.
You can find a sample of use there.
You only have to:
Build your URL
Tell the component to execute in POST mode
Build your JSON
As I said, I don't even know if it will work.
I hope it helps!!!
I'm running a webapp that checks if a user is logged in with UserService, then shows them their homepage if they are, or redirects them to a login screen if not. Once on the page, I would like to be able to update specific portions using AJAX when they click certain elements. Now, I have already written a REST API in the same GAE project using Cloud Endpoints that gets all the information I want, and so in the spirit of DRY I would rather use my own API than write new servlets to handle these requests.
The problem is that I need to generate an OAuth token in order to access the API. I can easily do this from the Google API JavaScript Client Library, but then my user needs to re-authenticate for the REST API, which is not only bad from a UX perspective, but more importantly exposes my client id in the page's JavaScript and passes a token through (non-SSL) HTTP headers.
The only option I see is to write a servlet for each request and have duplicate work. But conceptually, I'm already logged in to Google, so I should just be able to access the API. How does one usually go about this? Am I thinking about it all wrong?
UserService and OAuth are two different authentication (and authorisation) mechanisms, and you cannot combine them.
If you do need OAuth to access some of the APIs, then also use server-side OAuth. This way you can access the APIs and replace UserService all in one go.
Nowadays many websites contain content loaded by AJAX (e.g., comments on some video websites). Normally we can't crawl this data, and what we get is just some JS source code. So here is the question: in what ways can we execute the JavaScript code after we get the HTML response, and get to the final page we want?
I know that HtmlUnit has the ability to execute background JS, yet it has many bugs and errors. Are there any other tools that can help me with this?
Some people tell me that I can capture the AJAX request URL, analyze its parameters, and send the request again to obtain the data. If things can't work out the way I mentioned above, can anyone tell me how to extract the AJAX URL and send the request in the correct format?
By the way, if the language is Java, that would be best.
Yes, Netwoof can crawl Ajax easily. Its API and bot builder let you do it without a line of code.
That's the great thing about HTTP: you don't even need Java. My go-to tool for debugging AJAX is the Chrome extension Postman. I start by looking at the request in the Chrome debugger and identifying the salient bits (URL, form-encoded params, etc.).
Then it can be as simple as opening a tab and launching requests at the server with Postman. As long as it's all in the same browser context, all of your cookies (for authentication, etc.) will be shipped along too.
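If you prefer to stay in Java once you've identified the request, Jsoup's connection API can replay it. A minimal sketch, where the URL, parameters and cookie are placeholders taken from the Network panel:

```java
import java.util.Map;

import org.jsoup.Connection;
import org.jsoup.Jsoup;

public class AjaxReplay {

    public static void main(String[] args) throws Exception {
        // URL and parameters are placeholders; copy the real ones from the debugger / Postman.
        Connection.Response response = Jsoup.connect("https://example.com/ajax/comments")
                .ignoreContentType(true)                   // the endpoint returns JSON, not HTML
                .cookies(Map.of("SESSION", "copied-from-the-browser"))
                .data("videoId", "12345", "page", "1")
                .method(Connection.Method.GET)
                .execute();

        System.out.println(response.body());               // raw JSON, ready for a JSON parser
    }
}
```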