Can WireMock play back requests from multiple domains?

I am building a Dockerised record-playback system to help me record websites, so I can design scrapers against a local copy rather than the real thing. This means I do not swamp a website with automated requests, and it has the added advantage that I do not need to be connected to the web to work.
I have used the Java-based WireMock internally, which records from a queue of site scrapes using Wget. I am using the WireMock API to read various pieces of information from the mappings it records.
However, I have spotted from a mapping response that domain information does not seem to be recorded (except where it appears in response headers by accident). See the following response from __admin/mappings:
{
  "result": {
    "ok": true,
    "list": [
      {
        "id": "794d609f-99b9-376d-b6b8-04dab161c023",
        "uuid": "794d609f-99b9-376d-b6b8-04dab161c023",
        "request": {
          "url": "/robots.txt",
          "method": "GET"
        },
        "response": {
          "status": 404,
          "bodyFileName": "body-robots.txt-j9qqJ.txt",
          "headers": {
            "Server": "nginx/1.0.15",
            "Date": "Wed, 04 Jan 2017 21:04:40 GMT",
            "Content-Type": "text/html",
            "Connection": "keep-alive"
          }
        }
      },
      {
        "id": "e246fac2-f9ad-3799-b7b7-066941408b8b",
        "uuid": "e246fac2-f9ad-3799-b7b7-066941408b8b",
        "request": {
          "url": "/about/careers/",
          "method": "GET"
        },
        "response": {
          "status": 200,
          "bodyFileName": "body-about-careers-GhVqy.txt",
          "headers": {
            "Server": "nginx/1.0.15",
            "Date": "Wed, 04 Jan 2017 21:04:35 GMT",
            "Content-Type": "text/html",
            "Last-Modified": "Wed, 04 Jan 2017 12:52:12 GMT",
            "Connection": "keep-alive",
            "X-CACHE-URI": "/about/careers/",
            "Accept-Ranges": "bytes"
          }
        }
      },
      {
        "id": "def378f5-a93c-333e-9663-edcd30c936d7",
        "uuid": "def378f5-a93c-333e-9663-edcd30c936d7",
        "request": {
          "url": "/about/careers/feed/",
          "method": "GET"
        },
        "response": {
          "status": 200,
          "bodyFileName": "body-careers-feed-Fd2fO.xml",
          "headers": {
            "Server": "nginx/1.0.15",
            "Date": "Wed, 04 Jan 2017 21:04:45 GMT",
            "Content-Type": "application/rss+xml; charset=UTF-8",
            "Transfer-Encoding": "chunked",
            "Connection": "keep-alive",
            "X-Powered-By": "PHP/5.3.3",
            "Vary": "Cookie",
            "X-Pingback": "http://www.example.com/xmlrpc.php",
            "Last-Modified": "Thu, 06 Jun 2013 14:01:52 GMT",
            "ETag": "\"765fc03186b121a764133349f8b716df\"",
            "X-Robots-Tag": "noindex, follow",
            "Link": "<http://www.example.com/?p=2680>; rel=shortlink",
            "X-CACHE-URI": "null cache"
          }
        }
      },
      {
        "id": "616ca6d7-6e57-4c10-8b57-f6f3dabc0930",
        "uuid": "616ca6d7-6e57-4c10-8b57-f6f3dabc0930",
        "request": {
          "method": "ANY"
        },
        "response": {
          "status": 200,
          "proxyBaseUrl": "http://www.example.com"
        },
        "priority": 10
      }
    ]
  }
}
The only clear recording of a URL is in the final entry, against proxyBaseUrl, and given that I had to specify a URL in the console call, I am now worried that if I record against a different domain, the domain each mapping came from will be lost.
That would mean that in playback mode, WireMock would only be able to play back from one domain, and I'd have to restart it and point it to another cache in order to play back different sites. This is not workable for my use case, so is there a way around this problem?
(I have done a little work with Mountebank and would be willing to switch to it, though I find WireMock generally easier to use. My limited understanding of Mountebank is that it suffers from the same single-domain problem, though I am happy to be corrected on that. I'd be happy to swap to any robust, open-source, API-driven recording HTTP proxy, if dropping WireMock is the only way forward.)

It's possible to serve WireMock stubs for multiple domains by adding a Host header criterion to your request matching. Assuming your DNS/hosts file maps all the relevant domains to your WireMock server's IP, this will make it behave like virtual hosting on an ordinary web server.
The main issue is that the recorder won't add the Host header to your mappings, so you'd need to do this yourself afterwards, or hack the recorder to do it on the fly, as sketched below.
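For illustration, here is a minimal sketch of what one of the recorded mappings above might look like after adding a Host criterion by hand, using WireMock's standard header-matching syntax (the domain name is a placeholder):
{
  "request": {
    "url": "/robots.txt",
    "method": "GET",
    "headers": {
      "Host": {
        "equalTo": "site-a.example.com"
      }
    }
  },
  "response": {
    "status": 404,
    "bodyFileName": "body-robots.txt-j9qqJ.txt"
  }
}
With a criterion like this on each mapping, recordings of the same path from different domains can coexist, and WireMock will serve whichever one matches the Host header the client sends.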
I've been considering adding better support for this, so watch this space.
I'd also suggest checking out Hoverfly, which seems to solve this problem pretty well already.

Related

RobotFramework POST request with attributes in the body

Hello, I'm trying to test an API with this body:
{
  "customerAttributes": [
    {
      "customerId": 0,
      "id": 0,
      "name": "string",
      "value": "string"
    }
  ],
  "emailAddress": "string",
  "firstName": "string",
  "id": 0,
  "lastName": "string",
  "registered": true
}
Since it has more than one object, I don't know how to make it work. This is what I have so far:
*** Settings ***
Library           RequestsLibrary

*** Variables ***
${Base_URL}=      http://localhost:8082/api/v1

*** Test Cases ***
TC_004_POST_Customer
    Create Session    customer    ${Base_URL}
    ${body}=    Create Dictionary    customerId=100    id=100    name=test    value=0    emailAdress=blabla@gmailcom    firstName=algo    id=101    lastName=testt    registered=true
    ${header}=    Create Dictionary    Content-Type=application/json
    ${response}=    Post On Session    customer    /customer    data=${body}    headers=${header}
    Log To Console    ${response.status_code}
    Log To Console    ${response.content}
Can someone give me a hand? Thanks!
You should be able to do something like this:
${inner}=    Create Dictionary    customerId=100    id=100    name=test    value=0
${array}=    Create List    ${inner}
${body}=    Create Dictionary    customerAttributes=${array}    emailAdress=blabla@gmailcom    firstName=algo    id=101    lastName=testt    registered=True
You have to use the json argument in the POST On Session keyword, like this:
TC_004_POST_Customer
    Create Session    customer    ${Base_URL}
    ${body}=    Create Dictionary    customerId=100    id=100    name=test    value=0    emailAdress=blabla@gmailcom    firstName=algo    id=101    lastName=testt    registered=true
    ${header}=    Create Dictionary    Content-Type=application/json
    ${response}=    Post On Session    customer    /customer    json=${body}    headers=${header}
    Log To Console    ${response.status_code}
    Log To Console    ${response.content}
Here is the documentation for the POST On Session keyword.

NotXContentException when creating ingest pipeline in Elasticsearch 5.1.2 (Solaris)

I am trying to create an ingest pipeline using the below PUT request:
{
  "description": "ContentExtractor",
  "processors": [
    {
      "extractor": {
        "field": "contentData",
        "target_field": "content"
      }
    }
  ]
}
But this results in the following error:
{
  "error": {
    "root_cause": [
      {
        "type": "not_x_content_exception",
        "reason": "Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"
      }
    ],
    "type": "not_x_content_exception",
    "reason": "Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"
  },
  "status": 500
}
I see the below exception in the ES logs:
org.elasticsearch.common.compress.NotXContentException: Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes
    at org.elasticsearch.common.compress.CompressorFactory.compressor(CompressorFactory.java:57) ~[elasticsearch-5.1.2.jar:5.1.2]
    at org.elasticsearch.common.xcontent.XContentHelper.convertToMap(XContentHelper.java:65) ~[elasticsearch-5.1.2.jar:5.1.2]
    at org.elasticsearch.ingest.PipelineStore.validatePipeline(PipelineStore.java:154) ~[elasticsearch-5.1.2.jar:5.1.2]
    at org.elasticsearch.ingest.PipelineStore.put(PipelineStore.java:133) ~[elasticsearch-5.1.2.jar:5.1.2]
This problem happens when Elasticsearch is running on Solaris; the same request works fine on Linux. What am I doing wrong? Can somebody help me fix this issue?
Thanks in advance.
I got the exact same error message, but on a different version of Elasticsearch, and when sending the request in an erroneous data format. I had misread the docs (https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html): the "request body" shown there is expected to be plain JSON; it is not a template for a form-encoded HTTP request body. I was also using the old syntax within the path of the URL (just after the index name):
curl -XPUT -H "Content-Type: application/json" http://host:port/index/_mapping/_doc -d "mappings=@mymapping.json"
Just remove the "mappings=" prefix and the trailing path!
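For instance, the corrected call would look something like this (host, port and file name are the placeholders from above, and mymapping.json is assumed to contain the full plain-JSON request body, e.g. a top-level "mappings" object when creating the index):
curl -XPUT -H "Content-Type: application/json" http://host:port/index -d @mymapping.json
curl's @file syntax sends the file contents verbatim as the request body, which is what Elasticsearch's XContent parser expects.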

Is migration needed from the "old" OneNote API to the new Microsoft Graph API?

I've built a Java library to let Java applications interact with Microsoft OneNote.
I recently discovered that the API has changed:
For getting a specific Section, the URL was
https://www.onenote.com/api/v1.0/me/notes/sections/SECTION_ID
And is now:
https://graph.microsoft.com/v1.0/me/onenote/sections/SECTION_ID
Both are "v1.0", yet they have different response signatures:
OneNote API:
{
  "@odata.context": "https://www.onenote.com/api/v1.0/$metadata#me/notes/sections(parentNotebook(id,name,self),parentSectionGroup(id,name,self))/$entity",
  "id": "SECTION_ID",
  "self": "https://www.onenote.com/api/v1.0/me/notes/sections/SECTION_ID",
  "createdTime": "2014-05-29T08:56:57.223Z",
  "name": "Adresses",
  "createdBy": "xxxx",
  "lastModifiedBy": "xxxx",
  "lastModifiedTime": "2014-06-10T12:55:22.41Z",
  "isDefault": false,
Microsoft Graph API:
{
  "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users('xxx%40live.com')/onenote/sections/$entity",
  "id": "SECTION_ID",
  "self": "https://graph.microsoft.com/v1.0/users/xxx@live.com/onenote/sections/SECTION_ID",
  "createdDateTime": "2014-05-29T08:56:57.223Z",
  "displayName": "Adresses",
  "lastModifiedDateTime": "2014-06-10T12:55:22.41Z",
  "isDefault": false,
  "pagesUrl": "https://graph.microsoft.com/v1.0/users/xxx@live.com/onenote/sections/SECTION_ID/pages",
  "createdBy": {
    "user": {
      "id": "USER_ID",
      "displayName": "xxxx"
    }
  },
  "lastModifiedBy": {
    "user": {
      "id": "USER_ID",
      "displayName": "xxxx"
    }
  },
I wonder if I need to upgrade to the Microsoft Graph API or if it is safe to remain on the OneNote API.
I can't find any documentation about the migration, and all the links that used to point to the old URL now point to the new one...
We encourage you to move to the Microsoft Graph API because it has a single auth token for multiple services like OneNote, OneDrive, SharePoint, etc. But you can stay on the OneNote API, and we still fully support it.
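If you do migrate, the change is mostly the base URL plus a few renamed fields (createdTime becomes createdDateTime, name becomes displayName, as the two responses above show). Below is a minimal, illustrative Java sketch of fetching a section from the new endpoint; how you obtain the OAuth token is assumed, and ACCESS_TOKEN/SECTION_ID are placeholders:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class GraphSectionFetch {
    public static void main(String[] args) throws Exception {
        String sectionId = "SECTION_ID";  // placeholder
        String token = "ACCESS_TOKEN";    // placeholder OAuth bearer token
        URL url = new URL("https://graph.microsoft.com/v1.0/me/onenote/sections/" + sectionId);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", "Bearer " + token);
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                // The JSON here uses displayName/createdDateTime instead of name/createdTime
                System.out.println(line);
            }
        }
    }
}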

How to get a Pinterest access token

I am using the REST API to get an access token for my user using the following link:
https://api.pinterest.com/v1/oauth/token?grant_type=authorization_code&client_id=my id&client_secret=my secret&code="+oauthVerifier+
I am getting this response:
{"status": "failure", "code": 3, "host": "devplatform-devapi-prod-4a8937b3", "generated_at": "Tue, 28 Jun 2016 06:06:56 +0000", "message": "Authorization failed.", "data": null}
What should I do to get my user's access token?
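For what it's worth, an OAuth 2.0 token exchange like this is normally sent as a POST with the parameters in the request body rather than as a GET-style query string, and the code parameter should be the bare value without surrounding quotes. A hedged sketch of the equivalent curl call against the endpoint from the question (the placeholder values are assumptions):
curl -X POST "https://api.pinterest.com/v1/oauth/token" \
  -d "grant_type=authorization_code" \
  -d "client_id=YOUR_CLIENT_ID" \
  -d "client_secret=YOUR_CLIENT_SECRET" \
  -d "code=OAUTH_VERIFIER_CODE"
If the exchange succeeds, the JSON response should contain an access_token field.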

Spring Data Elasticsearch Query Date Format

We are using the Spring Data Elasticsearch library to query our Elasticsearch server. We are currently using a REST call to get the results and have been successful, but we want to use the library.
The working REST query we are sending resembles:
{
  "query": {
    "bool": {
      "must": [
        { "range": { "startDateTime": { "from": "2016-01-31T00:00:00", "to": "2016-02-01T00:00:00" }}},
        { "match_phrase": { "keyword": "task" }}
      ]
    }
  }
}
Using the Spring Data Elasticsearch query derivation tool, we created the method
findMessagesByKeywordAndStartDateTimeBetween(String keyword, String start, String end);
which derives to the query
{
  "from": 0,
  "query": {
    "bool": {
      "must": [
        { "query_string": { "query": "\"tasks\"", "fields": ["keyword"] }},
        { "range": { "startDateTime": { "from": "2016-01-31T00:00:00", "to": "2016-02-01T00:00:00", "include_lower": true, "include_upper": true }}}
      ]
    }
  }
}
I can run this query in a REST client and receive data; however, when the library attempts to query the database, I receive an error:
{
  "timestamp": 1454360466934,
  "status": 500,
  "error": "Internal Server Error",
  "exception": "org.elasticsearch.action.search.SearchPhaseExecutionException",
  "message": "Failed to execute phase [query_fetch], all shards failed; shardFailures {
      [██████████████████████][████████████][0]: RemoteTransportException[
          [██████-██-███-███████][inet[/██.███.███.███:9300]]
          [indices:data/read/search[phase/query+fetch]]]; nested: SearchParseException[
          [████████████][0]: from[0],size[10]: Parse Failure [
              Failed to parse source [
                  {\"from\":0,\"size\":10,\"query\":{\"bool\":{\"must\":[{\"query_string\":{\"query\":\"\"/tasks\"\",\"fields\":[\"method\"]}},{\"range\":{\"startDateTime\":{\"from\":\"2016-01-31T00:00:00.000Z\",\"to\":\"2016-02-01T00:00:00.000Z\",\"include_lower\":true,\"include_upper\":true}}}]}}}
              ]
          ]
      ];
      nested: NumberFormatException[For input string: \"2016-01-31T00:00:00.000Z\"];
  }",
  "path": "/report/tasks"
}
This led us to believe that the date we are asking for is not in the correct format to match against database items, but a sample result looks like:
{
  "_index": "████████████",
  "_type": "████████████",
  "_id": "████████████",
  "_score": 0.000,
  "_source": {
    "keyword": "tasks",
    "endDateTime": "2016-01-15T00:57:31.427Z",
    "startDateTime": "2016-01-15T00:57:30.201Z",
    "@timestamp": "2016-01-15T00:57:31+00:00",
    "responseBody": "{...stuff goes here...}"
  }
},...
So you would think that you would be able to query using that format.
We decided to attempt to get all results with the tasks keyword using a new query
findMessagesByKeyword(String keyword);
which derives to
{
  "from": 0,
  "query": {
    "bool": {
      "must": [
        { "query_string": { "query": "\"tasks\"", "fields": ["keyword"] }}
      ]
    }
  }
}
This returns all the results in a page, and after printing the mapped objects' startDateTime and responseBody fields to the console we see:
10: [
[Thu Oct 15 18:55:53 EDT 2015, {...stuff goes here...}]
[Thu Oct 15 18:56:38 EDT 2015, {...stuff goes here...}]
[Thu Oct 15 18:56:49 EDT 2015, {...stuff goes here...}]
[Thu Oct 15 18:58:59 EDT 2015, {...stuff goes here...}]
[Thu Oct 15 18:59:16 EDT 2015, {...stuff goes here...}]
[Thu Oct 15 18:59:33 EDT 2015, {...stuff goes here...}]
[Thu Oct 15 18:59:54 EDT 2015, {...stuff goes here...}]
[Thu Oct 15 19:00:02 EDT 2015, {...stuff goes here...}]
[Thu Oct 15 19:00:02 EDT 2015, {...stuff goes here...}]
[Thu Oct 15 19:00:11 EDT 2015, {...stuff goes here...}]
] //These are paged results, there are results as recently as last week, just not in this page
I notice that the date-time field is now in a different format, so I use the format
public String DATETIME_FORMAT = "EE MMM dd HH:mm:ss zz yyyy";
instead of
public String DATETIME_FORMAT = "yyyy-MM-dd'T'00:00:00.000'Z'";
and get the error
NumberFormatException[For input string: \"Sun Jan 31 00:00:00 EST 2016\"]
The mapping for the field, if that helps, is
"startDateTime": {
"type": "date",
"format": "dateOptionalTime"
},
We have tried many formats and data types. When we changed the format to
yyyMMddHHmmss
we no longer receive the error, but get no results.
At this point we know we must be doing something wrong, but are not sure where to go. All help is greatly appreciated.
Edit (Feb 2, 2016, 10:15 AM):
Thanks to @Richa: after converting the date to long (in milliseconds) the query runs, but no results are returned.
This was run on the default time range of yesterday until today, and on a manual time range of about 10 days which I know contains about 300 records.
I am also able to verify using the current REST implementation that there is data, and I have used a REST client to triple-check, but there is no data from the Spring Data implementation.
Thoughts?
I got this error in my project too, and got it resolved using the Long datatype. The dynamic finder provided by Spring Data which you are using takes the date argument as a String; convert the date to milliseconds and pass it as a Long. Use the method as:
findMessagesByKeywordAndStartDateTimeBetween(String keyword, Long start, Long end);
Hope this helps.
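For illustration, a minimal sketch of that conversion, assuming a Spring Data repository exposing the finder above (the repository and Message types are placeholder names):
import java.text.SimpleDateFormat;

public class DateRangeQueryExample {
    public static void main(String[] args) throws Exception {
        // Parse the ISO-style bounds used in the original query...
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
        // ...and convert them to epoch milliseconds.
        Long start = fmt.parse("2016-01-31T00:00:00").getTime();
        Long end = fmt.parse("2016-02-01T00:00:00").getTime();
        // Hypothetical repository call matching the finder signature above:
        // List<Message> results = repository.findMessagesByKeywordAndStartDateTimeBetween("task", start, end);
        System.out.println(start + " .. " + end);
    }
}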
