Robot Framework POST request with attributes in the body - Java

Hello, I'm trying to test an API with this body:
{
  "customerAttributes": [
    {
      "customerId": 0,
      "id": 0,
      "name": "string",
      "value": "string"
    }
  ],
  "emailAddress": "string",
  "firstName": "string",
  "id": 0,
  "lastName": "string",
  "registered": true
}
Since it has more than one object, I don't know how to make it work. This is what I have so far:
*** Settings ***
Library    RequestsLibrary

*** Variables ***
${Base_URL}=    http://localhost:8082/api/v1

*** Test Cases ***
TC_004_POST_Customer
    Create session    customer    ${Base_URL}
    ${body}=    create dictionary    customerId=100    id=100    name=test    value=0    emailAdress=blabla#gmailcom    firstName=algo    id=101    lastName=testt    registered=true
    ${header}=    create dictionary    Content-Type=application/json
    ${response}=    Post On Session    customer    /customer    data=${body}    headers=${header}
    log to console    ${response.status_code}
    log to console    ${response.content}
Can someone help me out? Thanks!

You should be able to do something like this:
${inner}=    Create Dictionary    customerId=100    id=100    name=test    value=0
${array}=    Create List    ${inner}
${body}=    Create Dictionary    customerAttributes=${array}    emailAddress=blabla#gmailcom    firstName=algo    id=101    lastName=testt    registered=${True}

You have to use the json argument in the POST On Session keyword, and build the nested body as shown above, like this:
TC_004_POST_Customer
    Create Session    customer    ${Base_URL}
    ${inner}=    Create Dictionary    customerId=100    id=100    name=test    value=0
    ${array}=    Create List    ${inner}
    ${body}=    Create Dictionary    customerAttributes=${array}    emailAddress=blabla#gmailcom    firstName=algo    id=101    lastName=testt    registered=${True}
    ${header}=    Create Dictionary    Content-Type=application/json
    ${response}=    POST On Session    customer    /customer    json=${body}    headers=${header}
    Log To Console    ${response.status_code}
    Log To Console    ${response.content}
Here is the documentation for the POST On Session keyword.

Related

Why can't a user with the 'read' role on the database list the collections?

Given a normal user ('simpleROUser', with only the 'read' role on the database), an error is thrown when attempting to list collections.
The error message is:
Exception in thread "main" com.mongodb.MongoCommandException: Command failed with error 13 (Unauthorized): 'not authorized on wmMonitoring to execute command { listCollections: 1, cursor: {}, $db: "wmMonitoring", ...' on server xxxxxxx:27001. The full response is {"operationTime": {"$timestamp": {"t": 1614169303, "i": 1}}, "ok": 0.0, "errmsg": "not authorized on wmMonitoring to execute command { listCollections: 1, cursor: {}, $db: \"wmMonitoring\", ...
However, changing only the credentials to a user with the 'root' role, it works (it lists all the collections under the database 'wmMonitoring').
I've checked the 'simpleROUser' privileges, and 'listCollections' is there.
rs-dev-00:PRIMARY> grants = db.getUser( "simpleROUser", { showCredentials: true, showPrivileges: true, showAuthenticationRestrictions: true } )
rs-dev-00:PRIMARY> grants.user
simpleROUser
rs-dev-00:PRIMARY> grants.inheritedPrivileges
[
  {
    "resource" : {
      "db" : "wmISMonitoring",
      "collection" : ""
    },
    "actions" : [
      ...
      "listCollections",
      ...
    ]
  },
  ...
]
rs-dev-00:PRIMARY>
So... what am I missing?
More info:
MongoDB server: Percona distribution, v4.4.1-3
Mongo Java Driver: v4.2.1
Found the issue.
In Brazilian Portuguese, this is also referred to as "dedo gordo" (fat finger).
There was a typo in the grant: the actual database name is wmMonitoring.
Once the database name in the grant command was fixed, everything worked.

Spring Cloud Config Server: error: No such label: master

When I access the URL http://localhost:8888/actuator/health I get this error:
{
  "status": "DOWN",
  "details": {
    "diskSpace": {
      "status": "UP",
      "details": {
        "total": 457192763392,
        "free": 347865096192,
        "threshold": 10485760
      }
    },
    "refreshScope": {
      "status": "UP"
    },
    "configServer": {
      "status": "DOWN",
      "details": {
        "repository": {
          "application": "app",
          "profiles": "default"
        },
        "error": "org.springframework.cloud.config.server.environment.NoSuchLabelException: No such label: master"
      }
    }
  }
}
my application.yml
By default, Spring Cloud Config Server tries to get properties from the "master" branch of the Git repository. Your repository doesn't have it (you have a "main" branch instead).
You can use the default-label property to set a custom branch name (see the docs):
spring:
  cloud:
    config:
      server:
        git:
          default-label: main
Or, you can rename your branch to master and leave all other things as they are.
Check your property file name and the file name used in the request.
I was getting the same error and resolved it by making the two changes below.
1st Change:
spring.cloud.config.server.git.uri=E:\\\\spring_microservice\\\\git-localconfig-repo
Removed the file:///... prefix and added four backslashes (\\\\) as the separator.
2nd Change:
Actually, I was making the request with an incorrect property file name. I was making the request like http://localhost:8888/limit-servers/default, but the correct file name is limit-server.properties, so I corrected the request as below.
http://localhost:8888/limit-server/default
And it worked for me.
I was getting the same issue, and resolved it with the change below:
spring.cloud.config.server.git.uri=E:/spring_microservice/git-localconfig-repo
I added forward slashes (/) everywhere.
You can simply add the following to your configuration file, application.properties or application.yml:
- For people that use an application.properties file:
spring.cloud.config.server.git.default-label=main
- For people that use an application.yml file:
spring:
  cloud:
    config:
      server:
        git:
          default-label: main

Java Azure Function Not Executing Locally

Hi all, I'm working on Azure Functions. I'm new to this and have created a local Java Azure Functions project using the archetype below:
mvn archetype:generate -DgroupId=com.mynew.serverlesstest -DartifactId=serverlessexample -DarchetypeGroupId=com.microsoft.azure -DarchetypeArtifactId=azure-functions-archetype -DinteractiveMode=false
The template has a simple Java function. Upon executing "mvn clean package", function.json gets generated in the target folder for the function. Below is my function.json:
{
  "scriptFile" : "..\\serverlessexample-1.0-SNAPSHOT.jar",
  "entryPoint" : "com.mynew.serverlesstest.Function.hello",
  "bindings" : [ {
    "type" : "httpTrigger",
    "name" : "req",
    "direction" : "in",
    "authLevel" : "anonymous",
    "methods" : [ "get", "post" ]
  }, {
    "type" : "http",
    "name" : "$return",
    "direction" : "out"
  } ],
  "disabled" : false
}
On running mvn azure-functions:run, the application starts successfully, and I get the following in the command prompt:
[06-04-2020 07:26:55] Initializing function HTTP routes
[06-04-2020 07:26:55] Mapped function route 'api/hello' [get,post] to 'hello'
[06-04-2020 07:26:55] Mapped function route 'api/HttpTrigger-Java' [get,post] to 'HttpTrigger-Java'
[06-04-2020 07:26:55]
[06-04-2020 07:26:55] Host initialized (424ms)
[06-04-2020 07:26:55] Host started (433ms)
[06-04-2020 07:26:55] Job host started
Http Functions:

        hello: [GET,POST] http://localhost:7071/api/hello

        HttpTrigger-Java: [GET,POST] http://localhost:7071/api/HttpTrigger-Java

Hosting environment: Production
Content root path: C:\Users\ramaswamys\Development\azure-serverless\serverlessexample\target\azure-functions\serverlessexample-20200403205054646
Now listening on: http://0.0.0.0:7071
Application started. Press Ctrl+C to shut down.
[06-04-2020 07:27:00] Host lock lease acquired by instance ID '000000000000000000000000852CF5C4'.
But when I try hitting the API (http://localhost:7071/api/hello) from Postman, I don't get any response. I see the below in the command prompt:
[06-04-2020 07:29:04] Executing HTTP request: {
[06-04-2020 07:29:04] "requestId": "af46115f-7a12-49a9-87e0-7fb073a66450",
[06-04-2020 07:29:04] "method": "GET",
[06-04-2020 07:29:04] "uri": "/api/hello"
[06-04-2020 07:29:04] }
[06-04-2020 07:29:05] Executing 'Functions.hello' (Reason='This function was programmatically called via the host APIs.', Id=7c712cdf-332f-413f-bda2-138f9b89025b)
After this, nothing happens. After 30 minutes I get a timeout exception like the one below in the command prompt:
Microsoft.Azure.WebJobs.Host: Timeout value of 00:30:00 was exceeded by function: Functions.hello.
Can someone suggest what might be causing this and why no response is seen in Postman? Am I doing anything wrong here? Am I missing any configuration? Timely help would be appreciated.
Please have a check of your function code.
From the info you offer, the trigger has already fired, which means the code inside your function is not completing. In other words, you successfully triggered the function, but it is stuck in the function's internal logic. Please check your code.
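For comparison, a minimal handler that completes immediately would look roughly like the sketch below. The package, class and method names are taken from your function.json; the body itself is only an assumption, not your actual code. The important point is that the method builds and returns an HttpResponseMessage promptly instead of blocking or looping:

package com.mynew.serverlesstest;

import java.util.Optional;

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.HttpMethod;
import com.microsoft.azure.functions.HttpRequestMessage;
import com.microsoft.azure.functions.HttpResponseMessage;
import com.microsoft.azure.functions.HttpStatus;
import com.microsoft.azure.functions.annotation.AuthorizationLevel;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.HttpTrigger;

public class Function {

    // Sketch of a handler that returns immediately, so the host can complete
    // the invocation instead of waiting for the 30-minute timeout.
    @FunctionName("hello")
    public HttpResponseMessage hello(
            @HttpTrigger(name = "req",
                         methods = {HttpMethod.GET, HttpMethod.POST},
                         authLevel = AuthorizationLevel.ANONYMOUS)
            HttpRequestMessage<Optional<String>> request,
            ExecutionContext context) {

        context.getLogger().info("Java HTTP trigger processed a request.");
        String name = request.getQueryParameters().getOrDefault("name", "world");
        return request.createResponseBuilder(HttpStatus.OK)
                      .body("Hello, " + name)
                      .build();
    }
}

If your real hello method never reaches a return statement (for example, it waits indefinitely on an external call), the host keeps the invocation open until the 30-minute timeout you are seeing is reached.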

How to ignore Avro schema by different Confluent Registry source?

I produce messages with the same Avro schema to one topic using different Confluent Registry sources. I get this error when I consume the topic:
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition XXXXX_XXXX_XXX-0 at offset 0. If needed, please seek past the record to continue consumption.
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 7
Caused by: org.apache.kafka.common.errors.SerializationException: Could not find class XXXXX_XXXX_XXX specified in writer's schema whilst finding reader's schema for a SpecificRecord.
How can I ignore the differing Avro message IDs?
Schema:
{
  "type": "record",
  "name": "XXXXX_XXXX_XXX",
  "namespace": "aa.bb.cc.dd",
  "fields": [
    {
      "name": "ACTION",
      "type": [
        "null",
        "string"
      ],
      "default": null,
      "doc": "action"
    },
    {
      "name": "EMAIL",
      "type": [
        "null",
        "string"
      ],
      "default": null,
      "doc": "email address"
    }
  ]
}
The produced message:
{"Action": "A", "EMAIL": "xxxx#xxx.com"}
It's not possible to use different Registry URLs in producers and still be able to consume the messages consistently.
The reason is that a different ID will be placed in the topic for each registry.
The schema ID lookup cannot be skipped.
If you had used the same registry, the same schema payload would always generate the same ID, which the consumer would then be able to use consistently to read messages.
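For illustration only, the sketch below (the class and method are made up for this answer) shows the standard Confluent wire format that carries this ID: every serialized value starts with a magic byte of 0 followed by a 4-byte big-endian schema ID, and the consumer resolves that ID against the single registry configured via schema.registry.url.

import java.nio.ByteBuffer;

public class WireFormatPeek {

    // Confluent-serialized messages begin with a magic byte (0) followed by a
    // 4-byte big-endian schema ID; the Avro payload comes after that prefix.
    public static int schemaId(byte[] value) {
        ByteBuffer buffer = ByteBuffer.wrap(value);
        if (buffer.get() != 0) {
            throw new IllegalArgumentException("Not in Confluent wire format");
        }
        return buffer.getInt(); // this ID only makes sense to the registry that issued it
    }
}

An ID issued by one registry cannot be looked up reliably through another, which is why the consumer breaks once messages produced against different registries are mixed in the same topic.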

Can WireMock play back requests from multiple domains?

I am building a Dockerised record-playback system to help me record websites, so I can design scrapers against a local version rather than the real thing. This means I do not swamp a website with automated requests, and it has the added advantage that I do not need to be connected to the web to work.
I have used the Java-based WireMock internally, which records from a queue of site scrapes using Wget. I am using the WireMock API to read various pieces of information from the mappings it records.
However, I have spotted from a mapping response that domain information does not seem to be recorded (except where it is in response headers by accident). See the following response from __admin/mappings:
{
  "result": {
    "ok": true,
    "list": [
      {
        "id": "794d609f-99b9-376d-b6b8-04dab161c023",
        "uuid": "794d609f-99b9-376d-b6b8-04dab161c023",
        "request": {
          "url": "/robots.txt",
          "method": "GET"
        },
        "response": {
          "status": 404,
          "bodyFileName": "body-robots.txt-j9qqJ.txt",
          "headers": {
            "Server": "nginx/1.0.15",
            "Date": "Wed, 04 Jan 2017 21:04:40 GMT",
            "Content-Type": "text/html",
            "Connection": "keep-alive"
          }
        }
      },
      {
        "id": "e246fac2-f9ad-3799-b7b7-066941408b8b",
        "uuid": "e246fac2-f9ad-3799-b7b7-066941408b8b",
        "request": {
          "url": "/about/careers/",
          "method": "GET"
        },
        "response": {
          "status": 200,
          "bodyFileName": "body-about-careers-GhVqy.txt",
          "headers": {
            "Server": "nginx/1.0.15",
            "Date": "Wed, 04 Jan 2017 21:04:35 GMT",
            "Content-Type": "text/html",
            "Last-Modified": "Wed, 04 Jan 2017 12:52:12 GMT",
            "Connection": "keep-alive",
            "X-CACHE-URI": "/about/careers/",
            "Accept-Ranges": "bytes"
          }
        }
      },
      {
        "id": "def378f5-a93c-333e-9663-edcd30c936d7",
        "uuid": "def378f5-a93c-333e-9663-edcd30c936d7",
        "request": {
          "url": "/about/careers/feed/",
          "method": "GET"
        },
        "response": {
          "status": 200,
          "bodyFileName": "body-careers-feed-Fd2fO.xml",
          "headers": {
            "Server": "nginx/1.0.15",
            "Date": "Wed, 04 Jan 2017 21:04:45 GMT",
            "Content-Type": "application/rss+xml; charset=UTF-8",
            "Transfer-Encoding": "chunked",
            "Connection": "keep-alive",
            "X-Powered-By": "PHP/5.3.3",
            "Vary": "Cookie",
            "X-Pingback": "http://www.example.com/xmlrpc.php",
            "Last-Modified": "Thu, 06 Jun 2013 14:01:52 GMT",
            "ETag": "\"765fc03186b121a764133349f8b716df\"",
            "X-Robots-Tag": "noindex, follow",
            "Link": "<http://www.example.com/?p=2680>; rel=shortlink",
            "X-CACHE-URI": "null cache"
          }
        }
      },
      {
        "id": "616ca6d7-6e57-4c10-8b57-f6f3dabc0930",
        "uuid": "616ca6d7-6e57-4c10-8b57-f6f3dabc0930",
        "request": {
          "method": "ANY"
        },
        "response": {
          "status": 200,
          "proxyBaseUrl": "http://www.example.com"
        },
        "priority": 10
      }
    ]
  }
}
The only clear recording of a URL is in the final entry against proxyBaseUrl, and given that I had to specify a URL in the console call, I am now worried that if I record against a different domain, the domain each one came from will be lost.
That would mean that in playback mode, WireMock would only be able to play back from one domain, and I'd have to restart it and point it to another cache in order to play back different sites. This is not workable for my use case, so is there a way around this problem?
(I have done a little work with Mountebank, and would be willing to switch to it, though I find WireMock generally easier to use. My limited understanding of Mountebank is that it suffers from the same single-domain problem, though I am happy to be corrected on that. I'd be happy to swap to any robust open-source, API-driven recording HTTP proxy, if dropping WireMock is the only way forward.)
It's possible to serve WireMock stubs for multiple domains by adding a Host header criterion to your request matching. Assuming your DNS/hosts file maps all the relevant domains to your WireMock server's IP, this will cause it to behave like virtual hosting on an ordinary web server.
The main issue is that the recorder won't add the Host header to your mappings, so you'd need to do this yourself afterwards, or hack the recorder to do it on the fly.
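As a rough sketch of what that post-processing could produce (the second hostname and its body are invented for the example; the first stub reuses values from the recording above), Host-based matching looks like this in WireMock's Java DSL:

import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class MultiDomainStubs {

    // Two stubs for the same path, distinguished only by the Host header, so a
    // single WireMock instance can answer for several recorded domains.
    // Assumes the static WireMock client is already configured to talk to the
    // running server (e.g. the default localhost instance).
    public static void register() {
        stubFor(get(urlEqualTo("/robots.txt"))
                .withHeader("Host", equalTo("www.example.com"))
                .willReturn(aResponse()
                        .withStatus(404)
                        .withBodyFile("body-robots.txt-j9qqJ.txt")));

        stubFor(get(urlEqualTo("/robots.txt"))
                .withHeader("Host", equalTo("www.other-site.example")) // hypothetical second domain
                .willReturn(aResponse()
                        .withStatus(200)
                        .withBody("User-agent: *\nDisallow:")));
    }
}

With stubs like these, one WireMock instance can play back several recorded sites, as long as each request reaches it with the original Host header intact.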
I've been considering adding better support for this, so watch this space.
I'd also suggest checking out Hoverfly, which seems to solve this problem pretty well already.
