I am trying to retrieve a list of projects from the OpenStack API, and would like to use pagination in order to retrieve n projects at a time.
In the OpenStack documentation, it states that I can append "/?limit=n" to the URL and up to n results will be fetched accordingly.
However, when executing the GET request to the URL as follows:
https://identity-3.eu-de-1.cloud.sap/v3/auth/projects/?limit=1
I still get ALL projects. I can't seem to understand what I am missing.
NOTE: the request itself works and returns results as needed, but simply ignores the limit parameter (this is not an authentication issue).
I don't think all OpenStack APIs provide a limit parameter.
In the Keystone API documentation, there is no limit parameter among the request parameters described for the /v3/auth/projects API:
keystone-project-API-doc
Other services do provide it; for example, the Cinder volume list API documents a limit parameter:
cinder-volume-API-doc
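If you need paging in the meantime, you could page against a service that does document limit (and marker) support, such as the Cinder volume list above. Below is only a minimal sketch using the Java 11 HTTP client; the endpoint URL, project id and token are placeholders, and it assumes a Cinder v3 deployment that honours the limit parameter.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CinderPagingSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: replace with your Cinder endpoint, project id and a valid token.
        String endpoint = "https://cinder.example.com/v3/<project_id>/volumes";
        String token = "<X-Auth-Token>";

        HttpClient client = HttpClient.newHttpClient();

        // Ask for at most 10 volumes; a 'marker' (the id of the last volume of the
        // previous page) would be appended to fetch subsequent pages.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint + "?limit=10"))
                .header("X-Auth-Token", token)
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON containing at most 10 volumes
    }
}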
I have been trying to do a health check of AWS DynamoDB from a Lambda function in Java using the dynamodb:ListTables action.
However, due to restrictions on the existing role, I am getting AccessDeniedException.
I even tried to list out a specific table name like this:
ListTablesRequest request = new ListTablesRequest().withLimit(10).withExclusiveStartTableName("<existing table name>");
This returned
INFO: List tables request {ExclusiveStartTableName:<existing table name> ,Limit: 10}
It would also be helpful if I could specify a startsWith pattern for the ListTables parameters.
But apart from ListTables, is there any other way of doing a health check on DynamoDB?
If by "health check" you mean check that you have a working correction with the given DynamoDB endpoint, the fastest and easiest way is to send an HTTP or HTTPS request to "/" on the endpoint. The response is a simple "healthy" message:
$ curl https://dynamodb.us-west-2.amazonaws.com/
healthy: dynamodb.us-west-2.amazonaws.com
For better and for worse, this sort of health check doesn't require any authentication or authorization (roles). It's better because it's faster, simpler, and because you said you had a problem with your authorization. But for the same reason, it's worse because it doesn't check your authorization, so it is possible that your health check will succeed but the actual request will not - because you don't have the right permissions.
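If you want to run the same unauthenticated check from Java (for example inside the Lambda function) rather than from curl, a minimal sketch could look like the following; the region/endpoint is just an example, adjust it to the one you use.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class DynamoDbEndpointCheck {
    public static void main(String[] args) throws Exception {
        // Unauthenticated "ping" of the DynamoDB endpoint; adjust the region to yours.
        URL url = new URL("https://dynamodb.us-west-2.amazonaws.com/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(2000);

        int status = conn.getResponseCode(); // 200 if the endpoint is reachable
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            // e.g. "200 healthy: dynamodb.us-west-2.amazonaws.com"
            System.out.println(status + " " + in.readLine());
        }
    }
}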
I am creating a REST service in Java, and have a question about the params for the GET method.
I have to pass the below params in a GET request:
Function
"GET" File status :
Params:
Time Range:(String)
FlowId:(String)
ID_A= or ID_B= or Both (String)
IS_ADD_A= or IS_ADD_B= or both (String)
Regex=(String)
Cookie=XXXXX
So, as there are 6 params, passing them as a query string seems inefficient, and I can't put them in the body (as that is against the HTTP GET specification).
Making this a POST call would be against REST principles, as I only want to get data from the server.
What would be an efficient way of solving this? Passing the params as a query string seems out of the question, passing them in the body is against the HTTP spec, putting them in headers may also not be good, and making this a POST request would violate Fielding's REST principles.
Passing data in the body of an HTTP GET call is not only against the spec but causes problems with various server-side technologies which assume you don't need access to the body in a GET call. (Some client-side frameworks also have issues with GET and a query in the body.) If you have queries with long parameters, I'd go with POST. It's then using POST for getting data, but you'd not be the only one having to go this way to support potentially large queries.
If your parameter values aren't very long, using a query string is your best option here. 6 params is not a lot, as long as you don't exceed the IE limit of 2,048 characters in the URL (http://www.boutell.com/newfaq/misc/urllength.html). For example, the Google search engine uses many more params than 6. If there is a possibility that the URL will exceed the limit above, you should use POST instead.
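To make the query-string option concrete, here is a rough sketch of building such a URL in Java. The parameter names and base URL are hypothetical (they only mirror the list in the question), and the cookie is left out because it is normally sent as a header rather than in the URL.

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class FileStatusUrlBuilder {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // Hypothetical base URL for the "GET file status" resource.
        String baseUrl = "https://example.com/api/files/status";

        // Hypothetical parameter names mirroring the list above.
        Map<String, String> params = new LinkedHashMap<>();
        params.put("timeRange", "2024-01-01/2024-01-31");
        params.put("flowId", "FLOW-42");
        params.put("idA", "123");
        params.put("isAddA", "true");
        params.put("regex", "^report_.*\\.csv$");
        // The cookie is typically sent as a Cookie header, not as a query parameter.

        StringJoiner query = new StringJoiner("&");
        for (Map.Entry<String, String> e : params.entrySet()) {
            query.add(URLEncoder.encode(e.getKey(), "UTF-8")
                    + "=" + URLEncoder.encode(e.getValue(), "UTF-8"));
        }

        // Well under the 2,048 character limit mentioned above.
        System.out.println(baseUrl + "?" + query);
    }
}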
I have a requirement below;
I have a jqGrid that loads JSON data using a web service (RESTful web services) call. When the form loads, I hit the server and load the data into the grid. If I have 50 rows, the grid loads all 50 rows. I used pagination, so it displays only 10 records, and clicking the next button in the pager displays the next 10 records. But my requirement is that on form load I should hit the server and retrieve only 10 records; then when I click Next, I call the web service again and display the next 10 rows. Is this possible? If yes, can you share any samples?
Classical RESTful web services don't support pagination, so one has to return all the data from the server and use client-side pagination. If you have only 50 rows of data, I would recommend you to use client-side pagination. You just need to include the loadonce: true option in the jqGrid and everything should already work. In general, it's recommended to use the loadonce: true option if one doesn't load too much data from the server. There is no exact limit of rows up to which client-side pagination is preferred; it's somewhere around 1,000 or 10,000 rows of data. So in the case of 50 rows of data it's really strongly recommended.
If you really need to implement server-side pagination of RESTful services (in the case of a really large dataset), then your service has to support additional request parameters which have no relation to the resource URL. For example, the Open Data Protocol (OData) URI supports, starting with version 2.0 (see here for example), the parameters $orderby, $skip, $top and $inlinecount, which can be appended to the URL to instruct the server to return the data sorted by $orderby. The returned data should contain only one page of the sorted data, based on the values of the $skip and $top parameters. The URL looks like
http://host:port/path/SampleService.svc/Categories(1)/Products?$top=2&$orderby=Name
\_____________________________________/\_____________________/\___________________/
                   |                              |                     |
           service root URL                 resource path         query options
The old answer provides an example of a jqGrid implementation which calls an Open Data Protocol (OData) web service. I used the serializeGridData callback to fill the $top, $skip, $orderby and $inlinecount parameters which the OData web service "understands". I used the beforeProcessing callback to set the total property based on the count property returned from the server (because $inlinecount: "allpages" is used in the request). If the RESTful web service which you use supports OData too, then you can use the same code.
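For completeness, the server side of such paging can be quite small. The following is only a sketch of a JAX-RS resource that honours $top and $skip; the resource path and in-memory data source are assumptions, and a real OData service would also have to handle $orderby and $inlinecount.

import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

@Path("/products")
public class ProductsResource {

    // Stand-in for a real data source.
    private static final List<String> PRODUCTS = IntStream.rangeClosed(1, 50)
            .mapToObj(i -> "Product " + i)
            .collect(Collectors.toList());

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<String> page(@QueryParam("$skip") @DefaultValue("0") int skip,
                             @QueryParam("$top") @DefaultValue("10") int top) {
        // Return one page of data; the client asks for the next page by increasing $skip.
        return PRODUCTS.stream()
                .skip(skip)
                .limit(top)
                .collect(Collectors.toList());
    }
}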
I have a question about ActiveMQ and the AJAX Interface concerning the life span of a message. In the AMQ web interface, I can set a TimeToLive Value for a message in milliseconds.
I've already found out, that I can use this parameter via REST:
curl -vd body="test" "http://localhost:8161/demo/message/TESTQUEUE?type=queue&JMSTimeToLive=500&JMSPersistent=-1"
This example message will live for 500 ms.
But how can I use the AMQ Ajax Interface to set those parameters?
The JavaScript function to send a message provides only two parameters
amq.sendMessage(myDestination,myMessage);
Info: http://activemq.apache.org/ajax.html
myDestination is unfortunately not a URL; it's something like this: "queue://"
Thanks for your help
Regards
Rolf
The current implementation of the AJAX client does not offer the possibility to send a message with a time to live.
The time to live of the message is basically set in the message properties (headers), via the property "JMSExpiration".
Currently, if you go through the amq.js code, you see there is no API that allows you to define the headers or the time to live.
It should be relatively easy to add this feature to the client. Check the code; you could probably just hardcode the TTL for your application. In the end, it just does a POST in the same way that you do your REST call.
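If patching amq.js is not an option, you can also bypass the AJAX client for the messages that need a TTL and issue the same POST that the REST example above uses. The following is just an illustration of that request in Java, using the host, queue and TTL values from the curl example.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class AmqRestSend {
    public static void main(String[] args) throws Exception {
        // Same endpoint and parameters as the curl example above.
        URL url = new URL("http://localhost:8161/demo/message/TESTQUEUE"
                + "?type=queue&JMSTimeToLive=500&JMSPersistent=-1");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        // The message text is sent as a form field named "body", like curl -d body="test".
        String form = "body=" + URLEncoder.encode("test", "UTF-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(form.getBytes("UTF-8"));
        }

        System.out.println("HTTP " + conn.getResponseCode()); // expect 200
    }
}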
I've been playing with Amazon S3 presigned URLs all night attempting to PUT a file. I generate the presigned URL in java code.
AWSCredentials credentials = new BasicAWSCredentials( accessKey, secretKey );
client = new AmazonS3Client( credentials );
GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest( bucketName, "myfilename", HttpMethod.PUT);
request.setExpiration( new Date( System.currentTimeMillis() + (120 * 60 * 1000) ));
return client.generatePresignedUrl( request ).toString();
I then want to use the generated, presigned URL to PUT a file using curl.
curl -v -H "content-type:image/jpg" -T mypicture.jpg https://mybucket.s3.amazonaws.com/myfilename?Expires=1334126943&AWSAccessKeyId=<accessKey>&Signature=<generatedSignature>
I assumed that, like a GET, this would work on a bucket which is not public (that's the point of presigned, right?) Well, I got access denied on every attempt. Finally, out of frustration, I changed the permission of the bucket to allow EVERYONE to write. Of course, then the presigned URL worked. I quickly removed the EVERYONE permission from the bucket. Now, I don't have permission to delete the item that was uploaded into my bucket by my own self-pre-signed URL. I see now that I probably should have put an x-amz-acl header on what I uploaded. I suspect I'll create several more undelete-able objects before I get that right.
This leads to a few questions:
How can I upload with curl using PUT and a generated presigned URL?
How can I delete the uploaded file and the bucket I created to test it with?
The end goal is that a mobile phone will use this presigned URL to PUT images. I'm trying to get it going in curl as a proof of concept.
Update: I asked a question on the amazon forums. If an answer is provided there I'll put it as an answer here.
This is indeed a bit puzzling, I consider it to be a bug in the AWS SDK for Java (see below) - but first and foremost, the following curl command will upload your file as such (assuming an updated pre-signed URL of course):
curl -v -T mypicture.jpg "https://mybucket.s3.amazonaws.com/myfilename?Expires=1334126943&AWSAccessKeyId=<accessKey>&Signature=<generatedSignature>"
That is, I've excluded the Content-Type header, which yields application/octet-stream (or binary/octet-stream) as a result, which is obviously not desired; thus, further digging had been in order.
Background / Analysis
Pre-signed URLs for PUT (and DELETE as well as HEAD) requests to Amazon S3 are known to work in principle, not the least evidenced in related questions on this site (see e.g. my answer to Upload to s3 with curl using pre-signed URL (getting 403)).
The facilitated Query String Request Authentication Alternative is documented to use the following pseudo-grammar that illustrates the query string request authentication method:
StringToSign = HTTP-VERB + "\n" +
Content-MD5 + "\n" +
Content-Type + "\n" +
Expires + "\n" +
CanonicalizedAmzHeaders +
CanonicalizedResource;
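For reference, the resulting query-string signature is the Base64-encoded HMAC-SHA1 of that StringToSign with your secret key, URL-encoded before being appended to the URL. Here is a minimal sketch based on the values used in this question; the secret key is of course a placeholder.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.net.URLEncoder;
import java.util.Base64;

public class QueryStringSignatureSketch {
    public static void main(String[] args) throws Exception {
        String secretKey = "<secretKey>";            // placeholder
        String stringToSign = "PUT\n"                // HTTP verb
                + "\n"                               // Content-MD5 (empty)
                + "image/jpg\n"                      // Content-Type
                + "1334126943\n"                     // Expires
                + "/mybucket/myfilename";            // CanonicalizedResource

        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA1"));
        String signature = Base64.getEncoder()
                .encodeToString(hmac.doFinal(stringToSign.getBytes("UTF-8")));

        // The signature goes into the URL, so it must be URL-encoded.
        System.out.println("Signature=" + URLEncoder.encode(signature, "UTF-8"));
    }
}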
The StringToSign does include the Content-Type header, and (as you already discovered) this has been the missing piece in some documented cases; see e.g. the AWS team response to GetPreSignedURL with PUT request, which yielded a working pre-signed URL once the content type was added.
This is easy to achieve with the AWS SDK for .NET indeed, which provides the convenience method GetPreSignedUrlRequest.WithContentType to do just that:
Sets the ContentType property for this request. This property defaults
to "binary/octet-stream", but if you require something else you can
set this property.
Accordingly, extending the respective sample Upload an Object Using Pre-Signed URL - AWS SDK for .NET as follows yields a working pre-signed URL with content type, that can be uploaded via curl as expected (i.e. exactly as you attempted to):
// ...
GetPreSignedUrlRequest request = new GetPreSignedUrlRequest();
// ...
request.WithContentType("image/jpg");
// ...
Now, one would like to extend the semantically identical sample Upload an Object Using Pre-Signed URL - AWS SDK for Java in a similar fashion, but (as you've discovered already as well), there is no dedicated method to achieve this. This might just be a lacking convenience method though and could be achievable via addRequestParameter() or setResponseHeaders() eventually, e.g.:
// ...
request.setExpiration( new Date( System.currentTimeMillis() + (120 * 60 * 1000) ));
request.addRequestParameter("content-type", "image/jpg");
return client.generatePresignedUrl( request ).toString();
// ...
However, both methods' documentation suggests other purposes, and indeed it doesn't work: they always yield an identical signature, no matter which content type is set like this (if any).
Debugging further into the SDKs reveals that both provide a semantically similar core method to calculate the query string authentication according to the pseudo-grammar referenced above; see buildSigningString() for .NET and makeS3CanonicalString() for Java.
But the respective code in the Java version to "Add all interesting headers to a list, then sort them", where "interesting" is defined as Content-MD5, Content-Type, Date, and x-amz-, is in fact never executed, because there is no way to provide these headers: they are only available on class DefaultRequest and not on class GeneratePresignedUrlRequest, which is used to initialize the former and in turn serves as the input for calculating the signature; see the protected method createRequest().
Interestingly/Notably, the two methods to calculate the query string authentication in .NET vs. Java compose their input from an almost inverse combination of header vs. parameter sources on the call stack, which could hint at the cause of the Java bug, but obviously that might as well just be difficult to decipher, i.e. the internal architecture could simply differ significantly.
Preliminary Conclusion
There are two angles to this:
The AWS SDK for Java is definitely lacking the convenience method for setting the content type, which might be a comparatively rare, but nonetheless obvious use case accounted for in other AWS SDKs accordingly - this is surprising, given its widespread use in AWS related backend services.
Regardless, there seems to be something fishy with the way the Query String Request Authentication is implemented in comparison to the .NET version, for example - again this is surprising, given it is core functionality; however, this is still within the S3 model/namespace and thus might only be required by the respective use cases above.
In conclusion, the only reasonable way to resolve this would be an updated SDK, so a bug report is in order - obviously one could as well duplicate/extend the SDK functionality to account for this special case separately (ideally in a way allowing to submit a pull request for the aws-sdk-for-java project), but getting this right in a compatible and maintainable way seems to be a bit tricky, thus is likely best done by the SDK maintainers themselves.
Ran into this problem as well. We're already tracking when the file is uploaded on the backend, so our workaround was to set the content type after the client uploads the file, using the Rails app with a call to copy_from.