I have been trying to do a health check of AWS DynamoDB from a Lambda function in Java using the dynamodb:ListTables action.
However, due to restrictions on the existing role, I am getting an AccessDeniedException.
I even tried to list out a specific table name like this:
ListTablesRequest request = new ListTablesRequest().withLimit(10).withExclusiveStartTableName("<existing table name>");
This returned
INFO: List tables request {ExclusiveStartTableName:<existing table name> ,Limit: 10}
It would also be helpful if I could specify a startsWith pattern for the ListTables parameters.
But apart from ListTables, is there any other way of doing a health check on DynamoDB?
If by "health check" you mean check that you have a working correction with the given DynamoDB endpoint, the fastest and easiest way is to send an HTTP or HTTPS request to "/" on the endpoint. The response is a simple "healthy" message:
$ curl https://dynamodb.us-west-2.amazonaws.com/
healthy: dynamodb.us-west-2.amazonaws.com
For better or worse, this sort of health check doesn't require any authentication or authorization (roles). It's better because it's faster and simpler, and because you said authorization was exactly where your problem was. But for the same reason it's worse: it doesn't check your authorization, so your health check may succeed while the actual request fails because you don't have the right permissions.
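If you want the same check from your Java Lambda, a minimal sketch could look like this (the us-west-2 endpoint is an assumption; substitute your region's endpoint):

import java.net.HttpURLConnection;
import java.net.URL;

public class DynamoDbHealthCheck {
    public static boolean isHealthy() throws Exception {
        // Unauthenticated GET against "/" on the regional endpoint (assumed region: us-west-2)
        URL url = new URL("https://dynamodb.us-west-2.amazonaws.com/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(2000);
        int responseCode = conn.getResponseCode();
        conn.disconnect();
        // The endpoint replies with HTTP 200 and a plain "healthy: ..." body
        return responseCode == 200;
    }
}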
REST endpoint design says: do not use verbs
Consider a workflow-like "create Employee" form with a multi-tab style: "Basic Details", "Educational Details", "Work Experience", etc. Once the data on the first tab is filled in and the continue button is pushed, a backend API call is made that only validates the details on that tab; it returns the list of validation errors, if any, or lets the user move on to the next tab to fill in more data. So basically this calls for a validate API for each of the tabs, with no intention of saving data. The design that comes naturally is this:
POST /employee/basic/validate
(removing api versioning details from endpoint for simplicity)
But using validate in the API path means using a verb. How should this be designed then?
There's a separate flow where one can just save the "basic details" of an employee - like any normal validate-and-save API - so POST /employee/basic/ is fine for that case.
REST endpoint design says: do not use verbs
That's not a REST constraint - REST doesn't care what spellings you use for your resource identifiers.
All of these URLs work, exactly the way your browser expects them to:
https://www.merriam-webster.com/dictionary/post
https://www.merriam-webster.com/dictionary/get
https://www.merriam-webster.com/dictionary/put
https://www.merriam-webster.com/dictionary/patch
https://www.merriam-webster.com/dictionary/delete
Resources are generalizations of documents; the nature of the HTTP uniform interface is that we have a large set of documents, and a small number of messages that we can send to them.
So if you want a good resource identifier, the important thing to consider is the nature of the "document" that you are targeting with the request.
For instance, the document you are using to validate user inputs might be the validation policy; or you might instead prefer to think of that document as an index into a collection of validation reports (where we have one report available for each input).
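For example, the second reading might play out like this (the identifiers here are hypothetical, purely to illustrate):
POST /employee/basic-details/validation-reports {dateOfBirth: "2030-01-01", ...}
-> 201 Created, Location: /employee/basic-details/validation-reports/17
GET /employee/basic-details/validation-reports/17
-> {valid: false, errors: ["dateOfBirth must be in the past"]}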
It seems that what you are trying to do, in the end, is run your operation in dry-run mode.
My suggestion would be to add a dry-run option as a request parameter, for instance:
/employee/basic?dry-run=true
REST says that you should use standards like HTTP to achieve a uniform interface. There are no URL naming standards as far as I know; even OData says that its URL naming conventions are optional.
Another thing: the browser is a bad REST client. REST was designed for web services and machine-to-machine communication, not for the communication of browsers with web applications, which is a sort of human-to-machine communication. It is for solving problems like automatically ordering from the wholesaler to fill my webshop with new items, etc. In this scenario both the REST service and the REST client are on servers and have nothing to do with the browser. If you want to use REST from the browser, then it might be better to use a JavaScript-based REST client. Using the browser with HTML forms as a REST client is something of an extreme case.
If you have a multi-tab form, then in regular web applications the input is usually collected into a session until it is finalized. So one solution is having a regular web application, which is what you actually have, since I am pretty sure you have no idea about the mandatory REST constraints described by Fielding. In this case you just do it as you want to and forget about REST.
As for naming something that does validation, I would do something like POST /employee/basic/validation and return the validation result along with 200 OK. Though most validation rules like "is it a date", "is it a number", etc. can nowadays be done on the client, even in plain HTML. You can collect the input in a session on the server or client side and save it in the database after finalizing the employee description.
As for the REST way, I would have a hyperlink that describes all the parameters along with their validations and let the REST client build the tabs and do the rest. In the end, the only time it would communicate with the REST service is when the actual POST is sent. The REST client can be in the browser and collect the input into a variable, cookies, or localStorage with JavaScript, or the REST client can be on a server and collect the input into a server-side session, for example. As for the REST service, the communication with it must be stateless, so it cannot maintain a server-side session; it can only use something like JWT, where all the session data is sent with every request.
If you want to save each tab in the webservice before finalizing, then your problem is something like the one that is solved with the builder design pattern in programming. In that case I would do something like POST /employeeRegistrationBuilder as the first step, which would return a new resource, something like /employeeRegistrationBuilder/1. After that I can do something like PUT/POST /employeeRegistrationBuilder/1/basics, PUT/POST /employeeRegistrationBuilder/1/education, PUT/POST /employeeRegistrationBuilder/1/workExperience, etc. and finalize it with PUT/POST /employeeRegistrationBuilder/1/finished. Though you can spare the first and the last steps and create the resource with the basics and finish it automagically after the workExperience is sent. Cancelling it would be DELETE /employeeRegistrationBuilder/1, modifying previous tabs would be PUT/PATCH /employeeRegistrationBuilder/1/basics, and removing previous tabs would be DELETE /employeeRegistrationBuilder/1/basics.
A more general approach is having a sort of transaction builder and doing something like this:
POST /transactions/ {type:"multistep", method: "POST", id: "/employee/"}
-> {id: "/transactions/1", links: [...]}
PATCH /transactions/1 {append: "basics", ...}
PATCH /transactions/1 {append: "education", ...}
PATCH /transactions/1 {remove: "basics", ...}
PATCH /transactions/1 {append: "workExperience", ...}
PATCH /transactions/1 {append: "basics", ...}
...
POST /employee/ {type: "transaction", id: "/transactions/1"}
-> /employee/123
With this approach you can create a new employee either in multiple steps or in a single step, depending on whether you send actual input data or a transaction reference with POST /employee.
From a data protection (GDPR) perspective, the transaction can be the preparation of a contract, and committing the transaction can be signing the contract.
I am writing a custom Java webscript that accepts a document noderef and an external username (string value) as parameters. I have auditing enabled, and the audit log shows access to the document when I call the webscript. Now I want to know if it is possible to modify the audit trail so that, when it shows the log for that particular document, it also shows the name of the external user.
webscript url: http://localhost:8080/alfresco/service/node/{noderef}/user/{user}
On calling this I get the following output in log:
Extracted audit data:
Application: AuditApplication[ name=alfresco-access, id=1, disabledPathsId=2]
Values:
/alfresco-access/transaction/sub-actions=readContent
/alfresco-access/transaction/action=READ
/alfresco-access/transaction/node=workspace://SpacesStore/c21db432-4ad6-4af2-8bcf-78bc89724afe
/alfresco-access/transaction/type=cm:content
/alfresco-access/transaction/path=/app:company_home/app:shared/cm:audit-services-context.xml
/alfresco-access/transaction/user=admin
New Data:
/alfresco-access/transaction/sub-actions=readContent
/alfresco-access/transaction/action=READ
/alfresco-access/transaction/type=cm:content
/alfresco-access/transaction/user=admin
/alfresco-access/transaction/path=/app:company_home/app:shared/cm:audit-services-context.xml
I want to store the {user} also in the audit trail.
You can try to use AuthenticationUtil.setFullyAuthenticatedUser. I think this should help you. But I didn't test this.
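A minimal sketch of what that could look like in a Java-backed webscript (class and variable names are hypothetical; this assumes {user} arrives as the URL template variable and that access to the webscript itself is restricted):

import java.util.Map;
import org.alfresco.repo.security.authentication.AuthenticationUtil;
import org.springframework.extensions.webscripts.Cache;
import org.springframework.extensions.webscripts.DeclarativeWebScript;
import org.springframework.extensions.webscripts.Status;
import org.springframework.extensions.webscripts.WebScriptRequest;

public class ImpersonatingReadWebScript extends DeclarativeWebScript {
    @Override
    protected Map<String, Object> executeImpl(WebScriptRequest req, Status status, Cache cache) {
        // {user} from /node/{noderef}/user/{user}
        String externalUser = req.getServiceMatch().getTemplateVars().get("user");
        // Switch the fully authenticated user so the alfresco-access audit entry
        // records the external user instead of the technical account making the call
        AuthenticationUtil.setFullyAuthenticatedUser(externalUser);
        // ... read the node content here; the audit trail should now show {user} ...
        return null;
    }
}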
You probably do not want to do that, at least not in the way you describe, not without making extra security precautions.
IMHO this goes against security standards: if admin needs to read a document, the operation needs to be logged with his username; if a normal user needs to access a document, he needs to be properly authenticated for that operation.
Judging from the little context I have, I would say this is actually an integration with some other app that does not share SSO with Alfresco. So I would recommend one of the following solutions:
Use proper SSO between Alfresco and your application, have the concerned user ping the right endpoint in Alfresco and let SSO authenticate the request properly for you.
Use a shared secret (something like a shared passphrase to encode the authority name in the request, plus a proper authentication subsystem or request filter to handle that) or a key pair (something like the securecomms setup between Solr and Alfresco) to be able to securely pass authority information with the request.
Use a system account (preferably not admin, but one that is dedicated to this usecase/application integration) to generate a valid alf_ticket for the user in question, and have your app attach that ticket to the request. (Of course, your "impersonate" webscript would need to check for the right system/integration username before running the snippet that gets the alf_ticket from a runAsSystem block.) In this case too, I would recommend not using the admin account, but rather a user with no permissions at all except for this usecase.
If you are going to opt for the quick implementation that you have, I would recommend at least the following:
You need to make sure that not just any user can call that webscript, and that only the admin/system user can actually access it.
You should probably log the whole impersonation operation in the audit trail (either in the same audit entry or a separate one), so that it is clear the operation was made on behalf of the user and not directly by the user himself.
If you use the webscript in question for anything other than reading the content of the node (which can also be the case if you have an onReadContent behaviour that does some nasty AuthenticationUtil.setFullyAuthenticatedUser of its own), and you require that operation to be logged as the system/originally authenticated user, you will probably have a hard time doing that... and you should switch to a more robust approach!
I need to write an API to check if a user name already exists in a database.
I want my server (a Struts Action class instance in a Tomcat server) to return true/false.
It's something like this:
checkUserName?userName=john
I want to know: what is the standard way to do this?
Shall I return a JSON response with just one boolean value? That seems like overkill.
Or shall I manually set the HTTP status code to 200 or 404 (for true/false)? But that seems to violate the actual purpose of status codes, which I believe should only be used to indicate network failures, etc.
(Too long for a comment.)
I don't see any reason not to return a standard JSON response with something indicating whether or not the user name exists. That's what APIs do: there's nothing "overkill" about providing a response useful across clients.
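A minimal sketch of such a response (shown here as a plain servlet with hypothetical names; a Struts action would do the equivalent):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CheckUserNameServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String userName = req.getParameter("userName");
        boolean exists = userExists(userName);
        resp.setStatus(HttpServletResponse.SC_OK);
        resp.setContentType("application/json");
        resp.getWriter().write("{\"exists\": " + exists + "}");
    }

    // Placeholder for the actual database lookup
    private boolean userExists(String userName) {
        return "john".equals(userName);
    }
}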
To your second point: status codes do a lot more than "indicate network problems". A 404 isn't a network problem, it means the requested resource doesn't exist. It is not appropriate in your case, because you're not requesting the user as a resource: the resource is checkUserName, which does exist. If instead your request was /userByName/john, a 404 would be appropriate if the user didn't exist. That's not an appropriate request in this case, because you don't want to return the user.
A 401 isn't a network problem, it's an authentication issue. A 302 isn't a network problem, it's a redirect. Etc. Using HTTP response codes is entirely appropriate, if they match your requests.
I am creating a REST service in Java, and have a question with regards to params for the GET method.
I have to pass the params below in a GET request.
Function: "GET" file status
Params:
Time Range (String)
FlowId (String)
ID_A= or ID_B= or both (String)
IS_ADD_A= or IS_ADD_B= or both (String)
Regex (String)
Cookie=XXXXX
Since there are 6 params, passing them as a query string would not be an efficient way, and I can't put them in the body (as that is against the HTTP GET specification).
Making this a POST call would be against REST principles, as I want to get data from the server.
What would be an efficient way of solving this? Passing the params as a query string seems out of the question, passing them in the body is against the HTTP spec, making them headers may also not be good, and making this a POST request would violate Fielding's REST principles.
Passing data in the body of an HTTP GET call is not only against the spec but also causes problems with various server-side technologies, which assume you don't need access to the body in a GET call. (Some client-side frameworks also have issues with GET and a query in the body.) If you have queries with long parameters, I'd go with POST. It's then using POST for getting data, but you'd not be the only one having to go this way to support potentially large queries.
If your parameter values aren't very long, using a query string is your best option here. 6 params is not a lot, as long as you don't exceed the IE limit of 2,048 characters in the path (http://www.boutell.com/newfaq/misc/urllength.html). For example, the Google search engine uses many more params than 6. If there is a possibility that the URL path will exceed the limit above, you should use POST instead.
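If you do go with the query string, a sketch of building and encoding the URL in Java could look like this (the base URL and parameter names are placeholders derived from the question; the Cookie value arguably belongs in the Cookie header rather than the query string):

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class FileStatusUrlBuilder {
    public static String build(String timeRange, String flowId, String idA, String isAddA, String regex) {
        // Encode each value so characters like spaces, '=' and '&' inside values stay safe
        return "https://example.com/api/file-status"
                + "?timeRange=" + encode(timeRange)
                + "&flowId=" + encode(flowId)
                + "&idA=" + encode(idA)
                + "&isAddA=" + encode(isAddA)
                + "&regex=" + encode(regex);
    }

    private static String encode(String value) {
        try {
            return URLEncoder.encode(value, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always supported
        }
    }
}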
I've been playing with Amazon S3 presigned URLs all night, attempting to PUT a file. I generate the presigned URL in Java code.
AWSCredentials credentials = new BasicAWSCredentials( accessKey, secretKey );
client = new AmazonS3Client( credentials );
GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest( bucketName, "myfilename", HttpMethod.PUT);
request.setExpiration( new Date( System.currentTimeMillis() + (120 * 60 * 1000) ));
return client.generatePresignedUrl( request ).toString();
I then want to use the generated, presigned URL to PUT a file using curl.
curl -v -H "Content-Type: image/jpg" -T mypicture.jpg "https://mybucket.s3.amazonaws.com/myfilename?Expires=1334126943&AWSAccessKeyId=<accessKey>&Signature=<generatedSignature>"
I assumed that, like a GET, this would work on a bucket which is not public (that's the point of presigning, right?). Well, I got access denied on every attempt. Finally, out of frustration, I changed the permissions of the bucket to allow EVERYONE to write. Of course, then the presigned URL worked. I quickly removed the EVERYONE permission from the bucket. Now I don't have permission to delete the item that was uploaded into my bucket by my own presigned URL. I see now that I probably should have put an x-amz-acl header on what I uploaded. I suspect I'll create several more undeletable objects before I get that right.
This leads to a few questions:
How can I upload with curl using PUT and a generated presigned URL?
How can I delete the uploaded file and the bucket I created to test it with?
The end goal is that a mobile phone will use this presigned URL to PUT images. I'm trying to get it going in curl as a proof of concept.
Update: I asked a question on the amazon forums. If an answer is provided there I'll put it as an answer here.
This is indeed a bit puzzling; I consider it to be a bug in the AWS SDK for Java (see below). But first and foremost, the following curl command will upload your file as such (assuming an updated pre-signed URL, of course):
curl -v -T mypicture.jpg "https://mybucket.s3.amazonaws.com/myfilename?Expires=1334126943&AWSAccessKeyId=<accessKey>&Signature=<generatedSignature>"
That is, I've excluded the Content-Type header, which yields application/octet-stream (or binary/octet-stream) as the stored content type, which is obviously not desired; thus, further digging was in order.
Background / Analysis
Pre-signed URLs for PUT (as well as DELETE and HEAD) requests to Amazon S3 are known to work in principle, not least as evidenced by related questions on this site (see e.g. my answer to Upload to s3 with curl using pre-signed URL (getting 403)).
The Query String Request Authentication Alternative being used here is documented with the following pseudo-grammar, which illustrates the query string request authentication method:
StringToSign = HTTP-VERB + "\n" +
Content-MD5 + "\n" +
Content-Type + "\n" +
Expires + "\n" +
CanonicalizedAmzHeaders +
CanonicalizedResource;
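For the PUT in the question, the string to sign would therefore look roughly like this (the blank line is the empty Content-MD5; there are no amz headers):
PUT

image/jpg
1334126943
/mybucket/myfilename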
It does include the Content-Type header, and (as you already discovered) this has been the missing piece in some documented cases; see e.g. the AWS team response to GetPreSignedURL with PUT request, which yields a working pre-signed URL once the content type is added.
This is indeed easy to achieve with the AWS SDK for .NET, which provides the convenience method GetPreSignedUrlRequest.WithContentType to do just that:
Sets the ContentType property for this request. This property defaults
to "binary/octet-stream", but if you require something else you can
set this property.
Accordingly, extending the respective sample Upload an Object Using Pre-Signed URL - AWS SDK for .NET as follows yields a working pre-signed URL with content type, which can then be used for the upload via curl as expected (i.e. exactly as you attempted to):
// ...
GetPreSignedUrlRequest request = new GetPreSignedUrlRequest();
// ...
request.WithContentType("image/jpg");
// ...
Now, one would like to extend the semantically identical sample Upload an Object Using Pre-Signed URL - AWS SDK for Java in a similar fashion, but (as you've also discovered already) there is no dedicated method to achieve this. This might just be a missing convenience method, though, and one might hope it could be achieved via addRequestParameter() or setResponseHeaders(), e.g.:
// ...
request.setExpiration( new Date( System.currentTimeMillis() + (120 * 60 * 1000) ));
request.addRequestParameter("content-type", "image/jpg");
return client.generatePresignedUrl( request ).toString();
// ...
However, both methods' documentation suggests other purposes, and indeed it doesn't work: they always yield an identical signature, no matter which content type is set this way (if any).
Debugging further into the SDKs reveals that both provide a semantically similar core method to calculate the query string authentication according to the pseudo-grammar referenced above; see buildSigningString() for .NET and makeS3CanonicalString() for Java.
But the respective code in the Java version that would "Add all interesting headers to a list, then sort them" (where "interesting" is defined as Content-MD5, Content-Type, Date, and x-amz-) is in fact never executed, because there is no way to provide these headers: they are only available on class DefaultRequest and not on class GeneratePresignedUrlRequest, which is used to initialize the former, which in turn is used as input for calculating the signature; see the protected method createRequest().
Interestingly/notably, the two methods that calculate the query string authentication in .NET vs. Java compose their input from an almost inverse combination of header vs. parameter sources on the call stack, which could hint at the cause of the Java bug; but that might just as well be difficult to decipher, i.e. the internal architecture could of course differ significantly.
Preliminary Conclusion
There are two angles to this:
The AWS SDK for Java is definitely lacking the convenience method for setting the content type, which might be a comparatively rare, but nonetheless obvious use case accounted for in other AWS SDKs accordingly - this is surprising, given its widespread use in AWS related backend services.
Regardless, there seems to be something fishy with the way the Query String Request Authentication is implemented in comparison to the .NET version, for example - again this is surprising, given it is core functionality; however, it is still within the S3 model/namespace and thus might only be exercised by the respective use cases above.
In conclusion, the only reasonable way to resolve this would be an updated SDK, so a bug report is in order - obviously one could also duplicate/extend the SDK functionality to account for this special case separately (ideally in a way that allows submitting a pull request for the aws-sdk-for-java project), but getting this right in a compatible and maintainable way seems to be a bit tricky, so it is likely best done by the SDK maintainers themselves.
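For completeness: later releases of the AWS SDK for Java appear to have added a content type setter on GeneratePresignedUrlRequest; if your SDK version includes it, the generation code from the question could presumably be fixed along these lines (untested sketch):
// Assumes an SDK version where GeneratePresignedUrlRequest exposes setContentType
GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest( bucketName, "myfilename", HttpMethod.PUT );
request.setContentType( "image/jpg" ); // so the content type becomes part of the string to sign
request.setExpiration( new Date( System.currentTimeMillis() + (120 * 60 * 1000) ) );
return client.generatePresignedUrl( request ).toString();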
Ran into this problem as well. We're already tracking when the file is uploaded on the backend, so our workaround was to set the content type after the client uploads the file, using the Rails app with a call to copy_from.