I am using the AWS SDK for Java to generate signed object URLs for a given bucket. These objects are images for which I want to generate a signed URL using the following code:
URL url = amazonS3Client.generatePresignedUrl(generatePresignedUrlRequest);
The generated URL contains an AWSAccessKeyId and a Signature that will be visible to end users and developers. Is there a chance for someone to perform PUT, DELETE, etc. operations on objects using the information in the URL? Is it safe to provide these URLs to users I don't know?
Background
I'd like to share images and other object URLs with people who will use them in their websites. For example, a URL to display an image on a website, or a URL from which a user can download a file.
No, they will not be able to use these URLs to do anything other than what you allow. So if you generate a pre-signed url to get a particular S3 object, they will ONLY be able to use it to get that S3 object. They can't reverse-engineer the url to give themselves access to other buckets or objects.
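The reason is that the HTTP verb is part of what gets signed. A minimal sketch (bucket name, key, and the one-hour lifetime are placeholders) of pinning a URL to GET, using the same SDK v1 API as the question:

```java
import java.net.URL;
import java.util.Date;

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class SignedGetOnly {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // The signature covers the HTTP method, bucket, key and expiry.
        // A URL signed for GET is rejected (SignatureDoesNotMatch) if
        // someone replays it as a PUT or DELETE.
        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest("my-bucket", "images/cat.jpg") // placeholder names
                        .withMethod(HttpMethod.GET)
                        .withExpiration(new Date(System.currentTimeMillis() + 3600_000L));

        URL url = s3.generatePresignedUrl(request);
        System.out.println(url);
    }
}
```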
Related
I'm having a brainstorming issue on how to get user-uploaded pictures viewed by only the friends of the users.
So what I've come up with so far is:
Create a DynamoDB table for each user, with a dynamic list of friends/new friends added.
Generate a Signed URL for every user-uploaded picture.
Allow access to the Signed URL to every friend listed in the DynamoDB table to view said picture(s).
Does this sound correct? Also, would I technically have just one bucket for ALL user uploaded pictures? Something about my design sounds off...
Can anyone give me a quick tutorial on how to accomplish this via Java?
There are two basic approaches:
Permissions in Amazon S3, or
Application-controlled access to objects in Amazon S3
Permissions in Amazon S3
You can provide credentials (either via IAM or Amazon Cognito) that allow users to access a particular path within an Amazon S3 bucket. For example, each user could have their own path within the bucket.
Your application would generate URLs that include signatures that identify them as that particular user and Amazon S3 would grant access to the objects.
One benefit of this approach is that you could provide the AWS credentials to the users and they could interact directly with AWS, such as using the AWS Command-Line Interface (CLI) to upload/download files without having to always go via your application.
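As a sketch of the per-user-path idea, an IAM policy can scope each user to their own prefix via the `${aws:username}` policy variable (the bucket name here is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-bucket/${aws:username}/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": {
        "StringLike": { "s3:prefix": "${aws:username}/*" }
      }
    }
  ]
}
```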
Application-controlled access to objects in Amazon S3
In this scenario, users have no permissions within Amazon S3. Instead, each time that your application wishes to generate a URL to an object in S3 (eg in an <img> tag), you create a pre-signed URL. This grants access to the object for a limited time. It only takes a couple of lines of code and can be done within the application without any communication with AWS to generate the URL.
There is no need to store pre-signed URLs. They are generated on-the-fly.
The benefit of this approach is that your application has full control over which objects users can access. Friends could share pictures with other users and the application would grant access, whereas the first method only grants access to objects within a user's specific path.
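Those couple of lines might look like the following sketch (the two-minute lifetime and method name are illustrative; signing is a local computation, so no call to AWS is made):

```java
import java.net.URL;
import java.util.Date;

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class ImageUrlGenerator {
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    // Returns a short-lived URL suitable for embedding in an <img> tag.
    public URL urlFor(String bucket, String key) {
        Date expiry = new Date(System.currentTimeMillis() + 2 * 60 * 1000L); // 2 minutes
        return s3.generatePresignedUrl(
                new GeneratePresignedUrlRequest(bucket, key)
                        .withMethod(HttpMethod.GET)
                        .withExpiration(expiry));
    }
}
```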
I have an S3 bucket with confidential files for many users. I am sending emails to specific users containing a pre-signed URL to access their specific confidential file.
Are there security issues I am risking with Google robots being able to view these contents in these S3 pre-signed URLs? Can I do anything to prevent this?
Private objects in Amazon S3 are not accessible by default.
You are using a pre-signed URL, which permits access to a specific object until a specific date/time.
Yes, if Google (or somebody else) got hold of the pre-signed URL, they would also be able to read the object. Therefore, do not publicly reveal a pre-signed URL. Think of it like a time-limited password -- if somebody had your password, they could act as if they were you. Therefore, keep your pre-signed URLs as safe as you would keep a password.
I want to create or update an image file with password protection. The scenario is: our infra team will upload an image file to AWS S3. Later we want to protect this image file with a password from Java. The password will be auto-generated and will not be disclosed to anyone. If anyone tries to download the image directly from AWS S3, it should not open. I have tried Server-Side Encryption in AWS S3:
// In-place copy of the object, re-writing it with server-side encryption enabled
CopyObjectRequest request = new CopyObjectRequest(bucket, key, bucket, key);
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setServerSideEncryption(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
request.setNewObjectMetadata(objectMetadata);
s3client.copyObject(request);
But I'm still able to open it. Is there any other way to do it?
Server-Side Encryption merely encrypts the data stored on disk. It is not a method for protecting access to data.
Rather, it appears that your requirement is:
Store some data (eg an image) on Amazon S3 and keep it private
Selectively allow people to download it if they have been authorized
The most suitable solution would be to use an Amazon S3 Pre-Signed URL.
By default, all objects in Amazon S3 are private. You can then add permissions so that people can access your objects. This can be done via:
Access Control List permissions on individual objects
A Bucket Policy (that grants wide-ranging access based on path, IP address, referrer, etc)
IAM Users and Groups (that grant permissions to Users with AWS credentials)
Pre-Signed URLs
A Pre-Signed URL can be used to grant access to S3 objects as a way of "overriding" access controls. A normally private object can be accessed via a URL by appending an expiry time and signature. This is a great way to serve private content without requiring a web server.
It would be the responsibility of your application to appropriately authenticate users to determine whether they are allowed access to objects in S3. If they are granted access, then your application should generate a pre-signed URL as an authenticated link to the objects. The URL will only be valid for a limited time duration.
This is best done by having a back-end app (probably running on Amazon EC2 or AWS Lambda) perform the authentication and then generate the URL. Your authenticated user can then use the pre-signed URL to download the object during the allocated time period (eg 5 minutes).
This method has several benefits over the use of a password:
It properly authenticates the user (through your code) rather than merely trusting anyone who knows the password
It allows you to log access, so you know who is accessing the object
Your back-end app could generate an HTML page filled with many pre-signed URLs, and your users could simply click the links to access the objects rather than having to provide a password for every object they wish to download
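The HTML-page idea above might be sketched like this (bucket, keys, and the five-minute lifetime are illustrative):

```java
import java.net.URL;
import java.util.Date;
import java.util.List;

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class DownloadPageBuilder {
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    // Builds an HTML fragment with one 5-minute download link per object key.
    public String buildLinks(String bucket, List<String> keys) {
        Date expiry = new Date(System.currentTimeMillis() + 5 * 60 * 1000L);
        StringBuilder html = new StringBuilder("<ul>\n");
        for (String key : keys) {
            URL url = s3.generatePresignedUrl(
                    new GeneratePresignedUrlRequest(bucket, key)
                            .withMethod(HttpMethod.GET)
                            .withExpiration(expiry));
            html.append("  <li><a href=\"").append(url).append("\">")
                .append(key).append("</a></li>\n");
        }
        return html.append("</ul>\n").toString();
    }
}
```

The authentication step (deciding which keys a given user may see) would happen before calling this helper.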
We are trying to use aws S3 for storing files. We created a simple REST API in JAVA to upload and retrieve a file.
Clients requesting to update files use our REST APIs, which provide a pre-signed URL to either PUT or GET the file. We are using the AWS SDK for Java to generate the pre-signed URLs.
We need to add some custom metadata to the files when they are uploaded to S3. As we don't control the upload to S3 itself, is there a way we can add this information while we are generating the pre-signed URL? It wouldn't be good to have the clients provide this information as part of their request headers.
We stumbled upon the same issue today and were trying to use
// does not work
request.putCustomRequestHeader(Headers.S3_USER_METADATA_PREFIX + "foo", "bar");
which unfortunately does not really work: it adds the metadata, but the caller of the pre-signed URL still has to provide the metadata via request headers, which is something the client should not have to do.
Finally we found that using GeneratePresignedUrlRequest#addRequestParameter does the job marvellously:
import com.amazonaws.services.s3.Headers;

GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest("bucket", "yourFile.ending");
request.addRequestParameter(Headers.S3_USER_METADATA_PREFIX + "foo", "bar");
The presigned url then looks something like
https://bucket.s3.region.amazonaws.com/yourFile.ending?x-amz-meta-foo=bar&X-Amz-Security-Token=...
The metadata can be clearly seen in the URL. Using Postman to PUT to that URL with the file in the request body creates the file with the correct metadata in the bucket, and it is not possible for the client to change the metadata, because that would make the signature no longer match the request.
The only not-so-pretty part about this is having to specify the internal aws header prefix for user metadata.
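Putting it together, a complete sketch might look like the following (bucket, key, the 15-minute expiry, and the foo/bar metadata are illustrative):

```java
import java.net.URL;
import java.util.Date;

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.Headers;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class PresignedPutWithMetadata {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest("bucket", "yourFile.ending")
                        .withMethod(HttpMethod.PUT)
                        .withExpiration(new Date(System.currentTimeMillis() + 15 * 60 * 1000L));

        // Adding the metadata as a *request parameter* bakes it into the
        // signed query string, so the uploader does not need to send any
        // x-amz-meta-* headers -- and cannot tamper with the value.
        request.addRequestParameter(Headers.S3_USER_METADATA_PREFIX + "foo", "bar");

        URL url = s3.generatePresignedUrl(request);
        System.out.println(url); // query string includes x-amz-meta-foo=bar
    }
}
```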
My application generates binary data (compressed xml) and saves it to the Google App Engine Blobstore. I want to return a public URL that clients can use to access this binary file from anywhere, without needing to go through AppEngine and my Java servlets. I know it's possible to get this kind of URL for images using ImagesService#getServingUrl, but what about for other types of files?
I create my file like this:
FileService#createNewBlobFile("application/octet-stream", "myfile.bin");
When I call AppEngineFile#getFullPath() I get something like:
/blobstore/writable:a2c0noo4_LNQ0mS6wFdCMA
I can see the file created in my dev filesystem with a different random name. What's its URL?
You can use Google Cloud Storage for your file.
You can then provide a public Cloud Storage URL for your file that will allow your clients to download it without going through AppEngine. There are a whole lot of different ACL scenarios you can use to tighten down access as well; it's in the docs.
Using Google Cloud Storage with the Files API.
Cloud Storage Docs.
According to the docs, there's no public URL for objects in the blobstore. You can only access it through the Blobstore API from your app. So if you want your blobstore objects to each have a URL, you'd have to create your own handler for that, which of course will have to pass through AppEngine and your Java servlets.
I think ImagesService#getServingUrl doesn't generate a public URL for the images in the blobstore. That URL doesn't really point directly at your blobstore data; it points to an image service that is able to access your blobstore and serves your image for you through its own high-performance infrastructure.