Can Google robots see my S3 contents? - java

I have an S3 bucket with confidential files for many users. I am sending emails to specific users containing a pre-signed URL to access their specific confidential file.
Are there security issues I am risking with Google robots being able to view these contents in these S3 pre-signed URLs? Can I do anything to prevent this?

Private objects in Amazon S3 are not accessible by default.
You are using a pre-signed URL, which permits access to a specific object until a specific date/time.
Yes, if Google (or somebody else) got hold of the pre-signed URL, they would also be able to read the object. Therefore, do not publicly reveal a pre-signed URL. Think of it like a time-limited password -- if somebody had your password, they could act as if they were you. Therefore, keep your pre-signed URLs as safe as you would keep a password.
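To make the "time-limited password" concrete, here is a minimal sketch of generating a short-lived pre-signed GET URL with the AWS SDK for Java (v1). The bucket name, key, and credentials are placeholders; signing happens locally in the SDK, so no call to AWS is made.

```java
import java.net.URL;
import java.util.Date;

import com.amazonaws.HttpMethod;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class PresignExample {

    // Dummy credentials are enough to demonstrate the signing step itself.
    static AmazonS3 demoClient() {
        return AmazonS3ClientBuilder.standard()
                .withRegion("us-east-1")
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("AKIAEXAMPLE", "example-secret")))
                .build();
    }

    // Returns a GET URL for one private object, valid for `minutes` minutes.
    public static URL presignGet(AmazonS3 s3, String bucket, String key, int minutes) {
        Date expiry = new Date(System.currentTimeMillis() + minutes * 60_000L);
        GeneratePresignedUrlRequest req = new GeneratePresignedUrlRequest(bucket, key)
                .withMethod(HttpMethod.GET)
                .withExpiration(expiry);
        return s3.generatePresignedUrl(req);
    }

    public static void main(String[] args) {
        System.out.println(presignGet(demoClient(), "my-bucket", "users/alice/statement.pdf", 10));
    }
}
```

Keep the expiry as short as the use-case allows: a leaked URL is only dangerous until it expires.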

Related

How to properly use s3 to deliver and store files in a web application?

So we are planning to move static content to s3 for operational reasons. I just want to understand where to place s3 in the workflow of handling a request.
If the website requires an image, should the request hit our service first, which would fetch the image from S3 (reverse proxy), or should the client request the file directly?
How to hide file names and path names, and manage permissions when requesting a file?
The same questions apply to uploading new content.
How to handle S3 quotas and parallel requests?
I was going to comment, but this turned into a full answer instead...
Either. If your assets are public, the lowest-weight method is to just request them from a public S3 bucket. If they're not, though, it's probably easiest to use Cloudfront rather than rolling your own auth around S3 requests.
You can make it look like your asset A.jpeg in S3.yourBucket/A.jpeg is at yourWebsite.com/A.jpeg using Cloudfront. If you want to also obscure the filename A, you need to use e.g. API Gateway to serve the file without revealing anything about it to your front end. If it were me, I wouldn't bother.
Unless you absolutely have to, don't let users upload to the same bucket that other users download from. There are several approaches to uploads depending on the use-case. Pre-signed URL's are good for one-time use. You can also just provide the user with AWS credentials that are allowed to write-only to the upload bucket, by using Cognito.
There's no S3 quota. You get charged for reads and writes. For a simple site, these charges will be tiny. If you're worried, you can use Cloudfront to rate-limit your users. You can also use API Gateway to create limits for individual users. S3 is extremely parallelizable.
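The pre-signed upload approach mentioned above can be sketched like this (a minimal example with the AWS SDK for Java v1; the bucket and key names are assumptions):

```java
import java.net.URL;
import java.util.Date;

import com.amazonaws.HttpMethod;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class UploadUrlExample {

    // Dummy credentials; signing is local, so this works for demonstration.
    static AmazonS3 demoClient() {
        return AmazonS3ClientBuilder.standard()
                .withRegion("us-east-1")
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("AKIAEXAMPLE", "example-secret")))
                .build();
    }

    // Pre-signs a PUT to a dedicated upload bucket, so the client can
    // upload directly to S3 without holding any AWS credentials.
    public static URL presignUpload(AmazonS3 s3, String uploadBucket, String key) {
        Date expiry = new Date(System.currentTimeMillis() + 5 * 60_000L); // 5 minutes
        GeneratePresignedUrlRequest req = new GeneratePresignedUrlRequest(uploadBucket, key)
                .withMethod(HttpMethod.PUT)
                .withExpiration(expiry);
        return s3.generatePresignedUrl(req);
    }
}
```

The client then uploads with a plain HTTP PUT to the returned URL, e.g. `curl -X PUT --upload-file photo.jpg "<url>"`.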

S3 Bucket Signed URLs to grant access to pictures

I'm having a brainstorming issue on how to get user-uploaded pictures viewed by only the friends of the user.
So what I've come up with so far is:
Create a DynamoDB table for each user, with a dynamic list of friends/new friends added.
Generate a Signed URL for every user-uploaded picture.
Allow every friend listed in the DynamoDB table to access the Signed URL to view said picture(s).
Does this sound correct? Also, would I technically have just one bucket for ALL user uploaded pictures? Something about my design sounds off...
Can anyone give me a quick tutorial on how to accomplish this via Java?
There are two basic approaches:
Permissions in Amazon S3, or
Application-controlled access to objects in Amazon S3
Permissions in Amazon S3
You can provide credentials (either via IAM or Amazon Cognito) that allow users to access a particular path within an Amazon S3 bucket. For example, each user could have their own path within the bucket.
Your application would generate URLs that include signatures that identify them as that particular user and Amazon S3 would grant access to the objects.
One benefit of this approach is that you could provide the AWS credentials to the users and they could interact directly with AWS, such as using the AWS Command-Line Interface (CLI) to upload/download files without having to always go via your application.
Application-controlled access to objects in Amazon S3
In this scenario, users have no permissions within Amazon S3. Instead, each time your application wishes to generate a URL to an object in S3 (eg in an <img> tag), you create a pre-signed URL. This grants access to the object for a limited time. It only takes a couple of lines of code, and the URL can be generated within the application without any communication with AWS.
There is no need to store pre-signed URLs. They are generated on-the-fly.
The benefit of this approach is that your application has full control over which objects they can access. Friends could share pictures with other users and the application would grant access, whereas the first method only grants access to objects within the user's specific path.
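Since the question asked for a quick Java sketch: the application-controlled approach could look like the following. The `isFriendOf` check is a hypothetical stand-in for the DynamoDB friends-table lookup, and the per-user key prefix is an assumption.

```java
import java.net.URL;
import java.util.Date;
import java.util.Optional;
import java.util.Set;

import com.amazonaws.HttpMethod;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class FriendPictures {

    // Dummy credentials; URL signing is local to the SDK.
    static AmazonS3 demoClient() {
        return AmazonS3ClientBuilder.standard()
                .withRegion("us-east-1")
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("AKIAEXAMPLE", "example-secret")))
                .build();
    }

    // Hypothetical friendship check; the real app would query DynamoDB here.
    static boolean isFriendOf(String owner, String viewer, Set<String> friendsOfOwner) {
        return owner.equals(viewer) || friendsOfOwner.contains(viewer);
    }

    // Only the owner and their friends get a short-lived URL for the picture.
    public static Optional<URL> urlForViewer(AmazonS3 s3, String bucket, String owner,
                                             String pictureKey, String viewer,
                                             Set<String> friendsOfOwner) {
        if (!isFriendOf(owner, viewer, friendsOfOwner)) {
            return Optional.empty();
        }
        Date expiry = new Date(System.currentTimeMillis() + 2 * 60_000L);
        GeneratePresignedUrlRequest req =
                new GeneratePresignedUrlRequest(bucket, owner + "/" + pictureKey)
                        .withMethod(HttpMethod.GET)
                        .withExpiration(expiry);
        return Optional.of(s3.generatePresignedUrl(req));
    }
}
```

With this design, a single bucket for all user pictures is fine, because S3 permissions never decide who sees what; the application does.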

Password protected image file

I want to create or update an image file with password protection. The scenario is: our infra team will upload an image file to AWS S3. Later we want to protect this image file with a password, from Java. The password will be auto-generated and will not be disclosed to anyone. If anyone tries to download the image directly from AWS S3, it should not open. I have tried Server-Side Encryption in AWS S3:
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;

// Copy the object over itself, adding server-side encryption
CopyObjectRequest request = new CopyObjectRequest(bucket, key, bucket, key);
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setServerSideEncryption(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
request.setNewObjectMetadata(objectMetadata);
s3client.copyObject(request);
But I'm still able to open it. Is there any other way to do it?
Server-Side Encryption merely encrypts the data stored on disk. It is not a method for protecting access to data.
Rather, it appears that your requirement is:
Store some data (eg an image) on Amazon S3 and keep it private
Selectively allow people to download it if they have been authorized
The most suitable solution would be to use an Amazon S3 Pre-Signed URL.
By default, all objects in Amazon S3 are private. You can then add permissions so that people can access your objects. This can be done via:
Access Control List permissions on individual objects
A Bucket Policy (that grants wide-ranging access based on path, IP address, referrer, etc)
IAM Users and Groups (that grant permissions to Users with AWS credentials)
Pre-Signed URLs
A Pre-Signed URL can be used to grant access to S3 objects as a way of "overriding" access controls. A normally private object can be accessed via a URL by appending an expiry time and signature. This is a great way to serve private content without requiring a web server.
It would be the responsibility of your application to appropriately authenticate users to determine whether they are allowed access to objects in S3. If they are granted access, then your application should generate a pre-signed URL as an authenticated link to the objects. The URL will only be valid for a limited time duration.
This is best done by having a back-end app (probably running on Amazon EC2 or AWS Lambda) perform the authentication and then generate the URL. Your authenticated user can then use the pre-signed URL to download the object during the allocated time period (eg 5 minutes).
This method has several benefits over the use of a password:
It properly authenticates the user (through your code) rather than merely trusting anyone who knows the password
It allows you to log access, so you know who is accessing the object
Your back-end app could generate an HTML page filled with many pre-signed URLs, and your users could simply click the links to access the objects, rather than having to provide a password for every object they wish to download
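As an illustration of that last point, a page of pre-signed links might be built like this (a sketch with the AWS SDK for Java v1; the bucket, keys, and 5-minute expiry are assumptions):

```java
import java.util.Date;
import java.util.List;
import java.util.stream.Collectors;

import com.amazonaws.HttpMethod;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class DownloadPage {

    // Dummy credentials; URL signing happens locally in the SDK.
    static AmazonS3 demoClient() {
        return AmazonS3ClientBuilder.standard()
                .withRegion("us-east-1")
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("AKIAEXAMPLE", "example-secret")))
                .build();
    }

    // Renders one <a> link per object, each backed by its own pre-signed URL.
    public static String render(AmazonS3 s3, String bucket, List<String> keys) {
        Date expiry = new Date(System.currentTimeMillis() + 5 * 60_000L);
        return keys.stream()
                .map(key -> {
                    String url = s3.generatePresignedUrl(
                            new GeneratePresignedUrlRequest(bucket, key)
                                    .withMethod(HttpMethod.GET)
                                    .withExpiration(expiry)).toString();
                    return "<a href=\"" + url + "\">" + key + "</a>";
                })
                .collect(Collectors.joining("\n"));
    }
}
```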

Is a signed Amazon s3 url safe to share?

I am using the AWS SDK, in Java, to generate signed object URLs for a given bucket. These objects are images for which I want to generate a signed URL using the following code.
URL url = amazonS3Client.generatePresignedUrl(generatePresignedUrlRequest);
Following is an example of a generated URL.
In the URLs there is an AWSAccessKeyId and a signature that will be visible to end users and developers. Is there a chance for someone to perform PUT, DELETE, etc. operations on objects using the information in the URL? Is it safe to provide these URLs to users who I don't know?
Background
I'd like to share images and other object URLs with people who will use them on their websites. For example, a URL to display an image on a website, or a URL from which a user can download a file.
No, they will not be able to use these URLs to do anything other than what you allow. So if you generate a pre-signed url to get a particular S3 object, they will ONLY be able to use it to get that S3 object. They can't reverse-engineer the url to give themselves access to other buckets or objects.
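One reason for this is that the signature covers the HTTP method, not just the bucket and key, so a URL signed for GET cannot be replayed as a PUT or DELETE. A quick way to see this (dummy bucket, key, and credentials):

```java
import java.net.URL;
import java.util.Date;

import com.amazonaws.HttpMethod;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class MethodBoundSignature {

    // Dummy credentials; signing is local to the SDK.
    static AmazonS3 demoClient() {
        return AmazonS3ClientBuilder.standard()
                .withRegion("us-east-1")
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("AKIAEXAMPLE", "example-secret")))
                .build();
    }

    // Signing the same object with the same expiry but a different HTTP
    // method yields a different signature, so one URL cannot stand in
    // for the other.
    public static URL presign(AmazonS3 s3, String bucket, String key,
                              HttpMethod method, Date expiry) {
        return s3.generatePresignedUrl(new GeneratePresignedUrlRequest(bucket, key)
                .withMethod(method)
                .withExpiration(expiry));
    }
}
```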

Google App Engine - Uploading blobs and authentication

(I tried asking this on the GAE forums but didn't get an answer so am trying it here.)
Currently, to upload blobs, App Engine's Blobstore service creates a unique one-time URL that a user can post blobs to. My requirement is that I only want authenticated / authorized users to post blobs in my application. I can achieve this currently if the page that includes the multipart form to upload blobs is in my application.
However, I am looking to provide a "REST API" for my users to upload their blobs. While it is true that the one-time nature of the upload URL mitigates the chances of rogue use, it is still possible.
I was wondering if there is anyone on the app engine team here that can consider a feature where developers can register an upload listener. (Or if there is already a way, I'll be all ears). A standard servlet filter could also potentially do the job. This will give us an opportunity to authenticate / validate / decorate requests before the request gets forwarded to the blob store service.
Thanks,
Keyur
Since, as you point out, it's only possible to upload blobs if you have a valid upload URL, you can simply issue valid upload URLs only to authorized users. The only way an unauthorized user could then get an upload URL would be if an authorized user gave it to them, or it was intercepted - and in either case, the same caveat would apply to regular credentials.
In any case, it's still possible to check a user's credentials after the upload, at which point you can immediately delete the blob if you're not satisfied. If it were possible to regularly upload unauthorized blobs, this could lead to a denial of service vulnerability, but due to the restrictions on handing out the encoded URLs I mentioned above, this is only likely to apply if, for example, a user's access was revoked after you generated an upload URL for them.
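The gating logic the answer describes is independent of the Blobstore API itself. A minimal sketch, where the authorized set and the `mintUrl` supplier are stand-ins for your real user store and for `blobstoreService.createUploadUrl(...)`:

```java
import java.util.Optional;
import java.util.Set;
import java.util.function.Supplier;

public class UploadUrlGate {

    // Hand out an upload URL only to users in the authorized set.
    // `mintUrl` stands in for blobstoreService.createUploadUrl("/upload-done").
    public static Optional<String> issueUploadUrl(String user,
                                                  Set<String> authorized,
                                                  Supplier<String> mintUrl) {
        if (user == null || !authorized.contains(user)) {
            return Optional.empty();
        }
        return Optional.of(mintUrl.get());
    }
}
```

Unauthorized (or anonymous) callers never receive a URL at all, which is the property the answer relies on.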
I'm not sure whether it would work (i.e. GAE might not let you do it), but a servlet filter which wraps the /_ah/upload pattern could first check whether the POST came from same IP address as the authenticated client.
You can now upload files with the Blobstore API; check it out here: http://code.google.com/appengine/docs/java/blobstore/overview.html
