How to get AWS credentials from a custom source/location? - java

I am following the AWS Docs to set up the credentials. The problem is that it requires me to create a .aws folder on the machine. I want to get the keys and other secrets from a custom location. How can that be achieved?
P.S. If I follow the tutorial's recommendation, then every machine running the project would have to set up that .aws folder, which would be a big hassle for everyone.

Where exactly would you suggest getting the credentials from? You could store them somewhere else, like a HashiCorp Vault server, and write a script or something to pull the values and set them as environment variables, but then you'll need to figure out how to give each computer secure credentials to access the Vault server.
If by "custom location" you simply mean a different local file system location, like a mapped drive or something, then you can specify that using the AWS_CREDENTIAL_PROFILES_FILE environment variable. Although it sounds like you want to do this on multiple people's workstations, and I would caution against sharing credentials files in that scenario. You really want to assign each person different AWS access keys so that you can track each person's AWS API actions, and revoke one person's access if they leave the company or something.
I recommend reading this page to understand all the options for configuring credentials for the AWS SDK.
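If by a custom location you mean a specific file path, you can also wire that path up in code rather than relying on the environment variable. A minimal sketch, assuming the AWS SDK for Java v2 (the path and profile name are placeholders):
import java.nio.file.Paths;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.profiles.ProfileFile;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
// Load a credentials file from a custom location instead of ~/.aws/credentials
ProfileFile profileFile = ProfileFile.builder()
        .content(Paths.get("/custom/location/credentials"))
        .type(ProfileFile.Type.CREDENTIALS)
        .build();
S3Client s3 = S3Client.builder()
        .region(Region.US_EAST_1)
        .credentialsProvider(ProfileCredentialsProvider.builder()
                .profileFile(profileFile)
                .profileName("default")
                .build())
        .build();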

Assuming you are using Amazon EC2 to host your application, you can grant permissions by attaching an IAM role to your EC2 instances.
Furthermore, using an IAM role avoids storing a sensitive credentials file on your instances.
Read this document, or watch this video to implement it.
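As a rough illustration, assuming the AWS SDK for Java v2, with a role attached the application code needs no credential handling at all, because the default provider chain falls back to the instance metadata service:
import software.amazon.awssdk.services.s3.S3Client;
// No keys in code or on disk: the SDK resolves temporary credentials
// from the EC2 instance profile automatically.
S3Client s3 = S3Client.create();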

Related

How to grant AWS programmatic access to thousands of non-EC2 computers, and rotate keys?

My Java service will run on my computers (let's say I'll have more than 1000 computers) and will send some data to S3. I use the AWS Java SDK for it.
If I'm right, to do this I need to use an access key & secret key on my computers (let's say in the .aws/credentials file).
I have read a lot of AWS documentation about best practices for programmatic access to resources, but still can't figure it out.
Rotating access keys. After an access key is rotated, how can I change it in all the applications running on my computers? Should my application update itself?
Temporary credentials. With this approach, do I still need to have an access key & secret key on my computers? If so, I have the same problem as in Q1.
Can somebody advise me on the best and most secure way to programmatically access AWS resources in my situation? What do I need to do with the access key & secret key?
Thank you.
UPDATES:
Computers are in different networks
Java app sends to S3 and also reads from S3
New computers can be added at any time
The computers will need AWS credentials to talk with S3.
The simplest way is to store the credentials on each computer. However, as you say, it makes it hard to rotate the keys.
Another option is to store the credentials in a database that they can access, so they always get the latest credentials. However, they will need some sort of login to access the database.
Alternatively, you could set up identity federation, so that the computers can authenticate against something like Active Directory, and then you can write a central service that will provide temporary credentials to each computer.
The process is basically:
The computers authenticate to AD
They call your service and prove that they are authenticated to AD
Your service then calls STS and generates temporary credentials valid for up to 36 hours
It provides those credentials to the computers
See: GetFederationToken - AWS Security Token Service
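A minimal sketch of that vending step, assuming the AWS SDK for Java v2 (the inline policy, bucket name, and method names are placeholders; GetFederationToken has to be called with long-lived IAM user credentials):
import software.amazon.awssdk.services.sts.StsClient;
import software.amazon.awssdk.services.sts.model.Credentials;
import software.amazon.awssdk.services.sts.model.GetFederationTokenRequest;
public class CredentialVendor {
    // Hypothetical inline policy restricting the federated user to one bucket.
    private static final String S3_ONLY_POLICY =
            "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\","
            + "\"Action\":[\"s3:GetObject\",\"s3:PutObject\"],"
            + "\"Resource\":\"arn:aws:s3:::my-data-bucket/*\"}]}";
    public static Credentials vendTemporaryCredentials(String computerName) {
        try (StsClient sts = StsClient.create()) {
            return sts.getFederationToken(GetFederationTokenRequest.builder()
                            .name(computerName)            // shows up in CloudTrail for auditing
                            .policy(S3_ONLY_POLICY)        // effective permissions are the intersection with the caller's
                            .durationSeconds(36 * 60 * 60) // the 36-hour maximum mentioned above
                            .build())
                    .credentials();                        // access key, secret key, session token, expiry
        }
    }
}
The computers would cache these credentials and request new ones before they expire, so rotating the long-lived key only ever has to happen in the central service.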
AFAIK you need to ensure that the application on each computer has an up-to-date access key. My recommendation is to store the access key in a centralized place from which the application will retrieve it. That way, once you rotate the key and update the centralized storage, the change will be reflected in all your application instances.
The AWS Java SDKs use a credential chain. The credential chain just means the SDK will look for credentials in a series of well-known places, in this order:
Java system properties–aws.accessKeyId and aws.secretAccessKey. The AWS SDK for Java uses the SystemPropertyCredentialsProvider to load these credentials.
Environment variables–AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. The AWS SDK for Java uses the EnvironmentVariableCredentialsProvider class to load these credentials.
The default credential profiles file– The specific location of this file can vary per platform, but is typically located at ~/.aws/credentials. This file is shared by many of the AWS SDKs and by the AWS CLI. The AWS SDK for Java uses the ProfileCredentialsProvider to load these credentials.
You can create a credentials file by using the aws configure command provided by the AWS CLI. You can also create it by editing the file with a text editor. For information about the credentials file format, see AWS Credentials File Format.
Amazon ECS container credentials– This is loaded from Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set. The AWS SDK for Java uses the ContainerCredentialsProvider to load these credentials.
Instance profile credentials– This is used on Amazon EC2 instances, and delivered through the Amazon EC2 metadata service. The AWS SDK for Java uses the InstanceProfileCredentialsProvider to load these credentials.
https://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/credentials.html
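For example (a sketch against the AWS SDK for Java v2), you can rely on the chain simply by not specifying credentials explicitly, or make it explicit with DefaultCredentialsProvider:
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
// Supply the keys through any link in the chain, e.g. as JVM options
// (-Daws.accessKeyId=... -Daws.secretAccessKey=...), environment variables,
// a credentials file, or ECS/EC2 metadata; the code stays the same.
S3Client s3 = S3Client.builder()
        .region(Region.US_EAST_1)
        .credentialsProvider(DefaultCredentialsProvider.create()) // walks the chain above
        .build();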

AWS API Credentials in application.conf file - Safe enough?

I am wondering if it's safe enough to put the AWS API credentials (or credentials in general) in a .conf file? I don't think so, and want to ask if there is any easy-to-use approach to encrypt and decrypt the credentials. I am using Java with the Eclipse IDE. Does anyone have a hint for a newbie in this area?
What is the ".conf" file? Is it a configuration file for an application? If that's the case, then no, it's not safe enough.
Amazon has a document describing best practices for access keys. Their recommendation for applications is to attach a role to whatever is running the application (EC2 / ECS / Lambda).
The biggest benefit of using a role is that the credentials it provides are temporary: they have a maximum lifetime of 12 hours, and a default lifetime of 1 hour. So if someone manages to extract them from your server, they won't be able to do long-term damage (unlike the "permanent" access keys associated with users).
While that helps, you still need to restrict the scope of those credentials (ie, don't use wildcards in their permission policies). And you should be monitoring your deployment for invalid credential use (ie, an API call that is made from somewhere outside of your VPC).
If you're talking about credentials for development use, AWS already has a place to store those. However, you don't need to store credentials anywhere if you use AWS Single Sign-On, which gives you limited-lifetime credentials for CLI/SDK use.
And if you're talking about credentials for a mobile application, look into Cognito and AWS Amplify.

S3 Bucket Signed URLs to grant access to pictures

I'm having trouble figuring out how to make user-uploaded pictures viewable only by the friends of those users.
So what I've come up with so far is:
Create a DynamoDB table for each user, with a dynamic list of friends/new friends added.
Generate a Signed URL for every user-uploaded picture.
Allow every friend listed in the DynamoDB table access to the Signed URL to view said picture(s).
Does this sound correct? Also, would I technically have just one bucket for ALL user uploaded pictures? Something about my design sounds off...
Can anyone give me a quick tutorial on how to accomplish this via Java?
There are two basic approaches:
Permissions in Amazon S3, or
Application-controlled access to objects in Amazon S3
Permissions in Amazon S3
You can provide credentials (either via IAM or Amazon Cognito) that allow users to access a particular path within an Amazon S3 bucket. For example, each user could have their own path within the bucket.
Your application would generate URLs that include signatures that identify them as that particular user and Amazon S3 would grant access to the objects.
One benefit of this approach is that you could provide the AWS credentials to the users and they could interact directly with AWS, such as using the AWS Command-Line Interface (CLI) to upload/download files without having to always go via your application.
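For example, an IAM policy along these lines (the bucket name is hypothetical; ${aws:username} is an IAM policy variable that AWS resolves at request time, and Cognito identities would use a different variable) confines each user to their own prefix:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-pictures-bucket/${aws:username}/*"
    }
  ]
}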
Application-controlled access to objects in Amazon S3
In this scenario, users have no permissions within Amazon S3. Instead, each time your application wishes to generate a URL to an object in S3 (eg in an <img> tag), you create a pre-signed URL. This will grant access to the object for a limited time. It only takes a couple of lines of code and can be done within the application, without communicating with AWS to generate the URL.
There is no need to store pre-signed URLs. They are generated on-the-fly.
The benefit of this approach is that your application has full control over which objects they can access. Friends could share pictures with other users and the application would grant access, whereas the first method only grants access to objects within the user's specific path.
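A minimal sketch of generating such a URL with the AWS SDK for Java v2 (bucket and key are placeholders; your application would first check the friend list in DynamoDB and only call this for permitted viewers):
import java.net.URL;
import java.time.Duration;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
import software.amazon.awssdk.services.s3.presigner.model.PresignedGetObjectRequest;
public class PictureUrlGenerator {
    public static URL presignedPictureUrl(String bucket, String key) {
        try (S3Presigner presigner = S3Presigner.create()) {
            GetObjectRequest getObject = GetObjectRequest.builder()
                    .bucket(bucket)
                    .key(key)
                    .build();
            GetObjectPresignRequest presignRequest = GetObjectPresignRequest.builder()
                    .signatureDuration(Duration.ofMinutes(15)) // the URL stops working after 15 minutes
                    .getObjectRequest(getObject)
                    .build();
            PresignedGetObjectRequest presigned = presigner.presignGetObject(presignRequest);
            return presigned.url();
        }
    }
}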

Google Cloud Storage with a service account in Java - 403 Caller does not have storage.objects.list access to bucket

We want to download files from Google Storage in our application server. It is important to have read-only restricted access to a single bucket and nothing else.
At first I used a regular user account (not a service account) which has permissions to access all buckets in our Google Cloud project, and everything worked fine - my Java code opened buckets and downloaded files without problems.
Storage storage = StorageOptions.getDefaultInstance().getService();
Bucket b = storage.get( "mybucketname" );
Then I wanted to switch to use a specially created service account which has access to a single bucket only. So I created a service account, gave permissions to read a single bucket, and downloaded its key file. The permissions in Google Cloud Console are named as:
Storage Object Viewer (3 members) Read access to GCS objects.
The gsutil command-line utility works fine with this account - from the command line it allows access to this bucket but not the others.
The initialization from the command line is done using the following command:
gcloud --project myprojectname auth activate-service-account files-viewer2@myprojectname.iam.gserviceaccount.com --key-file=/.../keyfilename.json
I even tried two different service accounts which have access to different buckets; from the command line I can switch between them, and gsutil gives access to the relevant bucket only, returning this error for any other:
"AccessDeniedException: 403 Caller does not have storage.objects.list access to bucket xxxxxxxxxx."
So, from the command line everything worked fine.
But in Java there is some problem with the authentication.
The default authentication I previously used with a regular user account stopped working - it reports the error:
com.google.cloud.storage.StorageException: Anonymous users does not have storage.buckets.get access to bucket xxxxxxxxxx.
Then I've tried the following code (this is the simplest variant because it relies on the key json file, but I've already tried a number of other variants found in various forums, with no success):
FileInputStream fis = new FileInputStream( "/path/to/the/key-file.json" );
ServiceAccountCredentials credentials = ServiceAccountCredentials.fromStream( fis );
Storage storage = StorageOptions.newBuilder().setCredentials( credentials )
.setProjectId( "myprojectid" ).build().getService();
Bucket b = storage.get( "mybucketname" );
And all I receive is this error:
com.google.cloud.storage.StorageException: Caller does not have storage.buckets.get access to bucket mybucketname.
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
The same error is returned no matter which bucket I'm trying to access (even non-existing ones).
What confuses me is that the same service account, initialized with the same JSON key file, works fine from the command line.
So I think something is missing in Java code that ensures correct authentication.
TL;DR - If you're using Application Default Credentials (which BTW you are when you do StorageOptions.getDefaultInstance().getService();), and if you need to use the credentials from a service account, you can do so without changing your code. All you need to do is set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the full path of your service account json file and you are all set.
Longer version of the solution using Application Default Credentials
Use your original code as-is
Storage storage = StorageOptions.getDefaultInstance().getService();
Bucket b = storage.get( "mybucketname" );
Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the full path of your json file containing the service account credentials.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service_account_credentials.json
Run your java application once again to verify that it is working as expected.
Alternate solution using hard-coded Service Account Credentials
The code example you posted for initializing ServiceAccountCredentials looks valid to me on a quick glance. I tried the following code snippet and it is working for me as expected.
String SERVICE_ACCOUNT_JSON_PATH = "/path/to/service_account_credentials.json";
Storage storage =
    StorageOptions.newBuilder()
        .setCredentials(
            ServiceAccountCredentials.fromStream(
                new FileInputStream(SERVICE_ACCOUNT_JSON_PATH)))
        .build()
        .getService();
Bucket b = storage.get("mybucketname");
When specifying a service account credential, the project ID is automatically picked up from the information present in the json file. So you do not have to specify it once again. I'm not entirely sure though if this is related to the issue you're observing.
Application Default Credentials
Here is the full documentation regarding Application Default Credentials explaining which credentials are picked up based on your environment.
How the Application Default Credentials work
You can get Application Default Credentials by making a single client library call. The credentials returned are determined by the environment the code is running in. Conditions are checked in the following order:
The environment variable GOOGLE_APPLICATION_CREDENTIALS is checked. If this variable is specified, it should point to a file that defines the credentials. The simplest way to get a credential for this purpose is to create a Service account key in the Google API Console:
a. Go to the API Console Credentials page.
b. From the project drop-down, select your project.
c. On the Credentials page, select the Create credentials drop-down, then select Service account key.
d. From the Service account drop-down, select an existing service account or create a new one.
e. For Key type, select the JSON key option, then select Create. The file automatically downloads to your computer.
f. Put the *.json file you just downloaded in a directory of your choosing. This directory must be private (you can't let anyone get access to this), but accessible to your web server code.
g. Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the JSON file downloaded.
If you have installed the Google Cloud SDK on your machine and have run the command gcloud auth application-default login, your identity can be used as a proxy to test code calling APIs from that machine.
If you are running in Google App Engine production, the built-in service account associated with the application will be used.
If you are running in Google Compute Engine production, the built-in service account associated with the virtual machine instance will be used.
If none of these conditions is true, an error will occur.
IAM roles
I would recommend going over the IAM permissions and the IAM roles available for Cloud Storage. These provide control at project and bucket level. In addition, you can use ACLs to control permissions at the object level within the bucket.
If your use case involves just invoking storage.get(bucketName), that operation requires only the storage.buckets.get permission, and the best IAM role for just this permission is roles/storage.legacyBucketReader.
If you also want to grant the service account permissions to get (storage.objects.get) and list (storage.objects.list) individual objects, then also add the role roles/storage.objectViewer to the service account.
Thanks to @Taxdude's long explanation, I understood that my Java code should be all right, and started looking at other possible reasons for the problem.
One of the additional things I tried was the permissions set on the service account, and there I found the solution – it was unexpected, actually.
When a service account is created, it must not be given project-level permissions to read from Google Storage, because then it will have read permissions to ALL buckets, and it is impossible to change that (not sure why), because the system marks these permissions as "inherited".
Therefore, you have to:
Create a "blank" service account with no permissions, and
Configure permissions from the bucket configuration
To do so:
Open Google Cloud Web console
Open Storage Browser
Select your bucket
Open the INFO PANEL with Permissions
Add the service account with the Storage Object Viewer permission, but there are also permissions named Storage Legacy Object Reader and Storage Legacy Bucket Reader
Because of the word "Legacy" I thought those should not be used – they look like something kept for backward compatibility. And after experimenting and adding these "legacy" permissions, all of a sudden the same code I was trying all the time started working properly.
I'm still not entirely sure what the minimal set of permissions to assign to a service account is, but at least now it works with all three "read" permissions on the bucket – two "legacy" and one "normal".

Consistent user data across directories in Stormpath

I have an application that I am securing using Stormpath. So far the basic registration/login process works great.
I have now added Social authentication, but I'm running into a problem. The way it's configured right now, it will allow two separate accounts to be created with the same email address. I would like to have the email as my primary key for the user.
Is there a way I can have Stormpath "merge" these accounts so they are treated as one account just with multiple ways to authenticate?
Yep! Stormpath has a feature where we can automatically link accounts between directories.
You can use the Stormpath Cloud Directory as the "master" and the social directories as "mirrors" that feed into the cloud directory. This allows you to use the cloud directory as the source of truth.
See this post for more info on how this feature works: https://stormpath.com/blog/unify-social-accounts-account-linking
