AmazonS3ClientBuilder.defaultClient() fails to account for region? - java

The Amazon Java SDK has deprecated the constructors for AmazonS3Client in favor of AmazonS3ClientBuilder.defaultClient(). Following that recommendation, however, does not produce an AmazonS3 client that behaves the same way. In particular, the builder-created client somehow fails to account for the region. If you run the tests below, the thisFails test demonstrates the problem.
import static org.junit.Assert.assertNotNull;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.S3ClientOptions;
import org.junit.Test;

public class S3HelperTest {

    @Test
    public void thisWorks() throws Exception {
        AmazonS3 s3Client = new AmazonS3Client(); // this call is deprecated
        s3Client.setS3ClientOptions(S3ClientOptions.builder().setPathStyleAccess(true).build());
        assertNotNull(s3Client);
    }

    @Test
    public void thisFails() throws Exception {
        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
        /*
         * The following line throws com.amazonaws.SdkClientException:
         * Unable to find a region via the region provider chain. Must provide an explicit region in the builder
         * or setup environment to supply a region.
         */
        s3Client.setS3ClientOptions(S3ClientOptions.builder().setPathStyleAccess(true).build());
    }
}
com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:371)
at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:337)
at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46)
at com.amazonaws.services.s3.AmazonS3ClientBuilder.defaultClient(AmazonS3ClientBuilder.java:54)
at com.climate.tenderfoot.service.S3HelperTest.thisFails(S3HelperTest.java:21)
...
Is this an AWS SDK bug? Is there some "region default provider chain" or some mechanism to derive the region from the environment and set it on the client? It seems really weak that the replacement for the deprecated method doesn't result in the same capability.

Looks like a region is required for the builder.
This thread is probably related (though I would use .withRegion(Regions.US_EAST_1) in the third line):
To emulate the previous behavior (no region configured), you'll need
to also enable "forced global bucket access" in the client builder:
AmazonS3 client = AmazonS3ClientBuilder.standard()
    .withRegion("us-east-1") // The first region to try your request against
    .withForceGlobalBucketAccess(true) // If a bucket is in a different region, try again in the correct region
    .build();
This will suppress the exception you received and automatically retry
the request under the region in the exception. It is made explicit in
the builder so you are aware of this cross-region behavior. Note: The
SDK will cache the bucket region after the first failure, so that
every request against this bucket doesn't have to happen twice.
Also, per the AWS documentation, if you want to use AmazonS3ClientBuilder.defaultClient(), then you need to have ~/.aws/credentials and ~/.aws/config files.
~/.aws/credentials contents:
[default]
aws_access_key_id = your_id
aws_secret_access_key = your_key
~/.aws/config contents:
[default]
region = us-west-1
From the same AWS documentation page, if you don't want to hard-code the region/credentials, you can provide them as environment variables on your Linux machine in the usual way:
export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
export AWS_REGION=your_aws_region
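As a minimal sketch, assuming the region (and credentials) are supplied through one of the sources above, defaultClient() then builds without the exception; the explicit builder form is shown for comparison and does not depend on the environment at all:
// works only if a region is discoverable via ~/.aws/config, AWS_REGION, etc.
AmazonS3 s3FromEnvironment = AmazonS3ClientBuilder.defaultClient();

// explicit alternative that does not rely on the region provider chain
AmazonS3 s3Explicit = AmazonS3ClientBuilder.standard()
    .withRegion(Regions.US_WEST_1)
    .build();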

BasicAWSCredentials creds = new BasicAWSCredentials("key_ID", "Access_Key");
AWSStaticCredentialsProvider provider = new AWSStaticCredentialsProvider(creds);
AmazonSQS sqs = AmazonSQSClientBuilder.standard()
    .withCredentials(provider)
    .withRegion(Regions.US_EAST_2)
    .build();

Create a file named "config" under ~/.aws and place the content below in it.
~/.aws/config contents:
[default]
region = us-west-1
output = json

AmazonSQS sqsClient = AmazonSQSClientBuilder
    .standard()
    .withRegion(Regions.AP_SOUTH_1)
    .build();


How to configure AWS DynamoDB Camel component

I am trying to do a POC of accessing DynamoDB from an Apache Camel application. Obviously DynamoDB will run in AWS, but for development purposes we have it running locally as a Docker container.
It was very easy to create a DynamoDB table locally and put some items in there manually. For this I used the IntelliJ DynamoDB console, and all I had to provide was a custom endpoint, http://localhost:8000, and the default credential provider chain.
Now at certain times of the day I would like to trigger a job that will scan the DynamoDB items and perform some actions on the returned data.
from("cron:myCron?schedule=0 */5 * * * *")
.log("Running myCron scheduler")
.setHeader(Ddb2Constants.OPERATION, () -> Ddb2Operations.Scan)
.to("aws2-ddb:myTable")
.log("Performing some work on items");
When I try to run my application, it fails to start, complaining that the security token is expired, which makes me think it is trying to go to AWS rather than accessing the local instance. I was unable to find anything about how to set this up. The Camel DynamoDB component (https://camel.apache.org/components/3.15.x/aws2-ddb-component.html) mentions being able to configure the component with a DynamoDbClient, but this is an interface, and its implementation, DefaultDynamoDbClient, is not public, and neither is DefaultDynamoDbClientBuilder.
Assuming that you use Spring Boot as the Camel runtime, the simplest way in your case is to configure the DynamoDbClient used by Camel via options set in application.properties, as follows:
# The value of the access key used by the component aws2-ddb
camel.component.aws2-ddb.accessKey=test
# The value of the secret key used by the component aws2-ddb
camel.component.aws2-ddb.secretKey=test
# The value of the region used by the component aws2-ddb
camel.component.aws2-ddb.region=us-east-1
# Indicates that the component aws2-ddb should use the new URI endpoint
camel.component.aws2-ddb.override-endpoint=true
# The value of the URI endpoint used by the component aws2-ddb
camel.component.aws2-ddb.uri-endpoint-override=http://localhost:8000
For more details please refer to the documentation of those options:
camel.component.aws2-ddb.accessKey
camel.component.aws2-ddb.secretKey
camel.component.aws2-ddb.region
camel.component.aws2-ddb.override-endpoint
camel.component.aws2-ddb.uri-endpoint-override
For other runtimes, it can be configured programmatically as follows:
Ddb2Component ddb2Component = new Ddb2Component(context);
String accessKey = "test";
String secretKey = "test";
String region = "us-east-1";
String endpoint = "http://localhost:8000";
ddb2Component.getConfiguration().setAmazonDDBClient(
    DynamoDbClient.builder()
        .endpointOverride(URI.create(endpoint))
        .credentialsProvider(
            StaticCredentialsProvider.create(
                AwsBasicCredentials.create(accessKey, secretKey)
            )
        )
        .region(Region.of(region))
        .build()
);
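A small usage sketch, assuming the standard CamelContext API and the aws2-ddb scheme used in the route above:
// register the pre-configured component so that "aws2-ddb:myTable" resolves to it
context.addComponent("aws2-ddb", ddb2Component);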

Add trigger to AWS Lambda Function using Java SDK for s3

How can I add a new trigger to an existing AWS Lambda function using the Java SDK?
I would like to add an S3 trigger.
I have a program which converts an image from one format to another.
I have two buckets; when I add a source image to the first, I want to get the result in the second.
Any examples will be appreciated.
Thanks.
Trigger like this:
I tried to do it, but it doesn't work:
final AWSLambda client = AWSLambdaClientBuilder.standard()
    .withCredentials(credentials)
    .build();
client.listFunctions().getFunctions()
    .stream()
    .filter(f -> f.getFunctionName().equals(FUNCTION_NAME))
    .findFirst()
    .ifPresent(lambda -> {
        final AddPermissionRequest addPermissionRequest = new AddPermissionRequest();
        addPermissionRequest.setStatementId("s3triggerId");
        addPermissionRequest.withSourceArn("arn:aws:s3:::" + INPUT_BUCKET_NAME);
        addPermissionRequest.setAction("lambda:InvokeFunction");
        addPermissionRequest.setPrincipal("events.amazonaws.com");
        addPermissionRequest.setFunctionName(lambda.getFunctionName());
        AddPermissionResult addPermissionResult = client.addPermission(addPermissionRequest);
        System.out.println("Trigger was added to lambda " + addPermissionResult.getStatement());
    });
For AWS Java SDK v2:
You can add a trigger by adding a notification configuration such as:
PutBucketNotificationConfiguration
You can see your current configuration via:
GetBucketNotificationConfiguration
And check the other operations at: https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html
1. Init the S3Client with a region and credentials provider (in my case, Region.US_WEST_2 and ProfileCredentialsProvider respectively).
2. Choose the method (the type of action's configuration) from the S3Client for your action (in my case putBucketNotificationConfiguration).
3. Build the request for your notification configuration with the bucket name and the notification configuration.
4. Build the notification configuration (types: topicConfiguration (SNS), queueConfiguration (SQS), lambdaFunctionConfiguration (Lambda)); in my case lambdaFunctionConfiguration.
5. Build the lambdaFunctionConfiguration with the ARN and the events that will trigger your lambda function (in my case, "arn:aws:lambda:us-west-2:12345678912:function:your-lambda" and Event.S3_OBJECT_CREATED_PUT; I assign one event, but you can add more).
Also read: https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
For this example:
S3Client s3Client = S3Client.builder()
    .region(Region.US_WEST_2)
    .credentialsProvider(ProfileCredentialsProvider.create())
    .build();
s3Client.putBucketNotificationConfiguration(PutBucketNotificationConfigurationRequest.builder()
    .bucket(BUCKET_NAME)
    .notificationConfiguration(NotificationConfiguration.builder()
        .lambdaFunctionConfigurations(LambdaFunctionConfiguration.builder()
            .lambdaFunctionArn("arn:aws:lambda:us-west-2:12345678912:function:your-lambda")
            .events(Event.S3_OBJECT_CREATED_PUT)
            .build())
        .build())
    .build());
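As a hedged follow-up sketch, you can verify what was applied by reading the configuration back with getBucketNotificationConfiguration (same s3Client and BUCKET_NAME as above):
GetBucketNotificationConfigurationResponse current =
    s3Client.getBucketNotificationConfiguration(GetBucketNotificationConfigurationRequest.builder()
        .bucket(BUCKET_NAME)
        .build());
// print the Lambda configurations currently attached to the bucket
current.lambdaFunctionConfigurations()
    .forEach(cfg -> System.out.println(cfg.lambdaFunctionArn() + " -> " + cfg.events()));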
You can do it either in the console or via SAM.

AmazonClientException: Unable To Load Credentials from any Provider in the Chain

My Mule application writes JSON records to a Kinesis stream. I use the KPL producer library. When run locally, it picks up AWS credentials from .aws/credentials and writes records to Kinesis successfully.
However, when I deploy my application to CloudHub, it throws AmazonClientException, obviously due to not having access to any of the directories that the DefaultAWSCredentialsProviderChain class supports (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).
This is how I attach credentials; locally they are picked up from .aws/credentials:
config.setCredentialsProvider(new DefaultAWSCredentialsProviderChain());
I couldn't figure out a way to provide credentials explicitly using a my-app.properties file.
Then I tried to create a separate configuration class with getters/setters, set the access key and secret key as private fields, and then implement a getter:
public AWSCredentialsProvider getCredentials() {
    if (accessKey == null || secretKey == null) {
        return new DefaultAWSCredentialsProviderChain();
    }
    return new StaticCredentialsProvider(new BasicAWSCredentials(getAccessKey(), getSecretKey()));
}
This was intended to be used instead of the DefaultAWSCredentialsProviderChain class, this way:
config.setCredentialsProvider(new AWSConfig().getCredentials());
Still throws the same error when deployed.
The following repo states that it is possible to provide explicit credentials. I need help figuring out how, because I can't find proper documentation or an example.
https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer-sample/src/com/amazonaws/services/kinesis/producer/sample/SampleProducer.java
I have faced the same issue, so I got this solution. I hope this will work for you also.
#Value("${s3_accessKey}")
private String s3_accessKey;
#Value("${s3_secretKey}")
private String s3_secretKey;
//this above value I am taking from Application.properties file
BasicAWSCredentials creds = new BasicAWSCredentials(s3_accessKey, s3_secretKey);
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
    .withCredentials(new AWSStaticCredentialsProvider(creds))
    .withRegion(Regions.US_EAST_2)
    .build();
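Since the original question is about the KPL, here is a hedged sketch of the same approach applied to KinesisProducerConfiguration (assuming the key and secret come from your properties file as above, and the region string is just an example):
// sketch only: apply explicit static credentials to the KPL configuration
BasicAWSCredentials kplCreds = new BasicAWSCredentials(s3_accessKey, s3_secretKey);
KinesisProducerConfiguration kplConfig = new KinesisProducerConfiguration()
    .setCredentialsProvider(new AWSStaticCredentialsProvider(kplCreds))
    .setRegion("us-east-1"); // example region
KinesisProducer producer = new KinesisProducer(kplConfig);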

How to enable force global bucket access in AWS S3 SDK Java 2.0?

Here is a link to the documentation for the Java S3 SDK version 1. Does version 2.0 have something similar, or has that option been removed?
Yes! It is possible in AWS SDK v2 to execute S3 operations on regions other than the one configured in the client.
In order to do this, set useArnRegionEnabled to true on the client.
An example of this using Scala is:
val s3Configuration = S3Configuration.builder.useArnRegionEnabled(true).build
val client = S3Client
  .builder
  .credentialsProvider({$foo})
  .region(Region.EU_WEST_1)
  .overrideConfiguration({$foo})
  .serviceConfiguration(s3Configuration)
  .build
Here is the documentation: https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Configuration.Builder.html#useArnRegionEnabled-java.lang.Boolean-
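For reference, a hedged Java sketch of the same configuration (the credentials and override configuration placeholders from the Scala snippet are omitted; add them as needed):
S3Configuration s3Configuration = S3Configuration.builder()
    .useArnRegionEnabled(true)
    .build();
S3Client client = S3Client.builder()
    .region(Region.EU_WEST_1)
    .serviceConfiguration(s3Configuration)
    // add .credentialsProvider(...) / .overrideConfiguration(...) here as needed
    .build();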
Not supported per here
In version 1.x, services such as Amazon S3, Amazon SNS, and Amazon SQS allowed access to resources across Region boundaries. This is no longer allowed in version 2.x using the same client. If you need to access a resource in a different region, you must create a client in that region and retrieve the resource using the appropriate client.
This works for me when using Java AWS SDK 2.16.98, and it only requires the name of the bucket rather than the full ARN.
private S3Client bucketSpecificClient;
private String bucketName = "my-bucket-in-some-region";

// this client seems to be able to look up the location of buckets from any region
private S3Client defaultClient = S3Client.builder()
    .endpointOverride(URI.create("https://s3.us-east-1.amazonaws.com"))
    .region(Region.US_EAST_1)
    .build();

public S3Client getClient() {
    if (bucketSpecificClient == null) {
        String bucketLocation = defaultClient.getBucketLocation(builder -> builder.bucket(this.bucketName)).locationConstraintAsString();
        Region region = bucketLocation.trim().equals("") ? Region.US_EAST_1 : Region.of(bucketLocation);
        bucketSpecificClient = S3Client.builder().region(region).build();
    }
    return bucketSpecificClient;
}
Now you can use bucketSpecificClient to perform operations on objects in the bucket my-bucket-in-some-region

Setting AWS region programmatically for SQS

I just started working with the AWS SDK for Java and .NET.
Currently I am creating an AWS SQS queue. I was able to create a queue, list the existing queues, and talk to the queues with the .NET SDK.
When I tried the same with Java, I got the following error.
Unable to find a region via the region provider chain. Must provide an
explicit region in the builder or setup environment to supply a
region.
I have set all the necessary access keys, region, and credentials in the AWS preferences in Eclipse.
This is how I am initializing the SQS client in a Java Maven project:
AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
I have googled and found that there is a method called withRegion() for S3 where I can specify the region, but it is not there for SQS.
I also tried setting the region as:
sqs.setRegion(Region.AP_Mumbai);
This throws the following exception:
The method setRegion(com.amazonaws.regions.Region) in the type
AmazonSQS is not applicable for the arguments
(com.amazonaws.services.s3.model.Region)
I tried setting the same using com.amazonaws.regions.Region, but there is no provision as such.
Please suggest.
I set up the AWS SQS client this way:
BasicAWSCredentials bAWSc = new BasicAWSCredentials(accessKey, secretKey);
return AmazonSQSClientBuilder.standard()
    .withRegion(region)
    .withCredentials(new AWSStaticCredentialsProvider(bAWSc))
    .build();
Based on what @Francesco put, I created a more intuitive version:
BasicAWSCredentials bAWSc = new BasicAWSCredentials(accessKey, secretKey);
final AmazonSQS sqs = AmazonSQSClientBuilder.standard()
    .withRegion(Regions.US_EAST_1)
    .withCredentials(new AWSStaticCredentialsProvider(bAWSc))
    .build();
