I would like to create a bucket in Ceph object storage via the S3 API. This works fine if I use Python's boto3:
import boto3

s3 = boto3.resource(
    's3',
    endpoint_url='https://my.non-amazon-endpoint.com',
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key
)
bucket = s3.create_bucket(Bucket="my-bucket")  # successfully creates the bucket
Trying the same with Java leads to an exception:
BasicAWSCredentials awsCreds = new BasicAWSCredentials(access_key, secret_key);

AwsClientBuilder.EndpointConfiguration config =
        new AwsClientBuilder.EndpointConfiguration(
                "https://my.non-amazon-endpoint.com",
                "MyRegion");

AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .withEndpointConfiguration(config)
        .build();

List<Bucket> buckets = s3Client.listBuckets();
// this works and lists all containers, hence the connection should be fine
for (Bucket bucket : buckets) {
    System.out.println(bucket.getName() + "\t" +
            StringUtils.fromDate(bucket.getCreationDate()));
}

Bucket bucket = s3Client.createBucket("my-bucket");
// AmazonS3Exception: The specified location-constraint is not valid (Service: Amazon S3; Status Code: 400; Error Code: InvalidLocationConstraint...
I am aware of several related issues, for instance this issue, but I was not able to adapt the suggested solutions to my non-Amazon storage.
Digging deeper into the boto3 code, it turns out that the LocationConstraint is set to None if no region has been specified. Omitting the region in Java, however, still leads to the InvalidLocationConstraint error.
How do I have to configure the endpoint with the Java AWS S3 SDK to successfully create buckets?
Kind regards
UPDATE
Setting the signingRegion to "us-east-1" enables bucket creation functionality:
AwsClientBuilder.EndpointConfiguration config =
        new AwsClientBuilder.EndpointConfiguration(
                "https://my.non-amazon.endpoint.com",
                "us-east-1");
If one assigns another region, the SDK will parse the region from the endpoint URL, as specified here.
In my case, this leads to an invalid region, for instance non-amazon.
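Putting it together, a minimal sketch of the working configuration (the path-style flag is an assumption that is often required for Ceph RGW and other non-AWS endpoints; drop it if your gateway supports virtual-hosted-style requests):

BasicAWSCredentials awsCreds = new BasicAWSCredentials(access_key, secret_key);

// "us-east-1" is used only as the signing region; it keeps the SDK from deriving
// a bogus region (e.g. "non-amazon") from the endpoint host name.
AwsClientBuilder.EndpointConfiguration config =
        new AwsClientBuilder.EndpointConfiguration(
                "https://my.non-amazon-endpoint.com",
                "us-east-1");

AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .withEndpointConfiguration(config)
        .withPathStyleAccessEnabled(true)   // assumption, see note above
        .build();

Bucket bucket = s3Client.createBucket("my-bucket");   // now succeeds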
Related
I am trying to upload a file to an AWS S3 Bucket using the AWS SDK 2.0 for Java, but I am getting an error when trying to do so.
software.amazon.awssdk.services.s3.model.S3Exception: The request signature we calculated does not match the signature you provided. Check your key and signing method.
I am not sure what I am missing. I have tried adding in a key, but I am not even sure what I need to put in there; I think it is just a name to refer to what has been uploaded, though.
private S3Client s3Client;

private void upload() {
    setUpS3Client();
    PutObjectRequest putObjectRequest = PutObjectRequest.builder()
            .bucket(bucketName) //name of the bucket i am trying to upload to
            .key("testing") //No idea what goes in here.
            .build();
    byte[] objectByteArray = getObjectByteArray(bucketRequest.getPathToFile()); //bucketRequest just holds the data that will be sent
    PutObjectResponse putObjectResponse = s3Client.putObject(putObjectRequest, RequestBody.fromBytes(objectByteArray));
}

private void setUpS3Client() {
    Region region = Region.AF_SOUTH_1;
    s3Client = S3Client.builder()
            .region(region)
            .credentialsProvider(createStaticCredentialsProvider())
            .build();
    this.s3Client = s3Client;
}
Does anyone know what this error is referring to and what I need to change to get the file to upload? Any help will be appreciated.
This Java example works fine; I have run it successfully.
You state:
key("testing") //No idea what goes in here.
.build();
The key is the name under which the object will be stored in the bucket; for example, book.pdf to upload a PDF file. All input to the Java AWS SDK V2 examples is documented at the start of the main method.
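For instance, a small sketch of uploading a local PDF under the key book.pdf (the bucket name and file path are placeholders):

// Hypothetical bucket name and file path; the key is simply the object's name in the bucket.
PutObjectRequest request = PutObjectRequest.builder()
        .bucket("my-bucket")
        .key("book.pdf")
        .build();

s3Client.putObject(request, RequestBody.fromFile(Paths.get("/path/to/book.pdf")));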
Now for your problem - make sure you have the required dependencies in the POM file. Use the POM file located here:
https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2/example_code/s3
Also, at the start of every program there is a link to the Java V2 Developer Guide that covers setting up your development environment, including your credentials.
The problem was my credentials. The secret access key I provided was incorrect. :(
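For anyone hitting the same signature error, one way to fail fast on bad credentials is a quick STS GetCallerIdentity call, which rejects an invalid key pair immediately. A sketch, assuming SDK v2, the same region as above, and the sts artifact on the classpath:

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sts.StsClient;
import software.amazon.awssdk.services.sts.model.GetCallerIdentityResponse;

// Fails with an authentication error right away if the access key or secret key is wrong.
// Add .credentialsProvider(...) to test a specific key pair rather than the default chain.
try (StsClient sts = StsClient.builder().region(Region.AF_SOUTH_1).build()) {
    GetCallerIdentityResponse identity = sts.getCallerIdentity();
    System.out.println("Authenticated as: " + identity.arn());
}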
I am generating a pre-signed URL and then uploading the file through that URL.
The issue is that even if I enter the wrong access key or secret key I still get a pre-signed URL, but if I try to upload using that URL I get a 400 error:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AuthorizationQueryParametersError</Code>
<Message>Query-string authentication version 4 requires the X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, X-Amz-Date, X-Amz-SignedHeaders, and X-Amz-Expires parameters.</Message>
<RequestId>{requestId}</RequestId>
<HostId>{hostId}</HostId>
</Error>
Is there some way to get the error while generating the pre-signed URL, so that I don't have to attempt the upload?
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("accessKey", "secretKey")))
        .withRegion(clientRegion)
        .build();

GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(bucketName, objectKey)
        .withMethod(HttpMethod.PUT)
        .withExpiration(expiration);

URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);
Generating a pre-signed URL doesn't require an API call; it is generated locally by the SDK using the specified access key and secret.
The generated URL will be validated by S3 when the request is received, and will obviously only be accepted when valid credentials were used for generating it.
Bottom line: in order to validate your credentials you need to make an API request that actually performs a call to AWS. This can be pretty much any other method on your s3Client.
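A minimal sketch of that idea against the client from the question (the particular call is just an example; any inexpensive authenticated request would do):

// Any call that actually hits AWS validates the credentials before the URL is handed out.
try {
    s3Client.getS3AccountOwner();   // cheap authenticated request
} catch (AmazonServiceException e) {
    // Bad credentials typically surface here as a 403 (InvalidAccessKeyId or SignatureDoesNotMatch).
    throw new IllegalStateException("AWS credentials appear to be invalid", e);
}

URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);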
Let's start with this:
.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("accessKey", "secretKey")))
Static credentials go against AWS best practice. Instead, rely on credentials provided via environment variables or an execution role (when running on EC2, ECS, or Lambda).
The only way that you can verify that the credentials are valid is to try them. You could write a small dummy file, however this may cause problems for anything that is supposed to read that file, due to eventual consistency on S3.
There's also the problem that the expiration that you give the URL may not correspond to the lifetime of the credentials.
The best solution to all of these problems is to create a role that has access to PUT the files on S3, and has a duration consistent with your URL expiration (note that the maximum is 12 hours), then explicitly assume that role in order to construct the request:
final String assumedRoleArn = "arn:aws:iam::123456789012:role/Example";
final String sessionName = "example";
final String bucketName = "com-example-mybucket";
final String objectKey = "myfile.txt";

final int expirationSeconds = 12 * 3600;
final Date expiresAt = new Date(System.currentTimeMillis() + expirationSeconds * 1000);

// Assume a role whose session lifetime matches the URL expiration.
AWSSecurityTokenService stsClient = AWSSecurityTokenServiceClientBuilder.defaultClient();
AWSCredentialsProvider credentialsProvider =
        new STSAssumeRoleSessionCredentialsProvider.Builder(assumedRoleArn, sessionName)
                .withStsClient(stsClient)
                .withRoleSessionDurationSeconds(expirationSeconds)
                .build();

// Sign the URL with the temporary role credentials.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withCredentials(credentialsProvider).build();
URL presignedUrl = s3Client.generatePresignedUrl(bucketName, objectKey, expiresAt, HttpMethod.PUT);
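And, for context, a sketch of how the resulting URL might be consumed with plain HttpURLConnection on the uploading side (the payload is a placeholder):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.nio.charset.StandardCharsets;

// No AWS SDK is needed by the client that uses the pre-signed URL.
HttpURLConnection connection = (HttpURLConnection) presignedUrl.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("PUT");
try (OutputStream out = connection.getOutputStream()) {
    out.write("example payload".getBytes(StandardCharsets.UTF_8));
}
int status = connection.getResponseCode();   // 200 on success, 403 if the signature is rejected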
I'm creating a very simple example with AWS Lambda and I have a problem using the Java runtime. I have to read an S3 object from a bucket of mine, and with a Node.js example like the following I have no problem:
var S3FS = require('s3fs');

exports.handler = (req, res) => {
    var s3Options = {
        region: 'eu-west-3',
        accessKeyId: 'key',
        secretAccessKey: 'secret'
    };
    var fsImpl = new S3FS('mybucket', s3Options);
    fsImpl.readFile("myfile", function (err, data) {
        if (err) throw err;
        console.log(data.toString());
    });
}
If I try a similar Java example, my function always times out (even if I increase the timeout to 1 minute):
context.getLogger().log("Before");
BasicAWSCredentials awsCreds = new BasicAWSCredentials("key", "secret");
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
.withRegion("eu-west-3")
.withCredentials(new AWSStaticCredentialsProvider(awsCreds))
.build();
context.getLogger().log("client created");
S3Object object = s3.getObject(
new GetObjectRequest(bucketName, key));
context.getLogger().log("After");
The function always blocks when creating the S3 client. I know I can avoid using the key and secret in the Lambda, but the function blocks that way too. It isn't a policy problem, because I'm testing these examples with the same Lambda configuration, so I think it's something related to the Java AWS S3 API. Any suggestions?
The Java Lambda finally works using the defaultClient() method of AmazonS3ClientBuilder instead of standard().
The difference between these two methods is how the credentials are retrieved: from the environment or passed as arguments. There is probably a misconfiguration I don't see, but in any case a clearer error would have been useful.
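A sketch of the two variants side by side (the explicit form is an assumption about what defaultClient() roughly amounts to here, relying on the Lambda execution environment for region and credentials):

// Works inside the Lambda: region and credentials come from the execution environment.
AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

// Roughly equivalent explicit form using standard():
AmazonS3 s3Explicit = AmazonS3ClientBuilder.standard()
        .withRegion("eu-west-3")
        .withCredentials(new DefaultAWSCredentialsProviderChain())
        .build();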
To get the S3 client object I am using the code below:
BasicAWSCredentials creds = new BasicAWSCredentials(key, S3secretKey);
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(creds))
        .build();
I am getting the error below:
Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
I had to change to:
AmazonS3 client = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .withForceGlobalBucketAccess(true)
        .build();
to emulate the "old" way (i.e. new AmazonS3Client() )
With a builder you need to provide your S3 bucket region using a builder method, like .withRegion(Regions.US_EAST_1).
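Alternatively, you can leave the region out of the code and let the default region provider chain supply it. A sketch, assuming you can set the AWS_REGION environment variable (or the aws.region system property, or a region entry in ~/.aws/config) where the code runs:

// export AWS_REGION=us-east-1   (set in the environment before starting the JVM)
AmazonS3 client = AmazonS3ClientBuilder.standard().build();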
One way to do it with version 1.11.98 of the SDK: in your code, you would do:
AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
And you need to have ~/.aws/credentials and ~/.aws/config files:
~/.aws/credentials contents:
[pca]
aws_access_key_id = KDDDJGIzzz3VVBXYA6Z
aws_secret_access_key = afafaoRDrJhzzzzg/Hhcccppeeddaf
[default]
aws_access_key_id = AMVKNEIzzzNEBXYJ4m
aws_secret_access_key = bU4rUwwwhhzzzzcppeeddoRDrJhogA
~/.aws/config contents:
[default]
region = us-west-1
[pca]
region = us-west-1
Make sure they're readable, and that you export a profile if you have multiple as above before starting your service:
alper$ export AWS_PROFILE="pca"
There's a good description here
I am trying to upload a file to S3. The code to do so is below:
AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
String key = String.format(Constants.KEY_NAME + "/%s/%s", activity_id, aFile.getName());
s3Client.putObject(Constants.BUCKET_NAME, key, aFile.getInputStream(), new ObjectMetadata());
The problem I am having is that my ProfileCredentialsProvider cannot access my AWS keys. I have set my environment variables:
AWS_ACCESS_KEY=keys go here
AWS_SECRET_KEY=keys go here
AWS_ACCESS_KEY_ID=keys go here
AWS_DEFAULT_REGION=us-east-1
AWS_SECRET_ACCESS_KEY=keys go here
And as per Amazon's documentation, the environment variables take precedence over any configuration files. This leads me to ask: why are my keys not being picked up from my environment variables?
Figured it out.
If you specify a ProfileCredentialsProvider(), the AWS SDK will look only for a profile configuration file, regardless of the usual precedence. Simply creating an S3 client like this:
AmazonS3 s3Client = new AmazonS3Client();
will check the various locations in the default credential provider chain, which includes the environment variables.
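If the goal is to force the SDK to read only the environment variables (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY), a sketch that makes that explicit (the region matches AWS_DEFAULT_REGION from the question):

// Uses only the environment variables for credentials, skipping profile files entirely.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .withCredentials(new EnvironmentVariableCredentialsProvider())
        .build();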