AWS Lambda with Java unable to GET a file from S3

I'm creating a very simple example with AWS Lambda, and I'm having a problem with the Java runtime. I have to read an S3 object from a bucket of mine, and with a Node.js example like the following I have no problem:
var S3FS = require('s3fs');
exports.handler = (req, res) => {
    var s3Options = {
        region: 'eu-west-3',
        accessKeyId: 'key',
        secretAccessKey: 'secret'
    };
    var fsImpl = new S3FS('mybucket', s3Options);
    fsImpl.readFile("myfile", function (err, data) {
        if (err) throw err;
        console.log(data.toString());
    });
}
If I try a similar Java example, my function always times out (even if I increase the timeout to 1 minute):
context.getLogger().log("Before");
BasicAWSCredentials awsCreds = new BasicAWSCredentials("key", "secret");
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
.withRegion("eu-west-3")
.withCredentials(new AWSStaticCredentialsProvider(awsCreds))
.build();
context.getLogger().log("client created");
S3Object object = s3.getObject(
new GetObjectRequest(bucketName, key));
context.getLogger().log("After");
The function always blocks when creating the S3 client. I know I can avoid putting the key and secret in the Lambda, but the function blocks that way too. It isn't a policy problem, because I'm testing these examples from the same Lambda configuration, so I think it's something related to the Java AWS S3 API. Any suggestions?

The Java Lambda finally works using the defaultClient() method of AmazonS3ClientBuilder instead of standard().
The difference between these two methods is how credentials are resolved: retrieved from the environment versus passed as arguments. There is probably a wrong configuration I don't see, but in any case a clearer error would have been useful.
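For reference, a minimal sketch of the accepted approach (assuming the Lambda execution role already grants s3:GetObject on the bucket, so no inline keys are needed; the bucket and object names are the placeholders from the question):
// defaultClient() resolves credentials and region from the environment
// (on Lambda: the execution role and the AWS_REGION variable).
AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
S3Object object = s3.getObject(new GetObjectRequest("mybucket", "myfile"));
context.getLogger().log("Got object: " + object.getKey());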

Related

Issue uploading a file to an Amazon S3 bucket using the AWS SDK for Java v2

I am trying to upload a file to an AWS S3 bucket using the AWS SDK 2.0 for Java, but I am getting an error when trying to do so:
software.amazon.awssdk.services.s3.model.S3Exception: The request signature we calculated does not match the signature you provided. Check your key and signing method.
I am not sure what I am missing. I have tried adding a key, but I am not even sure what I need to put in there; I think it is just a name to refer to what has been uploaded, though.
private S3Client s3Client;

private void upload() {
    setUpS3Client();
    PutObjectRequest putObjectRequest = PutObjectRequest.builder()
            .bucket(bucketName) // name of the bucket I am trying to upload to
            .key("testing")     // No idea what goes in here.
            .build();
    byte[] objectByteArray = getObjectByteArray(bucketRequest.getPathToFile()); // bucketRequest just holds the data that will be sent
    PutObjectResponse putObjectResponse = s3Client.putObject(putObjectRequest, RequestBody.fromBytes(objectByteArray));
}

private void setUpS3Client() {
    Region region = Region.AF_SOUTH_1;
    this.s3Client = S3Client.builder()
            .region(region)
            .credentialsProvider(createStaticCredentialsProvider())
            .build();
}
Does anyone know what this error is referring to and what I need to change to get the file to upload? Any help will be appreciated.
This Java example works fine when I run it. You state:
.key("testing") // No idea what goes in here.
The key is the name of the object to upload, for example book.pdf to upload a PDF file. All input to the Java AWS V2 examples is documented at the start of the main method.
Now for your problem: make sure you have the required dependencies in the POM file. Use the POM file located here:
https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2/example_code/s3
Also, at the start of every program there is a link to the Java V2 Developer Guide that covers setting up your development environment, including your credentials.
The problem was my credentials. The secret access key I provided was incorrect. :(
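For anyone comparing against their own code: the createStaticCredentialsProvider() helper is not shown in the question, but with SDK v2 it presumably looks like the sketch below (accessKey and secretKey are hypothetical placeholders; a wrong secretKey is exactly what produces the signature-mismatch error above):
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;

private StaticCredentialsProvider createStaticCredentialsProvider() {
    // Wraps a fixed key pair; the secret here must match the access key,
    // otherwise S3 rejects the request signature.
    return StaticCredentialsProvider.create(
            AwsBasicCredentials.create(accessKey, secretKey));
}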

Set data retrieval option on S3 bucket for Glacier storage class

My question is about data I have stored in an S3 bucket with the Glacier storage class. I would like to retrieve it with the fastest option available, but I can't find a suitable method to achieve that. It seems like the default request uses the standard retrieval option.
In the AWS docs, I found a nice way to restore the data. From here I have this code example:
import java.io.IOException;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.RestoreObjectRequest;

public class RestoreArchivedObject {

    public static void main(String[] args) throws IOException {
        String clientRegion = "*** Client region ***";
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Object key ***";

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .build();

            // Create and submit a request to restore an object from Glacier for two days.
            RestoreObjectRequest requestRestore = new RestoreObjectRequest(bucketName, keyName, 2);
            s3Client.restoreObjectV2(requestRestore);

            // Check the restoration status of the object.
            ObjectMetadata response = s3Client.getObjectMetadata(bucketName, keyName);
            Boolean restoreFlag = response.getOngoingRestore();
            System.out.format("Restoration status: %s.\n",
                    restoreFlag ? "in progress" : "not in progress (finished or failed)");
        }
        catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        }
        catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
That is nice and works. But I can't find in the docs how to set the retrieval option if I want to choose between:
Bulk retrieval: 5-12 hours
Expedited retrieval: 1-5 minutes
Standard retrieval: 3-5 hours
Here is the RestoreObjectRequest class from the AWS docs. There I can see a function setType(String type) that "Sets the restore request type", but there is no description of how to set one of the three options mentioned above. It would be nice if someone could tell me whether this is possible with the Java AWS SDK.
EDIT:
Here I can read that setTier(String tier) should do it:
The data access tier to use when restoring the archive. Standard is the default.
Type: Enum
Valid values: Expedited | Standard | Bulk
Ancestors: RestoreRequest
But now I get an error message if I change the default request to:
RestoreObjectRequest requestRestore = new RestoreObjectRequest(bucketName, keyName, 2).withTier("Standard");
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: The XML you provided was not well-formed or did not validate against our published schema
I am using a somewhat old version of the Java SDK:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.11.386</version>
</dependency>
Probably late to the party, but I found myself in the same situation, and I guess something is wrong in the Java SDK.
I could achieve the expected result by defining GlacierJobParameters with the tier you require and then adding the GlacierJobParameters to the restore request.
A little Scala snippet which then seems to generate valid XML for S3:
val glacierJobParameters = (new GlacierJobParameters).withTier(tier)
val restoreObjectRequest =
    new RestoreObjectRequest(bucketName.value, key.value, expirationInDays)
        .withGlacierJobParameters(glacierJobParameters)
Update: after checking https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOSTrestore.html
I can see that withTier set directly on the RestoreObjectRequest would create valid XML when doing a SELECT query, but in your case and mine we require the GlacierJobParameters.
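For anyone working in Java rather than Scala, a sketch of the same fix with SDK v1 (bucketName, keyName, and the Expedited tier are placeholders; adjust to your case):
import com.amazonaws.services.s3.model.GlacierJobParameters;
import com.amazonaws.services.s3.model.RestoreObjectRequest;
import com.amazonaws.services.s3.model.Tier;

// Put the tier on GlacierJobParameters instead of directly on the request,
// so the generated restore XML validates against the S3 schema.
GlacierJobParameters glacierJobParameters = new GlacierJobParameters()
        .withTier(Tier.Expedited);
RestoreObjectRequest requestRestore = new RestoreObjectRequest(bucketName, keyName, 2)
        .withGlacierJobParameters(glacierJobParameters);
s3Client.restoreObjectV2(requestRestore);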

Java aws sdk - The specified location-constraint is not valid (non-amazon)

I would like to create a bucket in the Ceph object storage via the S3 API, which works fine if I use Python's boto3:
s3 = boto3.resource(
    's3',
    endpoint_url='https://my.non-amazon-endpoint.com',
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key
)
bucket = s3.create_bucket(Bucket="my-bucket")  # successfully creates bucket
Trying the same with Java leads to an exception:
BasicAWSCredentials awsCreds = new BasicAWSCredentials(access_key, secret_key);
AwsClientBuilder.EndpointConfiguration config =
        new AwsClientBuilder.EndpointConfiguration(
                "https://my.non-amazon-endpoint.com",
                "MyRegion");
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .withEndpointConfiguration(config)
        .build();

// This works and lists all containers, hence the connection should be fine.
List<Bucket> buckets = s3Client.listBuckets();
for (Bucket bucket : buckets) {
    System.out.println(bucket.getName() + "\t" +
            StringUtils.fromDate(bucket.getCreationDate()));
}

Bucket bucket = s3Client.createBucket("my-bucket");
// AmazonS3Exception: The specified location-constraint is not valid
// (Service: Amazon S3; Status Code: 400; Error Code: InvalidLocationConstraint...
I am aware of several related issues, for instance this issue, but I was not able to adapt the suggested solutions to my non-Amazon storage.
Digging deeper into the boto3 code, it turns out that the LocationConstraint is set to None if no region has been specified. But omitting the region in Java leads to the InvalidLocationConstraint error, too.
How do I have to configure the endpoint with the Java S3 AWS SDK to successfully create buckets?
Kind regards
UPDATE
Setting the signingRegion to "us-east-1" enables bucket creation:
AwsClientBuilder.EndpointConfiguration config =
        new AwsClientBuilder.EndpointConfiguration(
                "https://my.non-amazon.endpoint.com",
                "us-east-1");
If one assigns another region, the SDK will parse the region from the endpoint URL, as specified here. In my case this leads to an invalid region, for instance non-amazon.
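Putting the update together with the original snippet, a minimal sketch of a client that can create buckets on a non-Amazon endpoint (the endpoint URL and credentials are the placeholders from the question):
// "us-east-1" is used only as the signing region; requests still go to the
// custom endpoint, and for us-east-1 no LocationConstraint is sent at all.
AwsClientBuilder.EndpointConfiguration config =
        new AwsClientBuilder.EndpointConfiguration(
                "https://my.non-amazon-endpoint.com",
                "us-east-1");
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(
                new BasicAWSCredentials(access_key, secret_key)))
        .withEndpointConfiguration(config)
        .build();
Bucket bucket = s3Client.createBucket("my-bucket");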

AmazonS3Client is deprecated: how to get an S3 client object using credentials

To get an S3 client object I am using the code below:
BasicAWSCredentials creds = new BasicAWSCredentials(key, S3secretKey);
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(creds))
        .build();
I get the error below:
Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
I had to change to:
AmazonS3 client = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .withForceGlobalBucketAccess(true)
        .build();
to emulate the "old" way (i.e. new AmazonS3Client()).
With a builder you need to provide your S3 bucket region using the builder method, like .withRegion(Regions.US_EAST_1).
One way to do it with version 1.11.98 of the SDK: in your code, you would do:
AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
And you need to have ~/.aws/credentials and ~/.aws/config files:
~/.aws/credentials contents:
[pca]
aws_access_key_id = KDDDJGIzzz3VVBXYA6Z
aws_secret_access_key = afafaoRDrJhzzzzg/Hhcccppeeddaf
[default]
aws_access_key_id = AMVKNEIzzzNEBXYJ4m
aws_secret_access_key = bU4rUwwwhhzzzzcppeeddoRDrJhogA
~/.aws/config contents:
[default]
region = us-west-1
[pca]
region = us-west-1
Make sure they're readable, and that you export a profile (if you have multiple, as above) before starting your service:
alper$ export AWS_PROFILE="pca"
There's a good description here
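Alternatively, the profile can be selected in code rather than through AWS_PROFILE; a sketch with SDK v1, assuming the [pca] profile from the files above:
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Reads the [pca] section of ~/.aws/credentials explicitly.
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(new ProfileCredentialsProvider("pca"))
        .withRegion(Regions.US_WEST_1)
        .build();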

AWS ProfileCredentialsProvider not able to get credentials

I am trying to upload a file to S3. The code to do so is below:
AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
String key = String.format(Constants.KEY_NAME + "/%s/%s", activity_id, aFile.getName());
s3Client.putObject(Constants.BUCKET_NAME, key, aFile.getInputStream(), new ObjectMetadata());
The problem I am having is that my ProfileCredentialsProvider cannot access my AWS keys. I have set my environment variables:
AWS_ACCESS_KEY=keys go here
AWS_SECRET_KEY=keys go here
AWS_ACCESS_KEY_ID=keys go here
AWS_DEFAULT_REGION=us-east-1
AWS_SECRET_ACCESS_KEY=keys go here
And as per Amazon's documentation, environment variables set this way take precedence over any configuration files. This leads me to ask: why are my keys not being picked up from my environment variables?
Figured it out.
If you explicitly specify a ProfileCredentialsProvider, the AWS SDK will look only for the credentials profile file, regardless of the usual precedence. Simply creating an S3 client like this:
AmazonS3 s3Client = new AmazonS3Client();
will check the various locations for credentials, including the environment variables.
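For completeness, a sketch of the equivalent non-deprecated setup, which keeps the full default chain (environment variables first, then system properties, profile files, and instance metadata):
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Unlike ProfileCredentialsProvider, the default chain checks the
// AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables first.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new DefaultAWSCredentialsProviderChain())
        .withRegion(Regions.US_EAST_1)
        .build();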
