How to convert an S3 object to a Resource in Spring Boot? - java

I have a video which is of S3Object type.
I am trying to convert it to a Resource using the following code.
Resource resource = new InputStreamResource(video.getObjectContent());
I get the following error
"InputStream has already been read - do not use InputStreamResource if a stream needs to be read multiple times"
Is it possible to read an S3Object directly as a Resource?
How can I correct it?

How to load a file from a private S3 bucket:
// Make sure your credentials are in the AWS credentials file, or pass them explicitly with .withCredentials()
AmazonS3 s3client = AmazonS3ClientBuilder
        .standard()
        .withRegion("us-east-1")
        .build();
if (!s3client.doesBucketExistV2(adfsProperties.keystoreBucket())) {
    throw new IllegalStateException("bucket not found");
}
S3Object s3object = s3client.getObject("bucket-name", "file-name");
S3ObjectInputStream inputStream = s3object.getObjectContent();
Resource r = new InputStreamResource(inputStream);
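If the Resource really does need to be read more than once (which is what the error in the question points to), one option is to buffer the object content in memory first and wrap it in a ByteArrayResource. A minimal sketch, assuming the object is small enough to fit in memory (the helper class name is just for illustration):

import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.util.IOUtils;
import org.springframework.core.io.ByteArrayResource;
import org.springframework.core.io.Resource;

import java.io.IOException;

class S3ResourceHelper {

    // Copies the object content into memory once, then wraps it in a
    // ByteArrayResource, which can be re-read as many times as needed.
    static Resource toResource(S3Object s3object) throws IOException {
        byte[] bytes = IOUtils.toByteArray(s3object.getObjectContent());
        return new ByteArrayResource(bytes);
    }
}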

It looks like the stream had already been consumed by some other method before it was passed to InputStreamResource. A better approach, if your S3 URL is publicly accessible, is to let Spring load it:
@Autowired
ResourceLoader resourceLoader;
In the method, use it as follows:
Resource resource = resourceLoader.getResource(<s3 url>);
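A rough sketch of that approach, assuming the object is reachable over a public URL (the bucket, key, and service names here are placeholders):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.io.Resource;
import org.springframework.core.io.ResourceLoader;
import org.springframework.stereotype.Service;

@Service
class VideoService {

    @Autowired
    private ResourceLoader resourceLoader;

    // Resolves the object through Spring's ResourceLoader; for a public
    // object the plain HTTPS URL works, and with Spring Cloud AWS on the
    // classpath an "s3://bucket/key" URL can be resolved the same way.
    Resource loadVideo() {
        return resourceLoader.getResource("https://my-bucket.s3.amazonaws.com/my-video.mp4");
    }
}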

Related

AWS Lambda with Java unable to GET a file from S3

I'm creating a very simple example with AWS Lambda, and I'm having a problem with the Java runtime. I have to read an S3 object from a bucket of mine, and with a Node.js example like the following I have no problem:
var S3FS = require('s3fs');
exports.handler = (req, res) => {
    var s3Options = {
        region: 'eu-west-3',
        accessKeyId: 'key',
        secretAccessKey: 'secret'
    };
    var fsImpl = new S3FS('mybucket', s3Options);
    fsImpl.readFile("myfile", function (err, data) {
        if (err) throw err;
        console.log(data.toString());
    });
};
If I try a similar Java example, my function always times out (even if I increase the timeout to 1 minute):
context.getLogger().log("Before");
BasicAWSCredentials awsCreds = new BasicAWSCredentials("key", "secret");
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withRegion("eu-west-3")
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .build();
context.getLogger().log("client created");
S3Object object = s3.getObject(new GetObjectRequest(bucketName, key));
context.getLogger().log("After");
The function always blocks when creating the S3 client. I know I can avoid using the key and secret in the Lambda, but the function blocks that way as well. It isn't a policy problem, because I'm testing these examples from the same Lambda configuration, so I think it's something related to the Java AWS S3 API. Any suggestions?
The Java Lambda finally works using the defaultClient() method of AmazonS3ClientBuilder instead of standard().
The difference between these two methods is how the credentials are obtained: retrieved from the environment or passed as arguments. There is probably a wrong configuration I don't see, but in any case a clearer error message would be useful.
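For reference, a minimal sketch of such a handler using defaultClient(), assuming the Lambda's execution role grants read access to the bucket (the handler class, bucket, and key names are placeholders):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.util.IOUtils;

import java.io.IOException;

public class ReadS3Handler implements RequestHandler<String, String> {

    // defaultClient() picks up region and credentials from the Lambda
    // environment (execution role + AWS_REGION), so nothing is hard-coded.
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    public String handleRequest(String key, Context context) {
        context.getLogger().log("Fetching " + key);
        S3Object object = s3.getObject("mybucket", key);
        try {
            return IOUtils.toString(object.getObjectContent());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}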

storing pdf files on amazon s3 using itext

This is my first time using Amazon S3, and I want to store PDF files that I create using iText in Java Spring.
The code (hosted on an EC2 instance) creates a PDF that I would like to store somewhere. I am exploring whether Amazon S3 can hold those files. Eventually I would like to retrieve them as well. Can this be done using iText and Java Spring? Any examples would be great.
To upload files to Amazon S3 you need to use the putObject method of the AmazonS3Client class, like this:
AWSCredentials credentials = new BasicAWSCredentials(appId, appSecret);
AmazonS3 s3Client = new AmazonS3Client(credentials);
String bucketPath = "YOUR_BUCKET_NAME/FOLDER_INSIDE_BUCKET";
InputStream is = new FileInputStream("YOUR_PDF_FILE_PATH");
ObjectMetadata meta = new ObjectMetadata();
meta.setContentLength(is.available());
s3Client.putObject(new PutObjectRequest(bucketPath, "YOUR_FILE.pdf", is, meta)
        .withCannedAcl(CannedAccessControlList.Private));
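Since the question generates the PDF with iText, you could also skip the temporary file and upload straight from memory. A minimal sketch, assuming iText 5 and placeholder bucket/key names:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.itextpdf.text.Document;
import com.itextpdf.text.Paragraph;
import com.itextpdf.text.pdf.PdfWriter;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

class PdfUploader {

    static void createAndUpload(AmazonS3 s3Client) throws Exception {
        // Build the PDF in memory with iText.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        Document document = new Document();
        PdfWriter.getInstance(document, out);
        document.open();
        document.add(new Paragraph("Hello from iText"));
        document.close();

        // Upload the bytes directly; the content length is known exactly.
        byte[] bytes = out.toByteArray();
        ObjectMetadata meta = new ObjectMetadata();
        meta.setContentLength(bytes.length);
        meta.setContentType("application/pdf");
        s3Client.putObject("YOUR_BUCKET_NAME", "YOUR_FILE.pdf",
                new ByteArrayInputStream(bytes), meta);
    }
}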
To get a file from S3 you need to generate a pre-signed URL for a private file; if your files are public, you can access them directly by hitting the file's link in your browser (the link is shown in the AWS S3 console).
Since we specified CannedAccessControlList.Private in the upload code above, the file's permission is private, so we need to generate a pre-signed URL to access it, like this:
AWSCredentials credentials = new BasicAWSCredentials(appId, appSecret);
AmazonS3 s3Client = new AmazonS3Client(credentials);
GeneratePresignedUrlRequest generatePresignedUrlRequest =
        new GeneratePresignedUrlRequest("YOUR_BUCKET_NAME", "FOLDER_INSIDE_BUCKET/YOUR_FILE.pdf");
generatePresignedUrlRequest.setMethod(HttpMethod.GET);
Date expiration = new Date();
long milliSeconds = expiration.getTime();
milliSeconds += 1000 * 60 * 60; // Add 1 hour.
expiration.setTime(milliSeconds);
generatePresignedUrlRequest.setExpiration(expiration);
URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);
String finalUrl = url.toString();

Partially reading a tar.gz file from Amazon S3

I'm trying to extract specific files from Amazon S3 without having to read all the bytes, because the archives can be huge and I only need 2 or 3 files out of them.
I'm using the AWS Java SDK. Here's the code (exception handling skipped):
AWSCredentials credentials = new BasicAWSCredentials("accessKey", "secretKey");
AWSCredentialsProvider credentialsProvider = new AWSStaticCredentialsProvider(credentials);
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .withCredentials(credentialsProvider)
        .build();
S3Object object = s3Client.getObject("bucketname", "file.tar.gz");
S3ObjectInputStream objectContent = object.getObjectContent();
TarArchiveInputStream tarInputStream = new TarArchiveInputStream(new GZIPInputStream(objectContent));
TarArchiveEntry currentEntry;
while ((currentEntry = tarInputStream.getNextTarEntry()) != null) {
    if (currentEntry.getName().equals("1/foo.bar") && currentEntry.isFile()) {
        FileOutputStream entryOs = new FileOutputStream("foo.bar");
        IOUtils.copy(tarInputStream, entryOs);
        entryOs.close();
        break;
    }
}
objectContent.abort();   // Warning at this line
tarInputStream.close();  // Warning at this line
When I use this method, it gives a warning that not all the bytes from the stream were read, which is intentional on my part.
WARNING: Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
Is it necessary to drain the stream and what would be the downsides of not doing it? Can I just ignore the warning?
You don't have to worry about the warning: it only tells you that the HTTP connection will be aborted and that there may be remaining data you will never read. Since close() delegates to abort(), you get the warning with either call.
Note that the saving is not guaranteed: if the files you are interested in sit towards the end of the archive, you will have read most of it anyway.
S3's HTTP server supports ranged requests, so if you could influence the format of the archive, or generate some metadata while creating it, you could actually skip ahead or request only the file you are interested in.
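For completeness, a ranged GET with the SDK looks roughly like this; the byte offsets are placeholders and would have to come from whatever index or metadata you keep about the archive:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

class RangedGetExample {

    // Fetches only the byte range [start, end] of the object, so the rest
    // of the archive is never transferred.
    static S3Object fetchRange(AmazonS3 s3Client, long start, long end) {
        GetObjectRequest request = new GetObjectRequest("bucketname", "file.tar.gz")
                .withRange(start, end);
        return s3Client.getObject(request);
    }
}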

Setting AWS S3 Region

I am trying to create an AWS S3 bucket using the following Java code.
AmazonS3 s3client = AmazonS3ClientBuilder.defaultClient();
s3client.setRegion(Region.getRegion(Regions.AP_SOUTH_1));
But I am getting the following error:
"exception": "com.amazonaws.SdkClientException",
"message": "Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region."
Am I trying to set the region in an incorrect way? Please advise.
If you are not using any proxies and you have already set up your credentials, you can use the code below:
AmazonS3 s3client = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.AP_SOUTH_1)
        .build();
But if you need to set up a proxy and provide the credentials manually, you can use this instead:
AWSCredentials cred = new BasicAWSCredentials(<accessKey>, <secretKey>);
AmazonS3 s3client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(cred))
        .withClientConfiguration(<your configuration>)
        .withRegion(Regions.AP_SOUTH_1)
        .build();
The reason you are getting the error is that you have not set up AWS with Eclipse.
If you are using Eclipse as your IDE, then read:
http://docs.aws.amazon.com/toolkit-for-eclipse/v1/user-guide/welcome.html
Once the profile is set up, then:
AmazonS3 s3 = new AmazonS3Client(new ProfileCredentialsProvider());
Region apSouth1 = Region.getRegion(Regions.AP_SOUTH_1);
s3.setRegion(apSouth1);
Also make sure to import:
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
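If you want to avoid the deprecated AmazonS3Client constructor, the same profile-based setup can be expressed with the builder; a minimal sketch (profile and region as in the snippet above, the factory class name is just for illustration):

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

class S3ClientFactory {

    // Builds the client with credentials from the default profile and an
    // explicit region, which avoids the "region provider chain" error.
    static AmazonS3 create() {
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new ProfileCredentialsProvider())
                .withRegion(Regions.AP_SOUTH_1)
                .build();
    }
}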

AmazonS3Client is deprecated: how to get an S3 client object using credentials

To get the S3 client object I am using the code below.
BasicAWSCredentials creds = new BasicAWSCredentials(key, S3secretKey);
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(creds))
        .build();
I am getting the error below:
Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
I had to change to:
AmazonS3 client = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .withForceGlobalBucketAccess(true)
        .build();
to emulate the "old" way (i.e. new AmazonS3Client()).
With a builder you need to provide your S3 bucket's region using the builder method, e.g. .withRegion(Regions.US_EAST_1).
One way to do it with version 1.11.98 of the SDK: in your code, you would do:
AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
And you need to have ~/.aws/credentials and ~/.aws/config files:
~/.aws/credentials contents:
[pca]
aws_access_key_id = KDDDJGIzzz3VVBXYA6Z
aws_secret_access_key = afafaoRDrJhzzzzg/Hhcccppeeddaf
[default]
aws_access_key_id = AMVKNEIzzzNEBXYJ4m
aws_secret_access_key = bU4rUwwwhhzzzzcppeeddoRDrJhogA
~/.aws/config contents:
[default]
region = us-west-1
[pca]
region = us-west-1
Make sure they're readable, and that you export a profile before starting your service if you have multiple profiles as above:
alper$ export AWS_PROFILE="pca"
There's a good description here

Categories

Resources