software.amazon.awssdk.services.s3.model.S3Exception: null when using GetObjectRequest - java

I am trying to use this code but I'm getting an exception:
var awsBasicCredentials = Packages.software.amazon.awssdk.auth.credentials.AwsBasicCredentials.create("ackey","secretkey");
var credentials = Packages.software.amazon.awssdk.auth.credentials.StaticCredentialsProvider.create(awsBasicCredentials);
var region = Packages.software.amazon.awssdk.regions.Region.AWS_CN_GLOBAL;
var uri = Packages.java.net.URI.create("http://host");
var client = Packages.software.amazon.awssdk.services.s3.S3Client.builder()
.credentialsProvider(credentials)
.region(region)
.endpointOverride(uri)
.build();
var request = Packages.software.amazon.awssdk.services.s3.model.GetObjectRequest.builder()
.bucket("/bucketname")
.key("key")
.build();
var response = client.getObject(request);
return response;
I am using /bucketname because the final link looks like host/bucketname instead of bucketname.host.
S3Exception:
Caused by: software.amazon.awssdk.services.s3.model.S3Exception: null (Service: S3, Status Code: 403, Request ID: tx0000000000000162eece0-00633fea7f-306fc-msk-rt1)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleErrorResponse(CombinedResponseHandler.java:125)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleResponse(CombinedResponseHandler.java:82)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:60)

Bucket names can consist only of lowercase letters, numbers, dots (.), and hyphens (-), so the leading slash in .bucket("/bucketname") makes it invalid; path-style URLs (host/bucketname) are enabled via client configuration, not by changing the bucket name.
Here is S3 Java code that works and returns a byte[] that represents the object located in the given Amazon S3 bucket.
In this example, the path represents the local file system location where the object is written to, for example a PDF file.
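As a quick sanity check before building the request, the naming rule above can be turned into a rough validation helper. The regex below is a sketch of the common rules (3 to 63 characters; lowercase letters, digits, dots, and hyphens; starting and ending with a letter or digit), not the complete AWS specification:

```java
import java.util.regex.Pattern;

public class BucketNameCheck {
    // Rough approximation of the S3 bucket naming rules: 3-63 characters,
    // lowercase letters, digits, dots and hyphens, starting and ending with
    // a letter or digit. This is a sketch, not the complete AWS spec.
    private static final Pattern BUCKET_NAME =
            Pattern.compile("^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$");

    public static boolean isValidBucketName(String name) {
        return name != null && BUCKET_NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidBucketName("my-bucket"));   // true
        System.out.println(isValidBucketName("/bucketname")); // false: leading slash
    }
}
```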
package com.example.s3;
// snippet-start:[s3.java2.getobjectdata.import]
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.core.ResponseBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
// snippet-end:[s3.java2.getobjectdata.import]
/**
* Before running this Java V2 code example, set up your development environment, including your credentials.
*
* For more information, see the following documentation topic:
*
* https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*/
public class GetObjectData {
public static void main(String[] args) {
final String usage = "\n" +
"Usage:\n" +
" <bucketName> <keyName> <path>\n\n" +
"Where:\n" +
" bucketName - The Amazon S3 bucket name. \n\n"+
" keyName - The key name. \n\n"+
" path - The path where the file is written to. \n\n";
if (args.length != 3) {
System.out.println(usage);
System.exit(1);
}
String bucketName = args[0];
String keyName = args[1];
String path = args[2];
ProfileCredentialsProvider credentialsProvider = ProfileCredentialsProvider.create();
Region region = Region.US_EAST_1;
S3Client s3 = S3Client.builder()
.region(region)
.credentialsProvider(credentialsProvider)
.build();
getObjectBytes(s3,bucketName,keyName, path);
s3.close();
}
// snippet-start:[s3.java2.getobjectdata.main]
public static void getObjectBytes (S3Client s3, String bucketName, String keyName, String path) {
try {
GetObjectRequest objectRequest = GetObjectRequest
.builder()
.key(keyName)
.bucket(bucketName)
.build();
ResponseBytes<GetObjectResponse> objectBytes = s3.getObjectAsBytes(objectRequest);
byte[] data = objectBytes.asByteArray();
// Write the data to a local file, closing the stream even on failure.
try (OutputStream os = new FileOutputStream(new File(path))) {
os.write(data);
}
System.out.println("Successfully obtained bytes from an S3 object");
} catch (IOException ex) {
ex.printStackTrace();
} catch (S3Exception e) {
System.err.println(e.awsErrorDetails().errorMessage());
System.exit(1);
}
}
// snippet-end:[s3.java2.getobjectdata.main]
}

Related

Upload a file to azure blob storage met an error using java

I'm new to Azure. I want to upload a file to Azure using the Java SDK, but met with an error.
Here is my approach:
BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
.connectionString(connectStr)
.buildClient();
BlobContainerClient blobContainerClient = blobServiceClient.createBlobContainer(containerName);
String localPath = "./data/";
String fileName = "quickstart" + java.util.UUID.randomUUID() + ".txt";
BlobClient blobClient = blobContainerClient.getBlobClient(fileName);
FileWriter writer = null;
try
{
writer = new FileWriter(localPath + fileName, true);
writer.write("Hello, World!");
writer.close();
}
catch (IOException ex)
{
System.out.println(ex.getMessage());
}
blobClient.uploadFromFile(localPath + fileName);
exception,
Exception in thread "main" java.lang.IllegalArgumentException: Input byte array has wrong 4-byte ending unit
at java.base/java.util.Base64$Decoder.decode0(Base64.java:837)
Kindly help with this?
I tried in my environment and successfully uploaded files in azure blob storage.
package com.blobs.quickstart;
import com.azure.storage.blob.*;
import com.azure.storage.blob.BlobServiceClient;
public class App
{
public static void main( String[] args )
{
String connectStr = "<Connection string>";
BlobServiceClient blobServiceClient = new BlobServiceClientBuilder().connectionString(connectStr).buildClient();
String containerName = "test";
BlobContainerClient containerClient = blobServiceClient.getBlobContainerClient(containerName);
String localPath = "C:\\Users\\v-vsettu\\Documents\\Venkat\\barcode.docx";
BlobClient blobClient = containerClient.getBlobClient("barcode.docx");
blobClient.uploadFromFile(localPath);
System.out.println("Blob uploaded");
}
}
Initially, I got the same error. It occurs when you pass a wrong connection string or access key in the code, so make sure your connection string is correct.
You can get the connection string from the portal.
Reference:
Quickstart: Azure Blob Storage library - Java | Microsoft Learn

How do I upload a file to a pre-signed URL in AWS using Java?

URL url = new URL("https://prod-us-west-2-uploads.s3-us-west-2.amazonaws.com/arn%3Aaws%3Adevicefarm%3Aus-west-2%3A225178842088%3Aproject%3A1e6bbc52-5070-4505-b4aa-592d5e807b15/uploads/arn%3Aaws%3Adevicefarm%3Aus-west-2%3A225178842088%3Aupload%3A1e6bbc52-5070-4505-b4aa-592d5e807b15/501fdfee-877b-42b7-b180-de584309a082/Hamza-test-app.apk?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20181011T092801Z&X-Amz-SignedHeaders=host&X-Amz-Expires=86400&X-Amz-Credential=AKIAJSORV74ENYFBITRQ%2F20181011%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Signature=f041f2bf43eca1ba993fbf7185ad8bcb8eccec8429f2877bc32ab22a761fa2a");
File file = new File("C:\\Users\\Hamza\\Desktop\\Hamza-test-app.apk");
//Create Connection
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("PUT");
BufferedOutputStream bos = new BufferedOutputStream(connection.getOutputStream());
BufferedInputStream bis = new BufferedInputStream(new FileInputStream(file));
int i;
// read byte by byte until end of stream
while ((i = bis.read()) > 0) {
bos.write(i);
}
bos.flush();
bis.close();
bos.close();
System.out.println("HTTP response code: " + connection.getResponseCode());
}catch(Exception ex){
System.out.println("Failed to Upload File");
}
I want to upload a file to AWS Device Farm in Java, but the file does not show up in the AWS project's upload list.
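One thing worth checking in the snippet above: InputStream.read() returns -1 at end of stream, and a zero byte is a legal value, so a loop condition of `> 0` truncates the upload at the first zero byte. A corrected copy loop, using only the JDK:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyLoop {
    // InputStream.read() returns -1 only at end of stream; a zero byte is a
    // valid value, so the loop condition must be != -1, not > 0.
    static void copy(InputStream in, OutputStream out) throws IOException {
        int b;
        while ((b = in.read()) != -1) {
            out.write(b);
        }
        out.flush();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = {1, 0, 2, 0, 3};  // contains zero bytes
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        copy(new ByteArrayInputStream(data), sink);
        System.out.println(sink.toByteArray().length);  // 5, not 1
    }
}
```

With `> 0` the loop above would have stopped after the first byte, silently corrupting any binary upload such as an APK.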
The simplest way is to create an entity by passing the file:
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.config.CookieSpecs;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.entity.EntityBuilder;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.mime.MultipartEntityBuilder;
import org.apache.http.entity.mime.content.FileBody;
import org.apache.http.impl.client.HttpClients;
import java.io.File;
import java.io.IOException;
public class Test {
/**
* Uploads a file to a pre-signed URL
*
* @throws IOException
*/
private void uploadFileToAWSS3(String preSignedUrl) throws IOException {
File file = new File("/Users/vmagadum/SitCopiedFile/temp/details.csv");
HttpClient httpClient = HttpClients.custom()
.setDefaultRequestConfig(
RequestConfig.custom().setCookieSpec(CookieSpecs.STANDARD).build()
).build();
HttpPut put = new HttpPut(preSignedUrl);
HttpEntity entity = EntityBuilder.create()
.setFile(file)
.build();
put.setEntity(entity);
put.setHeader("Content-Type","text/csv");
HttpResponse response = httpClient.execute(put);
if (response.getStatusLine().getStatusCode() == 200) {
System.out.println("File uploaded successfully at destination.");
} else {
System.out.println("Error occurred while uploading file.");
}
}
}
If you create an entity with MultipartEntityBuilder as below
HttpEntity entity = MultipartEntityBuilder.create()
.addPart("file", new FileBody(file))
.build();
Then it will add unnecessary multipart boundary data to the file and corrupt it.
Just to elaborate on my later comment, here are two examples of how to upload to the pre-signed URL returned by Device Farm's SDK in Java:
Jenkins plugin example
Generic s3 documentation example about presigned urls
[update]
Here is an example which uploads a file to the Device Farm s3 presigned URL:
package com.jmp.stackoveflow;
import java.io.File;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSSessionCredentials;
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
import com.amazonaws.services.devicefarm.*;
import com.amazonaws.services.devicefarm.model.CreateUploadRequest;
import com.amazonaws.services.devicefarm.model.Upload;
import org.apache.commons.lang3.RandomStringUtils;
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.FileEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
public class App {
public static void main(String[] args) {
String PROJECT_ARN = "arn:aws:devicefarm:us-west-2:111122223333:project:ffb3d9f2-3dd6-4ab8-93fd-bbb6be67b29b";
String ROLE_ARN = "arn:aws:iam::111122223333:role/DeviceFarm_FULL_ACCESS";
System.out.println("Creating credentials object");
// getting credentials
STSAssumeRoleSessionCredentialsProvider sts = new STSAssumeRoleSessionCredentialsProvider.Builder(ROLE_ARN,
RandomStringUtils.randomAlphanumeric(8)).build();
AWSSessionCredentials creds = sts.getCredentials();
ClientConfiguration clientConfiguration = new ClientConfiguration()
.withUserAgent("AWS Device Farm - stackoverflow example");
AWSDeviceFarmClient api = new AWSDeviceFarmClient(creds, clientConfiguration);
api.setServiceNameIntern("devicefarm");
System.out.println("Creating upload object");
File app_debug_apk = new File(
"PATH_TO_YOUR_FILE_HERE");
FileEntity fileEntity = new FileEntity(app_debug_apk);
CreateUploadRequest appUploadRequest = new CreateUploadRequest().withName(app_debug_apk.getName())
.withProjectArn(PROJECT_ARN).withContentType("application/octet-stream").withType("ANDROID_APP");
Upload upload = api.createUpload(appUploadRequest).getUpload();
// Create the connection and use it to upload the new object using the
// pre-signed URL.
CloseableHttpClient httpClient = HttpClients.createSystem();
HttpPut httpPut = new HttpPut(upload.getUrl());
httpPut.setHeader("Content-Type", upload.getContentType());
httpPut.setEntity(fileEntity);
try {
HttpResponse response = httpClient.execute(httpPut);
System.out.println("Response: "+ response.getStatusLine().getStatusCode());
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
OUTPUT
Creating credentials object
Creating upload object
Response: 200
This is a bit of an old question, but in case anyone else finds this: here is how I solved the problem for files less than 5 MB. For files over 5 MB it's recommended to use multipart upload.
NOTE: Using Java's "try with resources" is convenient. Try/catch makes this a clumsy operation, but it ensures that resources are closed in the least amount of code within a method.
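The try-with-resources pattern mentioned in the note can be sketched with plain JDK streams; both resources are closed automatically when the block exits, whether normally or via an exception:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class TryWithResources {
    // Both streams are declared in the try header, so they are closed
    // automatically even if an exception is thrown mid-copy.
    static int copy(InputStream in, OutputStream out) throws IOException {
        try (InputStream src = in; OutputStream dst = out) {
            byte[] buf = new byte[4096];
            int total = 0;
            int n;
            while ((n = src.read(buf)) != -1) {
                dst.write(buf, 0, n);
                total += n;
            }
            return total;
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        int copied = copy(new ByteArrayInputStream("hello".getBytes()), sink);
        System.out.println(copied);  // 5
    }
}
```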
/**
* Serial upload of an array of media files to S3 using a presignedUrl.
*/
public void serialPutMedia(ArrayList<String> signedUrls) {
long getTime = System.currentTimeMillis();
LOGGER.debug("serialPutMedia called");
String toDiskDir = DirectoryMgr.getMediaPath('M');
try {
HttpURLConnection connection;
for (int i = 0; i < signedUrls.size(); i++) {
URL url = new URL(signedUrls.get(i));
connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("PUT");
String localURL = toDiskDir + "/" + fileNames.get(i);
try (BufferedInputStream bin = new BufferedInputStream(new FileInputStream(new File(localURL)));
// ObjectOutputStream would prepend Java serialization headers and corrupt the upload,
// so write the raw bytes through a plain BufferedOutputStream instead.
BufferedOutputStream out = new BufferedOutputStream(connection.getOutputStream()))
{
LOGGER.debug("S3put request built ... sending to s3...");
byte[] readBuffArr = new byte[4096];
int readBytes = 0;
while ((readBytes = bin.read(readBuffArr)) >= 0) {
out.write(readBuffArr, 0, readBytes);
}
LOGGER.debug("response code: {}", connection.getResponseCode());
} catch (FileNotFoundException e) {
LOGGER.warn("\tFile Not Found exception");
LOGGER.warn(e.getMessage());
e.printStackTrace();
}
}
} catch (MalformedURLException e) {
LOGGER.warn(e.getMessage());
e.printStackTrace();
} catch (IOException e) {
LOGGER.warn(e.getMessage());
e.printStackTrace();
}
getTime = (System.currentTimeMillis() - getTime);
System.out.print("Total get time in syncCloudMediaAction: {" + getTime + "} milliseconds, numElement: {" + signedUrls.size() + "}");
}
These answers are all outdated because they use the AWS SDK for Java V1. Best practice for this use case is the AWS SDK for Java V2; Amazon strongly recommends using V2 over V1.
Here is the Java V2 example that demonstrates how to use the S3Presigner client to create a presigned URL and upload an object to an Amazon Simple Storage Service (Amazon S3) bucket.
package com.example.s3;
// snippet-start:[presigned.java2.generatepresignedurl.import]
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;
import java.time.Duration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;
import software.amazon.awssdk.services.s3.presigner.model.PresignedPutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;
// snippet-end:[presigned.java2.generatepresignedurl.import]
/**
* To run this AWS code example, ensure that you have setup your development environment, including your AWS credentials.
*
* For information, see this documentation topic:
*
* https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*/
public class GeneratePresignedUrlAndUploadObject {
public static void main(String[] args) {
final String USAGE = "\n" +
"Usage:\n" +
" <bucketName> <keyName> \n\n" +
"Where:\n" +
" bucketName - the name of the Amazon S3 bucket. \n\n" +
" keyName - a key name that represents a text file. \n" ;
if (args.length != 2) {
System.out.println(USAGE);
System.exit(1);
}
String bucketName = args[0];
String keyName = args[1];
Region region = Region.US_EAST_1;
S3Presigner presigner = S3Presigner.builder()
.region(region)
.build();
signBucket(presigner, bucketName, keyName);
presigner.close();
}
// snippet-start:[presigned.java2.generatepresignedurl.main]
public static void signBucket(S3Presigner presigner, String bucketName, String keyName) {
try {
PutObjectRequest objectRequest = PutObjectRequest.builder()
.bucket(bucketName)
.key(keyName)
.contentType("text/plain")
.build();
PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
.signatureDuration(Duration.ofMinutes(10))
.putObjectRequest(objectRequest)
.build();
PresignedPutObjectRequest presignedRequest = presigner.presignPutObject(presignRequest);
String myURL = presignedRequest.url().toString();
System.out.println("Presigned URL to upload a file to: " +myURL);
System.out.println("Which HTTP method needs to be used when uploading a file: " +
presignedRequest.httpRequest().method());
// Upload content to the Amazon S3 bucket by using this URL
URL url = presignedRequest.url();
// Create the connection and use it to upload the new object by using the presigned URL
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestProperty("Content-Type","text/plain");
connection.setRequestMethod("PUT");
OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream());
out.write("This text was uploaded as an object by using a presigned URL.");
out.close();
System.out.println("HTTP response code is " + connection.getResponseCode());
} catch (S3Exception e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
// snippet-end:[presigned.java2.generatepresignedurl.main]
}
You should use the AWS SDK, as shown in the presigned-URL example above.

How to transfer a file between Amazon S3 buckets programmatically using java?

I'm new to the Amazon S3 web service and need to develop a command-line application to transfer a file between Amazon S3 buckets. The content of the input file must be converted to the target format and then copied to the destination folder. The target format can be XML or JSON, and the file content follows a given data model.
I have intermediate experience with Java, just created an account (which is still pending), and am trying to develop a workflow to solve the problem.
Well, it's not that hard. I did this for a customer a few months back, and you can find the code below. To read a file from an Amazon S3 bucket, go through this Amazon documentation [1]; to write a file into an Amazon S3 bucket, read this documentation [2].
Other than that, you may need to add the access credentials to your local operating system; an admin can help with that. Getting the correct credentials is the only tricky part, as I remember.
Amazon has nice documentation, and I recommend you go through it too.
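For reference, the [default] profile that the ProfileCredentialsProvider in the code below reads lives at ~/.aws/credentials and looks like this (placeholder values, of course):

```ini
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```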
package org.saig.watermark.demo;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.net.URL;
import org.apache.commons.io.IOUtils;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import com.amazonaws.AmazonClientException;
import com.amazonaws.HttpMethod;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
public class AmazonS3Util {
private static AWSCredentials credentials = null;
private static final String fileSeparator = "/";
private static final Log log = LogFactory.getLog(AmazonS3Util.class);
static {
/*
* The ProfileCredentialsProvider will return your [default]
* credential profile by reading from the credentials file located at
* (~/.aws/credentials).
*/
try {
credentials = new ProfileCredentialsProvider().getCredentials();
} catch (Exception e) {
throw new AmazonClientException(
"Cannot load the credentials from the credential profiles file. "
+ "Please make sure that your credentials file is at the correct "
+ "location (~/.aws/credentials), and is in valid format.",
e);
}
}
public static void readFileFromS3cketBucket(String bucketName, String key, String dirPath,
String fileName) {
FilterInputStream inputStream = null;
FileOutputStream outputStream = null;
try {
// Remove the file if it already exists.
File existing = new File(dirPath + fileSeparator + fileName);
if (existing.exists()) {
existing.delete();
}
AmazonS3 s3 = new AmazonS3Client(credentials);
Region usEast1 = Region.getRegion(Regions.US_EAST_1);
s3.setRegion(usEast1);
log.info("Downloading an object from the S3 bucket.");
S3Object object = s3.getObject(new GetObjectRequest(bucketName, key));
log.info("Content-Type: " + object.getObjectMetadata().getContentType());
inputStream = object.getObjectContent();
File dirForOrder = new File(dirPath);
if (!dirForOrder.exists()) {
dirForOrder.mkdir();
}
outputStream = new FileOutputStream(new File(dirPath + fileSeparator + fileName));
IOUtils.copy(inputStream, outputStream);
inputStream.close();
outputStream.close();
} catch (FileNotFoundException e) {
log.error(e);
} catch (IOException e) {
log.error(e);
}
}
public static void uploadFileToS3Bucket(String bucketName, String key, String dirPath,
String fileName) {
AmazonS3 s3 = new AmazonS3Client(credentials);
Region usEast1 = Region.getRegion(Regions.US_EAST_1);
s3.setRegion(usEast1);
s3.putObject(new PutObjectRequest(bucketName, key, new File(dirPath + fileSeparator +
fileName)));
try {
org.apache.commons.io.FileUtils.deleteDirectory(new File(dirPath));
} catch (IOException e) {
log.error(e);
}
}
public static void main(String[] args) {
readFileFromS3cketBucket("bucketName",
"s3Key",
"localFileSystemPath",
"destinationFileName.pdf");
}
}
Hope this helps. Happy coding!
[1] http://docs.aws.amazon.com/AmazonS3/latest/dev/RetrievingObjectUsingJava.html
[2] http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjSingleOpJava.html

Uploading Base64 encoded image to Amazon s3 using java

I am trying to upload files to Amazon S3 storage using Amazon's Java API. The code is:
byte[] b = data.getBytes();
InputStream stream = new ByteArrayInputStream(b);
//InputStream stream = new FileInputStream(new File("D:/samples/test.txt"));
AWSCredentials credentials = new BasicAWSCredentials("<key>", "<key1>");
AmazonS3 s3client = new AmazonS3Client(credentials);
s3client.putObject(new PutObjectRequest("myBucket",name,stream, new ObjectMetadata()));
When I run the code after commenting out the first two lines and uncommenting the third one, i.e. when stream is a FileInputStream, the file is uploaded correctly. But when data is a Base64-encoded String containing image data, the file is uploaded but the image is corrupted.
Amazon documentation says I need to create and attach a POST policy and signature for this to work. How I can do that in java? I am not using an html form for uploading.
First you should remove data:image/png;base64, from beginning of the string:
Sample Code Block:
byte[] bI = org.apache.commons.codec.binary.Base64.decodeBase64((base64Data.substring(base64Data.indexOf(",")+1)).getBytes());
InputStream fis = new ByteArrayInputStream(bI);
AmazonS3 s3 = new AmazonS3Client();
Region usWest02 = Region.getRegion(Regions.US_WEST_2);
s3.setRegion(usWest02);
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(bI.length);
metadata.setContentType("image/png");
metadata.setCacheControl("public, max-age=31536000");
s3.putObject(BUCKET_NAME, filename, fis, metadata);
s3.setObjectAcl(BUCKET_NAME, filename, CannedAccessControlList.PublicRead);
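On Java 8 and later, the same prefix-stripping and decoding can be done with the built-in java.util.Base64, with no Apache Commons dependency. The data URI below is illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DataUriDecoder {
    // Strips an optional "data:image/png;base64," style prefix and decodes
    // the remainder with the JDK's Base64 decoder (Java 8+).
    static byte[] decode(String base64Data) {
        int comma = base64Data.indexOf(',');
        String payload = comma >= 0 ? base64Data.substring(comma + 1) : base64Data;
        return Base64.getDecoder().decode(payload);
    }

    public static void main(String[] args) {
        String dataUri = "data:text/plain;base64,"
                + Base64.getEncoder().encodeToString("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(decode(dataUri), StandardCharsets.UTF_8));  // hello
    }
}
```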
Here's a DTO class that takes the base64Image data passed in directly from your client and parses it into separate components that can easily be passed to your uploadToAwsS3 method:
public class Base64ImageDto {
private byte[] imageBytes;
private String fileName;
private String fileType;
private boolean hasErrors;
private List<String> errorMessages;
private static final List<String> VALID_FILE_TYPES = new ArrayList<String>(3);
static {
VALID_FILE_TYPES.add("jpg");
VALID_FILE_TYPES.add("jpeg");
VALID_FILE_TYPES.add("png");
}
public Base64ImageDto(String b64ImageData, String fileName) {
this.fileName = fileName;
this.errorMessages = new ArrayList<String>(2);
String[] base64Components = b64ImageData.split(",");
if (base64Components.length != 2) {
this.hasErrors = true;
this.errorMessages.add("Invalid base64 data: " + b64ImageData);
}
if (!this.hasErrors) {
String base64Data = base64Components[0];
this.fileType = base64Data.substring(base64Data.indexOf('/') + 1, base64Data.indexOf(';'));
if (!VALID_FILE_TYPES.contains(fileType)) {
this.hasErrors = true;
this.errorMessages.add("Invalid file type: " + fileType);
}
if (!this.hasErrors) {
String base64Image = base64Components[1];
this.imageBytes = javax.xml.bind.DatatypeConverter.parseBase64Binary(base64Image);
}
}
}
public byte[] getImageBytes() {
return imageBytes;
}
public void setImageBytes(byte[] imageBytes) {
this.imageBytes = imageBytes;
}
public boolean isHasErrors() {
return hasErrors;
}
public void setHasErrors(boolean hasErrors) {
this.hasErrors = hasErrors;
}
public List<String> getErrorMessages() {
return errorMessages;
}
public void setErrorMessages(List<String> errorMessages) {
this.errorMessages = errorMessages;
}
public String getFileType() {
return fileType;
}
public void setFileType(String fileType) {
this.fileType = fileType;
}
public String getFileName() {
return fileName;
}
public void setFileName(String fileName) {
this.fileName = fileName;
}
}
And here's the method you can add to your AwsS3Service that will put the object up there (Note: You might not be using a transfer manager to manage your puts so you'll need to change that code accordingly):
public void uploadBase64Image(Base64ImageDto base64ImageDto, String pathToFile) {
InputStream stream = new ByteArrayInputStream(base64ImageDto.getImageBytes());
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(base64ImageDto.getImageBytes().length);
metadata.setContentType("image/"+base64ImageDto.getFileType());
String bucketName = awsS3Configuration.getBucketName();
String key = pathToFile + base64ImageDto.getFileName();
try {
LOGGER.info("Uploading file " + base64ImageDto.getFileName() + " to AWS S3");
PutObjectRequest objectRequest = new PutObjectRequest(bucketName, key, stream, metadata);
objectRequest.setCannedAcl(CannedAccessControlList.PublicRead);
Upload s3FileUpload = s3TransferManager.upload(objectRequest);
s3FileUpload.waitForCompletion();
} catch (Exception e) {
e.printStackTrace();
LOGGER.info("Error uploading file " + base64ImageDto.getFileName() + " to AWS S3");
}
}
For those who use a later SDK:
implementation group: 'software.amazon.awssdk', name: 's3', version: '2.10.3'
byte[] bI = Base64.decodeBase64((base64Data.substring(base64Data.indexOf(",") + 1)).getBytes());
InputStream fis = new ByteArrayInputStream(bI);
amazonS3Client.putObject(PutObjectRequest.builder().bucket(bucketName).key(fileName)
.contentType(contentType)
.contentLength(Long.valueOf(bI.length))
.build(),
RequestBody.fromInputStream(fis, Long.valueOf(bI.length)));
The sample code for uploading images (png/jpg) is as follows:
try {
BasicAWSCredentials awsCreds = new BasicAWSCredentials(accessKey, secretKey);
AmazonS3 s3Client =
AmazonS3ClientBuilder.standard().withRegion(clientRegion)
.withCredentials(new
AWSStaticCredentialsProvider(awsCreds))
.build();
PutObjectRequest request = new PutObjectRequest(bucketName, fileName, new File(fileToUpload));
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentType("image/jpg");
request.setMetadata(metadata);
s3Client.putObject(request.withCannedAcl(CannedAccessControlList.PublicRead));
logger.info("File " + fileToUpload + " uploaded to AWS bucket " + bucketName);
} catch (AmazonServiceException e) {
logger.error(e);
fileName = Common.NO_VALUE.toString();
} catch (SdkClientException e) {
logger.error(e);
fileName = Common.NO_VALUE.toString();
}
However, I did not use any encoding or decoding. The plain, simple metadata content type of "image/jpg" worked.

AmazonS3 putObject with InputStream length example

I am uploading a file to S3 using Java - this is what I got so far:
AmazonS3 s3 = new AmazonS3Client(new BasicAWSCredentials("XX","YY"));
List<Bucket> buckets = s3.listBuckets();
s3.putObject(new PutObjectRequest(buckets.get(0).getName(), fileName, stream, new ObjectMetadata()));
The file is being uploaded but a WARNING is raised when I am not setting the content length:
com.amazonaws.services.s3.AmazonS3Client putObject: No content length specified for stream data. Stream contents will be buffered in memory and could result in out of memory errors.
This is a file I am uploading and the stream variable is an InputStream, from which I can get the byte array like this: IOUtils.toByteArray(stream).
So when I try to set the content length and MD5 (taken from here) like this:
// get MD5 base64 hash
MessageDigest messageDigest = MessageDigest.getInstance("MD5");
messageDigest.reset();
messageDigest.update(IOUtils.toByteArray(stream));
byte[] resultByte = messageDigest.digest();
String hashtext = new String(Hex.encodeHex(resultByte));
ObjectMetadata meta = new ObjectMetadata();
meta.setContentLength(IOUtils.toByteArray(stream).length);
meta.setContentMD5(hashtext);
It causes the following error to come back from S3:
The Content-MD5 you specified was invalid.
What am I doing wrong?
Any help appreciated!
P.S. I am on Google App Engine - I cannot write the file to disk or create a temp file because AppEngine does not support FileOutputStream.
Because the original question was never answered, and I had to run into this same problem, the solution for the MD5 problem is that S3 doesn't want the Hex encoded MD5 string we normally think about.
Instead, I had to do this.
// content is a passed in InputStream
byte[] resultByte = DigestUtils.md5(content);
String streamMD5 = new String(Base64.encodeBase64(resultByte));
metaData.setContentMD5(streamMD5);
Essentially what they want for the MD5 value is the Base64 encoded raw MD5 byte-array, not the Hex string. When I switched to this it started working great for me.
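The difference can be shown with only the JDK: compute the raw 16-byte MD5 digest and Base64-encode it. The familiar 32-character hex string is a different rendering of the same bytes, and it is the form S3 rejects:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class ContentMd5 {
    // S3's Content-MD5 header expects the Base64 encoding of the raw
    // 16-byte MD5 digest, not the 32-character hex representation.
    static String contentMd5(byte[] data) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5").digest(data);
            return Base64.getEncoder().encodeToString(digest);
        } catch (NoSuchAlgorithmException e) {
            // MD5 ships with every standard JVM, so this should not happen.
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // MD5("hello") in hex is 5d41402abc4b2a76b9719d911017c592;
        // the Base64 form of those same 16 bytes is what S3 accepts.
        System.out.println(contentMd5("hello".getBytes(StandardCharsets.UTF_8)));
        // prints XUFAKrxLKna5cZ2REBfFkg==
    }
}
```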
If all you are trying to do is solve the content-length warning from Amazon, then you could just read the bytes from the input stream into a byte array, take its length as a Long, and add that to the metadata.
/*
* Obtain the Content length of the Input stream for S3 header
*/
byte[] contentBytes = null;
try {
InputStream is = event.getFile().getInputstream();
contentBytes = IOUtils.toByteArray(is);
} catch (IOException e) {
System.err.printf("Failed while reading bytes from %s", e.getMessage());
}
Long contentLength = Long.valueOf(contentBytes.length);
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(contentLength);
/*
* Reobtain the tmp uploaded file as input stream
*/
InputStream inputStream = event.getFile().getInputstream();
/*
* Put the object in S3
*/
try {
s3client.putObject(new PutObjectRequest(bucketName, keyName, inputStream, metadata));
} catch (AmazonServiceException ase) {
System.out.println("Error Message: " + ase.getMessage());
System.out.println("HTTP Status Code: " + ase.getStatusCode());
System.out.println("AWS Error Code: " + ase.getErrorCode());
System.out.println("Error Type: " + ase.getErrorType());
System.out.println("Request ID: " + ase.getRequestId());
} catch (AmazonClientException ace) {
System.out.println("Error Message: " + ace.getMessage());
} finally {
if (inputStream != null) {
inputStream.close();
}
}
You'll need to read the input stream twice using this exact method so if you are uploading a very large file you might need to look at reading it once into an array and then reading it from there.
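The read-once-into-an-array approach suggested above can be sketched like this: buffer the stream fully, then use the array both for the content length and for as many fresh InputStreams as needed:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferOnce {
    // Reads the stream fully into memory once; the byte array then supplies
    // both the content length and any number of fresh InputStreams
    // (e.g. one for an MD5 pass and one for the actual upload).
    static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buffer.write(chunk, 0, n);
        }
        return buffer.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = readAll(new ByteArrayInputStream("example".getBytes()));
        long contentLength = data.length;                     // for ObjectMetadata
        InputStream first = new ByteArrayInputStream(data);   // e.g. hash pass
        InputStream second = new ByteArrayInputStream(data);  // e.g. upload pass
        System.out.println(contentLength);  // 7
    }
}
```

Note this holds the whole object in memory, so it is only suitable for small files.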
For uploading, the S3 SDK has two putObject methods:
PutObjectRequest(String bucketName, String key, File file)
and
PutObjectRequest(String bucketName, String key, InputStream input, ObjectMetadata metadata)
The InputStream + ObjectMetadata method needs, at minimum, the Content-Length of your input stream in the metadata. If you don't supply it, the SDK will buffer the stream in memory to determine the length, which could cause OOM. Alternatively, you can do your own in-memory buffering to get the length, but then you need a second input stream.
This wasn't an option for the OP (limitations of their environment), but for anyone else, such as me: if you can write to a temp file, I find it easier and safer to copy the input stream to a temp file and put the temp file. There is no in-memory buffering and no requirement to create a second input stream.
AmazonS3 s3Service = new AmazonS3Client(awsCredentials);
File scratchFile = File.createTempFile("prefix", "suffix");
try {
    FileUtils.copyInputStreamToFile(inputStream, scratchFile);
    PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, id, scratchFile);
    PutObjectResult putObjectResult = s3Service.putObject(putObjectRequest);
} finally {
    if (scratchFile.exists()) {
        scratchFile.delete();
    }
}
While writing to S3, you need to specify the length of the S3 object to be sure there are no out-of-memory errors.
Using IOUtils.toByteArray(stream) is also prone to OOM errors, because it is backed by a ByteArrayOutputStream and pulls the entire stream into memory.
So the best option is to first write the input stream to a temp file on local disk, and then upload that file to S3, specifying the temp file's length.
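A small sketch of that temp-file approach using java.nio (names are illustrative; the putObject call is left as a comment since it requires the AWS SDK and credentials): the stream is spooled to disk once, and the exact content length then comes from the file system rather than from in-memory buffering.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class TempFileUpload {

    /** Copies the stream to a temp file and returns its path; the length is then known from disk. */
    static Path spoolToTempFile(InputStream in) throws IOException {
        Path tmp = Files.createTempFile("s3-upload-", ".tmp");
        Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
        return tmp;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the real upload stream
        InputStream upload = new ByteArrayInputStream("payload".getBytes());

        Path tmp = spoolToTempFile(upload);
        try {
            long contentLength = Files.size(tmp); // exact length, no in-memory buffering
            // s3client.putObject(new PutObjectRequest(bucketName, keyName, tmp.toFile()));
            System.out.println(contentLength);
        } finally {
            Files.deleteIfExists(tmp); // always clean up the scratch file
        }
    }
}
```

The File-based PutObjectRequest overload derives the content length itself, so no ObjectMetadata is needed at all.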
I am doing much the same thing, but against my own AWS S3 storage.
Code for the servlet that receives the uploaded file:
import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

import com.src.code.s3.S3FileUploader;

public class FileUploadHandler extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        doPost(request, response);
    }

    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        PrintWriter out = response.getWriter();
        try {
            List<FileItem> multipartfiledata = new ServletFileUpload(new DiskFileItemFactory()).parseRequest(request);

            // upload to S3
            S3FileUploader s3 = new S3FileUploader();
            String result = s3.fileUploader(multipartfiledata);
            out.print(result);
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
Code that uploads this data as an AWS S3 object:
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.List;
import java.util.UUID;

import org.apache.commons.fileupload.FileItem;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.ClasspathPropertiesFileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class S3FileUploader {

    private static String bucketName = "***NAME OF YOUR BUCKET***";
    private static String keyName = "Object-" + UUID.randomUUID();

    public String fileUploader(List<FileItem> fileData) throws IOException {
        AmazonS3 s3 = new AmazonS3Client(new ClasspathPropertiesFileCredentialsProvider());
        String result = "Upload unsuccessful because ";
        try {
            ObjectMetadata omd = new ObjectMetadata();
            omd.setContentType(fileData.get(0).getContentType());
            omd.setContentLength(fileData.get(0).getSize());
            omd.setHeader("filename", fileData.get(0).getName());

            ByteArrayInputStream bis = new ByteArrayInputStream(fileData.get(0).get());
            s3.putObject(new PutObjectRequest(bucketName, keyName, bis, omd));
            result = "Uploaded successfully.";
        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which means your request made it to Amazon S3, but was "
                    + "rejected with an error response for some reason.");
            System.out.println("Error Message: " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code: " + ase.getErrorCode());
            System.out.println("Error Type: " + ase.getErrorType());
            System.out.println("Request ID: " + ase.getRequestId());
            result = result + ase.getMessage();
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which means the client encountered an internal error while "
                    + "trying to communicate with S3, such as not being able to access the network.");
            result = result + ace.getMessage();
        } catch (Exception e) {
            result = result + e.getMessage();
        }
        return result;
    }
}
Note: I am using an AWS properties file for the credentials.
Hope this helps.
I've created a library that uses multipart uploads in the background to avoid buffering everything in memory and also doesn't write to disk: https://github.com/alexmojaki/s3-stream-upload
Just passing the file object to the putObject method worked for me. If you are getting a stream, try writing it to a temp file before passing it on to S3.
amazonS3.putObject(bucketName, id,fileObject);
I am using AWS SDK v1.11.414.
The answer at https://stackoverflow.com/a/35904801/2373449 helped me
Adding the log4j-1.2.12.jar file resolved the issue for me.
