I am working on a Spring Boot API in which I need to return a FileInputStream in the response body of a GET method.
Expected -
When the front end calls the get-files API, a file download prompt should open in the browser.
Problem - We can't use the blob.downloadToFile method, because it downloads the file onto the local machine (or wherever the APIs are hosted), and we need a way to send the file directly to the front end on the API call.
However, there is another method, blob.download(outputStream), which writes into an OutputStream, and an OutputStream can't be returned from the API call.
So is there any way to convert that OutputStream to a FileInputStream without saving to an actual file on our device?
Example code -
public ByteArrayOutputStream downloadBlob() throws URISyntaxException, InvalidKeyException, StorageException, IOException {
    String storageConnectionString = "DefaultEndpointsProtocol=https;" + "AccountName=" + accountName + ";"
            + "AccountKey=" + accountKey;
    CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);
    CloudBlobClient blobClient = storageAccount.createCloudBlobClient();
    final CloudBlobContainer container = blobClient.getContainerReference("container name");
    CloudBlockBlob blob = container.getBlockBlobReference("image.PNG");
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    blob.download(outputStream);
    System.out.println("file downloaded");
    return outputStream;
}
Note- the file can be of any type.
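In general, an in-memory ByteArrayOutputStream can be re-read without touching disk by wrapping its bytes in a ByteArrayInputStream; a minimal sketch (names are illustrative):
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

// Minimal sketch: turn the bytes buffered in an OutputStream into an
// InputStream without creating a file on disk.
public static InputStream asInputStream(ByteArrayOutputStream outputStream) {
    return new ByteArrayInputStream(outputStream.toByteArray());
}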
Here is the solution. After hitting the API in the browser, your file will be downloaded by the browser:
package com.example.demo.controllers;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;
import org.springframework.core.io.ByteArrayResource;
import org.springframework.core.io.Resource;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import java.io.ByteArrayOutputStream;

@RestController
public class FileDownload {

    @GetMapping(path = "/download-file")
    public ResponseEntity<Resource> getFile() {
        String fileName = "Can not find symbol.docx";
        // Azure credentials
        String connectionString = "Connection String of storage account on azure";
        BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
                .connectionString(connectionString).buildClient();
        BlobContainerClient containerClient = blobServiceClient.getBlobContainerClient("Container name");
        System.out.println(containerClient.getBlobContainerName());
        BlobClient blob = containerClient.getBlobClient(fileName);
        // Output stream that receives the file's content from the Azure blob
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
        blob.download(outputStream);
        // Wrap the downloaded bytes in a Resource so Spring can write them to the response
        final byte[] bytes = outputStream.toByteArray();
        ByteArrayResource resource = new ByteArrayResource(bytes);
        HttpHeaders headers = new HttpHeaders();
        headers.add(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"" + fileName + "\"");
        return ResponseEntity.ok()
                .contentType(MediaType.APPLICATION_OCTET_STREAM)
                .headers(headers)
                .body(resource);
    }
}
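One caveat on the solution above: ByteArrayOutputStream holds the entire blob in memory, which can hurt for large files. Below is a hedged sketch of a streaming variant using Spring's StreamingResponseBody; it assumes the v12 SDK's BlobClient#openInputStream() is available (verify against your SDK version) and reuses containerClient and fileName as built in getFile() above.
import org.springframework.web.servlet.mvc.method.annotation.StreamingResponseBody;
import java.io.InputStream;

@GetMapping(path = "/download-file-streaming")
public ResponseEntity<StreamingResponseBody> getFileStreaming() {
    BlobClient blob = containerClient.getBlobClient(fileName); // assumes the setup shown above
    StreamingResponseBody body = outputStream -> {
        // openInputStream() streams from Azure; transferTo copies in chunks (Java 9+)
        try (InputStream in = blob.openInputStream()) {
            in.transferTo(outputStream);
        }
    };
    return ResponseEntity.ok()
            .contentType(MediaType.APPLICATION_OCTET_STREAM)
            .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"" + fileName + "\"")
            .body(body);
}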
I want to generate a password-protected zip file and then return it to the frontend. I am using Spring Boot and the zip4j library. I am able to generate the zip file in the backend service, but not able to send it to the frontend.
Service
import net.lingala.zip4j.ZipFile;
import net.lingala.zip4j.exception.ZipException;
import net.lingala.zip4j.model.ZipParameters;
import net.lingala.zip4j.model.enums.CompressionLevel;
import net.lingala.zip4j.model.enums.EncryptionMethod;
import java.io.File;

public ZipFile downloadZipFileWithPassword(String password) throws ZipException {
    String filePath = "Sample.csv";
    ZipParameters zipParameters = new ZipParameters();
    zipParameters.setEncryptFiles(true);
    zipParameters.setCompressionLevel(CompressionLevel.HIGHER);
    zipParameters.setEncryptionMethod(EncryptionMethod.AES);
    ZipFile zipFile = new ZipFile("Test.zip", password.toCharArray());
    zipFile.addFile(new File(filePath), zipParameters);
    return zipFile;
}
Controller
import net.lingala.zip4j.ZipFile;

@GetMapping(value = "/v1/downloadZipFileWithPassword")
ResponseEntity<ZipFile> downloadZipFileWithPassword(@RequestParam("password") String password) {
    ZipFile zipFile = service.downloadZipFileWithPassword(password);
    return ResponseEntity.ok().contentType(MediaType.parseMediaType("application/zip"))
            .header("Content-Disposition", "attachment; filename=\"Test.zip\"")
            .body(zipFile);
}
In the controller, how can I convert this ZipFile (net.lingala.zip4j.ZipFile) to an output stream and send it to the client?
The below code worked for me.
import net.lingala.zip4j.ZipFile;
import org.apache.commons.io.IOUtils;
import org.springframework.web.servlet.mvc.method.annotation.StreamingResponseBody;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;

@GetMapping(value = "/v1/downloadZipFileWithPassword")
ResponseEntity<StreamingResponseBody> downloadZipFileWithPassword(@RequestParam("password") String password) {
    ZipFile zipFile = service.downloadZipFileWithPassword(password);
    return ResponseEntity.ok().contentType(MediaType.parseMediaType("application/zip"))
            .header("Content-Disposition", "attachment; filename=\"Test.zip\"")
            .body(outputStream -> {
                try (OutputStream os = outputStream;
                     InputStream inputStream = new FileInputStream(zipFile.getFile())) {
                    IOUtils.copy(inputStream, os);
                }
            });
}
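This still writes Test.zip to the server's disk first and then streams it back. If the intermediate file is unwanted, zip4j 2.x also has a streaming net.lingala.zip4j.io.outputstream.ZipOutputStream that can write the encrypted archive directly into the response; a hedged sketch, assuming that API is present in your zip4j version (entry name and source path are illustrative):
import net.lingala.zip4j.io.outputstream.ZipOutputStream;
import net.lingala.zip4j.model.ZipParameters;
import net.lingala.zip4j.model.enums.EncryptionMethod;
import java.nio.file.Files;
import java.nio.file.Paths;

@GetMapping(value = "/v1/downloadZipStreaming")
ResponseEntity<StreamingResponseBody> downloadZipStreaming(@RequestParam("password") String password) {
    return ResponseEntity.ok().contentType(MediaType.parseMediaType("application/zip"))
            .header("Content-Disposition", "attachment; filename=\"Test.zip\"")
            .body(outputStream -> {
                // The archive is encrypted and written straight to the HTTP response.
                try (ZipOutputStream zos = new ZipOutputStream(outputStream, password.toCharArray())) {
                    ZipParameters params = new ZipParameters();
                    params.setEncryptFiles(true);
                    params.setEncryptionMethod(EncryptionMethod.AES);
                    params.setFileNameInZip("Sample.csv"); // name of the entry inside the zip
                    zos.putNextEntry(params);
                    Files.copy(Paths.get("Sample.csv"), zos); // illustrative source path
                    zos.closeEntry();
                }
            });
}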
I am trying to download a file from my web application in an ActionForward Java class. I have looked at many examples and tried different solutions, but none have worked so far. My knowledge is limited and I have spent a good amount of time trying to get this to work.
From my JSP page, a link hits an action in my Struts config, which routes the request to a method with an ActionForward return type in a Java class.
I then take the passed-in file name and fetch the file from an Amazon S3 bucket, which gives me the file's byte[].
I then need the file to download to the local machine the way most downloads do (appearing in the Downloads folder, with the browser showing the download in its bottom bar).
After following some examples I kept getting this error:
Servlet Exception - getOutputStream() has already been called for this response
I got past the error by writing directly with
response.getOutputStream().write
instead of creating a new OutputStream like this:
OutputStream out = response.getOutputStream();
Now it runs without errors, but no file gets downloaded.
Here is the Java file in which I am attempting to do this.
As you can see below, there is a commented-out DownloadServlet class, which I tried as another approach; many of the examples use classes that extend HttpServlet, so I made DownloadServlet extend it, but it made no difference.
package com.tc.fms.actions;
import com.tc.fw.User;
import org.apache.commons.beanutils.PropertyUtils;
import java.io.*;
import java.util.ArrayList;
import org.apache.struts.action.ActionMessage;
import org.apache.struts.action.ActionMessages;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.tc.fw.actions.BaseAction;
import org.apache.struts.upload.FormFile;
import io.isfs.utils.ObjectUtils;
import com.tc.fw.*;
import com.tc.fms.*;
import com.tc.fms.service.*;
public class FileDownloadAction extends BaseAction {

    private static ObjectUtils objectUtils = new ObjectUtils();
    private static final int BUFFER_SIZE = 1048;

    public ActionForward performWork(ActionMapping mapping, ActionForm form, HttpServletRequest request, HttpServletResponse response) throws Exception {
        System.out.println("In File Download Action");
        ActionMessages errors = new ActionMessages();
        User user = (User) request.getSession().getAttribute(User.lookupKey);
        String fileName = (String) PropertyUtils.getSimpleProperty(form, "fileName");
        String outboundDir = (String) PropertyUtils.getSimpleProperty(form, "outboundDir");
        System.out.println("File Dir: " + outboundDir + " File Name: " + fileName);
        try {
            try {
                // Get file from Amazon
                byte[] fileBytes = objectUtils.getFileDavid(outboundDir, fileName);
                if (fileBytes != null) {
                    java.io.File file = File.createTempFile(
                            fileName.substring(0, fileName.lastIndexOf(".")),
                            fileName.substring(fileName.lastIndexOf(".")));
                    // try-with-resources so the temp file is fully flushed and closed
                    // before it is read back
                    try (FileOutputStream fileOutputStream = new FileOutputStream(file)) {
                        fileOutputStream.write(fileBytes);
                    }
                    try {
                        /* DownloadServlet downloadServlet = new DownloadServlet();
                        downloadServlet.doGet(request, response, file); */
                        response.setContentType("text/plain");
                        response.setHeader("Content-disposition", "attachment; filename=" + file.getName());
                        try (InputStream in = new FileInputStream(file)) {
                            byte[] buffer = new byte[BUFFER_SIZE];
                            int numBytesRead;
                            while ((numBytesRead = in.read(buffer)) != -1) {
                                response.getOutputStream().write(buffer, 0, numBytesRead);
                            }
                        }
                    } catch (Exception e) {
                        System.out.println("OutputStream ERROR: " + e);
                    }
                } else {
                    System.out.println("File Bytes Are Null");
                    errors.add(ActionMessages.GLOBAL_MESSAGE, new ActionMessage("fms.download.no.file.found"));
                    saveErrors(request, errors);
                    return mapping.findForward("failure");
                }
            } catch (Exception eee) {
                System.out.println("Failed in AWS ERROR: " + eee);
                errors.add(ActionMessages.GLOBAL_MESSAGE, new ActionMessage("fms.download.failed"));
                saveErrors(request, errors);
                return mapping.findForward("failure");
            }
        } catch (Exception ee) {
            System.out.println("Failed in global try");
            errors.add(ActionMessages.GLOBAL_MESSAGE, new ActionMessage("fms.download.failed"));
            saveErrors(request, errors);
            return mapping.findForward("failure");
        }
        return mapping.findForward("success");
    }
}
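For what it's worth, a frequent cause of "runs without errors but no file downloads" in Struts 1 actions that write the response themselves is returning a forward afterwards: once the action has written to response.getOutputStream(), it should return null so Struts does not try to dispatch a forward over the already-committed response. A hedged sketch of how the success path could end (not a confirmed fix for this exact code):
// After streaming the file bytes to the response, flush and tell Struts
// the response is complete by returning null instead of a forward.
response.setContentLength(fileBytes.length);
response.getOutputStream().write(fileBytes);
response.getOutputStream().flush();
return null; // no forward: the response has already been handled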
try {
    URL url = new URL("https://prod-us-west-2-uploads.s3-us-west-2.amazonaws.com/arn%3Aaws%3Adevicefarm%3Aus-west-2%3A225178842088%3Aproject%3A1e6bbc52-5070-4505-b4aa-592d5e807b15/uploads/arn%3Aaws%3Adevicefarm%3Aus-west-2%3A225178842088%3Aupload%3A1e6bbc52-5070-4505-b4aa-592d5e807b15/501fdfee-877b-42b7-b180-de584309a082/Hamza-test-app.apk?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20181011T092801Z&X-Amz-SignedHeaders=host&X-Amz-Expires=86400&X-Amz-Credential=AKIAJSORV74ENYFBITRQ%2F20181011%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Signature=f041f2bf43eca1ba993fbf7185ad8bcb8eccec8429f2877bc32ab22a761fa2a");
    File file = new File("C:\\Users\\Hamza\\Desktop\\Hamza-test-app.apk");
    // Create connection
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setDoOutput(true);
    connection.setRequestMethod("PUT");
    BufferedOutputStream bos = new BufferedOutputStream(connection.getOutputStream());
    BufferedInputStream bis = new BufferedInputStream(new FileInputStream(file));
    int i;
    // read byte by byte until end of stream; read() returns -1 at EOF
    // (comparing with > 0 would stop early at the first zero byte of a binary file)
    while ((i = bis.read()) != -1) {
        bos.write(i);
    }
    bos.flush();
    bis.close();
    bos.close();
    System.out.println("HTTP response code: " + connection.getResponseCode());
} catch (Exception ex) {
    System.out.println("Failed to Upload File");
}
I want to upload a file to AWS Device Farm in Java, but the file is not showing up in the AWS project's upload list.
The simplest way is to create an entity by passing the file directly:
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.config.CookieSpecs;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.entity.EntityBuilder;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.impl.client.HttpClients;
import java.io.File;
import java.io.IOException;

public class Test {

    /**
     * Uploading file at pre-signed URL
     *
     * @throws IOException
     */
    private void uploadFileToAWSS3(String preSignedUrl) throws IOException {
        File file = new File("/Users/vmagadum/SitCopiedFile/temp/details.csv");
        HttpClient httpClient = HttpClients.custom()
                .setDefaultRequestConfig(
                        RequestConfig.custom().setCookieSpec(CookieSpecs.STANDARD).build()
                ).build();
        HttpPut put = new HttpPut(preSignedUrl);
        HttpEntity entity = EntityBuilder.create()
                .setFile(file)
                .build();
        put.setEntity(entity);
        put.setHeader("Content-Type", "text/csv");
        HttpResponse response = httpClient.execute(put);
        if (response.getStatusLine().getStatusCode() == 200) {
            System.out.println("File uploaded successfully at destination.");
        } else {
            System.out.println("Error occurred while uploading file.");
        }
    }
}
If you create the entity with MultipartEntityBuilder instead, as below:
HttpEntity entity = MultipartEntityBuilder.create()
.addPart("file", new FileBody(file))
.build();
then it will add unnecessary data (the multipart boundaries and part headers) to the stored file. Here are more details:
Link
To elaborate on my comment, here are two examples of how to upload to the pre-signed URL returned by Device Farm's SDK in Java:
Jenkins plugin example
Generic S3 documentation example about presigned URLs
[update]
Here is an example which uploads a file to the Device Farm S3 presigned URL:
package com.jmp.stackoveflow;
import java.io.File;
import java.io.IOException;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSSessionCredentials;
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
import com.amazonaws.services.devicefarm.*;
import com.amazonaws.services.devicefarm.model.CreateUploadRequest;
import com.amazonaws.services.devicefarm.model.Upload;
import org.apache.commons.lang3.RandomStringUtils;
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.FileEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class App {

    public static void main(String[] args) {
        String PROJECT_ARN = "arn:aws:devicefarm:us-west-2:111122223333:project:ffb3d9f2-3dd6-4ab8-93fd-bbb6be67b29b";
        String ROLE_ARN = "arn:aws:iam::111122223333:role/DeviceFarm_FULL_ACCESS";

        System.out.println("Creating credentials object");
        // getting credentials
        STSAssumeRoleSessionCredentialsProvider sts = new STSAssumeRoleSessionCredentialsProvider.Builder(ROLE_ARN,
                RandomStringUtils.randomAlphanumeric(8)).build();
        AWSSessionCredentials creds = sts.getCredentials();

        ClientConfiguration clientConfiguration = new ClientConfiguration()
                .withUserAgent("AWS Device Farm - stackoverflow example");
        AWSDeviceFarmClient api = new AWSDeviceFarmClient(creds, clientConfiguration);
        api.setServiceNameIntern("devicefarm");

        System.out.println("Creating upload object");
        File app_debug_apk = new File("PATH_TO_YOUR_FILE_HERE");
        FileEntity fileEntity = new FileEntity(app_debug_apk);
        CreateUploadRequest appUploadRequest = new CreateUploadRequest().withName(app_debug_apk.getName())
                .withProjectArn(PROJECT_ARN).withContentType("application/octet-stream").withType("ANDROID_APP");
        Upload upload = api.createUpload(appUploadRequest).getUpload();

        // Create the connection and use it to upload the new object using the
        // pre-signed URL.
        CloseableHttpClient httpClient = HttpClients.createSystem();
        HttpPut httpPut = new HttpPut(upload.getUrl());
        httpPut.setHeader("Content-Type", upload.getContentType());
        httpPut.setEntity(fileEntity);
        try {
            HttpResponse response = httpClient.execute(httpPut);
            System.out.println("Response: " + response.getStatusLine().getStatusCode());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
OUTPUT
Creating credentials object
Creating upload object
Response: 200
This is a bit of an old question, but in case anyone else finds this: here is how I solved the problem for files less than 5 MB. For files over 5 MB it's recommended to use multipart upload (see the sketch after the code below).
NOTE: Java's try-with-resources is convenient here. Plain try/catch makes this a clumsy operation, but it ensures that resources are closed in the least amount of code within a method.
/**
 * Serial upload of an array of media files to S3 using a presignedUrl.
 */
public void serialPutMedia(ArrayList<String> signedUrls) {
    long getTime = System.currentTimeMillis();
    LOGGER.debug("serialPutMedia called");
    String toDiskDir = DirectoryMgr.getMediaPath('M');
    try {
        HttpURLConnection connection;
        for (int i = 0; i < signedUrls.size(); i++) {
            URL url = new URL(signedUrls.get(i));
            connection = (HttpURLConnection) url.openConnection();
            connection.setDoOutput(true);
            connection.setRequestMethod("PUT");
            // localURL and fileNames are instance fields of this class
            localURL = toDiskDir + "/" + fileNames.get(i);
            // Note: a plain BufferedOutputStream is used here; an ObjectOutputStream
            // would prepend serialization headers and corrupt the uploaded bytes.
            try (BufferedInputStream bin = new BufferedInputStream(new FileInputStream(new File(localURL)));
                 BufferedOutputStream out = new BufferedOutputStream(connection.getOutputStream()))
            {
                LOGGER.debug("S3 put request built ... sending to S3 ...");
                byte[] readBuffArr = new byte[4096];
                int readBytes = 0;
                while ((readBytes = bin.read(readBuffArr)) >= 0) {
                    out.write(readBuffArr, 0, readBytes);
                }
                out.flush(); // make sure the buffered tail is sent before reading the response
                LOGGER.debug("response code: {}", connection.getResponseCode());
            } catch (FileNotFoundException e) {
                LOGGER.warn("\tFile Not Found exception");
                LOGGER.warn(e.getMessage());
                e.printStackTrace();
            }
        }
    } catch (MalformedURLException e) {
        LOGGER.warn(e.getMessage());
        e.printStackTrace();
    } catch (IOException e) {
        LOGGER.warn(e.getMessage());
        e.printStackTrace();
    }
    getTime = (System.currentTimeMillis() - getTime);
    System.out.print("Total get time in syncCloudMediaAction: {" + getTime + "} milliseconds, numElements: {" + signedUrls.size() + "}");
}
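On the over-5 MB remark above: with the v1 SDK, TransferManager performs multipart uploads automatically (note this path uses SDK credentials rather than a presigned URL); a hedged sketch, with bucket, key, and path illustrative:
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;
import java.io.File;

// Sketch: TransferManager splits large files into parts and uploads them in parallel.
public static void uploadLargeFile(String bucket, String key, File file) throws InterruptedException {
    TransferManager tm = TransferManagerBuilder.standard().build();
    try {
        Upload upload = tm.upload(bucket, key, file);
        upload.waitForCompletion(); // blocks until every part has been uploaded
    } finally {
        tm.shutdownNow(false); // false: leave the underlying S3 client open if shared
    }
}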
These answers are all outdated, as they use the AWS SDK for Java v1. Best practice for this use case is the AWS SDK for Java v2; Amazon strongly recommends v2 over v1.
Here is a Java v2 example that demonstrates how to use the S3Presigner client to create a presigned URL and upload an object to an Amazon Simple Storage Service (Amazon S3) bucket.
package com.example.s3;
// snippet-start:[presigned.java2.generatepresignedurl.import]
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;
import java.time.Duration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;
import software.amazon.awssdk.services.s3.presigner.model.PresignedPutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;
// snippet-end:[presigned.java2.generatepresignedurl.import]
/**
* To run this AWS code example, ensure that you have setup your development environment, including your AWS credentials.
*
* For information, see this documentation topic:
*
* https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*/
public class GeneratePresignedUrlAndUploadObject {

    public static void main(String[] args) {
        final String USAGE = "\n" +
                "Usage:\n" +
                "    <bucketName> <keyName> \n\n" +
                "Where:\n" +
                "    bucketName - the name of the Amazon S3 bucket. \n\n" +
                "    keyName - a key name that represents a text file. \n";

        if (args.length != 2) {
            System.out.println(USAGE);
            System.exit(1);
        }

        String bucketName = args[0];
        String keyName = args[1];

        Region region = Region.US_EAST_1;
        S3Presigner presigner = S3Presigner.builder()
                .region(region)
                .build();

        signBucket(presigner, bucketName, keyName);
        presigner.close();
    }

    // snippet-start:[presigned.java2.generatepresignedurl.main]
    public static void signBucket(S3Presigner presigner, String bucketName, String keyName) {
        try {
            PutObjectRequest objectRequest = PutObjectRequest.builder()
                    .bucket(bucketName)
                    .key(keyName)
                    .contentType("text/plain")
                    .build();

            PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
                    .signatureDuration(Duration.ofMinutes(10))
                    .putObjectRequest(objectRequest)
                    .build();

            PresignedPutObjectRequest presignedRequest = presigner.presignPutObject(presignRequest);
            String myURL = presignedRequest.url().toString();
            System.out.println("Presigned URL to upload a file to: " + myURL);
            System.out.println("Which HTTP method needs to be used when uploading a file: " +
                    presignedRequest.httpRequest().method());

            // Upload content to the Amazon S3 bucket by using this URL
            URL url = presignedRequest.url();

            // Create the connection and use it to upload the new object by using the presigned URL
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setDoOutput(true);
            connection.setRequestProperty("Content-Type", "text/plain");
            connection.setRequestMethod("PUT");
            OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream());
            out.write("This text was uploaded as an object by using a presigned URL.");
            out.close();

            System.out.println("HTTP response code is " + connection.getResponseCode());

        } catch (S3Exception e) {
            e.printStackTrace(); // getStackTrace() alone would silently discard the error
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    // snippet-end:[presigned.java2.generatepresignedurl.main]
}
You should use the AWS SDK, as shown here.
I am trying to build a web service to download an image from AWS S3 using Jersey 1.18.
I have an S3ObjectInputStream with the file.
I need a FAST way to retrieve the image; my way is very slow (5 seconds).
What is the right way to do that?
Here is my code:
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.ResponseBuilder;
import com.amazonaws.services.s3.model.S3Object;

@Path("/getfile")
public class Temp3 {

    @GET
    @Produces("image/*")
    public Response getFile() throws IOException {
        System.out.println("in getfile");
        awsBL _bl = new awsBL();
        S3Object object = _bl.getFile("gps.png");
        //System.out.println("**meta:\n" + object.getObjectMetadata());
        InputStream objectContent = object.getObjectContent();
        InputStream reader = new BufferedInputStream(objectContent);
        File file = new File("localFilename");
        OutputStream writer = new BufferedOutputStream(new FileOutputStream(file));
        int read = -1;
        while ((read = reader.read()) != -1) {
            writer.write(read);
        }
        writer.flush();
        writer.close();
        reader.close();
        String filename = object.getKey();
        ResponseBuilder response = Response.ok(file);
        response.header("Content-Disposition", "attachment; filename=" + filename);
        return response.build();
    }
}
Step one
static byte[] getBinaryData(String filename, String logId) {
    return S3_SDK.download(S3_SDK.getFilesBucket(), "/foldername/" + filename, logId);
}
Step two
public static byte[] download(String bucketName, String name, String logId) {
    LOG.log(Level.INFO, "{0} :: start download process, bucketName: {1}, name: {2}", new Object[]{logId, bucketName, name});
    S3Object object = downloadAsS3Object(bucketName, name, logId);
    LOG.log(Level.INFO, "{0} :: download process returns, S3Object: {1}", new Object[]{logId, object});
    try {
        return IOUtils.toByteArray(object.getObjectContent());
    } catch (IOException ex) {
        LOG.log(Level.SEVERE, "{0} :: error download process, bucketName: {1}, name: {2}\n{3}", new Object[]{logId, bucketName, name, Utilities.getStackTrace(ex)});
    }
    return null;
}
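Copying the object to a local file (or a full byte[]) is exactly what makes this slow and memory-hungry; Jersey can stream the S3 object's content straight into the response instead. A hedged sketch using JAX-RS StreamingOutput, reusing the awsBL helper from the question:
import javax.ws.rs.core.StreamingOutput;

@GET
@Produces("image/*")
public Response getFileStreaming() {
    awsBL bl = new awsBL();
    final S3Object object = bl.getFile("gps.png");
    StreamingOutput stream = output -> {
        // Pipe the S3 stream to the HTTP response in 8 KB chunks,
        // skipping the temporary local file entirely.
        try (InputStream in = object.getObjectContent()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                output.write(buf, 0, n);
            }
        }
    };
    return Response.ok(stream)
            .header("Content-Disposition", "attachment; filename=" + object.getKey())
            .build();
}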
I am uploading a file to S3 using Java; this is what I have so far:
AmazonS3 s3 = new AmazonS3Client(new BasicAWSCredentials("XX","YY"));
List<Bucket> buckets = s3.listBuckets();
s3.putObject(new PutObjectRequest(buckets.get(0).getName(), fileName, stream, new ObjectMetadata()));
The file is being uploaded, but a WARNING is raised when I am not setting the content length:
com.amazonaws.services.s3.AmazonS3Client putObject: No content length specified for stream data. Stream contents will be buffered in memory and could result in out of memory errors.
This is a file I am uploading, and the stream variable is an InputStream, from which I can get the byte array like this: IOUtils.toByteArray(stream).
So when I try to set the content length and MD5 (taken from here) like this:
// get MD5 base64 hash
MessageDigest messageDigest = MessageDigest.getInstance("MD5");
messageDigest.reset();
messageDigest.update(IOUtils.toByteArray(stream));
byte[] resultByte = messageDigest.digest();
String hashtext = new String(Hex.encodeHex(resultByte));
ObjectMetadata meta = new ObjectMetadata();
meta.setContentLength(IOUtils.toByteArray(stream).length);
meta.setContentMD5(hashtext);
It causes the following error to come back from S3:
The Content-MD5 you specified was invalid.
What am I doing wrong?
Any help appreciated!
P.S. I am on Google App Engine - I cannot write the file to disk or create a temp file because AppEngine does not support FileOutputStream.
Because the original question was never answered, and I ran into this same problem, the solution for the MD5 problem is that S3 doesn't want the Hex-encoded MD5 string we normally think of.
Instead, I had to do this.
import org.apache.commons.codec.binary.Base64;
import org.apache.commons.codec.digest.DigestUtils;

// content is a passed-in InputStream
byte[] resultByte = DigestUtils.md5(content);
String streamMD5 = new String(Base64.encodeBase64(resultByte));
metaData.setContentMD5(streamMD5);
Essentially what they want for the MD5 value is the Base64 encoded raw MD5 byte-array, not the Hex string. When I switched to this it started working great for me.
If all you are trying to do is solve the content-length error from Amazon, then you could just read the bytes from the input stream into a byte array, take its length as a Long, and add that to the metadata.
/*
 * Obtain the content length of the input stream for the S3 header
 */
byte[] contentBytes = null;
try {
    InputStream is = event.getFile().getInputstream();
    contentBytes = IOUtils.toByteArray(is);
} catch (IOException e) {
    System.err.printf("Failed while reading bytes from %s", e.getMessage());
}
Long contentLength = Long.valueOf(contentBytes.length);
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(contentLength);
/*
 * Reobtain the tmp uploaded file as an input stream
 */
InputStream inputStream = event.getFile().getInputstream();
/*
 * Put the object in S3
 */
try {
    s3client.putObject(new PutObjectRequest(bucketName, keyName, inputStream, metadata));
} catch (AmazonServiceException ase) {
    System.out.println("Error Message: " + ase.getMessage());
    System.out.println("HTTP Status Code: " + ase.getStatusCode());
    System.out.println("AWS Error Code: " + ase.getErrorCode());
    System.out.println("Error Type: " + ase.getErrorType());
    System.out.println("Request ID: " + ase.getRequestId());
} catch (AmazonClientException ace) {
    System.out.println("Error Message: " + ace.getMessage());
} finally {
    if (inputStream != null) {
        inputStream.close();
    }
}
You'll need to read the input stream twice using this exact method, so if you are uploading a very large file you might need to look at reading it once into an array and then reading it from there, as in the sketch below.
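A hedged sketch of that read-once variant: buffer the upload into a byte[] a single time, then derive both the content length and a fresh stream from the array (names follow the code above):
byte[] contentBytes = IOUtils.toByteArray(event.getFile().getInputstream());
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(contentBytes.length);
// A ByteArrayInputStream can be recreated from the same array as often as needed.
s3client.putObject(new PutObjectRequest(bucketName, keyName,
        new java.io.ByteArrayInputStream(contentBytes), metadata));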
For uploading, the S3 SDK has two putObject methods:
PutObjectRequest(String bucketName, String key, File file)
and
PutObjectRequest(String bucketName, String key, InputStream input, ObjectMetadata metadata)
The InputStream + ObjectMetadata method needs, at a minimum, the Content-Length of your InputStream in the metadata. If you don't provide it, the SDK will buffer the stream in memory to find the length, which could cause OOM errors. Alternatively, you could do your own in-memory buffering to get the length, but then you need to obtain a second InputStream.
Not asked by the OP (due to the limitations of his environment), but for someone else, such as me: I find it easier, and safer (if you have access to a temp file), to write the InputStream to a temp file and put the temp file. There is no in-memory buffering and no requirement to create a second InputStream.
AmazonS3 s3Service = new AmazonS3Client(awsCredentials);
File scratchFile = File.createTempFile("prefix", "suffix");
try {
FileUtils.copyInputStreamToFile(inputStream, scratchFile);
PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, id, scratchFile);
PutObjectResult putObjectResult = s3Service.putObject(putObjectRequest);
} finally {
if(scratchFile.exists()) {
scratchFile.delete();
}
}
While writing to S3, you need to specify the length of the S3 object to be sure that there are no out-of-memory errors.
Using IOUtils.toByteArray(stream) is also prone to OOM errors, because it is backed by a ByteArrayOutputStream.
So the best option is to first write the input stream to a temp file on local disk, and then use that file to write to S3, specifying the temp file's length; a sketch follows.
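A hedged sketch of that temp-file route using only java.nio (equivalent in spirit to the commons-io version shown earlier; names are illustrative):
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Spool the stream to a temp file so S3 gets an exact Content-Length,
// then clean the file up afterwards.
Path tmp = Files.createTempFile("upload-", ".tmp");
try {
    Files.copy(inputStream, tmp, StandardCopyOption.REPLACE_EXISTING);
    s3client.putObject(new PutObjectRequest(bucketName, keyName, tmp.toFile()));
} finally {
    Files.deleteIfExists(tmp);
}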
I am actually doing somewhat the same thing, but with my AWS S3 storage.
Code for the servlet which receives the uploaded file:
import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;
import com.src.code.s3.S3FileUploader;

public class FileUploadHandler extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        doPost(request, response);
    }

    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        PrintWriter out = response.getWriter();
        try {
            List<FileItem> multipartfiledata = new ServletFileUpload(new DiskFileItemFactory()).parseRequest(request);
            // upload to S3
            S3FileUploader s3 = new S3FileUploader();
            String result = s3.fileUploader(multipartfiledata);
            out.print(result);
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
Code which uploads this data as an AWS object:
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.List;
import java.util.UUID;
import org.apache.commons.fileupload.FileItem;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.ClasspathPropertiesFileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class S3FileUploader {

    private static String bucketName = "***NAME OF YOUR BUCKET***";
    private static String keyName = "Object-" + UUID.randomUUID();

    public String fileUploader(List<FileItem> fileData) throws IOException {
        AmazonS3 s3 = new AmazonS3Client(new ClasspathPropertiesFileCredentialsProvider());
        String result = "Upload unsuccessful because ";
        try {
            ObjectMetadata omd = new ObjectMetadata();
            omd.setContentType(fileData.get(0).getContentType());
            omd.setContentLength(fileData.get(0).getSize());
            omd.setHeader("filename", fileData.get(0).getName());
            // Note: an S3Object wrapper is not needed for uploads; the stream
            // goes straight into the PutObjectRequest.
            ByteArrayInputStream bis = new ByteArrayInputStream(fileData.get(0).get());
            s3.putObject(new PutObjectRequest(bucketName, keyName, bis, omd));
            bis.close();
            result = "Uploaded Successfully.";
        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which means your request made it to Amazon S3, but was "
                    + "rejected with an error response for some reason.");
            System.out.println("Error Message: " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code: " + ase.getErrorCode());
            System.out.println("Error Type: " + ase.getErrorType());
            System.out.println("Request ID: " + ase.getRequestId());
            result = result + ase.getMessage();
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which means the client encountered an internal error while "
                    + "trying to communicate with S3, such as not being able to access the network.");
            result = result + ace.getMessage();
        } catch (Exception e) {
            result = result + e.getMessage();
        }
        return result;
    }
}
Note: I am using an AWS properties file for credentials.
Hope this helps.
I've created a library that uses multipart uploads in the background to avoid buffering everything in memory and also doesn't write to disk: https://github.com/alexmojaki/s3-stream-upload
Just passing the File object to the putObject method worked for me. If you are getting a stream, try writing it to a temp file before passing it on to S3.
amazonS3.putObject(bucketName, id, fileObject);
I am using AWS SDK v1.11.414.
The answer at https://stackoverflow.com/a/35904801/2373449 helped me.
Adding the log4j-1.2.12.jar file resolved the issue for me.