I'm sending files to an Amazon S3 server like this, and I really need to change the part size of the upload from Amazon's default (5 MB) to 1 MB. Is there any way to do that?
TransferObserver observer = transferUtility.upload(
    "mydir/test_dir",                      /* The bucket to upload to */
    data.getData().getLastPathSegment(),   /* The key for the uploaded object */
    root                                   /* The file where the data to upload exists */
);
The minimum part size for S3 multipart uploads is 5 MB (see http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html), so 1 MB parts are not possible. The Transfer Utility already uses the smallest allowable part size, which is usually 5 MB.
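To make the 5 MB floor concrete, a quick calculation (plain Java; the sizes are illustrative, and 5 MiB is assumed as the part size the SDK actually uses) shows how many parts a given file is split into:

```java
public class PartCount {
    // Minimum part size accepted by S3 multipart uploads: 5 MiB.
    static final long MIN_PART_SIZE = 5L * 1024 * 1024;

    // Number of parts needed to upload fileSize bytes at the minimum part size.
    static long partsFor(long fileSize) {
        return (fileSize + MIN_PART_SIZE - 1) / MIN_PART_SIZE; // ceiling division
    }

    public static void main(String[] args) {
        System.out.println(partsFor(12L * 1024 * 1024)); // a 12 MiB file -> 3 parts
    }
}
```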
Related
I have a Java application in which I would like to process around 10 GB of record files, zip them into a single archive, and upload it to S3. Since the overall size is around 10 GB, I cannot hold all the files in memory and then upload to S3; I would therefore need to create a zip file in S3 and update its contents by partitioning my files. Is there any way to update an existing zip file in S3 without downloading it to a local folder?
You can use the AWS Java SDK for this:
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk</artifactId>
<version>1.11.398</version>
</dependency>
Create an Amazon S3 client as follows:
BasicAWSCredentials credentials = new BasicAWSCredentials("access_key", "secret_key");
AmazonS3 amazonS3 = AmazonS3ClientBuilder.standard().withCredentials(new AWSStaticCredentialsProvider(credentials)).build();
Create a TransferManager and set the multipart upload threshold. Amazon S3 imposes a minimum part size of 5 MB, so we are using 5 MB here. You can increase the size as per your requirements.
TransferManager tm = TransferManagerBuilder.standard()
.withS3Client(amazonS3)
.withMultipartUploadThreshold((long) (5 * 1024 * 1024))
.build();
Set the S3 bucket name you want to upload to; keyName will be used to name the uploaded object. tm.upload starts the upload in the background.
String bucketName = "my-bucket";
String keyName = "mydata.zip";
File file = new File("path_to_file/mydata.zip");
Upload upload = tm.upload(bucketName, keyName, file);
waitForCompletion is a blocking call that returns once the upload to S3 has finished.
try {
upload.waitForCompletion();
} catch (AmazonClientException e) {
// ...
}
I need to send large video files (and other files) to a server with Base64 encoding.
I get an out-of-memory exception because I store the whole file in memory (in a byte[]) and then encode it to a string with Base64.encodeToString. But how can I encode the file and send it on the fly and/or using less memory? Or how can I do this better?
For the request I currently use MultipartEntityBuilder; after building it, I send it to the server with a POST, and along with the file I need to send other data too. So I need to send both in one request, and the server only accepts files that are Base64 encoded.
OR
Because I use Drupal's REST module to create content from posts, another solution would be to send a normal POST with a normal form (like the browser does). The problem is I have found only one solution: you call the <endpoint>/file URL and pass four things:
array("filesize" => 1029,             // file size
      "filename" => "something.mp4",  // file name
      "uid"      => 1,                // user id of the uploader
      "file"     => "base64 encoded file string")
After this request I get an fid, which is the uploaded file's ID. I need to send this with the real content when I create the node. If I could send the file with a normal POST (without encoding), like the browser does on form submit, that would be better.
I need to send large video files (and other files) to a server with Base64 encoding.
You should consider getting a better server, one that supports binary uploads.
I get an out-of-memory exception because I store the whole file in memory (in a byte[]) and then encode it to a string with Base64.encodeToString.
That will not work for any video of significant size; you do not have the heap space for it.
But how can I encode the file and send it on the fly and/or using less memory? Or how can I do this better?
You can implement a streaming converter to Base64: read bytes in from the file and write them out to a Base64-encoded file, processing only a small number of bytes at a time in RAM. Then upload that file along with the rest of your form data.
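Since Java 8 (and Android API 26) the standard library can do this without any hand-rolled encoder: java.util.Base64.getEncoder().wrap(...) returns an OutputStream that encodes whatever is written to it. A minimal sketch of the streaming conversion (the buffer size is an arbitrary choice):

```java
import java.io.*;
import java.util.Base64;

public class StreamingBase64 {
    // Copies 'in' to 'out' while Base64-encoding on the fly,
    // holding only one small buffer in memory at a time.
    static void encodeStream(InputStream in, OutputStream out) throws IOException {
        try (OutputStream b64 = Base64.getEncoder().wrap(out)) {
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                b64.write(buffer, 0, n); // encoded bytes go straight to 'out'
            }
        } // closing the wrapper flushes the final padding characters
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        encodeStream(new ByteArrayInputStream("hello".getBytes()), out);
        System.out.println(out.toString()); // aGVsbG8=
    }
}
```

Point the input at a FileInputStream and the output at the upload (or a temporary file) and memory use stays constant regardless of video size.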
I have an application that picks a file from a dedicated path on my device and sends it to a server.
I am using the ksoap2 library to call a .NET web service to send the file, and I am using Base64 encoding.
I can send a file of at most 1 MB without encryption and 850 KB with encryption; the encryption algorithm I am using is 3DES.
If I try to send files larger than the above sizes, I get the following error: Caused by: java.lang.OutOfMemoryError at org.ksoap2.transport.HttpTransportSE.call(HttpTransportSE.java:121)
My test environment: Android emulator with API level 8, Android 2.2, and 512 MB of SD card memory.
Am I missing something? Would using a BLOB help in this scenario?
Is there any way to send larger files? I have heard of sending data in chunks but have no idea how; any link or sample code would really help.
I get the file data using the following code (vURL is the path where the file is stored):
public byte[] getFileData(String vURL) throws IOException {
    File file = new File(vURL);
    int size = (int) file.length();  // was vURL.length(), i.e. the length of the path string
    byte[] fileContent = new byte[size];
    try (FileInputStream instream = new FileInputStream(file)) {
        int read = 0;
        while (read < size) {        // read() may return fewer bytes than requested
            read += instream.read(fileContent, read, size - read);
        }
    }
    return fileContent;              // the original never returned the data
}
Encode the data using the following code:
byte[] res = Utilities.getFileData(file);
String mdata = android.util.Base64.encodeToString(res, android.util.Base64.DEFAULT);
Call the server-side web service and send the data:
SoapObject request = new SoapObject(nameSpace, methodName);
if (fileData != null && !fileData.equals("")) {
request.addProperty("vBLOBData", fileData);
}
SoapSerializationEnvelope envelope = getEnvelope(request);
HttpTransportSE ht = new HttpTransportSE(url); // ,3000
ht.debug = true;
ht.call(soapAction, envelope);
response = (envelope.getResponse()).toString();
I am not able to send file data larger than 1 MB.
Thanks in advance
I don't know exactly what you are trying to achieve, but why don't you divide your file into parts and send each part individually in a loop, or from an Android background service using a timer that sends one part every x seconds?
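The splitting step can be sketched like this (the transport is left abstract: the handler callback is a placeholder for whatever upload call you use, and the 512 KB chunk size is an assumption chosen to stay under the 1 MB ceiling you observed):

```java
import java.io.*;
import java.util.Arrays;

public class ChunkedSender {
    static final int CHUNK_SIZE = 512 * 1024; // 512 KB per part

    // Callback invoked once per chunk, e.g. to Base64-encode and POST it.
    interface ChunkHandler { void handle(byte[] chunk, int index) throws IOException; }

    // Reads the file in fixed-size chunks and hands each one to the handler.
    static int sendInChunks(File file, ChunkHandler handler) throws IOException {
        int index = 0;
        try (InputStream in = new BufferedInputStream(new FileInputStream(file))) {
            byte[] buffer = new byte[CHUNK_SIZE];
            int n;
            while ((n = in.read(buffer)) != -1) {
                byte[] chunk = (n == buffer.length) ? buffer.clone() : Arrays.copyOf(buffer, n);
                handler.handle(chunk, index++);
            }
        }
        return index; // number of chunks sent
    }
}
```

The server would need a matching endpoint that reassembles the parts by index, which is the part ksoap2 will not do for you.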
Try setting your buffer size to 1024 bytes before sending; the limit you are hitting comes from your buffer size and the available RAM.
Use the GZip compression algorithm to zip the large file on the mobile side, and the same algorithm to unzip it on the server.
Using MultipartEntity can also help with uploading large file content.
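For the compression step, java.util.zip in the standard library is enough. A small sketch that gzips one file into another while keeping memory use constant:

```java
import java.io.*;
import java.util.zip.GZIPOutputStream;

public class Gzipper {
    // Streams 'source' through a GZIPOutputStream into 'target',
    // so memory use stays constant regardless of file size.
    static void gzip(File source, File target) throws IOException {
        try (InputStream in = new BufferedInputStream(new FileInputStream(source));
             OutputStream out = new GZIPOutputStream(new FileOutputStream(target))) {
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
        }
    }
}
```

Note that already-compressed formats (MP4 video, JPEG) will barely shrink, so for those the chunking approach is the more reliable fix.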
If compression doesn't help, as mentioned in the previous post, you will probably have to segment the message yourself.
Segmentation
In a REST client, I upload several files to a server.
To report the upload progress to the user, I use a progress bar.
Its total is set to the sum of the file sizes.
This is an estimate, because the upload contains more bytes than the files themselves (multipart boundaries and headers).
The question is: can I obtain the actual number of bytes of the upload before it begins?
That would allow the total to be determined rather than estimated before the upload starts.
FormDataMultiPart multiPart = new FormDataMultiPart();
FileDataBodyPart fdbp = new FileDataBodyPart("data.zip", new File("data.zip"));
BodyPart bp = multiPart.bodyPart(fdbp);
builder.post(String.class, multiPart);
Simple answer: no.
A more comprehensive one: you could, but it is not worth it. Basically you would need to somehow invoke the MessageBodyWriter for the FormDataMultiPart instance (see org.glassfish.jersey.media.multipart.internal.MultiPartWriter.writeTo(...)) and count the bytes written by that call. There are some issues with this approach:
- you would do the writeTo operation twice, which means one request consumes twice the resources it should, or
- you would have to cache the output and send it as a byte[], which needs a lot of memory (depending on the data.zip file size).
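If you did go down that road, the "count without caching" variant of the first option is a tiny OutputStream that discards the bytes and only keeps a counter (a sketch; wiring it into Jersey's MessageBodyWriter invocation is left out):

```java
import java.io.OutputStream;

public class CountingOutputStream extends OutputStream {
    private long count = 0;

    // Discard the byte, remember that it was written.
    @Override public void write(int b) { count++; }

    // Bulk writes only advance the counter; nothing is stored.
    @Override public void write(byte[] b, int off, int len) { count += len; }

    public long getCount() { return count; }
}
```

Serializing the multipart body into this stream once would give you the exact Content-Length, at the cost of serializing the entity twice, which is exactly the resource problem described above.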
I'm currently building a web app that allows users to upload content via the Blobstore and to later download it.
However, the servlet that handles the download is called BlobServiceServlet,
and whenever a user downloads a blob, the filename is changed to "BlobServiceServlet" and the extension is also sometimes changed to .bin. Does anyone know how to fix this problem?
Add a "Content-disposition" header to the response.
See http://en.wikipedia.org/wiki/MIME#Content-Disposition for an example.
E.g., in the handler,
self.response.headers['Content-Disposition'] = 'attachment; filename=foo.doc'
Just some additional info.
This is the code that lets the browser know the file size:
BlobInfoFactory blobInfoFactory = new BlobInfoFactory(DatastoreServiceFactory.getDatastoreService());
BlobInfo blobInfo = blobInfoFactory.loadBlobInfo(blobKey);
resp.setContentLength((int) blobInfo.getSize());
resp.setHeader("content-type", blobInfo.getContentType());
resp.setHeader("content-disposition", "attachment;filename=" + blobInfo.getFilename());
blobstoreService.serve(blobKey, resp);
Note that for files larger than 1 MB, the file size is not sent to the browser: GAE reads the blob 1 MB at a time and overwrites the file size header in the response.
I found all this information here:
http://www.mail-archive.com/google-appengine#googlegroups.com/msg29314.html