I am uploading a list of files to an AWS S3 bucket using MultipleFileUpload; my request is below. If the internet connection drops while the files are uploading and then comes back, the upload does not continue. How can I make it resume automatically from where it left off once the connection is restored?
final ObjectMetadataProvider metadataProvider = new ObjectMetadataProvider() {
    @Override
    public void provideObjectMetadata(File file, ObjectMetadata metadata) {
        // no extra metadata needed
    }
};
final MultipleFileUpload multipleFileUpload = transferManager.uploadFileList(HttpUrls.IMAGE_BUCKET_NAME, "photos/mint_original/", myDir_temp, upload_file, metadataProvider);
The TransferManager component in the AWS Android SDK has been deprecated in favor of the TransferUtility component. The TransferUtility component allows you to pause and resume transfers. It also has support for network monitoring and will automatically pause and resume transfers when the network goes down and comes back up. Here is the link to the TransferUtility documentation - https://aws-amplify.github.io/docs/android/storage
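For reference, here is a minimal TransferUtility sketch (mine, not taken from the SDK docs above); the region and the credentials provider are placeholder assumptions, and it reuses HttpUrls.IMAGE_BUCKET_NAME and the upload_file list from the question. Depending on the SDK version you may also need to declare TransferService in your manifest or initialize TransferNetworkLossHandler for the automatic pause/resume to kick in.

// imports: com.amazonaws.mobileconnectors.s3.transferutility.*
// Hedged sketch: each file gets its own transfer; TransferUtility persists transfer state
// and resumes automatically when connectivity returns.
AmazonS3Client s3Client = new AmazonS3Client(credentialsProvider, Region.getRegion(Regions.US_EAST_1));

TransferUtility transferUtility = TransferUtility.builder()
        .context(getApplicationContext())
        .s3Client(s3Client)
        .defaultBucket(HttpUrls.IMAGE_BUCKET_NAME)
        .build();

for (File file : upload_file) {
    TransferObserver observer =
            transferUtility.upload("photos/mint_original/" + file.getName(), file);

    observer.setTransferListener(new TransferListener() {
        @Override
        public void onStateChanged(int id, TransferState state) {
            // e.g. WAITING_FOR_NETWORK -> IN_PROGRESS happens without any extra code
            Log.d("Upload", "Transfer " + id + " is now " + state);
        }

        @Override
        public void onProgressChanged(int id, long bytesCurrent, long bytesTotal) {
            Log.d("Upload", "Transfer " + id + ": " + bytesCurrent + "/" + bytesTotal);
        }

        @Override
        public void onError(int id, Exception ex) {
            Log.e("Upload", "Transfer " + id + " failed", ex);
        }
    });
}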
I am generating S3 pre-signed URLs so that the client (a mobile app) can PUT an image directly to S3 instead of going through a service. For my use case the expiry time of the pre-signed URL needs to be configured for a longer window (10-20 minutes). I therefore want to limit the size of the file uploaded to S3 so that a malicious attacker cannot upload large files to the bucket. The client gets the URL from a service which has access to the S3 bucket. I am using the AWS Java SDK.
I found that this can be done using POST forms for browser uploads, but how can I do it with just a pre-signed S3 PUT URL?
I was using S3 signed URLs for the first time and was also concerned about this.
The whole signed-URL approach is a bit of a pain because you cannot put a maximum object/upload size limit on them. That seems like something important for file uploads in general that is simply missing, and without it you are forced to work around the problem with the expiry time and similar tricks, which gets messy.
However, you can also upload to S3 buckets with plain POST requests, whose upload policy supports a content-length condition. So I will probably swap my signed URLs for POST routes in the future; for larger, production applications that seems to be the way to go.
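To make that concrete, here is a sketch (mine, not part of the original answer) of the kind of POST policy document that enforces an upload size cap via content-length-range; the bucket name, key prefix, expiration, and the 1 MB limit are placeholders, and the base64 encoding plus SigV4 signing of the policy are omitted.

// Hedged sketch: the policy your service would sign and hand to the client for a
// browser/mobile POST upload. The third condition caps the object size at 1 MB.
public final class PostPolicyExample {
    public static final String POLICY_JSON =
            "{\n"
            + "  \"expiration\": \"2030-01-01T00:00:00Z\",\n"
            + "  \"conditions\": [\n"
            + "    {\"bucket\": \"my-upload-bucket\"},\n"
            + "    [\"starts-with\", \"$key\", \"uploads/\"],\n"
            + "    [\"content-length-range\", 0, 1048576]\n"
            + "  ]\n"
            + "}";
}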
What might help with your issue:
The JavaScript SDK has a method that fetches only the metadata of an S3 object (including its size) without downloading the whole file: s3.headObject().
After the upload is done it can take some time for AWS to process the newly uploaded file before it is available in your bucket. What I did was set a timer after each upload to check the file size, and if it is bigger than 1 MB, delete the file. For production you would want to log that somewhere, e.g. in a database. My file names also include the user ID of whoever uploaded the file, so you can block an account after an oversized upload if you want to.
Here is what worked for me in JavaScript:
function checkS3(key) {
  // Assumption: BUCKET_NAME is defined elsewhere; headParams was not shown in the
  // original snippet, so it is reconstructed here from the key argument.
  const headParams = { Bucket: BUCKET_NAME, Key: key };
  return new Promise((resolve, reject) => {
    s3.headObject(headParams, (err, metadata) => {
      if (err && ["NotFound", "Forbidden"].indexOf(err.code) > -1) {
        // Object does not exist (yet) or we are not allowed to see it
        return reject(err);
      } else if (err) {
        const e = Object.assign({}, Errors.SOMETHING_WRONG, { err });
        return reject(e);
      }
      return resolve(metadata); // metadata.ContentLength holds the object size in bytes
    });
  });
}
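Since the question mentions the AWS Java SDK, the same after-the-fact check is easy there too. This is my own hedged sketch (v1 SDK); the 1 MB cap is a placeholder and the logging of offending uploads is left out.

// Hedged sketch: HEAD the object after the client has PUT it, and delete it if it is
// over the limit. getObjectMetadata() issues the HEAD request.
private static final long MAX_UPLOAD_BYTES = 1024 * 1024;

public void deleteIfTooLarge(AmazonS3 s3Client, String bucket, String key) {
    ObjectMetadata metadata = s3Client.getObjectMetadata(bucket, key);
    if (metadata.getContentLength() > MAX_UPLOAD_BYTES) {
        s3Client.deleteObject(bucket, key);
        // in production, also record the offending key / user somewhere durable
    }
}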
I'm trying to implement multipart upload in Java, following this sample: https://docs.aws.amazon.com/AmazonS3/latest/dev/llJavaUploadFile.html
But my actual task is a bit more complicated: I need to support resuming in case the application was shut down during uploading. Also, I can't use TransferManager - I need to use the low-level API for a particular reason.
The code there is pretty straightforward, but the problem comes with the List<PartETag> partETags part. When finalizing a resumed upload I need that collection, which was previously filled during the upload process, and obviously after an application restart I no longer have it.
So the question is: how do I finalize a resumed upload? Is it possible to obtain the List<PartETag> partETags from the server using some API? All I have is a MultipartUpload object.
Get the list of multipart uploads in progress; this gives you the uploadId and key for each one:

MultipartUploadListing multipartUploadListing =
        s3Client.listMultipartUploads(new ListMultipartUploadsRequest(bucketName));

Then get the list of parts for each uploadId and key:

PartsListing partsListing =
        s3Client.listParts(new ListPartsRequest(bucketName, key, uploadId));

Get the list of part summaries:

List<PartSummary> parts = partsListing.getParts();

Each PartSummary exposes getETag() and getPartNumber():

for (PartSummary part : parts) {
    part.getETag();
    part.getPartNumber();
}

See the Amazon S3 SDK package and AmazonS3 client documentation for details.
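Putting those pieces together, finalizing the resumed upload might look roughly like this (my sketch against the v1 low-level API; it assumes every part was uploaded before the restart, otherwise the missing parts must be re-uploaded first with the same uploadId, and it ignores parts-listing pagination for uploads with more than 1000 parts):

// Hedged sketch: rebuild List<PartETag> from what S3 already has, then complete the upload.
List<PartETag> partETags = new ArrayList<>();

PartsListing partsListing = s3Client.listParts(new ListPartsRequest(bucketName, key, uploadId));
for (PartSummary part : partsListing.getParts()) {
    partETags.add(new PartETag(part.getPartNumber(), part.getETag()));
}

s3Client.completeMultipartUpload(
        new CompleteMultipartUploadRequest(bucketName, key, uploadId, partETags));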
I'm using the AWS Transfer Manager to back up a lot of files to S3. Sometimes the backup fails in the middle, and I don't want to re-upload all the files, only the ones that haven't been uploaded yet.
Is there something baked into the Transfer Manager or the S3 PUT request that would let me do that automatically, or is my only option to check the MD5 of the file with a HEAD request first and see if it's different before starting the upload?
Thanks!
Rather than coding your own solution, you could use the AWS Command-Line Interface (CLI) to copy or sync the files to Amazon S3.
For example:
aws s3 sync <directory> s3://my-bucket/<directory>
The sync command only copies files that are new or have changed relative to the destination. So just run it on a regular basis and it will copy all the files to the S3 bucket!
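If you would rather stay inside the Java SDK than shell out to the CLI, a simple pre-check before each upload also works. The sketch below is mine, not part of the original answer: it skips objects that already exist, which covers the "don't re-upload finished files" case; comparing a local MD5 against the ETag is only reliable for non-multipart uploads, so it is not shown here.

// Hedged sketch (AWS SDK for Java v1): upload only if the key is not already in the bucket.
public void uploadIfMissing(AmazonS3 s3Client, TransferManager transferManager,
                            String bucket, String key, File file) throws InterruptedException {
    if (s3Client.doesObjectExist(bucket, key)) {
        return; // already backed up, nothing to do
    }
    Upload upload = transferManager.upload(bucket, key, file);
    upload.waitForCompletion(); // blocks until this file is done
}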
You can do that using continueWithBlock:. For every upload you can define a retry strategy for the failed-upload case. For example:
[[transferManager upload:uploadRequest] continueWithBlock:^id(AWSTask *task) {
    if (task.error) {
        // Handle the failed upload here
    }
    if (task.result) {
        // File uploaded successfully.
    }
    return nil;
}];
You could also collect the tasks into a list and run them together:
NSMutableArray *tasks = [NSMutableArray new];
AWSTask *taskForUpload = [transferManager upload:uploadRequest];
[tasks addObject:taskForUpload];
// add more tasks as required

[[AWSTask taskForCompletionOfAllTasks:tasks] continueWithBlock:^id(AWSTask *task) {
    if (task.error != nil) {
        // Handle errors / failed uploads here
    } else {
        // Handle successful uploads here
    }
    return nil;
}];
This will run all the tasks in the list and then give you the errors, so you can retry the failed uploads.
I have a Java batch process that scans a directory and automatically uploads videos to YouTube using the v3 API. The job processes a few hundred videos a day. Of those uploaded, 20-50% end up with the grey ellipsis icon and eventually the error "Failed (unable to convert video file)".
The videos are all in mp4 format and range between ~70 MB and 150 MB. They all go through the same API process, which I will outline below.
The process:
Authorize for upload with the scope "https://www.googleapis.com/auth/youtube.upload", set privacy to public, and set the snippet (channel, title, description, tags):
InputStream buffInStream = new BufferedInputStream(new FileInputStream(fileName));
AbstractInputStreamContent mediaContent = new InputStreamContent(MarketingConstants.YT_VIDEO_FORMAT, buffInStream);
YouTube.Videos.Insert videoInsert = youtube.videos().insert("snippet,statistics,status", videoMetadata, mediaContent);
videoInsert.setNotifySubscribers(false);
MediaHttpUploader uploader = videoInsert.getMediaHttpUploader();
// Set direct upload to TRUE and the job is remarkably efficient 2500kb/second (FAST)
// Set to False, and the job is horribly inefficient 70kb/second (SLOW)
uploader.setDirectUploadEnabled(true);
Video returnedVideo = videoInsert.execute();
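For reference, when direct upload is disabled the resumable path can at least be tuned and observed; this is my own sketch of the MediaHttpUploader knobs, not part of the original job (the chunk size value is an example and must be a multiple of MediaHttpUploader.MINIMUM_CHUNK_SIZE):

// Hedged sketch: resumable (non-direct) upload with a larger chunk size and a progress listener.
MediaHttpUploader uploader = videoInsert.getMediaHttpUploader();
uploader.setDirectUploadEnabled(false);
uploader.setChunkSize(10 * MediaHttpUploader.MINIMUM_CHUNK_SIZE); // 10 x 256 KB per request
uploader.setProgressListener(new MediaHttpUploaderProgressListener() {
    @Override
    public void progressChanged(MediaHttpUploader u) throws IOException {
        switch (u.getUploadState()) {
            case INITIATION_COMPLETE:
                System.out.println("Upload initiated");
                break;
            case MEDIA_IN_PROGRESS:
                System.out.println("Bytes uploaded: " + u.getNumBytesUploaded());
                break;
            case MEDIA_COMPLETE:
                System.out.println("Upload completed");
                break;
            default:
                break;
        }
    }
});
Video returnedVideo = videoInsert.execute();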
Upon successful completion, update video data:
Authorize for update with the scope "https://www.googleapis.com/auth/youtube", get the previously uploaded video ID, get the snippet for that video ID, and replace the description with the updated one (the job creates a tagged URL for the description that incorporates the video ID, hence the need to update).
snippet.setDescription(newDescription);
// Update the video resource by calling the videos.update() method
YouTube.Videos.Update updateVideosRequest = youtube.videos().update("snippet,status", video);
Video videoResponse = updateVideosRequest.execute();
Finally, the process adds the video to a specific playlist within the original channel based on the content. To do this, it authorizes for playlist update with the scope "https://www.googleapis.com/auth/youtube", finds the associated playlist ID based on the category (these are properties within the process), and updates it:
ResourceId resourceId = new ResourceId();
// Identifies this as a video, required for adding to playlist
resourceId.setKind("youtube#video");
resourceId.setVideoId(videoId);
PlaylistItemSnippet playlistItemSnippet = new PlaylistItemSnippet();
playlistItemSnippet.setPlaylistId(playlistId);
playlistItemSnippet.setResourceId(resourceId);
PlaylistItem playlistItem = new PlaylistItem();
playlistItem.setSnippet(playlistItemSnippet);
//Add to the playlist
YouTube.PlaylistItems.Insert playlistItemsInsert = youtube.playlistItems().insert("snippet,contentDetails", playlistItem);
PlaylistItem returnedPlaylistItem = playlistItemsInsert.execute();
The only difference I can see between the uploaded videos that fail and those that succeed is in the logging.
For a successful upload:
2015-11-22 09:18:35:694|YouTubeMediaService.uploadFileToYouTube()|Upload in progress
2015-11-22 09:19:27:158|YouTubeMediaService.uploadFileToYouTube()|Upload Completed!
That was ~52 seconds.
For a failed upload:
2015-11-22 07:31:12:182|YouTubeMediaService.uploadFileToYouTube()|Upload in progress
2015-11-22 07:31:43:847|YouTubeMediaService.uploadFileToYouTube()|Upload Completed!
That was ~32 seconds.
It appears that the faster it uploads (closer to 30 seconds), the more likely it is to fail. I do see some that take longer and still fail, but this is the only anomaly I've discovered.
Originally the process would set the privacy to private, then only set to public after successfully updating the information, but Google suggested we remove that due to a known glitch that can occur with switching the privacy settings.
So here's my question:
What do you suggest I do to mitigate this issue and ultimately achieve a successful upload rate closer to 95% or higher?
Should I remove the privacy portion altogether? Should I retry videos that upload too fast, e.g. remove the recently uploaded video, wait 10 seconds, then try again?
Has anyone else encountered this issue, specifically with batch/automatic uploading? Thank you for any assistance.
(Screenshot omitted: it showed the uploaded and failed videos, with titles removed.)
I am trying to upload all the photos in an SD card folder to a Facebook album, and I have written the following code for it. fileNames is the list of all the image files. But the program runs into an exception, and I am not able to figure out the reason. Any input in this regard is welcome.
RequestBatch requestBatch = new RequestBatch();
for (final String requestId : fileNames) {
    // Decode each image file into a Bitmap and queue an upload request for it
    Bitmap image = BitmapFactory.decodeFile(requestId);
    Request request = Request.newUploadPhotoRequest(Session.getActiveSession(), image,
            new Request.Callback() {
                @Override
                public void onCompleted(Response response) {
                    showPublishResult("Photo Post ", response.getGraphObject(), response.getError());
                }
            });
    requestBatch.add(request);
}
requestBatch.executeAsync();
Update:
It is running into an OutOfMemoryException. Every file is decoded into a Bitmap and held in memory until the batch executes, which is what causes this. Is there another way to achieve the same thing, rather than passing a decoded Bitmap in each request?
The issue with the above approach was executeAsync.
We need to create a new thread, make it a daemon (so that the upload queue can finish even if the app exits), and publish using executeAndWait. That way all the files are uploaded serially.
If someone needs the new code, message here and I will post it.
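For anyone curious, here is a rough sketch (mine, not the author's posted code) of the serial approach described above, using the Facebook Android SDK 3.x-era Request API; fileNames and showPublishResult come from the original question.

// Hedged sketch: upload photos one at a time on a background thread so only a single
// Bitmap is in memory at any moment. executeAndWait must not run on the UI thread.
Thread uploadThread = new Thread(new Runnable() {
    @Override
    public void run() {
        for (String path : fileNames) {
            Bitmap image = BitmapFactory.decodeFile(path);
            if (image == null) {
                continue; // skip unreadable files
            }
            Request request = Request.newUploadPhotoRequest(
                    Session.getActiveSession(), image, null);
            Response response = Request.executeAndWait(request); // blocking, so uploads are serial
            showPublishResult("Photo Post ", response.getGraphObject(), response.getError());
            image.recycle(); // free the bitmap before decoding the next file
        }
    }
});
uploadThread.setDaemon(true); // as suggested in the answer above
uploadThread.start();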