I am generating S3 pre-signed URLs so that the client (a mobile app) can PUT an image directly to S3 instead of going through a service. For my use case the expiry time of the pre-signed URL needs to be configured for a longer window (10-20 minutes). Therefore, I want to limit the size of the file uploaded to S3 so that a malicious attacker cannot upload large files to the bucket. The client will get the URL from a service which has access to the S3 bucket. I am using the AWS Java SDK.
I found that this can be done with POST policies for browser-based uploads, but how can I do it with just a pre-signed S3 PUT URL?
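For context, here is a minimal sketch of how such a URL is typically generated with the AWS Java SDK v1 (the bucket name, key, and 15-minute window are placeholders; note the pre-signed request itself carries no size restriction):

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import java.net.URL;
import java.util.Date;

public class PresignedPutUrlExample {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Expire the URL in ~15 minutes (the longer window described above).
        Date expiration = new Date(System.currentTimeMillis() + 15 * 60 * 1000);

        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest("my-bucket", "uploads/image.jpg")
                        .withMethod(HttpMethod.PUT)
                        .withExpiration(expiration);

        // The client can PUT the image body to this URL until it expires.
        URL url = s3.generatePresignedUrl(request);
        System.out.println(url);
    }
}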
I was using S3 signed URLs for the first time and was also concerned about this.
I think signed URLs are a bit of a pain because you can't put a maximum object/upload size limit on them.
That seems like something very important for file uploads in general, and it is just missing.
Without this option you are forced to handle the problem with the expiry time etc., which gets really messy.
However, you can also upload to S3 buckets with regular POST requests, whose policy supports a content-length condition (see the sketch below).
So I'll probably exchange my signed URLs for POST uploads in the future; for larger applications that seems to be the way to go.
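As a rough illustration only (the expiration, bucket, key prefix, and 1 MB limit are placeholders), the POST policy document that enforces a size limit via content-length-range looks like this; it still has to be base64-encoded and signed before it can be used in the upload form:

// The POST upload policy, shown here as a Java 15+ text block for readability.
// Placeholder values; encode and sign this document before sending it to the client.
String postPolicy = """
    {
      "expiration": "2030-01-01T00:00:00Z",
      "conditions": [
        {"bucket": "my-bucket"},
        ["starts-with", "$key", "uploads/"],
        ["content-length-range", 0, 1048576]
      ]
    }
    """;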
What might help with your issue:
In the JavaScript SDK there is a method that gets you only the metadata of an S3 object (including the file size) without downloading the whole file.
It's called s3.headObject().
After the upload is done, it can take a moment before the newly uploaded object becomes available in your bucket.
What I did was set a timer after each upload to check the file size, and if it is bigger than 1 MB, the file gets deleted.
For production you probably want to log that somewhere in a DB.
My file names also include the user ID of whoever uploaded the file.
That way, you can block an account after a too-large upload if you want to.
This worked for me in JavaScript:
function checkS3(key) {
  return new Promise((resolve, reject) => {
    // Build the HEAD request from the uploaded object's key.
    const headParams = { Bucket: BUCKET_NAME, Key: key }; // BUCKET_NAME defined elsewhere
    s3.headObject(headParams, (err, metadata) => {
      if (err && ["NotFound", "Forbidden"].indexOf(err.code) > -1) {
        // Object not (yet) visible, or no permission to read it.
        return reject(err);
      } else if (err) {
        const e = Object.assign({}, Errors.SOMETHING_WRONG, { err });
        return reject(e);
      }
      // metadata.ContentLength holds the object size in bytes.
      return resolve(metadata);
    });
  });
}
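Since the question uses the AWS Java SDK, the same after-upload check might look roughly like this with SDK v1 (the bucket, key, and 1 MB limit are placeholders; this is a sketch of the approach above, not a built-in SDK feature):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class UploadSizeCheck {
    private static final long MAX_BYTES = 1024 * 1024; // 1 MB limit, placeholder

    public static void checkAndCleanUp(String bucket, String key) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // HEAD request: returns metadata (including size) without downloading the object.
        ObjectMetadata metadata = s3.getObjectMetadata(bucket, key);

        if (metadata.getContentLength() > MAX_BYTES) {
            // Too big: remove the object (and log / penalize the uploader as needed).
            s3.deleteObject(bucket, key);
        }
    }
}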
I'm trying to implement multipart upload in Java, following this sample: https://docs.aws.amazon.com/AmazonS3/latest/dev/llJavaUploadFile.html
But my actual task is a bit more complicated: I need to support resuming in case the application was shut down during the upload. Also, I can't use TransferManager - I need to use the low-level API for a particular reason.
The code there is pretty straightforward, but the problem comes with the List<PartETag> partETags part. When finalizing a resumed upload, I need to have this collection, which was previously filled during the upload process. Obviously, if I'm trying to finalize the upload after an application restart, I don't have this collection anymore.
So the question is: how do I finalize a resumed upload? Is it possible to obtain the List<PartETag> partETags from the server using some API? What I have is only a MultipartUpload object.
Yes, you can rebuild that information from S3. First, get the list of multipart uploads in progress:

MultipartUploadListing multipartUploadListing =
    s3Client.listMultipartUploads(new ListMultipartUploadsRequest(bucketName));

For each uploadId and key in that listing, get the list of parts:

PartsListing partsListing =
    s3Client.listParts(new ListPartsRequest(bucketName, key, uploadId));

Get the list of part summaries:

List<PartSummary> parts = partsListing.getParts();

From each PartSummary, read getETag() and getPartNumber():

for (PartSummary part : parts)
{
    part.getETag();
    part.getPartNumber();
}

See the Amazon S3 SDK package and AmazonS3 client documentation for the details.
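To actually finalize the resumed upload, those ETags and part numbers can be turned back into PartETag objects and passed to completeMultipartUpload. A minimal sketch (bucket, key, and uploadId assumed to come from the listings above, and all parts assumed to be already uploaded):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.ListPartsRequest;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.PartSummary;
import com.amazonaws.services.s3.model.PartsListing;
import java.util.ArrayList;
import java.util.List;

public class ResumeMultipartUpload {
    public static void finalizeUpload(AmazonS3 s3Client, String bucketName,
                                      String key, String uploadId) {
        // Rebuild the part list from what S3 already knows about this upload.
        // (For very large uploads, handle pagination via partsListing.isTruncated().)
        PartsListing partsListing =
                s3Client.listParts(new ListPartsRequest(bucketName, key, uploadId));

        List<PartETag> partETags = new ArrayList<>();
        for (PartSummary part : partsListing.getParts()) {
            partETags.add(new PartETag(part.getPartNumber(), part.getETag()));
        }

        // Upload any still-missing parts before this call; then complete the upload.
        s3Client.completeMultipartUpload(new CompleteMultipartUploadRequest(
                bucketName, key, uploadId, partETags));
    }
}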
I'm using the AWS Transfer Manager to back up a lot of files to S3. Sometimes the backup fails in the middle, and I don't want to re-upload all the files, only the ones that haven't been uploaded yet.
Is there something baked into the Transfer Manager or the S3 PUT request that would let me do that automatically, or is my only option to check the MD5 of each file with a HEAD request first and see if it's different before starting the upload?
Thanks!
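In case it helps, the HEAD-request check described above might look roughly like this with the AWS Java SDK v1 (a sketch only, not an SDK feature; note the S3 ETag equals the file's MD5 only for single-part, non-KMS-encrypted uploads):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.AmazonS3Exception;
import com.amazonaws.services.s3.model.ObjectMetadata;
import java.io.File;
import java.nio.file.Files;
import java.security.MessageDigest;

public class SkipIfAlreadyUploaded {
    /** Returns true if the object already exists in S3 with the same MD5 as the local file. */
    public static boolean alreadyUploaded(AmazonS3 s3, String bucket, String key, File file)
            throws Exception {
        String localMd5 = md5Hex(file);
        try {
            ObjectMetadata metadata = s3.getObjectMetadata(bucket, key); // HEAD request
            return localMd5.equalsIgnoreCase(metadata.getETag());
        } catch (AmazonS3Exception e) {
            if (e.getStatusCode() == 404) {
                return false; // not uploaded yet
            }
            throw e;
        }
    }

    private static String md5Hex(File file) throws Exception {
        // Fine for modest file sizes; stream the digest for very large files.
        byte[] digest = MessageDigest.getInstance("MD5").digest(Files.readAllBytes(file.toPath()));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}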
Rather than coding your own solution, you could use the AWS Command-Line Interface (CLI) to copy or sync the files to Amazon S3.
For example:
aws s3 sync <directory> s3://my-bucket/<directory>
The sync command only copies files that are missing from or have changed in the destination. So just run it on a regular basis and it will bring the S3 bucket up to date!
You can do that using a continuation block (continueWithBlock:). For every upload you can define a retry strategy for the failed-upload case. For example:
[[transferManager upload:uploadRequest] continueWithBlock:^id(AWSTask *task) {
    if (task.error) {
        // Handle failed upload here
    }
    if (task.result) {
        // File uploaded successfully.
    }
    return nil;
}];
You could also create a list of tasks and then use taskForCompletionOfAllTasks: to wait for all of them:
NSMutableArray *tasks = [NSMutableArray new];
AWSTask *taskForUpload = [transferManager upload:uploadRequest];
[tasks addObject:taskForUpload];
// add more tasks as required

[[AWSTask taskForCompletionOfAllTasks:tasks] continueWithBlock:^id(AWSTask *task) {
    if (task.error != nil) {
        // Handle errors / failed uploads here
    } else {
        // Handle successful uploads here
    }
    return nil;
}];
This will perform all the tasks in the list and then give you the list of errors, so you can retry the failed uploads.
Thanks,
Rohan
I am using the Wicket framework.
I have a requirement to send several individual files to the client browser (a zip file is not an option here).
I have added to my page an AJAXDownload class that extends AbstractAjaxBehavior - a solution for sending files to the client - like this:
download = new AJAXDownload() {
    @Override
    protected IResourceStream getResourceStream() {
        return new FileResourceStream(file) {
            @Override
            public void close() throws IOException {
                super.close();
                // Remove the temporary file once it has been streamed to the client.
                file.delete();
            }
        };
    }
};
add(download);
At some other point in my code I am trying to initiate the download of several files to the client during an Ajax request, looping through an ArrayList of files and triggering the AJAXDownload each time:
ArrayList<File> labelList = printLabels();
for (int i = 0; i < labelList.size(); i++) {
    file = labelList.get(i);
    // initiate the download
    download.initiate(target);
}
However, it is only sending one of these files to the client. I have checked, and the files have definitely been created on the server side, but only one of them is being sent to the client.
Can anyone give me an idea what I am doing wrong?
Thanks
You are doing everything correctly!
I don't know how to solve your problem but I'll try to explain what happens so someone else could help:
The Ajax response has several entries like:
<evaluate>document.location=/some/path/to/a/file</evaluate>
wicket-ajax.js just loops over the evaluations and executes them. If there is one entry then everything is OK - the file is downloaded. But if there are more, the browser receives several requests to change its location in a very short time, and apparently it drops all but one of them.
An obvious solution would be to use callbacks/promises - when a download finishes, trigger the next one. The problem is that there is no way to receive a notification from the browser that such a download has finished. Or at least I don't know of one.
One could roll a solution based on timeouts (i.e. setTimeout), but it would be error prone.
I hope this information is sufficient for someone else to give you the solution!
I need to upload multiple files from a JSP. I am using the ajaxFileUpload.js plugin to send the files to the server side. I am doing my file size validation on the server side for each file. I need to show a message after validating each file, and this is where I face a problem: I am not able to show that message. Could someone help me with this, please?
I have not used the plugin, but what I have done previously in a similar situation is send different markers back to the client side. For example, for an upload that exceeds the file size limit, you can start the response with something like 'ERROR:', look for this marker in the function receiving the response, and then branch to different logic. You obviously have to parse the response and look for the marker (a server-side sketch follows below).
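A minimal server-side sketch of that idea, assuming a plain servlet handles the upload (the 1 MB limit, the "file" part name, and the messages are placeholders, not part of the original answer):

import javax.servlet.annotation.MultipartConfig;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.Part;
import java.io.IOException;

@MultipartConfig
public class UploadServlet extends HttpServlet {
    private static final long MAX_BYTES = 1024 * 1024; // 1 MB limit, placeholder

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        try {
            Part filePart = request.getPart("file"); // field name is an assumption
            if (filePart.getSize() > MAX_BYTES) {
                // Marker the client-side handler looks for.
                response.getWriter().write("ERROR: file exceeds the size limit");
                return;
            }
            // ... save the file, then report success ...
            response.getWriter().write("OK");
        } catch (Exception e) {
            response.getWriter().write("ERROR: " + e.getMessage());
        }
    }
}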
Looking quickly at the plugin in Github, it looks like the usage is
$('input[type="file"]').ajaxfileupload({
  'action': '/upload.php',
  'params': {
    'extra': 'info'
  },
  'onComplete': function(response) {
    console.log('custom handler for file:');
    alert(JSON.stringify(response));
  },
  'onStart': function() {
    if (weWantedTo) return false; // cancels upload
  },
  'onCancel': function() {
    console.log('no file selected');
  }
});
So what I think you can do in the onComplete function is something like:
if (response.search("ERROR:") != -1) {
  // error condition
  // add your message for the front end here
} else {
  // non-error condition, continue with your regular flow
}
Does this make sense and relate to what you are trying to do?
I am writing an application which stores user data in a file. However, when I try to access phone memory, my application raises a SecurityException and won't allow me to write or read data.
Here is my code.
try
{
    FileConnection fc = (FileConnection) Connector.open("file:///C:/myfile.dat", Connector.READ_WRITE);
    // If no exception is thrown, the URI is valid but the file may not exist yet.
    if (!fc.exists())
    {
        fc.create(); // create the file if it doesn't exist
    }
    OutputStream os = fc.openOutputStream();
    String s = "hello how r u..";
    byte[] b = s.getBytes();
    os.write(b);
    os.flush();
    os.close();
    fc.close();
}
catch (Exception error)
{
    Alert alert = new Alert(error.getMessage(), error.toString(), null, null);
    alert.setTimeout(Alert.FOREVER);
    alert.setType(AlertType.ERROR);
    display.setCurrent(alert);
}
However, when I use the SD card to save data it works fine. But is there any solution to avoid the SecurityException when I try to access phone memory? Also, when I store data on the SD card, a message prompts every time asking the user to allow the application to read or write data. I don't want this prompt either.
How can I get around this?
You will have to sign and certify your J2ME application. This would involve purchasing a certificate. I haven't done this myself, so you would have to confirm it or wait for another answer on SO. But I am pretty sure that unless you sign your MIDlet, the phone's security policy will prevent this.
One URL on how to sign your MIDlet:
http://m-shaheen.blogspot.com/2009/07/1.html
I agree with @Sethu that the problem is due to MIDlet signing.
The answer is divided into logical phases to address the issue:
Issue/Cause:
Whenever a restricted API (in this case the JSR 75 API) is accessed by the MIDlet, it needs permission, which is granted based on the MIDlet's verified authenticity; this helps keep malicious code away. In your case the MIDlet is not signed (how to sign is explained in the Resolution below), so it does not have the necessary permissions, hence the Application Management System prompts for user consent for each such sensitive operation by your MIDlet. Read this link for more details.
Resolution
It's a multi-step process: (a) procure the certificate (see the link below), and (b) add the necessary permissions (for read: javax.microedition.io.Connector.file.read, for write: javax.microedition.io.Connector.file.write) under MIDlet-Permissions in the JAD file.
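A sketch of the corresponding JAD entry (assuming file read/write are the only restricted permissions your MIDlet needs):

MIDlet-Permissions: javax.microedition.io.Connector.file.read, javax.microedition.io.Connector.file.write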
Procurement of certificate
A detailed explanation is given in this link: Java ME signing for dummies