I am using the Amazon Java SDK to upload files to Amazon S3.
While using version 1.10.62 of the aws-java-sdk artifact, the following code worked perfectly (note that all the wiring behind the scenes works):
public boolean uploadInputStream(String destinationBucketName, InputStream inputStream, Integer numberOfBytes, String destinationFileKey, Boolean isPublic) {
    boolean result = false;
    try {
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(numberOfBytes);
        PutObjectRequest putObjectRequest = new PutObjectRequest(destinationBucketName, destinationFileKey, inputStream, metadata);
        if (isPublic) {
            putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
        } else {
            putObjectRequest.withCannedAcl(CannedAccessControlList.AuthenticatedRead);
        }
        final Upload myUpload = amazonTransferManager.upload(putObjectRequest);
        myUpload.addProgressListener(new ProgressListener() {
            // This method is called periodically as your transfer progresses
            public void progressChanged(ProgressEvent progressEvent) {
                LOG.info(myUpload.getProgress().getPercentTransferred() + "%");
                LOG.info("progressEvent.getEventCode():" + progressEvent.getEventCode());
                if (progressEvent.getEventCode() == ProgressEvent.COMPLETED_EVENT_CODE) {
                    LOG.info("Upload complete!!!");
                }
            }
        });
        long uploadStartTime = System.currentTimeMillis();
        long startTimeInMillis = System.currentTimeMillis();
        long logGap = 1000 * loggingIntervalInSeconds;
        while (!myUpload.isDone()) {
            if (System.currentTimeMillis() - startTimeInMillis >= logGap) {
                logUploadStatistics(myUpload, Long.valueOf(numberOfBytes));
                startTimeInMillis = System.currentTimeMillis();
            }
        }
        long totalUploadDuration = System.currentTimeMillis() - uploadStartTime;
        float totalUploadDurationSeconds = Float.valueOf(totalUploadDuration) / 1000;
        String uploadedPercentageStr = getFormattedUploadPercentage(myUpload);
        boolean isUploadDone = myUpload.isDone();
        if (isUploadDone) {
            Object[] params = new Object[]{destinationFileKey, totalUploadDuration, totalUploadDurationSeconds};
            LOG.info("Successfully uploaded file {} to Amazon. The upload took {} milliseconds ({} seconds)", params);
            result = true;
        }
        LOG.debug("Post put the inputStream to the location {}", destinationFileKey);
    } catch (AmazonServiceException e) {
        LOG.error("AmazonServiceException:{}", e);
        result = false;
    } catch (AmazonClientException e) {
        LOG.error("AmazonClientException:{}", e);
        result = false;
    }
    LOG.debug("Exiting uploadInputStream - result:{}", result);
    return result;
}
Since I migrated to version 1.11.31 of the aws-java-sdk, this code has stopped working.
All classes remain intact and there were no warnings in my IDE.
However, I do see the following logged to my console:
[2016-09-06 22:21:58,920] [s3-transfer-manager-worker-1] [DEBUG] com.amazonaws.requestId - x-amzn-RequestId: not available
[2016-09-06 22:21:58,931] [s3-transfer-manager-worker-1] [DEBUG] com.amazonaws.request - Received error response: com.amazonaws.services.s3.model.AmazonS3Exception: Moved Permanently (Service: null; Status Code: 301; Error Code: 301 Moved Permanently; Request ID: D67813C8A11842AE), S3 Extended Request ID: 3CBHeq6fWSzwoLSt3J7D4AUlOaoi1JhfxAfcN1vF8I4tO1aiOAjqB63sac9Oyrq3VZ4x3koEC5I=
The upload still continues, but from the progress listener the event code is 8, which stands for transfer failed.
Does anyone have any idea what I need to do to get this chunk of code working again?
Thank you
Damien
Try changing it to this:
public void progressChanged(ProgressEvent progressEvent) {
    LOG.info(myUpload.getProgress().getPercentTransferred() + "%");
    LOG.info("progressEvent.getEventType():" + progressEvent.getEventType());
    if (progressEvent.getEventType() == ProgressEventType.TRANSFER_COMPLETED_EVENT) {
        LOG.info("Upload complete!!!");
    }
}
It looks like you are running some deprecated code.
In com.amazonaws.event.ProgressEventType, value 8 refers to HTTP_REQUEST_COMPLETED_EVENT.
COMPLETED_EVENT_CODE is deprecated, and so is getEventCode.
Refer to this -> https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-core/src/main/java/com/amazonaws/event/ProgressEvent.java
I updated my version of the S3 library, generated new access keys and also created a new bucket.
Now everything works as expected.
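For what it's worth, a 301 Moved Permanently from S3 usually means the client is sending requests to the wrong regional endpoint for the bucket, which a fresh bucket would also fix. As a minimal sketch (the region here is an assumption; use whatever region your bucket actually lives in), the client backing the TransferManager can be pinned to the bucket's region like this:
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.transfer.TransferManager;

// Sketch only: the region must match the region the bucket was created in.
AmazonS3Client s3Client = new AmazonS3Client(new DefaultAWSCredentialsProviderChain());
s3Client.setRegion(Region.getRegion(Regions.EU_WEST_1));
TransferManager amazonTransferManager = new TransferManager(s3Client);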
Related
In my Java application I need to write data to S3 whose size I don't know in advance, and the sizes are usually big, so as recommended in the AWS S3 documentation I am using the Java AWS SDK's low-level API to write data to the S3 bucket.
In my application I provide an S3BufferedOutputStream, which is an implementation of OutputStream, so that other classes in the app can use this stream to write to the S3 bucket.
I store the data in a buffer in a loop, and once the data is bigger than the buffer size I upload the data in the buffer as a single UploadPartRequest.
Here is the implementation of the write method of S3BufferedOutputStream:
@Override
public void write(byte[] b, int off, int len) throws IOException {
    this.assertOpen();
    int o = off, l = len;
    int size;
    while (l > (size = this.buf.length - position)) {
        System.arraycopy(b, o, this.buf, this.position, size);
        this.position += size;
        flushBufferAndRewind();
        o += size;
        l -= size;
    }
    System.arraycopy(b, o, this.buf, this.position, l);
    this.position += l;
}
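The flushBufferAndRewind() helper is not shown in the question; purely as a hypothetical sketch (field names such as s3Client, bucketName, key, uploadId, partNumber and partETags are assumed, not taken from the real class), it would do something along these lines:
// Hypothetical sketch: uploads whatever is currently in the buffer as one part,
// records its ETag, and resets the buffer position. Field names are assumed.
private void flushBufferAndRewind() {
    UploadPartRequest uploadRequest = new UploadPartRequest()
            .withBucketName(bucketName)
            .withKey(key)
            .withUploadId(uploadId)
            .withPartNumber(partNumber++)
            .withPartSize(position)
            .withInputStream(new ByteArrayInputStream(buf, 0, position));
    partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
    position = 0; // rewind the buffer for the next part
}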
The whole implementation is similar to this: code repo
My problem here is that each UploadPartRequest is done synchronously, so we have to wait for one part to be uploaded before we can upload the next part. And because I am using the AWS S3 low-level API, I cannot benefit from the parallel uploading provided by the TransferManager.
Is there a way to achieve parallel uploads using the low-level SDK?
Or are there code changes that can be made to operate asynchronously without corrupting the uploaded data, while maintaining the order of the data?
Here's some example code from a class that I have. It submits the parts to an ExecutorService and holds onto the returned Future. This is written for the v1 Java SDK; if you're using the v2 SDK you could use an async client rather than the explicit threadpool:
// WARNING: data must not be updated by caller; make a defensive copy if needed
public synchronized void uploadPart(byte[] data, boolean isLastPart)
{
    partNumber++;
    logger.debug("submitting part {} for s3://{}/{}", partNumber, bucket, key);
    final UploadPartRequest request = new UploadPartRequest()
            .withBucketName(bucket)
            .withKey(key)
            .withUploadId(uploadId)
            .withPartNumber(partNumber)
            .withPartSize(data.length)
            .withInputStream(new ByteArrayInputStream(data))
            .withLastPart(isLastPart);
    futures.add(
        executor.submit(new Callable<PartETag>()
        {
            @Override
            public PartETag call() throws Exception
            {
                int localPartNumber = request.getPartNumber();
                logger.debug("uploading part {} for s3://{}/{}", localPartNumber, bucket, key);
                UploadPartResult response = client.uploadPart(request);
                String etag = response.getETag();
                logger.debug("uploaded part {} for s3://{}/{}; etag is {}", localPartNumber, bucket, key, etag);
                return new PartETag(localPartNumber, etag);
            }
        }));
}
Note: this method is synchronized to ensure that parts are not submitted out of order.
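The fields referenced above (client, executor, futures, uploadId and so on) are not shown; a rough sketch of how they might be declared and initialized, with all names (including the constructor's class name) made up for illustration, is:
// Assumed fields and setup for a class like the one above; names are illustrative only.
private final AmazonS3 client = AmazonS3ClientBuilder.defaultClient();
private final ExecutorService executor = Executors.newFixedThreadPool(4);   // degree of parallelism
private final List<Future<PartETag>> futures = new ArrayList<>();

private final String bucket;
private final String key;
private final String uploadId;
private int partNumber = 0;

public ConcurrentUploader(String bucket, String key)
{
    this.bucket = bucket;
    this.key = key;
    // start the multipart upload and keep its ID for the part requests
    this.uploadId = client.initiateMultipartUpload(
            new InitiateMultipartUploadRequest(bucket, key)).getUploadId();
}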
Once you've submitted all of the parts, you use this method to wait for them to finish and then complete the upload:
public void complete()
{
    logger.debug("waiting for upload tasks of s3://{}/{}", bucket, key);
    List<PartETag> partTags = new ArrayList<>();
    for (Future<PartETag> future : futures)
    {
        try
        {
            partTags.add(future.get());
        }
        catch (Exception e)
        {
            throw new RuntimeException(String.format("failed to complete upload task for s3://%s/%s", bucket, key), e);
        }
    }
    logger.debug("completing multi-part upload for s3://{}/{}", bucket, key);
    CompleteMultipartUploadRequest request = new CompleteMultipartUploadRequest()
            .withBucketName(bucket)
            .withKey(key)
            .withUploadId(uploadId)
            .withPartETags(partTags);
    client.completeMultipartUpload(request);
    logger.debug("completed multi-part upload for s3://{}/{}", bucket, key);
}
You'll also need an abort() method that cancels outstanding parts and aborts the upload. This, and the rest of the class, are left as an exercise for the reader.
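For what it's worth, a minimal sketch of such an abort() method, under the same assumed field names, could be:
// Sketch only: cancel any part uploads still in flight, then abort the multipart upload on S3.
public void abort()
{
    for (Future<PartETag> future : futures)
    {
        future.cancel(true);   // interrupt parts that have not finished
    }
    client.abortMultipartUpload(new AbortMultipartUploadRequest(bucket, key, uploadId));
    logger.debug("aborted multi-part upload for s3://{}/{}", bucket, key);
}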
You should look at using the AWS SDK for Java V2. You are referencing V1, not the newest Amazon S3 Java API. If you are not familiar with V2, start here:
Get started with the AWS SDK for Java 2.x
To perform async operations via the Amazon S3 Java API, you use S3AsyncClient.
Now, to learn how to upload an object using this client, see this code example:
import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectResponse;
import java.nio.file.Paths;
import java.util.concurrent.CompletableFuture;
/**
 * To run this AWS code example, ensure that you have set up your development environment, including your AWS credentials.
 *
 * For information, see this documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class S3AsyncOps {

    public static void main(String[] args) {

        final String USAGE = "\n" +
                "Usage:\n" +
                "    S3AsyncOps <bucketName> <key> <path>\n\n" +
                "Where:\n" +
                "    bucketName - the name of the Amazon S3 bucket (for example, bucket1). \n\n" +
                "    key - the name of the object (for example, book.pdf). \n" +
                "    path - the local path to the file (for example, C:/AWS/book.pdf). \n";

        if (args.length != 3) {
            System.out.println(USAGE);
            System.exit(1);
        }

        String bucketName = args[0];
        String key = args[1];
        String path = args[2];

        Region region = Region.US_WEST_2;
        S3AsyncClient client = S3AsyncClient.builder()
                .region(region)
                .build();

        PutObjectRequest objectRequest = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(key)
                .build();

        // Put the object into the bucket
        CompletableFuture<PutObjectResponse> future = client.putObject(objectRequest,
                AsyncRequestBody.fromFile(Paths.get(path)));

        future.whenComplete((resp, err) -> {
            try {
                if (resp != null) {
                    System.out.println("Object uploaded. Details: " + resp);
                } else {
                    // Handle error
                    err.printStackTrace();
                }
            } finally {
                // Only close the client when you are completely done with it
                client.close();
            }
        });

        future.join();
    }
}
That is uploading an object using the S3AsyncClient client. To perform a multi-part upload, you need to use this method:
https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3AsyncClient.html#createMultipartUpload-software.amazon.awssdk.services.s3.model.CreateMultipartUploadRequest-
To see an example of a multipart upload using the S3 sync client, see:
https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/main/java/com/example/s3/S3ObjectOperations.java
That is your solution - use the S3AsyncClient object's createMultipartUpload method.
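As a rough, hedged sketch of that flow with the v2 S3AsyncClient (the bucket, key, and the single tiny part are placeholder values; a real implementation would split the data into parts of at least 5 MB and keep several uploadPart futures in flight at once):
import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.*;

public class MultipartSketch {
    public static void main(String[] args) {
        S3AsyncClient client = S3AsyncClient.create();
        String bucket = "my-bucket";   // placeholder
        String key = "my-key";         // placeholder

        // 1. Start the multipart upload and remember its ID.
        String uploadId = client.createMultipartUpload(
                CreateMultipartUploadRequest.builder().bucket(bucket).key(key).build())
                .join().uploadId();

        // 2. Upload each part (part numbers start at 1); each call returns a CompletableFuture,
        //    so several parts can be in flight at the same time.
        byte[] partData = "example part data".getBytes();   // placeholder data
        String eTag = client.uploadPart(
                UploadPartRequest.builder().bucket(bucket).key(key)
                        .uploadId(uploadId).partNumber(1).build(),
                AsyncRequestBody.fromBytes(partData))
                .join().eTag();

        // 3. Complete the upload with the collected part numbers and ETags, in order.
        CompletedPart part1 = CompletedPart.builder().partNumber(1).eTag(eTag).build();
        client.completeMultipartUpload(
                CompleteMultipartUploadRequest.builder().bucket(bucket).key(key)
                        .uploadId(uploadId)
                        .multipartUpload(CompletedMultipartUpload.builder().parts(part1).build())
                        .build())
                .join();

        client.close();
    }
}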
Using the sample code below (Drive Java REST API v3), I am trying to download a portion of a file from Google Drive.
Drive.Revisions.Get get = service.revisions().get(fileId, revisionId)
        .setFields(FilterConstants.OBJECT_REVISION);
MediaHttpDownloader downloader = get.getMediaHttpDownloader();
downloader.setContentRange(fromByte, toByte);
inputStream = get.executeMediaAsInputStream();
But this is not working for me. Can someone help me resolve this issue?
@Venkat, based on Partial download,
Partial download involves downloading only a specified portion of a file. You can specify the portion of the file you want to download by using a byte range with the Range header. For example:
Range: bytes=500-999
Sample:
GET https://www.googleapis.com/drive/v3/files/fileId
Range: bytes=500-999
The following code works for me. The trick was to set the Range header correctly:
private byte[] getBytes(Drive drive, String downloadUrl, long position, int byteCount) {
    byte[] receivedByteArray = null;
    if (downloadUrl != null && downloadUrl.length() > 0) {
        try {
            com.google.api.client.http.HttpRequest httpRequestGet = drive.getRequestFactory().buildGetRequest(new GenericUrl(downloadUrl));
            httpRequestGet.getHeaders().setRange("bytes=" + position + "-" + (position + byteCount - 1));
            com.google.api.client.http.HttpResponse response = httpRequestGet.execute();
            InputStream is = response.getContent();
            receivedByteArray = IOUtils.toByteArray(is);
            response.disconnect();
            System.out.println("google-http-client-1.18.0-rc response: [" + position + ", " + (position + receivedByteArray.length - 1) + "]");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    return receivedByteArray;
}
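As a possible usage example (drive, downloadUrl and fileSize are assumed to already be available, e.g. from the file's metadata), the method above can be called in a loop to pull the file down range by range:
// Illustration only: downloads a file in 1 MB ranges using the getBytes method above.
int chunkSize = 1024 * 1024;
try (FileOutputStream out = new FileOutputStream("downloaded.pdf")) {
    for (long position = 0; position < fileSize; position += chunkSize) {
        int bytesToRead = (int) Math.min(chunkSize, fileSize - position);
        byte[] chunk = getBytes(drive, downloadUrl, position, bytesToRead);
        out.write(chunk);
    }
}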
I'm trying to upload multiple files to Amazon S3 all under the same key, by appending the files. I have a list of file names and want to upload/append the files in that order. I am pretty much exactly following this tutorial, but I am looping through each file first and uploading it in parts. Because the files are on HDFS (the Path is actually org.apache.hadoop.fs.Path), I am using the input stream to send the file data. Some pseudocode is below (I am commenting the blocks that are word for word from the tutorial):
// Create a list of UploadPartResponse objects. You get one of these for
// each part upload.
List<PartETag> partETags = new ArrayList<PartETag>();

// Step 1: Initialize.
InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(
        bk.getBucket(), bk.getKey());
InitiateMultipartUploadResult initResponse =
        s3Client.initiateMultipartUpload(initRequest);

try {
    int i = 1; // part number
    for (String file : files) {
        Path filePath = new Path(file);

        // Get the input stream and content length
        long contentLength = fss.get(branch).getFileStatus(filePath).getLen();
        InputStream is = fss.get(branch).open(filePath);

        long filePosition = 0;
        while (filePosition < contentLength) {
            // create request
            // upload part and add response to our list
            i++;
        }
    }

    // Step 3: Complete.
    CompleteMultipartUploadRequest compRequest = new
            CompleteMultipartUploadRequest(bk.getBucket(),
                    bk.getKey(),
                    initResponse.getUploadId(),
                    partETags);
    s3Client.completeMultipartUpload(compRequest);
} catch (Exception e) {
    //...
}
However, I am getting the following error:
com.amazonaws.services.s3.model.AmazonS3Exception: The XML you provided was not well-formed or did not validate against our published schema (Service: Amazon S3; Status Code: 400; Error Code: MalformedXML; Request ID: 2C1126E838F65BB9), S3 Extended Request ID: QmpybmrqepaNtTVxWRM1g2w/fYW+8DPrDwUEK1XeorNKtnUKbnJeVM6qmeNcrPwc
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1109)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:741)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:461)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:296)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3743)
at com.amazonaws.services.s3.AmazonS3Client.completeMultipartUpload(AmazonS3Client.java:2617)
If anyone knows what the cause of this error might be, that would be greatly appreciated. Alternatively, if there is a better way to concatenate a bunch of files into one S3 key, that would be great as well. I tried using Java's built-in SequenceInputStream, but that did not work. Any help would be greatly appreciated. For reference, the total size of all the files could be as large as 10-15 GB.
I know it's probably a bit late, but it's worth giving my contribution.
I've managed to solve a similar problem using the SequenceInputStream.
The trick is in being able to calculate the total size of the resulting file and then feeding the SequenceInputStream an Enumeration<InputStream>.
Here's some example code that might help:
public void combineFiles() {
    List<String> files = getFiles();
    long totalFileSize = files.stream()
            .map(this::getContentLength)
            .reduce(0L, (f, s) -> f + s);

    try {
        try (InputStream partialFile = new SequenceInputStream(getInputStreamEnumeration(files))) {
            ObjectMetadata resultFileMetadata = new ObjectMetadata();
            resultFileMetadata.setContentLength(totalFileSize);
            s3Client.putObject("bucketName", "resultFilePath", partialFile, resultFileMetadata);
        }
    } catch (IOException e) {
        LOG.error("An error occurred while combining files. {}", e);
    }
}

private Enumeration<? extends InputStream> getInputStreamEnumeration(List<String> files) {
    return new Enumeration<InputStream>() {
        private Iterator<String> fileNamesIterator = files.iterator();

        @Override
        public boolean hasMoreElements() {
            return fileNamesIterator.hasNext();
        }

        @Override
        public InputStream nextElement() {
            try {
                return new FileInputStream(Paths.get(fileNamesIterator.next()).toFile());
            } catch (FileNotFoundException e) {
                System.err.println(e.getMessage());
                throw new RuntimeException(e);
            }
        }
    };
}
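The getContentLength helper is not shown; assuming the source files are local (as the FileInputStream in nextElement suggests), a minimal version could be:
// Illustrative sketch only: returns the size in bytes of a local file.
private long getContentLength(String file) {
    try {
        return java.nio.file.Files.size(Paths.get(file));
    } catch (IOException e) {
        throw new RuntimeException("Could not determine size of " + file, e);
    }
}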
Hope this helps!
I am trying to upload a file to an S3 container, and before doing the upload I am setting the metadata of the file. The upload fails with an error saying the signature doesn't match. Below is the code I am using:
public URL send(File f, HashMap<String, String> metadata, String type) throws Exception {
    String path = type + "/" + f.getName();
    InitiateMultipartUploadRequest req = new InitiateMultipartUploadRequest(container, secretKey).withKey(path);
    req.setCannedACL(CannedAccessControlList.AuthenticatedRead);
    if (metadata != null) {
        ObjectMetadata objectMetadata = new ObjectMetadata();
        Set<String> keys = metadata.keySet();
        Iterator<String> i = keys.iterator();
        while (i.hasNext()) {
            String key = i.next();
            objectMetadata.addUserMetadata(key, metadata.get(key));
        }
        req.setObjectMetadata(objectMetadata);
    }
    InitiateMultipartUploadResult res = s3client.initiateMultipartUpload(req);
    String uploadId = res.getUploadId();
    long fileSize = f.length();
    // check the size doesn't exceed max limit
    if (fileSize > MAX_OBJ_SIZE) {
        throw new Exception("Object size exceeds repository limit");
    }
    long chunkSize = 1024 * 1024 * 16;
    int chunks = (int) (fileSize / chunkSize + 2);
    List<PartETag> chunkList = new ArrayList<PartETag>();
    long pos = 0;
    try {
        for (int i = 1; i < chunks; i++) {
            if ((chunks - i) < 2) {
                chunkSize = fileSize - pos;
            }
            UploadPartRequest upReq = new UploadPartRequest()
                    .withBucketName(container).withKey(path)
                    .withUploadId(uploadId).withPartNumber(i)
                    .withFileOffset(pos).withFile(f)
                    .withPartSize(chunkSize);
            PartETag pTag = null;
            // repeat the upload until it succeeds.
            boolean repeat;
            do {
                repeat = false; // reset switch
                try {
                    // Upload part and add response to our list.
                    pTag = s3client.uploadPart(upReq).getPartETag();
                }
                catch (Exception ex) {
                    repeat = true; // repeat
                }
            } while (repeat);
            chunkList.add(pTag);
            pos = pos + chunkSize;
        }
        CompleteMultipartUploadRequest compl = new CompleteMultipartUploadRequest(
                container, secretKey, uploadId, chunkList).withKey(path);
        CompleteMultipartUploadResult complRes = s3client.completeMultipartUpload(compl);
        return new URL(URLDecoder.decode(complRes.getLocation(), "UTF-8"));
    }
    catch (Exception ex) {
        s3client.abortMultipartUpload(new AbortMultipartUploadRequest(container,
                secretKey, uploadId));
        throw new Exception("File upload error: " + ex.toString());
    }
}
Below is the error I am getting :
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 0805716BBD0662AB, AWS Error Code: SignatureDoesNotMatch, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method., S3 Extended Request ID: wNAzUyrLZgWCazZFe3KpMHO0uh0FM5FF7fiwBzN1A2YDEYS5hKZBYh5nWSjIhnhG
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:767)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:414)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:228)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3316)
at com.amazonaws.services.s3.AmazonS3Client.initiateMultipartUpload(AmazonS3Client.java:2401)
at net.timbusproject.storage.awss3.S3Client.send(S3Client.java:134)
Line 134 in S3Client.java where the error is occurring is :
InitiateMultipartUploadResult res = s3client.initiateMultipartUpload(req);
The upload works fine if I am not attaching any metadata, i.e. if I comment out the line below, the upload works:
req.setObjectMetadata(objectMetadata);
I am unable to figure out why the request fails when metadata is set. Am I missing any step in the upload process?
I was able to work around this problem by URL encoding the metadata keys and values.
objectMetadata.addUserMetadata(URLEncoder.encode(key, "UTF-8"), URLEncoder.encode(metadata.get(key),"UTF-8"));
Apparently the metadata has some offending characters which are messing with the AWS calls. This workaround lets the upload complete without error and also updates the metadata, but the strings remain URL-encoded, which can be a problem later.
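If you go this route, a matching decode step is needed when reading the metadata back. A small sketch (it assumes the same s3client, container and path as in the question, and that the enclosing method declares throws Exception, like send() above):
// Sketch only: read the user metadata back and URL-decode keys and values
// to undo the encoding applied at upload time.
ObjectMetadata stored = s3client.getObjectMetadata(container, path);
Map<String, String> decoded = new HashMap<String, String>();
for (Map.Entry<String, String> entry : stored.getUserMetadata().entrySet()) {
    decoded.put(URLDecoder.decode(entry.getKey(), "UTF-8"),
                URLDecoder.decode(entry.getValue(), "UTF-8"));
}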
I am working on a sample Java HTTP server and a .NET client (on a tablet).
Using my HTTP server, the .NET client must be able to download files.
It's working perfectly, but now I have to be able to resume a download after a connection disruption.
Here is some code:
Java server (it is launched in a separate thread, hence the run method):
public void run() {
    try {
        server = com.sun.net.httpserver.HttpServer.create(
                new InetSocketAddress(portNumber), this.maximumConnexion);
        server.setExecutor(executor);
        server.createContext("/", new ConnectionHandler(this.rootPath));
        server.start();
    } catch (IOException e1) {
        // For debugging
        e1.printStackTrace();
    }
}
My HttpHandler (only the part dealing with the GET request):
/**
 * handleGetMethod : handle GET request. If the file specified in the URI is
 * available, send it to the client.
 *
 * @param httpExchange
 * @throws IOException
 */
private void handleGetMethod(HttpExchange httpExchange) throws IOException {
    File file = new File(this.rootPath + this.fileRef).getCanonicalFile();
    if (!file.isFile()) {
        this.handleError(httpExchange, 404);
    } else if (!file.getPath().startsWith(this.rootPath.replace('/', '\\'))) { // Windows works with backslashes!
        // Suspected path traversal attack.
        System.out.println(file.getPath());
        this.handleError(httpExchange, 403);
    } else {
        // Send the document.
        httpExchange.sendResponseHeaders(200, file.length());
        System.out.println("file length : " + file.length() + " bytes.");
        OutputStream os = httpExchange.getResponseBody();
        FileInputStream fs = new FileInputStream(file);
        final byte[] buffer = new byte[1024];
        int count = 0;
        while ((count = fs.read(buffer)) >= 0) {
            os.write(buffer, 0, count);
        }
        os.flush();
        fs.close();
        os.close();
    }
}
And now my .NET client (simplified):
try
{
    Stream response = await httpClient.GetStreamAsync(URI + this.fileToDownload.Text);
    FileSavePicker savePicker = new FileSavePicker();
    savePicker.SuggestedStartLocation = PickerLocationId.DocumentsLibrary;
    // Dropdown of file types the user can save the file as
    savePicker.FileTypeChoices.Add("Application/pdf", new List<string>() { ".pdf" });
    // Default file name if the user does not type one in or select a file to replace
    savePicker.SuggestedFileName = "new doc";
    StorageFile file = await savePicker.PickSaveFileAsync();
    if (file != null)
    {
        const int BUFFER_SIZE = 1024 * 1024;
        using (Stream outputFileStream = await file.OpenStreamForWriteAsync())
        {
            using (response)
            {
                var buffer = new byte[BUFFER_SIZE];
                int bytesRead;
                do
                {
                    bytesRead = response.Read(buffer, 0, BUFFER_SIZE);
                    outputFileStream.Write(buffer, 0, bytesRead);
                } while (bytesRead > 0);
            }
            outputFileStream.Flush();
        }
    }
}
catch (HttpRequestException hre)
{
    // For debugging
    this.Display.Text += hre.Message;
    this.Display.Text += hre.Source;
}
catch (Exception ex)
{
    // For debugging
    this.Display.Text += ex.Message;
    this.Display.Text += ex.Source;
}
So, to resume the download I would like to use some seek operation within the .NET client part.
But every time I try something like response.Seek(offset, response.Position);, an error occurs informing me that the Stream does not support seek operations.
Yes, it does not, but how can I specify (on my server side) that a seekable stream should be used?
Could the method HttpExchange.setStreams be useful?
Or do I not need to modify the stream, but rather configure my HttpServer instance?
Thanks.
Using the Range, Accept-Ranges and Content-Range header fields works. There is just a little bit of work to do in order to send the correct part of the file and to set the response's headers.
The server may inform the client that it supports the Range field by setting the Accept-Ranges header:
responseHeader.set("Accept-Ranges", "bytes");
And then set the Content-Range header when a partial file is sent:
responseHeader.set("Content-range", "bytes " + this.offSet + "-" + this.range + "/" + this.fileLength);
Finally, the return code must be set to 206 (Partial Content).
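To illustrate, here is only a sketch built on the HttpExchange API from the question, with simplified header parsing and no multi-range support; a Range-aware GET handler might look roughly like this:
// Sketch only: serve either the whole file (200) or the requested byte range (206).
// Assumes a single "bytes=start-end" range; real code should validate it properly.
private void sendFile(HttpExchange httpExchange, File file) throws IOException {
    long fileLength = file.length();
    long start = 0;
    long end = fileLength - 1;
    int status = 200;

    httpExchange.getResponseHeaders().set("Accept-Ranges", "bytes");

    String rangeHeader = httpExchange.getRequestHeaders().getFirst("Range");
    if (rangeHeader != null && rangeHeader.startsWith("bytes=")) {
        String[] bounds = rangeHeader.substring("bytes=".length()).split("-");
        start = Long.parseLong(bounds[0]);
        if (bounds.length > 1 && !bounds[1].isEmpty()) {
            end = Long.parseLong(bounds[1]);
        }
        httpExchange.getResponseHeaders().set("Content-Range",
                "bytes " + start + "-" + end + "/" + fileLength);
        status = 206; // Partial Content
    }

    long contentLength = end - start + 1;
    httpExchange.sendResponseHeaders(status, contentLength);

    try (RandomAccessFile raf = new RandomAccessFile(file, "r");
         OutputStream os = httpExchange.getResponseBody()) {
        raf.seek(start);                         // jump to the requested offset
        byte[] buffer = new byte[1024];
        long remaining = contentLength;
        while (remaining > 0) {
            int count = raf.read(buffer, 0, (int) Math.min(buffer.length, remaining));
            if (count < 0) {
                break;
            }
            os.write(buffer, 0, count);
            remaining -= count;
        }
    }
}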
For more information about Range, Accept-Range and Content-Range fields see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
NB : Opera 12.16 use the field "Range" to resume download but it seems that IE 10 and Firefox 22 do not use this field. May be some seekable streams as I was looking for originally. If anyone have an answer to this, I will be glad to read it =).