How to send a file in parts using the "Range" header? - java

I would like to send a big file by dividing it into small parts and sending them separately.
I tried to use the "Range" header and got "org.apache.http.client.NonRepeatableRequestException: Cannot retry request with a non-repeatable request entity".
// create authenticated client
DefaultHttpClient client = new DefaultHttpClient();
// create HTTP PUT with the file
HttpPut httpPut = new HttpPut(url);
final File recordingFile = new File(mDir, mName);
long fileLength = recordingFile.length();
InputStream inputStream = new FileInputStream(recordingFile);
for (int i = 0; i < fileLength; i += 4096) {
    int length = Math.min(4096, (int) fileLength - i);
    InputStreamEntity entity = new InputStreamEntity(inputStream, length);
    httpPut.setEntity(entity);
    httpPut.addHeader("Connection", "Keep-Alive");
    httpPut.addHeader("Range", "bytes=" + i + "-" + (i + length));
    // Execute
    HttpResponse res = client.execute(httpPut);
    int statusCode = res.getStatusLine().getStatusCode();
}
I also tried the "Content-Range" header (instead of "Range") and got the same exception.
httpPut.addHeader("Content-Range", "bytes=" + i + "-" + (i + length) + "/" + fileLength);
httpPut.addHeader("Accept-Ranges", "bytes");

You are repeatedly sending overlapping chunks of 4096 bytes. E.g. let's take the first two steps:
i = 0
Send range 0-4096
i = 4096
Send range 4096-8192
The end index of an HTTP byte range is inclusive, so range 0-4096 already covers 4,097 bytes and the next chunk starts at a byte you have already sent. Fix these lines so the ranges do not overlap:
for (int i = 0; i < fileLength; i += 4096) {
    int length = Math.min(4096, (int) fileLength - i);
    /*...*/
    httpPut.addHeader("Range", "bytes=" + i + "-" + (i + length - 1));
}
and it should work fine.
Update:
Maybe the problem is that for some reason (e.g. an authentication challenge) the client retries and resends the same chunk, by which point the InputStream has already been consumed, so the request entity cannot be repeated.
Try using a ByteArrayEntity instead of InputStreamEntity, something like this:
InputStream bis = new BufferedInputStream(new FileInputStream(recordingFile));
for (int i = 0; i < fileLength; i += 4096) {
    int length = Math.min(4096, (int) fileLength - i);
    byte[] bytes = new byte[length];
    bis.read(bytes);
    ByteArrayEntity entity = new ByteArrayEntity(bytes);
    /*...*/
}
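Putting both fixes together, here is a minimal sketch of the whole upload loop, assuming Apache HttpClient 4.x (org.apache.http.* plus java.io imports) and a server that accepts a Content-Range header on PUT; the chunk size, URL and error handling are illustrative only:
DefaultHttpClient client = new DefaultHttpClient();
File recordingFile = new File(mDir, mName);
long fileLength = recordingFile.length();
try (InputStream in = new BufferedInputStream(new FileInputStream(recordingFile))) {
    for (long i = 0; i < fileLength; i += 4096) {
        int length = (int) Math.min(4096, fileLength - i);
        byte[] chunk = new byte[length];
        int off = 0;
        while (off < length) {                       // read exactly 'length' bytes of the file
            int n = in.read(chunk, off, length - off);
            if (n < 0) throw new EOFException("unexpected end of file");
            off += n;
        }
        HttpPut put = new HttpPut(url);
        // the end index of a byte range is inclusive
        put.addHeader("Content-Range", "bytes " + i + "-" + (i + length - 1) + "/" + fileLength);
        put.setEntity(new ByteArrayEntity(chunk));   // repeatable, so the client can safely retry
        HttpResponse res = client.execute(put);
        EntityUtils.consume(res.getEntity());        // release the connection for the next chunk
    }
}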

Related

How to append batches of stream data to one CSV file using Java NIO Package

I am reading data from an API response in batches of bytes (Content-Type = text/csv) and using Java's NIO package to transfer the bytes between two channels. I want to append each batch of the API response to the same file. With the code below, append doesn't seem to work correctly; it overwrites the previous results instead.
Once all the data has been written, I also want to print the total number of lines in the CSV.
Version - Java 8
private void downloadFile_NIO(String encoded_token) throws Exception
{
    long start_Range = 0;
    long end_Range = 5000;
    long batch_size = 5000;
    long totalsize = 1080612174;
    long counter = 0;
    //Path path = Paths.get(FILE_NAME);
    //long lines = 0;
    while (start_Range <= totalsize)
    {
        URL url = new URL(FILE_URL);
        HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
        connection.setRequestProperty("Authorization", "Bearer " + encoded_token);
        connection.setDoOutput(true);
        connection.setRequestMethod("GET");
        ReadableByteChannel readableByteChannel = Channels.newChannel(connection.getInputStream());
        FileOutputStream fileOutputStream = new FileOutputStream(new File(FILE_NAME), true);
        fileOutputStream.getChannel().transferFrom(readableByteChannel, 0, Long.MAX_VALUE);
        // lines = Files.lines(path).count();
        // System.out.println("lines->" + lines);
        System.out.println();
        fileOutputStream.close();
        readableByteChannel.close();
        counter = counter + 1;
        if (counter < 2)
        {
            start_Range = start_Range + batch_size + 1;
            end_Range = end_Range + batch_size;
        }
        else
        {
            start_Range = start_Range + batch_size;
            end_Range = end_Range + batch_size;
        }
    }
}
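The overwrite most likely comes from the second argument of FileChannel.transferFrom: it is an explicit position in the destination file, so passing 0 asks the channel to write each batch at offset 0 rather than after the data already written. (Note also that start_Range / end_Range are never sent as a Range request header, so each request returns the file from the beginning.) A minimal sketch of the write step (replacing the FileOutputStream / transferFrom lines inside the loop) plus the final line count, assuming java.nio.file / java.util.stream imports and the same FILE_NAME and readableByteChannel as above:
// open without truncating and append each batch at the current end of the file
FileChannel out = FileChannel.open(Paths.get(FILE_NAME),
        StandardOpenOption.CREATE, StandardOpenOption.WRITE);
out.transferFrom(readableByteChannel, out.size(), Long.MAX_VALUE);
out.close();

// after the loop: count the lines of the finished CSV
try (Stream<String> lines = Files.lines(Paths.get(FILE_NAME))) {
    System.out.println("total lines -> " + lines.count());
}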

File downloading - negative file length

I am trying to download a file (an mp3) from my server.
I want to show the download progress, but the file size is reported as -1 the whole time.
My code:
try {
    URL url = new URL(urls[0]);
    // URLConnection connection = url.openConnection();
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setRequestMethod("GET");
    connection.setDoOutput(true);
    connection.connect();
    int fileSize = connection.getContentLength();
    if (fileSize == -1)
        fileSize = connection.getHeaderFieldInt("Length", -1);
    InputStream is = new BufferedInputStream(url.openStream());
    OutputStream os = new FileOutputStream(myFile);
    byte data[] = new byte[1024];
    long total = 0;
    int count;
    while ((count = is.read(data)) != -1) {
        total += count;
        Log.d("fileSize", "Length of file: " + fileSize);
        Log.d("total", "Length of file: " + total);
        // publishProgress((int) (total * 100 / fileSize));
        publishProgress("" + (int) ((total * 100) / fileSize));
        os.write(data, 0, count);
    }
    os.flush();
    os.close();
    is.close();
} catch (Exception e) {
    e.printStackTrace();
}
I get a garbage value for fileSize: connection.getContentLength() returns -1.
Inspect the headers the server is sending. Very probably the server is sending Transfer-Encoding: Chunked and no Content-Length header at all. This is a common practice in HTTP/1.1. If the server isn't sending the length, the client obviously can't know it. If this is the case, and you have no control over the server code, the best thing to do is probably display a spinner type of indicator only.
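A small sketch of that fallback, reusing the is, os and publishProgress calls from the code above (the spinner handling itself is app-specific and not shown):
int fileSize = connection.getContentLength();   // -1 when no Content-Length header is sent
boolean lengthKnown = fileSize > 0;
byte[] data = new byte[1024];
long total = 0;
int count;
while ((count = is.read(data)) != -1) {
    total += count;
    if (lengthKnown) {
        publishProgress("" + (int) ((total * 100) / fileSize)); // determinate percentage
    } else {
        publishProgress("" + total); // size unknown: report bytes so far / keep an indeterminate spinner
    }
    os.write(data, 0, count);
}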

HttpClient request succeeds with timeout defined, but hangs without

I found a strange, hacky way to fix my code and I was wondering if anyone could explain why it works. I am writing code that talks to a REST API to upload a video file split into multiple HTTP requests.
I was having a problem with one of my video part requests connecting but never getting a response. The program uploads the video in five parts, but it would always hang on the third of the five. I decided to add a hard request timeout to force the program to skip the hanging part. Well, magically, after adding that timer there is no more hang-up!
Any ideas why this is the case? The request doesn't actually time out, yet the addition of this code keeps my program chugging along.
private void uploadParts(String assetId) throws IOException {
    //set up put request
    HttpClient client = HttpClientBuilder.create().build();
    String url = "";
    //prepare video
    File video = new File("files/video.mp4");
    BufferedInputStream bis = new BufferedInputStream(new FileInputStream(video));
    int partMaxSize = 1024 * 1024 * 5;
    byte[] buffer = new byte[partMaxSize];
    double fileSize = video.length();
    System.out.println(fileSize);
    System.out.println(fileSize / partMaxSize);
    int parts = (int) Math.ceil(fileSize / partMaxSize);
    System.out.println(parts);
    for (int i = 1; i < parts + 1; i++) {
        String partNumber = i + "";
        System.out.println("part: " + partNumber);
        int partSize = (int) (i < parts ? partMaxSize : fileSize);
        fileSize -= partSize;
        int tmp = 0;
        tmp = bis.read(buffer);
        url = String.format("https://www.site.com/upload/multipart/%s/%s", assetId, partNumber);
        final HttpPut request = new HttpPut(url);
        request.addHeader("Authorization", "Bearer " + accessToken);
        request.addHeader("Content-Type", "application/octet-stream");
        request.setEntity(new ByteArrayEntity(buffer));
        //Magical code start
        int hardTimeout = 5; // seconds
        TimerTask task = new TimerTask() {
            @Override
            public void run() {
                if (request != null) {
                    request.abort();
                }
            }
        };
        new Timer(true).schedule(task, hardTimeout * 1000);
        //Magical code end
        HttpResponse response = client.execute(request);
        System.out.println(response.getStatusLine().getReasonPhrase());
    }
    bis.close();
}
If I leave out the magical code section, my code hangs on the third part. If I include it, the program runs through fine.
I found the answer! It turns out HttpClient only allows a certain number of pooled connections at a time; with the defaults my code was using, the maximum is 2 per route. I needed to release each connection after it completed, and then the upload ran fine.
The fixed code adds a connection release after each request.
private void uploadParts(String assetId) throws IOException {
    //set up put request
    HttpClient client = HttpClientBuilder.create().build();
    String url = "";
    //prepare video
    File video = new File("files/video.mp4");
    BufferedInputStream bis = new BufferedInputStream(new FileInputStream(video));
    int partMaxSize = 1024 * 1024 * 5;
    byte[] buffer = new byte[partMaxSize];
    double fileSize = video.length();
    System.out.println(fileSize);
    System.out.println(fileSize / partMaxSize);
    int parts = (int) Math.ceil(fileSize / partMaxSize);
    System.out.println(parts);
    for (int i = 1; i < parts + 1; i++) {
        String partNumber = i + "";
        System.out.println("part: " + partNumber);
        int partSize = (int) (i < parts ? partMaxSize : fileSize);
        fileSize -= partSize;
        int tmp = 0;
        tmp = bis.read(buffer);
        url = String.format("https://www.site.com/upload/multipart/%s/%s", assetId, partNumber);
        final HttpPut request = new HttpPut(url);
        request.addHeader("Authorization", "Bearer " + accessToken);
        request.addHeader("Content-Type", "application/octet-stream");
        request.setEntity(new ByteArrayEntity(buffer));
        //Magical code start
        int hardTimeout = 5; // seconds
        TimerTask task = new TimerTask() {
            @Override
            public void run() {
                if (request != null) {
                    request.abort();
                }
            }
        };
        new Timer(true).schedule(task, hardTimeout * 1000);
        //Magical code end
        HttpResponse response = client.execute(request);
        request.releaseConnection();
        System.out.println(response.getStatusLine().getReasonPhrase());
    }
    bis.close();
}
The timer was only working because aborting the request closed my old connections once the timeout expired. Thank you for your input, guys.
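For later readers: with Apache HttpClient 4.x, fully consuming the response entity also returns the connection to the pool, which keeps you under the per-route connection limit without an explicit releaseConnection call. A minimal sketch, reusing the client and request variables from the code above (org.apache.http.util.EntityUtils assumed on the classpath):
HttpResponse response = client.execute(request);
System.out.println(response.getStatusLine().getReasonPhrase());
// consuming the entity releases the underlying connection back to the pool
EntityUtils.consume(response.getEntity());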

Java. BufferedInputStream working with images

I'm trying to write the server and client sides in Java.
The client side sends a request like GET / HTTP/1.0; the server side responds (if the file exists) with HTTP/1.0 200 OK, puts the content type and content length in the header, and writes the contents of a FileInputStream to the BufferedOutputStream.
Server side:
String endLine = "\r\n";
File f = new File(fileName);
FileInputStream fstream;
fstream = new FileInputStream(f);
response = "HTTP/1.0 200 OK" + endLine;
header = "Content-type: " + contentType + endLine + "Content-length: " + f.length() + endLine + endLine;
bout.write(response.getBytes());
bout.write(header.getBytes());
int lol;
while ((lol = fstream.read(buffer)) != -1) {
    bout.write(buffer, 0, lol);
}
System.out.println("Message sent");
bout.flush();
socket.close();
Client side:
byte[] res = new byte[bufferSize];
int got;
int i = 0;
int temp = 0;
int j = 0;
while ((got = bis.read(res)) != -1) {
    for (j = 0; j < res.length; j++) {
        // dividing from header
        if (res[j] == '\n' && res[j-1] == '\r' && res[j-2] == '\n' && res[j-3] == '\r') {
            temp = j + 1;
        }
    }
    fout.write(res, temp, got - temp);
    i++;
}
So, with .html files it works fine, but with images...
Found the solution. The error was in the offsets:
fout.write(res, temp, got - temp);
This line applies the offset on every iteration, but I only need it on the first one:
if (i == 0) {
    fout.write(res, temp, got - temp);
} else {
    fout.write(res, 0, got);
}
You should not parse the contents looking for newlines etc. when transferring binary data; your pictures also contain those \r\n bytes.
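A minimal sketch of that approach, reusing the bis and fout variables from the client code above: read the response one byte at a time until the blank line that ends the header, then copy the remaining bytes verbatim without inspecting them.
// read status line + headers byte by byte until the "\r\n\r\n" separator
ByteArrayOutputStream headerBytes = new ByteArrayOutputStream();
int prev3 = -1, prev2 = -1, prev1 = -1, cur;
while ((cur = bis.read()) != -1) {
    headerBytes.write(cur);
    if (prev3 == '\r' && prev2 == '\n' && prev1 == '\r' && cur == '\n') {
        break;                                   // end of header reached
    }
    prev3 = prev2; prev2 = prev1; prev1 = cur;
}
// parse headerBytes for Content-Type / Content-Length if needed;
// everything after the header is the body, so copy it as raw bytes
byte[] buf = new byte[8192];
int n;
while ((n = bis.read(buf)) != -1) {
    fout.write(buf, 0, n);
}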

Downloading a file from spring controllers with resume support

Downloading a file from spring controllers
Above is the original article; however, I want resume support, meaning I can download 51% first and then download the remaining 49% at another time.
Environment: Tomcat 7.0.39.
I tried a few things but still failed.
Here is my code, or maybe you can share yours:
InputStream fis = new FileInputStream(filepath + file_name);
response.setHeader("Accept-Ranges", "bytes");
long length = (int) new File(filepath + file_name).length();
long start = 0;
if (request.getHeader("Range") != null)
{
    response.setStatus(javax.servlet.http.HttpServletResponse.SC_PARTIAL_CONTENT); // 206
    start = Long.parseLong(request.getHeader("Range")
            .replaceAll("bytes=", "").replaceAll("-", ""));
}
response.setHeader("Content-Length", new Long(length - start).toString());
if (start != 0)
{
    response.setHeader("Content-Range", "bytes "
            + new Long(start).toString() + "-"
            + new Long(length - 1).toString() + "/"
            + new Long(length).toString());
}
response.setContentType("application/octet-stream");
fis.skip(start);
byte[] b = new byte[1024];
int i;
while ((i = fis.read(b)) != -1) {
    response.getOutputStream().write(b, 0, i);
    response.flushBuffer();
}
fis.close();
Fixed, this is my edited version:
long length = (int) new File(filepath + file_name).length();
long start = 0;
response.setHeader("Accept-Ranges", "bytes");
response.setStatus(javax.servlet.http.HttpServletResponse.SC_PARTIAL_CONTENT); // 206
if (request.getHeader("Range") != null)
{
    int x = request.getHeader("Range").indexOf("-");
    start = Long.parseLong(request.getHeader("Range").substring(0, x)
            .replaceAll("bytes=", ""));
}
response.setHeader("Content-Length", new Long(length - start).toString());
if (start == 0)
    response.setHeader("Content-Range", "bytes 0-" + new Long(length - 1).toString() + "/" + length);
else
    response.setHeader("Content-Range", "bytes " + start + "-" + new Long(length - 1).toString() + "/" + length);
fis.skip(start);
byte[] b = new byte[1024];
int i;
while ((i = fis.read(b)) != -1) {
    response.getOutputStream().write(b, 0, i);
    response.flushBuffer();
}
fis.close();
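Note that this only handles open-ended ranges like bytes=1000-; a client resuming a download may also send an explicit end, e.g. bytes=1000-1999. A minimal sketch of parsing both bounds (same request, response and length variables as above; the copy loop then has to stop after end - start + 1 bytes):
String rangeHeader = request.getHeader("Range");        // e.g. "bytes=1000-1999" or "bytes=1000-"
long start = 0;
long end = length - 1;                                   // default: to the end of the file
if (rangeHeader != null && rangeHeader.startsWith("bytes=")) {
    String[] bounds = rangeHeader.substring("bytes=".length()).split("-", 2);
    start = Long.parseLong(bounds[0]);
    if (bounds.length > 1 && !bounds[1].isEmpty()) {
        end = Long.parseLong(bounds[1]);
    }
    response.setStatus(javax.servlet.http.HttpServletResponse.SC_PARTIAL_CONTENT); // 206
}
response.setHeader("Content-Range", "bytes " + start + "-" + end + "/" + length);
response.setHeader("Content-Length", Long.toString(end - start + 1));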
I've built a solution for serving HTTP byte ranges with or without Spring.
If you are interested, check it out at https://gist.github.com/davinkevin/b97e39d7ce89198774b4
It helps me use byte ranges inside a Spring application, mainly with @RestController.
