I'm trying to do something relatively simple: make a PUT request with a file in the body in order to upload it to a server not under my control. Here's the code I have so far:
connection = (HttpURLConnection) new URL(ticket.getEndpoint()).openConnection();
connection.setRequestMethod("PUT");
connection.setRequestProperty("Content-Type", "video/mp4");
connection.setRequestProperty("Content-Length", String.valueOf(getStreamFile().length()));
connection.setUseCaches(false);
connection.setDoOutput(true);
connection.connect();

outputStream = connection.getOutputStream();
streamFileInputStream = new FileInputStream(getStreamFile());
streamFileBufferedInputStream = new BufferedInputStream(streamFileInputStream);

byte[] streamFileBytes = new byte[getBufferLength()];
int bytesRead = 0;
int totalBytesRead = 0;

while ((bytesRead = streamFileBufferedInputStream.read(streamFileBytes)) > 0) {
    outputStream.write(streamFileBytes, 0, bytesRead);
    outputStream.flush();
    totalBytesRead += bytesRead;
    notifyListenersOnProgress((double) totalBytesRead / (double) getStreamFile().length());
}
outputStream.close();

logger.debug("Wrote {} bytes of {}, ratio: {}",
        new Object[]{totalBytesRead, getStreamFile().length(),
                (double) totalBytesRead / (double) getStreamFile().length()});
I'm watching my network manager and nothing near the size of my file gets sent. In fact, I don't know if anything is being sent at all, but I don't see any errors thrown.
I need to be able to send this request and also measure the status of the upload synchronously, so as to be able to inform my listeners of the upload progress. How can I modify my existing example to just work?
Try setting the Content-Type header to multipart/form-data, the encoding used by W3C HTML forms.
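If you go that route, here is a rough sketch of what writing a multipart body by hand might look like. The part name "file", the boundary string, and the use of chunked streaming mode are assumptions for illustration, not something the target server is known to require; adjust them to whatever the endpoint actually expects.

String boundary = "----JavaUploadBoundary" + System.currentTimeMillis();
HttpURLConnection conn = (HttpURLConnection) new URL(ticket.getEndpoint()).openConnection();
conn.setRequestMethod("PUT");
conn.setDoOutput(true);
conn.setUseCaches(false);
conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);
// stream the body as it is written instead of buffering it all in memory first
conn.setChunkedStreamingMode(8192);

try (OutputStream out = conn.getOutputStream();
     InputStream in = new BufferedInputStream(new FileInputStream(getStreamFile()))) {
    // opening boundary and part headers
    out.write(("--" + boundary + "\r\n"
            + "Content-Disposition: form-data; name=\"file\"; filename=\""
            + getStreamFile().getName() + "\"\r\n"
            + "Content-Type: video/mp4\r\n\r\n").getBytes(StandardCharsets.UTF_8));

    // copy the file, reporting progress as in the original loop
    byte[] buf = new byte[8192];
    long total = 0;
    int n;
    while ((n = in.read(buf)) > 0) {
        out.write(buf, 0, n);
        total += n;
        notifyListenersOnProgress((double) total / (double) getStreamFile().length());
    }

    // closing boundary
    out.write(("\r\n--" + boundary + "--\r\n").getBytes(StandardCharsets.UTF_8));
}
int status = conn.getResponseCode(); // forces the request to complete

One caveat worth knowing: without a streaming mode (chunked or fixed-length), HttpURLConnection buffers the entire body before sending anything, so a write loop can appear to finish almost instantly while nothing has gone over the wire yet.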
I have an Android app that downloads and uses a file at runtime. The file is valid, as I can download it via the browser and open it up, etc. However, my app kept reporting that the file is corrupted.
After investigation I discovered the server (which I have no control over) is returning an incorrect "Content-Length:" (~180 vs ~120000). The header is the culprit as I confirmed the issue by downloading the file with curl - which also resulted in a truncated file.
After some research I concluded that my use of BufferedInputStream to append to a ByteArrayBuffer is autosizing the byte array to the URL connection's content length. To circumvent this, I tried to use ByteArrayOutputStream instead; however, this solved nothing.
Anybody know of a way to download a file if the Content-Length is incorrectly set? A browser can.
Here's my latest attempt:
public static void downloadFileFromRemoteUrl(String urlString, String destination) {
    try {
        URL url = new URL(urlString);
        File file = new File(destination);
        URLConnection urlConnection = url.openConnection();
        InputStream inputStream = urlConnection.getInputStream();
        byte[] buffer = new byte[1024];
        int curLength = 0;
        int newLength = 0;
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        while ((newLength = inputStream.read(buffer)) > 0) {
            curLength += newLength;
            byteArrayOutputStream.write(buffer, 0, newLength);
        }
        FileOutputStream fos = new FileOutputStream(file);
        fos.write(byteArrayOutputStream.toByteArray());
        fos.close();
        android.util.Log.d("DB UPDATE", "Done downloading database. Size: "
                + byteArrayOutputStream.toByteArray().length);
    } catch (MalformedURLException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
After some research I concluded that my use of BufferedInputStream to append to a ByteArrayBuffer is autosizing the byte array to the URL connection's content length.
Nonsense. You are crediting those classes with paranormal powers. How could an output stream possibly become aware of the Content-length header? The URLConnection's input stream is being terminated at the content-length. Correctly.
To circumvent this, I tried to use ByteArrayOutputStream instead; however, this solved nothing.
Of course not.
Anybody know of a way to download a file if the Content-Length is incorrectly set?
You could use a Socket and engage in HTTP yourself, which is less trivial than it sounds. But the problem is at the server and that's where it should be fixed. Complain. Or else @Zong Yu is correct and the page is HTML containing JavaScript, say.
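For completeness, here is a very rough sketch of that raw-Socket approach, assuming plain HTTP (no TLS, no chunked encoding) and a placeholder host and path: speak HTTP/1.0 with Connection: close and read the body until the server closes the connection, ignoring whatever Content-Length claims.

import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class RawHttpDownload {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80); // placeholder host
             OutputStream out = socket.getOutputStream();
             InputStream in = socket.getInputStream();
             FileOutputStream file = new FileOutputStream("download.bin")) {

            // HTTP/1.0 with Connection: close means the server marks the end of the
            // body by closing the connection, so the Content-Length header can be ignored
            out.write(("GET /file.bin HTTP/1.0\r\n"
                    + "Host: example.com\r\n"
                    + "Connection: close\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // skip the status line and headers: read byte by byte until \r\n\r\n
            int state = 0, b;
            while (state < 4 && (b = in.read()) != -1) {
                if ((state == 0 || state == 2) && b == '\r') state++;
                else if ((state == 1 || state == 3) && b == '\n') state++;
                else state = (b == '\r') ? 1 : 0;
            }

            // copy the body until EOF, however long it actually is
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                file.write(buf, 0, n);
            }
        }
    }
}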
NB You don't need to read the entire file into memory:
while ((newLength = inputStream.read(buffer)) > 0) {
    curLength += newLength;
    fos.write(buffer, 0, newLength);
}
My final "solution" was to realize I was dealing with a 301 redirect response and not the actual resource! I updated the section that handles my URL, checking for a 301 and, if one is present, updating the URL. The new URL's response contained the Content-Length that corresponded with the file I was downloading.
// start by creating an http url connection object
HttpURLConnection httpURLConnection = (HttpURLConnection) url.openConnection();

// determine if this is a redirect
boolean redirect = false;
int status = httpURLConnection.getResponseCode();
if (status != HttpURLConnection.HTTP_OK) {
    if (status == HttpURLConnection.HTTP_MOVED_TEMP
            || status == HttpURLConnection.HTTP_MOVED_PERM
            || status == HttpURLConnection.HTTP_SEE_OTHER) {
        redirect = true;
    }
}

// if it is, we need a new url
if (redirect) {
    String newUrl = httpURLConnection.getHeaderField("Location");
    httpURLConnection = (HttpURLConnection) new URL(newUrl).openConnection();
}
Try Fetch. Fetch is an in-app download manager for Android. It's very easy to use. Find the GitHub page here. The project comes with several demos that you can try out. Disclaimer: I'm the creator of Fetch, and it is open source.
I'm trying to fetch the length of the data that I get from a URLConnection.
Since I'm measuring how much data is transferred, I don't want the size of the uncompressed data, but of the compressed data. Unfortunately, the InputStream automatically decompresses gzip-compressed data.
I have to manually download the whole file in case the output is chunked and I can't get the length via connection.getContentLength().
Here's the code:
try {
    connection = (HttpURLConnection) (new URL(url)).openConnection();
    connection.connect();
    int contentLength = connection.getContentLength();
    if (contentLength == -1 && connection != null) {
        InputStream input = connection.getInputStream();
        byte[] buffer = new byte[4096];
        int count = 0, len;
        while ((len = input.read(buffer)) > 0) {
            count += len;
        }
        contentLength = count;
    }
    totalSize += contentLength;
}
You can see the example for this file: http://www.google-analytics.com/analytics.js
When I check the header in Chrome it says Content-Length: 11181. However, I am unable to get this content length via URLConnection (it returns -1), so I attempt to download the file. My output, though, is 25421 bytes, which is the size of the uncompressed file.
Thank you for any kind of help.
You'll have to set the Accept-Encoding header to "gzip, deflate", to let the server know that your client accepts compressed data.
String url = "https://www.google-analytics.com/analytics.js";
HttpURLConnection connection = (HttpURLConnection) (new URL(url)).openConnection();
connection.setRequestProperty("Accept-Encoding", "gzip, deflate");
connection.connect();
int contentLength = connection.getContentLength();
System.out.println("Content-Length: " + contentLength);
Without this header you're forcing the site to return uncompressed data. If the data is too large, the site may return the response in chunks, in which case the response will not have a Content-Length header.
From the MDN documentation (developer.mozilla.org) on Transfer-Encoding: chunked:
Data is sent in a series of chunks. The Content-Length header is omitted in this case and at the beginning of each chunk you need to add the length of the current chunk in hexadecimal format, followed by '\r\n' and then the chunk itself, followed by another '\r\n'. The terminating chunk is a regular chunk, with the exception that its length is zero. It is followed by the trailer, which consists of a (possibly empty) sequence of entity header fields.
If the response is chunked, I'm afraid you'll have to read all the data to know its size. Each chunk is preceded by a hexadecimal number that indicates the chunk size; I suppose you could use this number to calculate the total data size, but you would still have to read all the data, so there is no benefit in doing that. We can check if the response is chunked with the Transfer-Encoding header.
String url = "https://www.google-analytics.com/analytics.js";
HttpURLConnection connection = (HttpURLConnection) (new URL(url)).openConnection();
connection.connect();
String transferEncoding = connection.getHeaderField("Transfer-Encoding");
System.out.println("Transfer-Encoding: " + transferEncoding);
In this case, you'll have to store the raw response data in a byte array, in order to find the size of the zipped data.
InputStream input = connection.getInputStream();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
int n;
while ((n = input.read(buffer)) > 0) {
    baos.write(buffer, 0, n);
}
byte[] zippedData = baos.toByteArray();
System.out.println(zippedData.length);
So, I came up with a 'hack' that may reveal the data size of a chunked response, without reading it. If we use the Range header, the server may respond with a Content-Range header. This header would contain the bytes sent and the total bytes of the content. Note that this is not a reliable method to detect the content size, it will not work if the server doesn't support range requests.
String url = "https://www.google-analytics.com/analytics.js";
HttpURLConnection connection = (HttpURLConnection) (new URL(url)).openConnection();
connection.setRequestProperty("Accept-Encoding", "gzip, deflate");
connection.setRequestProperty("Range", "bytes=0-1");
connection.connect();
int contentLength = connection.getContentLength();
String contentRange = connection.getHeaderField("Content-Range");
if (contentRange != null) {
    contentLength = Integer.parseInt(contentRange.split("/")[1]);
}
System.out.println("Content-Length: " + contentLength);
I am trying to upload some bytes to the server for 15 seconds. I have written the following code to write the bytes to the output stream:
long uploadedBytes = 0;
long timeDiff = 0;
ByteArrayInputStream byteArrayInputStream = null;
OutputStream outputStream = null;
try {
    byte[] randomData = generateBinData(5 * 1024);
    byte[] bytes = new byte[1024 * 5];
    URL url = new URL(urls[0]);
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setDoOutput(true);
    connection.setUseCaches(false);
    connection.setRequestMethod("POST");
    connection.setRequestProperty("Connection", "Keep-Alive");
    outputStream = connection.getOutputStream();
    byteArrayInputStream = new ByteArrayInputStream(randomData);
    long startTime = System.currentTimeMillis();
    while (byteArrayInputStream.read(bytes) > 0 && timeDiff < 15000) {
        outputStream.write(bytes, 0, bytes.length);
        uploadedBytes += bytes.length;
        byteArrayInputStream = new ByteArrayInputStream(randomData);
        timeDiff = System.currentTimeMillis() - startTime;
        int progress = (int) (timeDiff * 100 / 15000);
        publishProgress(progress);
    }
But the progress for the above upload runs very fast and shows a large number of bytes uploaded within seconds, which does not match my 2G mobile network connection.
For example it shows:
uploadedBytes = 9850880 with a time difference (timeDiff) of 3 seconds.
If I run the same code for 15 seconds, it terminates the whole application.
Please help me find where I am going wrong.
Thanks... waiting for a reply.
Unless you set chunked or streaming transfer mode, HttpURLConnection buffers all the output before sending any of it, so it can get a Content-Length. So what you're seeing is the progress of the buffering, not of the transfer. Set chunked transfer mode and you will see a difference.
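For illustration, a minimal sketch of what that change might look like (the 8 KB chunk size is an arbitrary choice):

HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("POST");
// send each chunk as it is written instead of buffering the whole body first,
// so progress measured in the write loop roughly tracks the actual transfer
connection.setChunkedStreamingMode(8192);
// alternatively, if the total size is known in advance:
// connection.setFixedLengthStreamingMode(totalBytes);
OutputStream outputStream = connection.getOutputStream();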
Your copy loop is wrong. It should be like this:
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
Your code will probably work in this specific case but that's not a reason not to get it right for all cases.
Check your random byte length. I think the generateBinData() method is not generating 5 KB of data.
No wonder uploadedBytes is huge. Say a write to the output stream takes 100 ms to write 5 KB (5*1024 bytes) of data; in 3 seconds you should be able to write only 153600 bytes.
Reason for app termination: check whether any read operation throws an exception.
I'm writing a simple client-server system and the question is: how to structure my client code in order to get POST request-response working in a loop?
At the moment it looks something like this (and it is NOT a loop right now):
open HttpURLConnection
set properties
setDoOutput(true)
writing to output stream
closing output stream
new DataInputStream
reading response
exiting method
I'm not sure which objects I have to save for the next iterations and which ones I should close.
You need to save the connection object, and you should make use of setDoInput(true) for reading data, but if you just want to read the response code and response message you don't need an InputStream. Check the code below.
HttpURLConnection connection =(HttpURLConnection)new URL("url").openConnection();
connection.setDoOutput(true);
connection.setRequestProperty("Content-type", "text/xml"); // depends on your use case
connection.setRequestProperty("Accept", "text/xml, application/xml"); // depends on your use case
connection.setRequestMethod("POST");
OutputStreamWriter writer = new OutputStreamWriter(connection.getOutputStream());
writer.write(yaml);
writer.close();
int statusCode = connection.getResponseCode();
String message = connection.getResponseMessage();
For reading the response body with an InputStreamReader:
connection.setDoInput(true);
InputStreamReader reader = new InputStreamReader(connection.getInputStream());
char[] cbuf = new char[100];
reader.read(cbuf);
// there are 3 read methods you can choose from, as per your convenience,
// and put an end-of-stream check in a while loop to read the whole content.
reader.close();
After doing my own 'research' on this subject (thanks to Google and the Nokia Developer forums), I've arrived at the final version of my code. It's a file upload loop:
path = Paths.get(requestString);
in = Files.newInputStream(path);

int i = 0;
while ((bytesRead = in.read(buf)) != -1) {
    URL u = new URL(defaultURL);
    huc = (HttpURLConnection) u.openConnection();
    huc.setRequestMethod("POST");
    huc.setDoOutput(true);
    huc.setDoInput(true);

    os = huc.getOutputStream();
    os.write(buf, 0, bytesRead);
    os.flush();
    os.close();

    // thanks to dku.rajkumar for the following block of code!
    InputStreamReader reader = new InputStreamReader(huc.getInputStream());
    char[] cbuf = new char[400];
    reader.read(cbuf);
    reader.close();

    String s = new String(cbuf);
    messagebuffer.append(s + "\n\n");

    huc.disconnect();
    Thread.sleep(16);
}
[Java 1.5; Eclipse Galileo]
HttpsURLConnection seems to stall when the getInputStream() method is called. I've tried using different websites to no avail (currently https://www.google.com). I should point out that I'm using HTTPS.
The code below has been modified based on what I've learned from other StackOverflow answers. However, no solutions I've tried thus far have worked.
I'd be very grateful for a nudge in the right direction :)
public static void request(URL url, String query)
{
    try {
        HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
        //connection.setReadTimeout(5000); //<-- uncommenting this line at least allows a timeout error to be thrown
        connection.setDoInput(true);
        connection.setDoOutput(true);
        connection.setUseCaches(false);
        System.setProperty("http.keepAlive", "false");
        connection.setRequestMethod("POST");

        // setting headers
        connection.setRequestProperty("Content-length", String.valueOf(query.length()));
        connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded"); //WAS application/x-www-form-urlencoded
        connection.setRequestProperty("User-Agent", "Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)");

        ////////////////////////////////////////////////////////////////////////////////////
        System.out.println("THIS line stalls" + connection.getInputStream());
        ////////////////////////////////////////////////////////////////////////////////////
    } catch (Exception e) {
        System.out.println(e);
        e.printStackTrace();
    }
}
Typical errors look like:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at com.sun.net.ssl.internal.ssl.InputRecord.readFully(InputRecord.java:293)
at com.sun.net.ssl.internal.ssl.InputRecord.read(InputRecord.java:331)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:782)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:739)
at com.sun.net.ssl.internal.ssl.AppInputStream.read(AppInputStream.java:75)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
at java.io.BufferedInputStream.read(BufferedInputStream.java:313)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:681)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:626)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:983)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:234)
at https_understanding.HTTPSRequest.request(HTTPSRequest.java:60)
at https_understanding.Main.main(Main.java:17)
connection.setDoOutput(true);
This means that you have to open, write to, and close the connection's output stream before you attempt to read from its input stream. See the docs.
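In other words, something like this (a minimal sketch; it assumes the query is an application/x-www-form-urlencoded string, as in the original code):

HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("POST");
connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

// write the request body first...
OutputStream out = connection.getOutputStream();
out.write(query.getBytes(StandardCharsets.UTF_8));
out.close();

// ...then read the response; getInputStream() no longer waits for a body that was never sent
BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
String line;
while ((line = in.readLine()) != null) {
    System.out.println(line);
}
in.close();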
I reproduced the problem in Android 2.2: when downloading from a web server over wireless with an HTTPS URL, the error is a socket "read timed out" at URLConnection.getInputStream().
To fix it, use url.openStream() for the InputStream instead of connection.getInputStream()
Bonus: you can get the length of the file you're downloading so you can show a % complete indicator
code sample:
private final int TIMEOUT_CONNECTION = 5000; //5sec
private final int TIMEOUT_SOCKET = 30000; //30sec

file = new File(strFullPath);
URL url = new URL(strURL);
URLConnection ucon = url.openConnection();

//this timeout affects how long it takes for the app to realize there's a connection problem
ucon.setReadTimeout(TIMEOUT_CONNECTION);
ucon.setConnectTimeout(TIMEOUT_SOCKET);

//IMPORTANT UPDATE:
// ucon.getInputStream() often times-out over wireless
// so, replace it with ucon.connect() and url.openStream()
ucon.connect();

iFileLength = ucon.getContentLength(); //returns -1 if not set in response header
if (iFileLength != -1)
{
    Log.i(TAG, "Expected Filelength = " + String.valueOf(iFileLength) + " bytes");
}

//Define InputStreams to read from the URLConnection.
// uses 5KB download buffer
InputStream is = url.openStream(); //ucon.getInputStream();
BufferedInputStream inStream = new BufferedInputStream(is, 1024 * 5);
outStream = new FileOutputStream(file);
bFileOpen = true;
byte[] buff = new byte[5 * 1024];

//Read bytes (and store them) until there is nothing more to read (-1)
int total = 0;
int len;
int percentdone;
int percentdonelast = 0;
while ((len = inStream.read(buff)) != -1)
{
    //write to file
    outStream.write(buff, 0, len);

    //calculate percent done
    if (iFileLength != -1)
    {
        total += len;
        percentdone = (int) (total * 100 / iFileLength);
        //limit the number of messages to no more than one message every 10%
        if ((percentdone - percentdonelast) > 10)
        {
            percentdonelast = percentdone;
            Log.i(TAG, String.valueOf(percentdone) + "%");
        }
    }
}

//clean up
outStream.flush(); //THIS IS VERY IMPORTANT!
outStream.close();
bFileOpen = false;
inStream.close();
Also don't set the content-length header. Java will do that for you.
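If you do know the body length up front and want HttpURLConnection to stream it rather than buffer it, a small sketch of the alternative (the long overload requires Java 7+):

HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
// Java emits the Content-Length header for you and streams the body as it is written
connection.setFixedLengthStreamingMode(file.length());
OutputStream out = connection.getOutputStream();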