I am currently developing an Android application and have encountered the following problem.
I am making an HTTP request to a server that is supposed to send back XML content that I then parse. I noticed recurring errors while parsing long XML strings, so I decided to display the result of my requests and discovered that the string (or the stream?) I receive is randomly truncated. Sometimes I get the whole string, sometimes half, sometimes a third, and the amount that is truncated seems to follow a pattern: for example, 320 characters after one request, then 156 after the next, then 320 twice, then 156 again (these aren't the actual numbers, but they repeat like that).
Here is my code for the request and conversion of the InputStream into a string:
private String downloadUrlGet(String myurl) throws IOException {
InputStream is = null;
// Only display the first 20000 characters of the retrieved
// web page content.
int len = 20000;
try {
URL url = new URL(myurl);
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setReadTimeout(10000 /* milliseconds */);
conn.setConnectTimeout(15000 /* milliseconds */);
conn.setRequestMethod("GET");
conn.setDoInput(true);
conn.setRequestProperty("Content-Type", "application/xml");
// Starts the query
conn.connect();
int response = conn.getResponseCode();
Log.d(DEBUG_TAG, "The response is: " + response);
is = conn.getInputStream();
// Convert the InputStream into a string
String contentAsString = readIt(is, len);
return contentAsString;
// Makes sure that the InputStream is closed after the app is
// finished using it.
} finally {
if (is != null) {
is.close();
}
}
}
// Reads an InputStream and converts it to a String.
private String readIt(InputStream stream, int len) throws IOException, UnsupportedEncodingException {
Reader reader = null;
reader = new InputStreamReader(stream, "UTF-8");
char[] buffer = new char[len];
reader.read(buffer);
return new String(buffer);
}
The length of the XML that I try to retrieve is much less than 20000.
I tried to use HttpURLConnection.setChunkedStreamingMode() with 0 and various other numbers as the parameter, but it didn't change anything.
Thanks in advance for any suggestions.
You are making the usual mistake of assuming that read() fills the buffer. See the Javadoc. It isn't obliged to do that. It isn't obliged to transfer more than one byte as a matter of fact. You need to read in a loop until you have encountered end of stream (read() returns -1).
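For example, readIt from the question could be reworked to loop until end of stream (a minimal sketch keeping the question's names; the 20000-character cap is replaced by a StringBuilder that simply grows as needed):
// Reads the InputStream completely, looping until read() returns -1.
private String readIt(InputStream stream) throws IOException {
    Reader reader = new InputStreamReader(stream, "UTF-8");
    StringBuilder sb = new StringBuilder();
    char[] buffer = new char[4096];
    int n;
    while ((n = reader.read(buffer)) != -1) {
        sb.append(buffer, 0, n);
    }
    return sb.toString();
}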
Related
I have some Java code that takes a URL and returns the data (a BufferedImage is constructed from it at a later point). The problem is that it works for most URLs from a particular website, but not all.
The urls are actually redirects so for example I pass
//https://coverartarchive.org/release/bdd0e35c-ce68-3f5b-b957-f83ab5846111/front
it will actually redirect to
//https://ia600301.us.archive.org/31/items/mbid-bdd0e35c-ce68-3f5b-b957-f83ab5846111/mbid-bdd0e35c-ce68-3f5b-b957-f83ab5846111-6094238097.jpg
and return the correct data
But if I pass the seemingly very similar url
//http://coverartarchive.org/release/6b105b89-21ee-414a-b98f-b2756c92b0bc/front
then, although this is the URL you see if you paste it into a web browser:
//https://ia902907.us.archive.org/33/items/mbid-6b105b89-21ee-414a-b98f-b2756c92b0bc/mbid-6b105b89-21ee-414a-b98f-b2756c92b0bc-3167310704.jpg
my code only returns 169 bytes.
If I pass the url it redirects to directly
//https://ia902907.us.archive.org/33/items/mbid-6b105b89-21ee-414a-b98f-b2756c92b0bc/mbid-6b105b89-21ee-414a-b98f-b2756c92b0bc-3167310704.jpg
then it works okay, but I don't have this URL up front, so that is not a solution.
This is quite old code and there may be a better way to do it now, but is my code broken, or is the website broken?
private static byte[] convertUrlToByteArray(URL url) throws IOException
{
//Get imagedata, we want to ensure we just write the data as is as long as in a supported format
URLConnection connection = url.openConnection();
connection.setConnectTimeout(URL_TIMEOUT);
connection.setReadTimeout(URL_TIMEOUT);
// Since you get a URLConnection, use it to get the InputStream
InputStream in = connection.getInputStream();
// Now that the InputStream is open, get the content length
int contentLength = connection.getContentLength();
// To avoid having to resize the array over and over and over as
// bytes are written to the array, provide an accurate estimate of
// the ultimate size of the byte array
ByteArrayOutputStream tmpOut;
if (contentLength != -1)
{
tmpOut = new ByteArrayOutputStream(contentLength);
}
else
{
tmpOut = new ByteArrayOutputStream(16384); // Pick some appropriate size
}
byte[] buf = new byte[1024];
while (true)
{
int len = in.read(buf);
if (len == -1)
{
break;
}
tmpOut.write(buf, 0, len);
}
in.close();
tmpOut.close(); // No effect, but good to do anyway to keep the metaphor alive
return tmpOut.toByteArray();
}
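One way to see what those 169 bytes actually are is to check the response code and the Location header before reading the body. This is only a diagnostic sketch (none of it is in the original code): HttpURLConnection will not follow a redirect that switches protocol from http to https, so the http URL may simply be handing back the small redirect response itself.
// Diagnostic sketch: inspect the redirect status of the http URL.
HttpURLConnection httpConnection = (HttpURLConnection) url.openConnection();
httpConnection.setConnectTimeout(URL_TIMEOUT);
httpConnection.setReadTimeout(URL_TIMEOUT);
int status = httpConnection.getResponseCode();
if (status == HttpURLConnection.HTTP_MOVED_PERM || status == HttpURLConnection.HTTP_MOVED_TEMP) {
    // The server answered with a redirect; log where it points.
    System.out.println("Redirected to: " + httpConnection.getHeaderField("Location"));
}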
This question already has answers here:
Why does LogCat not show the full message?
(2 answers)
Closed 8 years ago.
I work with HttpURLConnection in my app and in a plain Java test, and I implemented a method that is identical in both. That method behaves differently in the Android case: both versions receive an identical response from the server, but in the Java test I can display the whole response, while in the Android app it is cut off at 3200 characters.
This is my code:
private String sendPost() throws Exception{
String url = "http://www.something.com/my_page.jsp?";
URL obj = new URL(url);
HttpURLConnection con = (HttpURLConnection) obj.openConnection();
//add request header
con.setRequestMethod("POST");
String urlParameters ="param1=val1&param2=val2";
// Send post request
con.setDoOutput(true);
DataOutputStream wr = new DataOutputStream(con.getOutputStream());
wr.writeBytes(urlParameters);
wr.flush();
wr.close();
BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
String inputLine;
response = new StringBuffer();
while ((inputLine = in.readLine()) != null) {
response.append(inputLine);
}
in.close();
// return result
Log.i("TAG", "sendPost:: Response length : " + response.length()); // <- This line returns the same length!!!
return response.toString();
}
Everything I can get from the con object of class HttpURLConnection, like ContentLength, ContentType, etc., is the same in both cases, so I suspect there must be some internal setting/parameter of String/StringBuffer on Android that makes the difference, but I don't know what it is. readLine reads the same characters, or at least the same number of characters, because the length of response is the same in both cases.
If you could tell me what is wrong, I'd be very thankful.
Kind Regards
I can't understand your description of the symptoms; i.e. why you think that something is being truncated.
However, I can assure you that it is NOT due to a limit on the length of String or StringBuffer.
Those two classes do have a limit, but it is 2^31 - 1 (i.e. over 2 billion) characters. You will typically get an OutOfMemoryError before your buffer gets that big.
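Given the duplicate linked above ("Why does LogCat not show the full message?"), the more likely explanation is that LogCat truncates long log messages. A sketch of logging the response in slices (the chunk size of 3000 is an assumption, chosen to stay below LogCat's limit):
// Log a long string in chunks so LogCat does not cut it off.
private static void logLong(String tag, String msg) {
    final int chunk = 3000;
    for (int i = 0; i < msg.length(); i += chunk) {
        Log.i(tag, msg.substring(i, Math.min(msg.length(), i + chunk)));
    }
}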
In a Grails web application, I am trying to post a minutiae (fingerprint) byte array from an applet to the server using a REST API.
This is what I tried so far:
private String post(String purl,String customerId, byte[] regMin1,byte[] regMin2) throws Exception {
StringBuilder parameters = new StringBuilder();
parameters.append("customerId=");
parameters.append(customerId);
parameters.append("®Min1=");
parameters.append(URLEncoder.encode(new String(regMin1),"UTF-8"));
parameters.append("®Min2=");
parameters.append(URLEncoder.encode(new String(regMin2),"UTF-8"));
URL url = new URL(purl);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setDoInput(true);
connection.setRequestMethod("POST");
connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
connection.setRequestProperty("Content-Length",Integer.toString(parameters.toString().getBytes().length));
DataOutputStream wr = new DataOutputStream(connection.getOutputStream ());
wr.writeBytes(parameters.toString());
wr.flush();
wr.close();
BufferedReader in = new BufferedReader(new InputStreamReader(
connection.getInputStream()));
StringBuilder builder = new StringBuilder();
String aux = "";
while ((aux = in.readLine()) != null) {
builder.append(aux);
}
in.close();
connection.disconnect();
return builder.toString();
}
I can post regMin1 and regMin2 successfully, but fingerprint verification always fails. I suspect I am not posting them correctly.
This looks like a very bad idea to me:
parameters.append(URLEncoder.encode(new String(regMin1),"UTF-8"));
...
parameters.append(URLEncoder.encode(new String(regMin2),"UTF-8"));
If regMin1 and regMin2 aren't actually UTF-8 text (and my guess is that they're not) you'll almost certainly be losing data here.
Don't treat arbitrary binary data as if it's encoded text.
Instead, convert regMin1 and regMin2 to base64 - that way you'll end up with ASCII characters which you can then decode on the server to definitely get the original binary data. You can use a URL-safe version of base64 to avoid having to worry about further encoding the result.
There's a good public domain base64 library you can use for this if you don't have anything else to hand. So for example:
parameters.append("®Min1=")
.append(Base64.encodeBytes(regMin1, Base64.URL_SAFE))
.append("®Min2=")
.append(Base64.encodeBytes(regMin2, Base64.URL_SAFE));
Note that you'd want to decode with the URL_SAFE option as well - don't just try to decode it as "normal" base64 data.
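As a side note, if you can use Java 8 or later you don't even need an external library: java.util.Base64 has a URL-safe codec built in. A sketch, assuming both the applet and the server can use it:
import java.util.Base64;

// Applet side: encode the raw bytes with the URL-safe alphabet.
String encodedMin1 = Base64.getUrlEncoder().encodeToString(regMin1);
parameters.append("&regMin1=").append(encodedMin1);

// Server side: recover exactly the original bytes.
byte[] originalMin1 = Base64.getUrlDecoder().decode(encodedMin1);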
(You might still want to convert this to a POST request, and you'd definitely have an easier time if you could use a better HTTP library, but they're slightly separate concerns.)
This is the code I'm using to get response text.
private static String request(String urlstr){
// create connection
try {
URL url = new URL(urlstr);
URLConnection conn = url.openConnection();
StringBuilder response = new StringBuilder();
conn.setUseCaches(false);
conn.setRequestProperty("User-Agent", USER_AGENT);
// read response
BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
String line = null;
while ((line = in.readLine()) != null) {
response.append(line);
}
in.close();
return response.toString();
} catch (Exception e){
return null;
}
}
The problem is that when issuing the very same request (a simple GET request; the response is JSON) with my Chrome browser, I get the response almost 1 second faster than with this code in my application.
I wonder if there's anything I'm doing wrong in my code? Or is Chrome somehow handling the request faster?
Maybe there are some techniques to make this process faster?
Thanks
You read the response line by line, but in the end you append every line to one single response, so it is not really required to read the response line by line. You can also do:
char[] cbuf = new char[1024];
int len;
while ((len = in.read(cbuf)) != -1)
response.append(cbuf, 0, len);
This way the response can be read in much larger chunks, and you avoid the overhead of readLine(), which has to look for newline characters in the input and split the content into lines.
You could also do a
new StringBuilder(connection.getContentLength());
to avoid the StringBuilder having to increase its capacity every time new content is appended. StringBuilder uses a char[] internally, and every time the array is not big enough for the new content it has to be copied to a new, larger array.
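Putting the two suggestions together (a sketch using the conn and in variables from the code above; note that Content-Length is a byte count, so it is only an estimate of the character count):
// Pre-size the builder from Content-Length when the header is present.
int contentLength = conn.getContentLength();
StringBuilder response = contentLength > 0
        ? new StringBuilder(contentLength)
        : new StringBuilder();

// Read in large chunks instead of line by line.
char[] cbuf = new char[8192];
int len;
while ((len = in.read(cbuf)) != -1) {
    response.append(cbuf, 0, len);
}
in.close();
return response.toString();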
I am currently working with Android, and I am using an HTTP connection with some headers (I haven't included them or the real URL for security purposes) to get a JSON response from an API and feed that response back into the application. The problem I am having is that the getContentLength method of the HTTP request returns the wrong length (it returns 1225, while the correct length of the JSON array is 3365 characters).
I have a feeling that the JSON is not fully loaded when my reader starts to read it, so the reader only picks up the JSON that has been loaded at that point. Is there any way around this, possibly using a delay on the HTTP connection, or waiting until the data is fully loaded before reading it?
URL url = new URL("https://www.exampleofurl.com");
HttpURLConnection request = (HttpURLConnection) url.openConnection();
request.connect();
int responseCode = request.getResponseCode();
if(responseCode == HttpURLConnection.HTTP_OK) {
InputStream inputStream = request.getInputStream();
InputStreamReader reader = new InputStreamReader(inputStream);
long contentLength2 = Long.parseLong(request.getHeaderField("Content-Length"));
Log.i("contentLength: ", "Content: " + contentLength2);
I generally don't recommend relying on "Content-Length", as it may not be available (you get -1), or it may be affected by an intermediate proxy.
Why don't you just read your stream until it is exhausted into a memory buffer (say, a StringBuilder) and then get the actual size, for example:
BufferedReader br = new BufferedReader(new InputStreamReader(inputStream)); // inputStream from your code
String line;
StringBuilder sb = new StringBuilder();
while ((line = br.readLine()) != null) {
sb.append(line);
}
// finished reading
System.out.println("data size = " + sb.length());
JSONObject data = new JSONObject(sb.toString());
// and don't forget finally clauses with closing streams/connections
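For the closing mentioned in that last comment, a try-with-resources block is the least error-prone way to do it (a sketch, assuming Java 7 or later):
// try-with-resources closes the reader (and the underlying stream) even on exceptions.
try (BufferedReader br = new BufferedReader(new InputStreamReader(inputStream))) {
    StringBuilder sb = new StringBuilder();
    String line;
    while ((line = br.readLine()) != null) {
        sb.append(line);
    }
    System.out.println("data size = " + sb.length());
    JSONObject data = new JSONObject(sb.toString());
}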