I'm using the following method to fetch data from a web API:
public static String sendRequest(String requestURL, String data)
        throws IOException {
    URL url = new URL(requestURL + "?" + data);
    URLConnection conn = url.openConnection();
    conn.setReadTimeout(10000);
    BufferedReader in = new BufferedReader(new InputStreamReader(
            conn.getInputStream()));
    String inputLine;
    StringBuilder answerBuilder = new StringBuilder();
    try {
        while ((inputLine = in.readLine()) != null)
            answerBuilder.append(inputLine);
        in.close();
    } catch (Exception e) {
    }
    return answerBuilder.toString();
}
With some requests, this leads to an OutOfMemoryError because the heap is too small:
(...)Caused by: java.lang.OutOfMemoryError: (Heap Size=17927KB, Allocated=14191KB, Bitmap Size=2589KB)
at java.lang.AbstractStringBuilder.enlargeBuffer(AbstractStringBuilder.java:95)
at java.lang.AbstractStringBuilder.append0(AbstractStringBuilder.java:132)
at java.lang.StringBuilder.append(StringBuilder.java:272)
at java.io.BufferedReader.readLine(BufferedReader.java:423)
at com.elophant.utils.HTTPUtils.sendRequest(HTTPUtils.java:23)
at (..)
I already switched from plain String concatenation (String answer += inputLine) to a StringBuilder, but that didn't help. How can I solve this problem? Increasing the maximum heap size via export JVM_ARGS="-Xmx1024m -XX:MaxPermSize=256m" isn't an option, as this is an Android app.
Use a file for temporary storage, much like a hard drive that starts paging when it runs out of memory.
One solution would be to persist the content to storage as it is downloaded.
Depending on what you are downloading, you could parse it while reading and store it in a SQLite database. That would let you use a query language to work with the data afterwards, which is especially useful when the download is JSON or XML.
For JSON, you could take the InputStream as you already do and read the stream with a JSON reader. For every record read from the JSON you store a row in a table (or several tables, depending on how each record is structured). The nice part of this approach is that at the end there is no file handling, and your data is already distributed across tables in your database, ready to be queried.
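A minimal sketch of that approach using Android's JsonReader, assuming a flat JSON array of objects whose field names match the columns of a made-up records table:
import android.content.ContentValues;
import android.database.sqlite.SQLiteDatabase;
import android.util.JsonReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

// Streams a JSON array of objects into a table one record at a time,
// so the full document is never held in memory. The "records" table
// and the assumption that every value is a string are placeholders.
public static void streamJsonToDb(InputStream in, SQLiteDatabase db)
        throws IOException {
    JsonReader reader = new JsonReader(new InputStreamReader(in, "UTF-8"));
    try {
        reader.beginArray();
        while (reader.hasNext()) {
            reader.beginObject();
            ContentValues row = new ContentValues();
            while (reader.hasNext()) {
                row.put(reader.nextName(), reader.nextString());
            }
            reader.endObject();
            db.insert("records", null, row);
        }
        reader.endArray();
    } finally {
        reader.close();
    }
}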
You could also write the StringBuilder's contents to a file and clear the builder from time to time.
If your String really is that large, you will need to store it in a file temporarily and process it in chunks (or handle it in chunks while you receive it).
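For example, a minimal sketch assuming in is the response stream from the question's URLConnection:
// Copy the response to a temp file in 8 KB chunks, so only one small
// buffer is ever held in memory; the file can then be processed with
// a streaming parser instead of one huge String.
File tmp = File.createTempFile("response", ".tmp");
OutputStream out = new FileOutputStream(tmp);
try {
    byte[] buffer = new byte[8192];
    int n;
    while ((n = in.read(buffer)) != -1) {
        out.write(buffer, 0, n);
    }
} finally {
    out.close();
    in.close();
}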
Not for the faint-hearted, but you could write your own MyString class that uses one byte per char, roughly 50% memory savings, and consequently a MyStringBuilder class. This only works if you are dealing with pure ASCII.
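For illustration only, a tiny sketch of such a class (AsciiBuilder is a made-up name; note that toString() still materializes a normal String, so the savings apply only while accumulating):
// Append-only ASCII buffer: one byte per character instead of Java's
// two-byte chars. Only safe if the input really is 7-bit ASCII.
public class AsciiBuilder {
    private byte[] data = new byte[4096];
    private int len = 0;

    public void append(String s) {
        if (len + s.length() > data.length) {
            data = java.util.Arrays.copyOf(
                    data, Math.max(data.length * 2, len + s.length()));
        }
        for (int i = 0; i < s.length(); i++) {
            data[len++] = (byte) s.charAt(i); // assumes ASCII input
        }
    }

    @Override
    public String toString() {
        return new String(data, 0, len); // decodes with the platform charset
    }
}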
Related
I'm trying to get the content from a jpg file so I can encrypt that content and save it in another file that is later decrypted.
I'm trying to do so by reading the jpg file as if it were a text file, with this code:
String aBuffer = "";
try {
    File myFile = new File(pathRoot);
    FileInputStream fIn = new FileInputStream(myFile);
    BufferedReader myReader = new BufferedReader(new InputStreamReader(fIn));
    String aDataRow = "";
    while ((aDataRow = myReader.readLine()) != null) {
        aBuffer += aDataRow;
    }
    myReader.close();
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
But this doesn't give me the content the file has, just a short string, and weirdly enough it also looks like just reading the file corrupts it.
What could I do so I can achieve the desired behavior?
Image files aren't text - but you're treating the data as textual data. Basically, don't do that. Use the InputStream to load the data into a byte array (or preferably, use Files.readAllBytes(Path) to do it rather more simply).
Keep the binary data as binary data. If you absolutely need a text representation, you'll need to encode it in a way that doesn't lose data - where hex or base64 are the most common ways of doing that.
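A minimal sketch, assuming Java 8's java.util.Base64 is available (on older Android you'd use android.util.Base64 instead):
// Read the image as raw bytes and produce a lossless text
// representation via Base64; encryption should operate on the bytes.
byte[] content = Files.readAllBytes(Paths.get(pathRoot));
String asText = Base64.getEncoder().encodeToString(content);
byte[] roundTripped = Base64.getDecoder().decode(asText); // identical to content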
You mention encryption early in the question: encryption also generally operates on binary data. Any encryption methods which provide text options (e.g. string parameters) are just convenience wrappers which encode the text as binary data and then encrypt it.
"and weirdly enough it also looks like just reading the file corrupts it."
I believe you're mistaken about that. Just reading from a file will not change it in any way. Admittedly you're not using try-with-resources statements, so you could end up keeping the file handle open, potentially preventing another process from reading it - but the content of the file itself won't change.
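For completeness, the try-with-resources form (Java 7+) would look like this:
// The stream is closed automatically, even if an exception is thrown,
// so no file handle is left open.
try (InputStream in = new FileInputStream(pathRoot)) {
    // ... read from 'in' ...
} catch (IOException e) {
    e.printStackTrace();
}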
I'm trying to read a bz2 file using Apache Commons Compress.
The following code works for a small file.
However, for a large file (over 500MB), it ends after reading a few thousand lines, without any error.
try {
    InputStream fin = new FileInputStream("/data/file.bz2");
    BufferedInputStream bis = new BufferedInputStream(fin);
    CompressorInputStream input = new CompressorStreamFactory()
            .createCompressorInputStream(bis);
    BufferedReader br = new BufferedReader(new InputStreamReader(input,
            "UTF-8"));
    String line = "";
    while ((line = br.readLine()) != null) {
        System.out.println(line);
    }
} catch (Exception e) {
    e.printStackTrace();
}
Is there another good way to read a large compressed file?
I was having the same problem with a large file, until I noticed that CompressorStreamFactory has a couple of overloaded constructors that take a boolean decompressUntilEOF parameter.
Simply changing to the following may be all that's missing...
CompressorInputStream input = new CompressorStreamFactory(true)
        .createCompressorInputStream(bis);
Clearly, whoever wrote this factory thought it a better default to create a new compressor input stream at certain points, over the same underlying buffered input stream, so that each new one picks up where the last left off, rather than letting a single stream decompress data all the way to the end of the file. The practical consequence: a bz2 file can consist of several concatenated compressed streams, and by default reading stops at the end of the first one, which is exactly the silent early stop described in the question; passing true tells it to keep decompressing until actual end-of-file. I've no doubt the authors are cleverer than me, and I haven't worked out what trap I'm setting for future me by setting this parameter to true. Maybe someone will tell me in the comments! :-)
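Putting it together, a sketch of the whole read with the flag set, plus try-with-resources (Java 7+):
// Same as the question's code, but the factory is told to keep
// decompressing until end-of-file, and all streams are closed
// automatically.
try (InputStream fin = new FileInputStream("/data/file.bz2");
     BufferedInputStream bis = new BufferedInputStream(fin);
     CompressorInputStream input =
             new CompressorStreamFactory(true).createCompressorInputStream(bis);
     BufferedReader br = new BufferedReader(new InputStreamReader(input, "UTF-8"))) {
    String line;
    while ((line = br.readLine()) != null) {
        System.out.println(line);
    }
} catch (CompressorException | IOException e) {
    e.printStackTrace();
}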
I know this has been asked before, but since I haven't been able to find an answer with a definitive conclusion, or at least one that shows the pros and cons of the possible approaches, I have to ask:
When it comes to reading data from the Internet, a web service for instance, what is the correct or most efficient way to read it?
From all the books I have glanced over, I've found at least 4 ways to read data:
1) Reading a specific number of characters at a time.
In this case the data is read in chunks of 4096 characters:
BufferedReader reader = new BufferedReader(
        new InputStreamReader(in, encoding));
char[] buffer = new char[4096];
StringBuilder sb = new StringBuilder();
int len1 = 0;
while ((len1 = reader.read(buffer)) > 0) {
    // bug: this appends the entire buffer, including stale or blank
    // characters past index len1; it should be sb.append(buffer, 0, len1)
    sb.append(buffer);
}
return sb.toString();
2) Reading the data knowing the content length:
int length = ((HttpURLConnection) urlConnection).getContentLength();
InputStream inputStream = urlConnection.getInputStream();
BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream));
StringBuilder stringBuilder = new StringBuilder(length);
char[] buffer = new char[length];
int charsRead;
while ((charsRead = bufferedReader.read(buffer)) != -1) {
    stringBuilder.append(buffer, 0, charsRead);
}
return stringBuilder.toString();
3) Reading the data line by line:
BufferedReader reader = new BufferedReader(new InputStreamReader(c.getInputStream()));
StringBuilder buf = new StringBuilder();
String line = null;
while ((line = reader.readLine()) != null) {
    buf.append(line); // note: readLine() strips the line terminators
}
return buf.toString();
4) Reading the data character by character:
InputStream in = mConnection.getInputStream();
BufferedReader reader = new BufferedReader(new InputStreamReader(
        in, encoding));
int ch;
StringBuilder sb = new StringBuilder();
while ((ch = reader.read()) != -1) { // read() returns -1 at end of stream
    sb.append((char) ch);
}
return sb.toString().trim();
I have tried three of these four techniques (all except number 3, reading the data line by line), and only the fourth has given me good results.
The first method didn't work for me: when I read large amounts of data, it often cut the data, giving me invalid JSON strings or strings with whitespace at the end.
The second approach: I wasn't able to use that method because getContentLength is not always reliable, and if the value is not set there's nothing we can do about it, which is my case.
I didn't try the third method because I wasn't sure about reading data "line" by "line". Does this apply to data that contains an array of JSON objects, or only to files that indeed contain lines?
That left the last technique as my only remaining choice. I tried it and it worked, BUT I don't think that reading a large amount of data character by character can be efficient at all.
So now I would really appreciate your opinions and ideas. What approach do you use when it comes to reading data from webservices? and more importantly why?
Thanks.
P.S. I know I could've easily used DefaultHttpClient, but the docs clearly discourage doing so:
For Android 2.3 (Gingerbread) and later, HttpURLConnection is the best
choice. Its simple API and small size makes it great fit for Android.
Transparent compression and response caching reduce network use,
improve speed and save battery.
I've tried all the methods you've mentioned. One problem I faced was the reply not being read completely. After some research, the most efficient/fastest way I found was to go about it like this:
DefaultHttpClient client = new DefaultHttpClient();
HttpGet httpGet = new HttpGet(url);
// JSON headers, because the service in question speaks JSON
httpGet.setHeader("Accept", "application/json");
httpGet.setHeader("Content-type", "application/json");
try {
    HttpResponse execute = client.execute(httpGet);
    String responseStr = EntityUtils.toString(execute.getEntity());
} catch (IOException e) {
    e.printStackTrace();
}
responseStr will contain the web service's reply, read in one go. Hope this helps.
If the data volume is not too big, it doesn't really matter what approach you use. If it is, then it makes sense to use buffering and read the data in chunks.
The 2nd approach is not too good, as you cannot always get the Content-Length.
If your data is text/html/JSON, you can use the 3rd approach, as you don't have to bother with the chunk size. Also, you can print the incoming data line by line to aid debugging.
If your data is a binary/base64 stream such as an image, you should use the 1st approach and read the data in 4k blocks (the commonly used size), as sketched below.
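A minimal byte-oriented sketch of that block read, reusing the urlConnection from the question:
// Read 4k blocks and keep the payload as bytes; decode (or base64-
// encode) once at the end only if a text form is actually needed.
InputStream in = urlConnection.getInputStream();
ByteArrayOutputStream out = new ByteArrayOutputStream();
byte[] block = new byte[4096];
int n;
while ((n = in.read(block)) != -1) {
    out.write(block, 0, n);
}
byte[] payload = out.toByteArray();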
UPDATE:
BTW, instead of the dreaded DefaultHttpClient I'm using AndroidHttpClient as a singleton, and it works smoothly :)
It matters. Best for performance is to read from the InputStream into a buffer of a reasonable size. This way you transfer a decent amount of data at a time, rather than repeating the same operation a thousand times. Don't always rely on the Content-Length header value; for gzipped content it might show an incorrect size.
I was curious what the best and FASTEST way is to get a response from the server. Say I used a for loop to load a URL that returns an XML file: which way could I load the URL and get the response 10 times in a row? Speed is the most important thing. I know it can only go as fast as your internet connection, but I need a way to load the URL as fast as my connection will allow, and then put the whole output of the URL in a String so I can append it to a JTextArea. This is the code I've been using, but I'm seeking faster alternatives if possible:
int times = Integer.parseInt(jTextField3.getText());
for (int abc = 0; abc != times; abc++) {
    try {
        URL gameHeader = new URL(jTextField2.getText());
        InputStream in = gameHeader.openStream();
        byte[] buffer = new byte[1024];
        try {
            for (int cwb; (cwb = in.read(buffer)) != -1;) {
                jTextArea1.append(new String(buffer, 0, cwb));
            }
        } catch (IOException e) {}
    } catch (MalformedURLException e) {} catch (IOException e) {}
}
Is there anything that would be faster than this?
Thanks
-CLUEL3SS
This seems like a job for Java NIO (non-blocking I/O). This article is from Java 1.4, but it will still give you a good understanding of how to set up NIO. NIO has evolved a lot since then, so you may need to look at the API docs for Java 6 or Java 7 to find out what's new.
This solution is probably best as an async option: basically, it lets you start loading all 10 URLs without waiting for each one to complete before moving on and loading another.
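This isn't NIO, but here's a sketch of the same async idea with a plain thread pool, so all downloads run concurrently instead of back to back; readUrl(...) is a hypothetical helper that performs one buffered download and returns the body as a String:
ExecutorService pool = Executors.newFixedThreadPool(4);
List<Future<String>> pending = new ArrayList<Future<String>>();
final String target = jTextField2.getText();
for (int i = 0; i < times; i++) {
    pending.add(pool.submit(new Callable<String>() {
        public String call() throws Exception {
            return readUrl(target); // hypothetical single-download helper
        }
    }));
}
try {
    for (Future<String> f : pending) {
        jTextArea1.append(f.get()); // blocks until that download is done
    }
} catch (Exception e) {
    e.printStackTrace();
}
pool.shutdown();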
You can't load text this way, as the 1024-byte boundary could break a multi-byte encoded character in two.
Copy all the data to a ByteArrayOutputStream and call toString() on it, or read text as text using a BufferedReader.
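A sketch of that fix applied to the question's loop, assuming the response is UTF-8:
// Accumulate raw bytes first and decode once at the end, so a
// multi-byte character split across two read() calls can't be
// corrupted.
ByteArrayOutputStream bos = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
for (int cwb; (cwb = in.read(buffer)) != -1; ) {
    bos.write(buffer, 0, cwb);
}
jTextArea1.append(bos.toString("UTF-8"));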
Use a BufferedReader; use a much larger buffer size than 1024; don't swallow exceptions. You could also try re-using the same URL object instead of creating a new one each time; it might help with connection pooling.
But why would you want to read the same URL 10 times in a row?
I receive gzipped JSON from a web service and then unzip it (the size of the unzipped JSON is 3.2MB).
I need to transform the received InputStream into a String so I can then create a JSONObject and parse it. I do it with this code:
public static String inputStreamToString(InputStream in)
        throws IOException {
    BufferedInputStream bis = new BufferedInputStream(in);
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    int result = bis.read();
    while (result != -1) {
        byte b = (byte) result;
        buf.write(b);
        result = bis.read();
    }
    return buf.toString();
}
I get a java.lang.OutOfMemoryError on the last line ("return buf.toString();") on both the emulator and a device with 288MB of RAM.
What shall i do?
Reading in a byte at a time is so 1990's. Either use HttpClient and BasicResponseHandler, or at least read the data in respectable chunks and append them using a StringBuilder.
Assuming you are still having the problem, the issue is that there is no single block of memory that is big enough for your string, based upon other things your app has been doing. The Android garbage collector is not a compacting collector, so it is possible to have lots of free heap space yet not enough for a specific allocation request.
In that case, you may need to switch to some sort of streaming JSON parser. If you happen to be targeting only Honeycomb and higher, you can use JsonReader. Otherwise, Jackson reportedly works on Android and apparently has a streaming mode.
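A hedged sketch of the Jackson streaming route (assuming Jackson 2.x on the classpath; the "name" field and the handleName() callback are made up):
// Tokens are pulled one at a time, so the 3.2MB document is never
// resident in memory as a whole.
JsonFactory factory = new JsonFactory();
JsonParser parser = factory.createParser(in); // 'in' is the unzipped InputStream
while (parser.nextToken() != null) {
    if (parser.getCurrentToken() == JsonToken.FIELD_NAME
            && "name".equals(parser.getCurrentName())) {
        parser.nextToken();
        handleName(parser.getText()); // hypothetical per-record handler
    }
}
parser.close();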
You can try to create the JSONObject using
new JSONObject(new JSONTokener(in))
instead of converting in to a String first. However, this will probably only delay the problem: if you don't have enough memory to load a 3.2MB string, you probably won't have enough memory to load it as a JSON object either, which takes more memory than the plain string.