Reading a single file using 3 threads - Java

I want three threads reading a single file. For example, if the file is 900 KB, I want the first thread to read from 1 KB to 300 KB, the second thread from 301 KB to 600 KB, and the third thread from 601 KB to 900 KB. Does this approach make reading faster? The output has to be shown on the console; it does not matter to me if the output gets mixed up. What matters is how to read faster compared to a single thread. Please give me a suggestion or code if you have any.

I'm not a Java expert, but I believe that if your goal is performance, you should not bother reading a single megabyte-sized file in several threads. Most of the time is probably spent doing the actual I/O, that is, reading from the disk (recall that disk operations are millions of times slower than memory operations). Of course, on some occasions it could be faster (e.g. on a Linux system, the file data could have been cached if it has been read or written some time before).
But when reading (rather small, i.e. megabyte-sized) files, most of the time is spent in the system, and your code won't change that.
And reading a megabyte file should be fast on today's machines. You might use some dirty system tricks to improve it (e.g. the Linux readahead system call), if absolutely necessary.
Actually, your question surprises me. Reading one megabyte is quick today!
Regards.
Basile Starynkevitch

public static void main(String[] args) throws IOException {
    String filePath = args[0];
    // Create runnable objects
    // Load the file
    BufferedReader br = new BufferedReader(new FileReader(filePath));
    // Share this reader among as many threads as you want
    MyFileReader mf1 = new MyFileReader(br);
    MyFileReader mf2 = new MyFileReader(br);
    MyFileReader mf3 = new MyFileReader(br);
    new Thread(mf1).start();
    new Thread(mf2).start();
    new Thread(mf3).start();
    // code to detect thread ends (e.g. Thread.join())
    // close br here
}

public class MyFileReader implements Runnable {
    private BufferedReader br = null;

    public MyFileReader(BufferedReader br) {
        this.br = br;
    }

    public void run() {
        String line = null;
        try {
            while ((line = br.readLine()) != null) {
                // do your thing here, e.g.
                System.out.println(line);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
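Note that the code above shares one BufferedReader, so the three threads split the lines between them rather than each reading a fixed byte range. If you really want each thread to own a byte range, as described in the question, here is a minimal sketch of that idea using FileChannel positional reads (the class name and buffer size are just for illustration; as noted above, this is unlikely to beat a single-threaded read when the disk is the bottleneck):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

class RangeReader implements Runnable {
    private final String path;
    private final long start;
    private final long end; // exclusive

    RangeReader(String path, long start, long end) {
        this.path = path;
        this.start = start;
        this.end = end;
    }

    public void run() {
        // Each thread opens its own channel, so there is no shared file position to fight over.
        try (FileChannel ch = FileChannel.open(Paths.get(path), StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(8192);
            long pos = start;
            while (pos < end) {
                buf.clear();
                int n = ch.read(buf, pos); // positional read, independent of the other threads
                if (n < 0) break;
                pos += n;
                buf.flip();
                // decode/print buf here; output from the three threads may interleave
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws IOException {
        String path = args[0];
        long size = Files.size(Paths.get(path));
        long third = size / 3;
        new Thread(new RangeReader(path, 0, third)).start();
        new Thread(new RangeReader(path, third, 2 * third)).start();
        new Thread(new RangeReader(path, 2 * third, size)).start();
    }
}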

Related

Performance : BufferedOutputStream vs FileOutputStream in Java

I have read that the BufferedOutputStream class improves efficiency and must be used with FileOutputStream in this way -
BufferedOutputStream bout = new BufferedOutputStream(new FileOutputStream("myfile.txt"));
and for writing to the same file, the statement below also works -
FileOutputStream fout = new FileOutputStream("myfile.txt");
But the recommended way is to use a buffer for reading/writing operations, and that's the reason I too prefer to use a buffer for the same.
But my question is how to measure the performance of the above 2 statements. Is there any tool, or something of that kind (I don't know exactly what), that would be useful to analyse the performance?
As someone new to the Java language, I am very curious to know about it.
Buffering is only helpful if you are doing inefficient reading or writing. For reading, it's helpful for letting you read line by line, even when you could gobble up bytes/chars faster just using read(byte[]) or read(char[]). For writing, it allows you to batch up pieces of what you want to send through I/O in the buffer and send them only on flush (see, for example, the autoFlush flag on PrintWriter).
But if you are just trying to read or write as fast as you can, buffering doesn't improve performance.
For an example of efficient reading from a file:
File f = ...;
FileInputStream in = new FileInputStream(f);
byte[] bytes = new byte[(int) f.length()]; // file.length needs to be less than 4 gigs :)
in.read(bytes); // this isn't guaranteed by the API but I've found it works in every situation I've tried
Versus inefficient reading:
File f = ...;
BufferedReader in = new BufferedReader(new FileReader(f));
String line = null;
while ((line = in.readLine()) != null) {
    // If every readLine call were reading directly from the FS / hard drive,
    // it would slow things down tremendously. That's why having a buffer
    // capture the file contents and effectively reading from the buffer is
    // more efficient.
}
These numbers came from a MacBook Pro laptop using an SSD.
BufferedFileStreamArrayBatchRead (809716.60-911577.03 bytes/ms)
BufferedFileStreamPerByte (136072.94 bytes/ms)
FileInputStreamArrayBatchRead (121817.52-1022494.89 bytes/ms)
FileInputStreamByteBufferRead (118287.20-1094091.90 bytes/ms)
FileInputStreamDirectByteBufferRead (130701.87-956937.80 bytes/ms)
FileInputStreamReadPerByte (1155.47 bytes/ms)
RandomAccessFileArrayBatchRead (120670.93-786782.06 bytes/ms)
RandomAccessFileReadPerByte (1171.73 bytes/ms)
Where there is a range in the numbers, it varies based on the size of the buffer being used. A larger buffer results in more speed up to a point, typically somewhere around the size of the caches within the hardware and operating system.
As you can see, reading bytes individually is always slow. Batching the reads into chunks is easily the way to go. It can be the difference between 1k per ms and 136k per ms (or more).
These numbers are a little old, and they will vary wildly by setup but they will give you an idea. The code for generating the numbers can be found here, edit Main.java to select the tests that you want to run.
An excellent (and more rigorous) framework for writing benchmarks is JMH. A tutorial for learning how to use JMH can be found here.
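For a concrete starting point, here is a rough sketch (not taken from the linked tutorial) of a minimal JMH benchmark comparing a plain FileInputStream against a BufferedInputStream; the file path is a placeholder, and the byte-at-a-time loop is chosen deliberately so the effect of buffering shows up:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class StreamBenchmark {

    private static final String PATH = "myfile.txt"; // placeholder test file

    @Benchmark
    public long plainFileInputStream() throws IOException {
        try (InputStream in = new FileInputStream(PATH)) {
            long total = 0;
            while (in.read() != -1) total++; // every read() is a separate call down to the OS
            return total;                    // return a value so JMH cannot dead-code the loop
        }
    }

    @Benchmark
    public long bufferedFileInputStream() throws IOException {
        try (InputStream in = new BufferedInputStream(new FileInputStream(PATH))) {
            long total = 0;
            while (in.read() != -1) total++; // most reads are served from the in-memory buffer
            return total;
        }
    }
}

Run it with the JMH Maven plugin or a JMH Runner main class; JMH takes care of warmup, forking and iterations, which naive System.currentTimeMillis() timing does not.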

Buffer reader using input stream gives Out of Memory Error Android Java

I am getting an error while reading a stream. The stream also contains images along with string data. My code is:
static String convertStreamToString(java.io.InputStream is) {
    Reader reader = new InputStreamReader(is);
    BufferedReader r = new BufferedReader(reader);
    StringBuilder total = new StringBuilder();
    String line = null;
    try {
        while ((line = r.readLine()) != null) {
            total.append(line);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    System.out.println(total);
    return total.toString();
}
In the while loop, the heap memory keeps increasing, and at last it gives an "out of memory" error.
Hoping for your suggestions.
You are simply reading your whole file and storing it in the StringBuilder. Your file is just too big to fit in RAM, which results in an OutOfMemory error when the JVM is unable to allocate any more space for your StringBuilder.
Either find a way not to store the whole file in memory or use a smaller file (the former solution clearly being the better one).
Also, note that you are not closing your BufferedReader after use. The correct way to do this is to declare your resources using try-with-resources, for example:
try (Reader reader = new InputStreamReader(is);
     BufferedReader r = new BufferedReader(reader)) {
    // your code here
}
You do need to close your stream, as others have noted. But you have a couple of other things to think about:
What do you mean when you say that the stream also contains images along with string data? You're not doing anything to deal with this. You're reading it all as Strings. If the file really does contain images and strings, then it's quite likely that the images are much bigger than the strings; and if you just want the strings, then you need to find a way to filter out the images. That will probably solve your out-of-memory problem, and also prevent you from ending up with nonsense in your output. Unfortunately, you haven't given us enough information to help with the detail of this: it will depend on the format of the file/stream.
If the file is huge, and contains more string data than you can store in memory, then no amount of careful closing of streams and the like will help. You'll need to process the file as you go, rather than storing it in RAM. But how you do this will rather depend on what you're trying to do with the data.
I'd start by attacking the first problem if I were you. At the moment, it sounds as though even if you solve the memory problem, you'll still end up with nonsensical output because the images will get decoded as Strings.

How many filereaders can concurrently read from the same file?

I have a massive 25GB CSV file. I know that there are ~500 Million records in the file.
I want to do some basic analysis with the data. Nothing too fancy.
I don't want to use Hadoop/Pig, not yet at least.
I have written a Java program to do my analysis concurrently. Here is what I am doing.
class MainClass {
    public static void main(String[] args) throws Exception {
        long start = 1;
        long increment = 10000000;
        OpenFileAndDoStuff[] a = new OpenFileAndDoStuff[50];
        for (int i = 0; i < 50; i++) {
            a[i] = new OpenFileAndDoStuff("path/to/50GB/file.csv", start, start + increment - 1);
            a[i].start();
            start += increment;
        }
        for (OpenFileAndDoStuff obj : a) {
            obj.join();
        }
        // do aggregation
    }
}

class OpenFileAndDoStuff extends Thread {
    volatile HashMap<Integer, Integer> stuff = new HashMap<>();
    BufferedReader _br;
    long _end;

    OpenFileAndDoStuff(String filename, long startline, long endline) throws IOException, FileNotFoundException {
        _br = new BufferedReader(new FileReader(filename));
        long counter = 0;
        // move the BufferedReader to the startline specified
        while (counter++ < startline)
            _br.readLine();
        this._end = endline;
    }

    void doStuff() {
        // read from the BufferedReader until end of file or until the specified endline is reached, and do stuff
    }

    public void run() {
        doStuff();
    }

    public HashMap<Integer, Integer> getStuff() {
        return stuff;
    }
}
I thought that by doing this I could open 50 BufferedReaders, all reading 10 million line chunks in parallel, and once all of them are done doing their stuff, I'd aggregate the results.
But the problem I face is that even though I ask 50 threads to start, only two seem to read from the file at a time.
Is there a way I can make all 50 of them open the file and read from it at the same time? Why am I limited to only two readers at a time?
The file is on a Windows 8 machine and Java is also on the same machine.
Any ideas?
Here is a similar post: Concurrent reading of a File (java preffered)
The most important question here is: what is the bottleneck in your case?
If the bottleneck is your disk IO, then there isn't much you can do on the software side. Parallelizing the computation will only make things worse, because reading the file from different parts simultaneously will degrade disk performance.
If the bottleneck is processing power, and you have multiple CPU cores, then you can take advantage of starting multiple threads to work on different parts of the file. You can safely create several InputStreams or Readers to read different parts of the file in parallel (as long as you don't go over your operating system's limit for the number of open files). You could separate the work into tasks and run them in parallel.
See the referred post for an example that reads a single file in parallel with FileInputStream, which should be significantly faster than using BufferedReader according to these benchmarks: http://nadeausoftware.com/articles/2008/02/java_tip_how_read_files_quickly#FileReaderandBufferedReader
One issue I see is that when a Thread is being asked to read, for example, lines 80000000 through 90000000, you are still reading in the first 80000000 lines (and ignoring them).
Maybe try java.io.RandomAccessFile.
In order to do this, you need all of the lines to be the same number of bytes. If you cannot adjust the structure of your file, then this would not be an option. But if you can, this should allow for greater concurrency.
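A minimal sketch of that idea, assuming every record (line plus terminator) is exactly RECORD_LEN bytes; the constant and class name are made up for illustration:

import java.io.IOException;
import java.io.RandomAccessFile;

class FixedRecordReader implements Runnable {
    private static final int RECORD_LEN = 128; // hypothetical fixed line length in bytes
    private final String path;
    private final long firstLine;
    private final long lastLine;

    FixedRecordReader(String path, long firstLine, long lastLine) {
        this.path = path;
        this.firstLine = firstLine;
        this.lastLine = lastLine;
    }

    public void run() {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            raf.seek(firstLine * RECORD_LEN);  // jump straight to the first record instead of skipping line by line
            byte[] record = new byte[RECORD_LEN];
            for (long i = firstLine; i <= lastLine; i++) {
                raf.readFully(record);         // read exactly one record
                // parse the record and update the per-thread HashMap here
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}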

Java OutOfMemoryError in reading a large text file

I'm new to Java and working on reading very large files; I need some help to understand the problem and solve it. We have got some legacy code which has to be optimized to make it run properly. The file size can vary from 10 MB to 10 GB. The trouble only starts when the file grows beyond 800 MB.
InputStream inFileReader = channelSFtp.get(path); // file reading over SSH
byte[] localbuffer = new byte[2048];
ByteArrayOutputStream bArrStream = new ByteArrayOutputStream();

int i = 0;
while (-1 != (i = inFileReader.read(localbuffer))) {
    bArrStream.write(localbuffer, 0, i);
}

byte[] data = bArrStream.toByteArray();

inFileReader.close();
bArrStream.close();
We are getting the error
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2271)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
Any help would be appreciated.
Try to use java.nio.MappedByteBuffer.
http://docs.oracle.com/javase/7/docs/api/java/nio/MappedByteBuffer.html
You can map a file's content onto memory without copying it manually. High-level operating systems offer memory mapping, and Java has an API to utilize the feature.
If my understanding is correct, memory mapping does not load a file's entire content into memory (it is loaded and unloaded partially, as necessary), so I guess a 10 GB file won't eat up your memory.
Even though you can increase the JVM memory limit, it is needless, and allocating a huge amount of memory like 10 GB to process a file sounds like overkill and is resource intensive.
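As an illustration only (the file name and window size are made up): a single MappedByteBuffer is limited to roughly 2 GB, so a 10 GB file would be mapped in windows, for example:

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MappedRead {
    public static void main(String[] args) throws IOException {
        final long WINDOW = 256L * 1024 * 1024; // map 256 MB at a time
        try (FileChannel ch = FileChannel.open(Paths.get("bigfile.dat"), StandardOpenOption.READ)) {
            long size = ch.size();
            for (long pos = 0; pos < size; pos += WINDOW) {
                long len = Math.min(WINDOW, size - pos);
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, pos, len);
                while (buf.hasRemaining()) {
                    byte b = buf.get(); // pages are brought in lazily by the OS as you touch them
                    // process b ...
                }
            }
        }
    }
}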
Currently you are using a ByteArrayOutputStream, which keeps an internal buffer to hold the data. This line in your code keeps appending the last-read 2 KB file chunk to the end of this buffer:
bArrStream.write(localbuffer, 0, i);
bArrStream keeps growing and eventually you run out of memory.
Instead you should reorganize your algorithm and process the file in a streaming way:
InputStream inFileReader = channelSFtp.get(path); // file reading over SSH
byte[] localbuffer = new byte[2048];

int i = 0;
while (-1 != (i = inFileReader.read(localbuffer))) {
    // Deal with the current 2 KB chunk here
}

inFileReader.close();
The Java virtual machine (JVM) runs with a fixed upper memory limit, which you can modify thus:
java -Xmx1024m ....
e.g. the above option (-Xmx...) sets the limit to 1024 megabytes. You can amend it as necessary (within the limits of your machine, OS, etc.). Note that this is different from traditional applications, which would allocate more and more memory from the OS on demand.
However a better solution is to rework your application such that you don't need to load the whole file into memory at one go. That way you don't have to tune your JVM, and you don't impose a huge memory footprint.
You can't read a 10 GB text file into memory. You have to read X MB first, do something with it, and then read the next X MB.
The problem is inherent in what you're doing. Reading entire files into memory is always and everywhere a bad idea. You're really not going to be able to read a 10GB file into memory with current technology unless you have some pretty startling hardware. Find a way to process them line by line, record by record, chunk by chunk, ...
Is it mandatory to get the entire byte array from the output stream?
byte[] data = bArrStream.toByteArray();
The best approach is to read line by line and write line by line. You can use BufferedReader or Scanner to read large files, as below.
import java.io.*;
import java.util.*;

public class FileReadExample {
    public static void main(String args[]) throws FileNotFoundException {
        File fileObj = new File(args[0]);

        long t1 = System.currentTimeMillis();
        try (
            // BufferedReader object for reading the file
            BufferedReader br = new BufferedReader(new FileReader(fileObj));) {
            // Reading each line of the file using the BufferedReader class
            String str;
            while ((str = br.readLine()) != null) {
                System.out.println(str);
            }
        } catch (Exception err) {
            err.printStackTrace();
        }
        long t2 = System.currentTimeMillis();
        System.out.println("Time taken for BufferedReader:" + (t2 - t1));

        t1 = System.currentTimeMillis();
        try (
            // Scanner object for reading the file
            Scanner scnr = new Scanner(fileObj);) {
            // Reading each line of the file using the Scanner class
            while (scnr.hasNextLine()) {
                String strLine = scnr.nextLine();
                // print data on console
                System.out.println(strLine);
            }
        }
        t2 = System.currentTimeMillis();
        System.out.println("Time taken for Scanner:" + (t2 - t1));
    }
}
You can replace System.out with your ByteArrayOutputStream in the above example.
Please have a look at the article below for more details: Read Large File
Have a look at related SE question:
Scanner vs. BufferedReader
ByteArrayOutputStream writes to an in-memory buffer. If this is really how you want it to work, then you have to size the JVM heap according to the maximum possible size of the input. Also, if possible, you can check the input size before you even start processing, to save time and resources.
The alternative approach is a streaming solution, where the amount of memory used at runtime is known (maybe configurable, but still known before the program starts); whether it's feasible or not depends entirely on your application's domain (because you can't use an in-memory buffer anymore), and maybe on the architecture of the rest of your code if you can't/don't want to change it.
Try using a larger read buffer, maybe 10 MB, and then check.
Read the file iteratively, line by line. That would significantly reduce memory consumption. Alternatively you may use
FileUtils.lineIterator(theFile, "UTF-8");
provided by Apache Commons IO.
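A minimal sketch of the Commons IO approach (theFile stands for your input file); only one line is held in memory at a time:

File theFile = new File("path/to/large.txt");
LineIterator it = FileUtils.lineIterator(theFile, "UTF-8");
try {
    while (it.hasNext()) {
        String line = it.nextLine();
        // process the line here
    }
} finally {
    LineIterator.closeQuietly(it);
}

(FileUtils and LineIterator come from org.apache.commons.io.)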
FileInputStream inputStream = null;
Scanner sc = null;
try {
    inputStream = new FileInputStream(path);
    sc = new Scanner(inputStream, "UTF-8");
    while (sc.hasNextLine()) {
        String line = sc.nextLine();
        // System.out.println(line);
    }
    // note that Scanner suppresses exceptions
    if (sc.ioException() != null) {
        throw sc.ioException();
    }
} finally {
    if (inputStream != null) {
        inputStream.close();
    }
    if (sc != null) {
        sc.close();
    }
}
Run Java with the command-line option -Xmx, which sets the maximum size of the heap.
See here for details.
Assuming that you are reading a large text file and the data is laid out line by line, use a line-by-line reading approach. As far as I know, you can read up to 6 GB, maybe more.
...
// Open the file
FileInputStream fstream = new FileInputStream("textfile.txt");
BufferedReader br = new BufferedReader(new InputStreamReader(fstream));

String strLine;

// Read the file line by line
while ((strLine = br.readLine()) != null) {
    // Print the content on the console
    System.out.println(strLine);
}

// Close the input stream
br.close();
Reference for the code fragment
Short answer:
Without doing anything, you can push the current limit by a factor of 1.5. That means that if you are able to process 800 MB, you can process 1200 MB. It also means that if by some trick with java -Xmx... you can move to a point where your current code can process 7 GB, your problem is solved, because the 1.5 factor will take you to 10.5 GB, assuming you have that space available on your system and that the JVM can get it.
Long answer:
The error is pretty self-descriptive. You hit the practical memory limit of your configuration. There is a lot of speculation about the limit that you can have with the JVM; I do not know enough about that, since I cannot find any official information. However, you will somehow be limited by constraints like the available swap, kernel address space usage, memory fragmentation, etc.
What is happening now is that ByteArrayOutputStream objects are created with a default buffer of size 32 if you do not supply any size (which is your case). Whenever you call the write method on the object, internal machinery kicks in. The OpenJDK implementation release 7u40-b43, which seems to match the output of your error perfectly, uses an internal method ensureCapacity to check that the buffer has enough room for the bytes you want to write. If there is not enough room, another internal method grow is called to grow the size of the buffer. The method grow determines the appropriate size and calls the method copyOf from the class Arrays to do the job.
The appropriate size of the buffer is the maximum between the current size and the size required to hold all the content (the present content and the new content to be written).
The method copyOf from the class Arrays (follow the link) allocates the space for the new buffer, copies the content of the old buffer to the new one and returns it to grow.
Your problem occurs at the allocation of the space for the new buffer. After some writes, you get to a point where the available memory is exhausted: java.lang.OutOfMemoryError: Java heap space.
If we look into the details, you are reading by chunks of 2048 bytes. So:
your first write grows the buffer from 32 to 2048
your second call will double it to 2*2048
your third call will take it to 2^2*2048; you have time to write two more times before needing to allocate again
then 2^3*2048; you will have time for 4 more writes before allocating again
at some point, your buffer will be of size 2^18*2048, which is 2^19*1024 or 2^9*2^20 (512 MB)
then 2^19*2048, which is 1024 MB or 1 GB
Something that is unclear in your description is that you can somehow read up to 800 MB but cannot go beyond. You have to explain that to me.
I expect your limit to be exactly a power of 2 (or close, if we use power-of-10 units somewhere). In that regard, I expect you to start having trouble immediately above one of these: 256 MB, 512 MB, 1 GB, 2 GB, etc.
When you hit that limit, it does not mean that you are out of memory; it simply means that it is not possible to allocate another buffer of twice the size of the buffer you already have. This observation opens room for improvement in your work: find the maximum size of buffer that you can allocate and reserve it upfront by calling the appropriate constructor:
ByteArrayOutputStream bArrStream = new ByteArrayOutputStream(myMaxSize);
This has the advantage of reducing the background memory allocation that happens under the hood to keep you happy. By doing this, you will be able to go to 1.5 times the limit you have right now. This is simply because the last time the buffer was increased, it went from half the current size to the current size, and at some point you had both the current buffer and the old one together in memory. But you will not be able to go beyond 3 times the limit you are having now; the explanation is exactly the same.
That being said, I do not have any magic suggestion to solve the problem, apart from processing your data in chunks of a given size, one chunk at a time. Another good approach is to follow the suggestion of Takahiko Kawasaki and use a MappedByteBuffer. Keep in mind that in any case you will need at least 10 GB of physical memory or swap to be able to load a 10 GB file.
After thinking about it, I decided to put up a second answer. I considered the advantages and disadvantages of doing so, and the advantages are worth it. So here it is.
Most of the suggested considerations are forgetting a given fact: there is a built-in limit on the size of arrays (including ByteArrayOutputStream) that you can have in Java, and that limit is dictated by the biggest int value, which is 2^31 - 1 (a little less than 2 giga). This means that you can only read a maximum of 2 GB (minus 1 byte) and put it in a single ByteArrayOutputStream. The limit might actually be smaller for array sizes if the VM wants more control.
My suggestion is to use an ArrayList of byte[] instead of a single byte[] holding the full content of the file, and also to remove the unnecessary step of putting the data in a ByteArrayOutputStream before putting it in a final data array. Here is an example based on your original code:
InputStream inFileReader = channelSFtp.get(path); // file reading over SSH

// good habits are good, define a buffer size
final int BUF_SIZE = (int) (Math.pow(2, 30)); // 1 GB, let's not go close to the limit

List<byte[]> data = new ArrayList<>();
byte[] localbuffer = new byte[BUF_SIZE];

int i = 0;
while (-1 != (i = inFileReader.read(localbuffer))) {
    if (i < BUF_SIZE) {
        data.add(Arrays.copyOf(localbuffer, i));
        // No need to reallocate the reading buffer, we copied the data
    } else {
        data.add(localbuffer);
        // reallocate the reading buffer
        localbuffer = new byte[BUF_SIZE];
    }
}

inFileReader.close();

// Process your data, keeping in mind that you have a list of buffers,
// so you need to loop over the list.
Simply running your program should work fine on a 64-bit system with enough physical memory or swap. Now if you want to speed it up and help the VM size the heap correctly at the beginning, run with the options -Xms and -Xmx. For example, if you want a heap of 12 GB to be able to handle a 10 GB file, use java -Xms12288m -Xmx12288m YourApp

How to deal with reading and processing huge text files without getting OutofMemoryError

I wrote some straightforward code to read text files (>1 GB) and do some processing on Strings.
However, I have to deal with Java heap space problems, since I try to append Strings (using StringBuilder) that get too big in memory usage at some point. I know that I can increase my heap space with, e.g., '-Xmx1024', but I would like to work with only a little memory usage here. How could I change my code below to manage my operations?
I am still a Java novice and maybe I made some mistakes in my code which may seem obvious to you.
Here's the code snippet:
private void setInputData() {
    Pattern pat = Pattern.compile("regex");
    BufferedReader br = null;
    Matcher mat = null;

    try {
        File myFile = new File("myFile");
        FileReader fr = new FileReader(myFile);
        br = new BufferedReader(fr);

        String line = null;
        String appendThisString = null;
        String processThisString = null;
        StringBuilder stringBuilder = new StringBuilder();

        while ((line = br.readLine()) != null) {
            mat = pat.matcher(line);
            if (mat.find()) {
                appendThisString = mat.group(1);
            }
            if (line.contains("|")) {
                processThisString = line.replace(" ", "").replace("|", "\t");
                stringBuilder.append(processThisString).append("\t").append(appendThisString);
                stringBuilder.append("\n");
            }
        }
        // doSomethingWithTheString(stringBuilder.toString());
    } catch (Exception ex) {
        ex.printStackTrace();
    } finally {
        try {
            if (br != null) br.close();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
Here's the error message:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2367)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
at java.lang.StringBuilder.append(StringBuilder.java:132)
at Test.setInputData(Test.java:47)
at Test.go(Test.java:18)
at Test.main(Test.java:13)
You could do a dry run, without appending, but counting the total string length.
If doSomethingWithTheString is sequential there would be other solutions.
You could tokenize the string, reducing its size. For instance, Huffman compression looks for already-present sequences, reading a char, possibly extending the table, and then yielding a table index. (The open source OmegaT translation tool uses such a strategy in one spot for tokens.) So it depends on the processing you want to do. Since this looks like reading a kind of CSV, a dictionary seems feasible.
In general I would use a database.
P.S. You can save half the memory by writing everything to a file and then rereading the file into one string. Or use a java.nio ByteBuffer on the file, i.e. a memory-mapped file.
You can't use StringBuilder in this case. It holds data in memory.
I think you should consider writing the result to a file line by line,
i.e. use a FileWriter instead of a StringBuilder.
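A minimal sketch of that idea, keeping the per-line transformation from the question but streaming each result straight to an output file instead of accumulating it (the output file name is made up for illustration):

Pattern pat = Pattern.compile("regex");
try (BufferedReader br = new BufferedReader(new FileReader("myFile"));
     BufferedWriter out = new BufferedWriter(new FileWriter("myFile.out"))) {
    String line;
    String appendThisString = null;
    while ((line = br.readLine()) != null) {
        Matcher mat = pat.matcher(line);
        if (mat.find()) {
            appendThisString = mat.group(1);
        }
        if (line.contains("|")) {
            // same transformation as before, but written out immediately
            out.write(line.replace(" ", "").replace("|", "\t") + "\t" + appendThisString);
            out.newLine();
        }
    }
} catch (IOException ex) {
    ex.printStackTrace();
}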
The method doSomethingWithTheString() should probably be changed so that it accepts an InputStream as well. While reading the original file content and transforming it line by line, you should write the transformed content to a temporary file line by line. Then an input stream to that temporary file could be sent to the doSomethingWithTheString() method. Probably the method needs to be renamed doSomethingWithInputStream().
From your example it is not clear what you are going to do with your enormous string once you have modified it. However, since your modifications do not appear to span multiple lines, I'd just write the modified data to a new file.
In order to do that, create and open a new FileWriter object before your while loop, move your StringBuilder declaration to the beginning of the loop, and write the StringBuilder to your new file at the end of the loop.
If, on the other hand, you do need to combine data coming from different lines, consider using a database. Which kind depends on the nature of your data. If it has a record-like organization, you might adopt a relational database, such as Apache Derby or MySQL; otherwise you might check out so-called NoSQL databases, such as Cassandra or MongoDB.
The general strategy is to design your application so that it doesn't need to hold the entire file (or too large a proportion of it) in memory.
Depending on what your application does:
You could write the intermediate data to a file and read it back again a line at a time to process it.
You could pass each line read to the processing algorithm; e.g. by calling doSomethingWithTheString(...) on each line individually rather than all of them.
But if you need to have the entire file in memory, you are between a rock and a hard place.
The other thing to note is that using a StringBuilder like that may require up to 6 times as much memory as the file size. It goes like this.
When the StringBuilder needs to expand its internal buffer, it does this by making a char array twice the size of the current buffer and copying from the old to the new. At that point you have 3 times as much buffer space allocated as you had before the expansion started. Now suppose that there was just one more character to append to the buffer.
If the file is in ASCII (or another 8-bit charset), the StringBuilder's buffer needs twice that amount of memory, because it consists of char rather than byte values.
If you have a good estimate of the number of characters that will be in the final string (e.g. from the file size), you can avoid the x3 multiplier by giving a capacity hint when you create the StringBuilder. However, you mustn't underestimate, 'cos if you underestimate just slightly ...
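For example (a sketch only; the cast assumes the file is comfortably smaller than Integer.MAX_VALUE characters):

File myFile = new File("myFile");
// reserve roughly one char per input byte up front so the builder never has to double
StringBuilder stringBuilder = new StringBuilder((int) myFile.length());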
You could also use a byte-oriented buffer (e.g. a ByteArrayOutputStream) instead of a StringBuilder ... and then read it with a ByteArrayInputStream / StreamReader / BufferedReader pipeline.
But ultimately, holding a large file in memory doesn't scale as the file size increases.
Are you sure there is a line terminator in the file? If not, your while loop will just keep looping and lead to your error. If so, it might be worth trying to read a fixed number of bytes at a time so that the reader won't grow indefinitely.
I suggest the use of Guava's FileBackedOutputStream. You gain the advantage of having an OutputStream that will use disk I/O instead of main memory. Of course access will be slower due to the disk I/O, but if you are dealing with such a large stream, and you are unable to chunk it into a more manageable size, it is a good option.
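A rough sketch of how that might look (the 1 MB threshold is an arbitrary choice, and asByteSource() assumes a reasonably recent Guava version):

import com.google.common.io.FileBackedOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class SpillExample {
    public static void main(String[] args) throws IOException {
        // keep up to 1 MB on the heap, spill anything beyond that to a temp file
        FileBackedOutputStream out = new FileBackedOutputStream(1024 * 1024);
        try {
            for (int i = 0; i < 10_000_000; i++) {
                out.write('x'); // write as much as you like without growing the heap
            }
            out.close();
            // read it back as a stream without ever holding it all in memory
            try (InputStream in = out.asByteSource().openStream()) {
                long count = 0;
                while (in.read() != -1) count++;
                System.out.println("bytes: " + count);
            }
        } finally {
            out.reset(); // frees the buffer and deletes the backing temp file
        }
    }
}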
