IO slow on Android? [closed] - java

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 8 years ago.
This should show the internal and external memory speeds, but unfortunately it reports only 0.02 or 0.04 MB/s. Is this an enormous inefficiency in Android, or a coding error?
findViewById(R.id.buttonStorageSpeed).setOnClickListener(new OnClickListener() {
    @Override
    public void onClick(View arg0) {
        double SDSpeed = MBPSTest(getExternalCacheDir()); // /sdcard
        double MemSpeed = MBPSTest(getCacheDir());        // /data/data
        final AlertDialog dialog = new AlertDialog.Builder(thisContext)
                .setTitle("Memory speed")
                .setMessage("Internal:" + String.valueOf(MemSpeed) + "\n" + "SD:" + String.valueOf(SDSpeed))
                .create();
        dialog.show();
    }
});
/**
 * Test MB/s write speed in some directory.
 * Writes 4 MB, deletes it afterwards.
 * @param outDir
 * @return write speed in MB/s
 */
private double MBPSTest(File outDir) {
    long start = System.currentTimeMillis();
    try {
        for (int fnum = 0; fnum < 1024; fnum++) {
            File out = new File(outDir, "TESTspeed" + String.valueOf(fnum));
            FileOutputStream fos = new FileOutputStream(out);
            // Write a 4 KB file, one byte at a time
            for (int i = 0; i < 1024; i++) {
                fos.write(65); // A
                fos.write(69); // E
                fos.write(73); // I
                fos.write(79); // O
            }
            fos.flush();
            fos.close();
            //System.out.println("Wrote file.");
        }
        // Wrote 4 MB in total
        //Toast.makeText(getApplicationContext(), "Wrote External at: " + String.valueOf(4.0 / (elapsed/1000.0)) + " MB/S", Toast.LENGTH_LONG).show();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
        return 0;
    } catch (IOException e) {
        e.printStackTrace();
        return 0;
    }
    long elapsed = System.currentTimeMillis() - start;
    // Clean up:
    for (int fnum = 0; fnum < 1024; fnum++) {
        File out = new File(outDir, "TESTspeed" + String.valueOf(fnum));
        out.delete();
    }
    // 4 MB / seconds
    return 4.0 / (elapsed / 1000.0);
}

As well as the overhead Jonathon mentioned of opening and closing the file a lot, you're also calling write(int) for every single byte.
Either use write(byte[]) with a big buffer, or use a BufferedOutputStream to wrap the FileOutputStream. There may be some buffering in FileOutputStream already, but equally there may not be. You may well find that once you've got fewer write operations (but still the same amount of data) it's much faster.
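For comparison, here is a minimal sketch of the same 4 MB test with a single write(byte[]) call per file (the method name and buffer contents are made up for illustration; wrapping the stream in a BufferedOutputStream would help similarly if you keep the one-byte writes):

private double mbpsTestBuffered(File outDir) throws IOException {
    byte[] chunk = new byte[4096];               // one 4 KB buffer, written per file
    java.util.Arrays.fill(chunk, (byte) 'A');
    long start = System.currentTimeMillis();
    for (int fnum = 0; fnum < 1024; fnum++) {
        FileOutputStream fos = new FileOutputStream(new File(outDir, "TESTspeed" + fnum));
        fos.write(chunk);                        // one call instead of 4096
        fos.close();
    }
    long elapsed = System.currentTimeMillis() - start;
    for (int fnum = 0; fnum < 1024; fnum++) {
        new File(outDir, "TESTspeed" + fnum).delete();
    }
    return 4.0 / (elapsed / 1000.0);             // MB written / seconds elapsed
}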

You're introducing a ton of overhead here:
for (int fnum = 0; fnum < 1024; fnum++) {
    File out = new File(outDir, "TESTspeed" + String.valueOf(fnum));
    FileOutputStream fos = new FileOutputStream(out);
    // Write a 4 KB file, one byte at a time
    for (int i = 0; i < 1024; i++) {
        fos.write(65); // A
        fos.write(69); // E
        fos.write(73); // I
        fos.write(79); // O
    }
    fos.flush();
    fos.close();
    //System.out.println("Wrote file.");
}
// Wrote 4 MB in total
//Wrote 4MB
You're opening and closing a file for every 4 KB written (1024 times over the whole test). Instead you should open a single file just once, outside the loop.
This is still far from being a scientific test. You're making a bunch of API calls that aren't going to show the real speed of the device. Also, you might have a bunch of filesystem re-sizing overhead going on.
A better method might be (a rough sketch follows the list):
Open the file
Write the desired size of data
Seek to the beginning of the file
Flush
Start timer
Write desired size of data in as big of a chunk as possible
Stop timer
Cleanup
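A rough sketch of those steps using a RandomAccessFile (the helper name and sizes are illustrative; this is still not a rigorous benchmark):

private double timedWrite(File outDir, int sizeBytes) throws IOException {
    byte[] data = new byte[sizeBytes];
    new java.util.Random().nextBytes(data);
    File out = new File(outDir, "TESTspeed");
    RandomAccessFile raf = new RandomAccessFile(out, "rw");
    try {
        raf.write(data);                // pre-size the file so allocation isn't timed
        raf.seek(0);                    // seek back to the beginning
        raf.getFD().sync();             // flush before starting the clock
        long start = System.nanoTime();
        raf.write(data);                // the timed write, in one big chunk
        raf.getFD().sync();             // make sure the data actually reached storage
        long elapsed = System.nanoTime() - start;
        return (sizeBytes / (1024.0 * 1024.0)) / (elapsed / 1e9); // MB/s
    } finally {
        raf.close();
        out.delete();                   // cleanup
    }
}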

Related

How to read file part by part while writing it?

I'm working on an app that records video, and I need to send the already-written data in the video file to a server as a Base64 string without stopping the recording process. Does anyone know how to do this with less memory consumption?
For now I'm doing it this way:
private void sendNewVideos(String path) {
try {
Log.i(TAG, "VIDEO PATH - " + path);
FileWriter fileWriter = new FileWriter(new File(pathToFolder + "/temp.txt"));
String base64String = new String();
File file = new File(path);
Long size = 0L;
base64String = Base64.encodeToString(readFile(file, size), Base64.DEFAULT);
fileWriter.append(base64String);
fileWriter.flush();
boolean flag = true;
while (flag) {
if (size < file.length()) {
base64String = Base64.encodeToString(readFile(file, size), Base64.DEFAULT);
fileWriter.append(base64String);
fileWriter.flush();
size = file.length();
}
}
fileWriter.close();
} catch (IOException e) {
e.printStackTrace();
}
}
private byte[] readFile(File file, Long size) {
try {
RandomAccessFile randomAccessFile = new RandomAccessFile(file, "r");
randomAccessFile.seek(size);
FileChannel fileChannel = randomAccessFile.getChannel();
ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024 * 2);
while (fileChannel.read(buffer) > 0) {
buffer.flip();
byte[] temp = new byte[buffer.limit()];
for (int i = 0; i < buffer.limit(); i++) {
temp[i] = buffer.get(i);
}
buffer.clear();
return temp;
}
fileChannel.close();
randomAccessFile.close();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
Writing to a file is just there to check how it works. But after some time the recording stops. Sometimes LogCat shows something like this:
I/art: Thread[3,tid=23425,WaitingInMainSignalCatcherLoop,Thread*=0x7fe42c410800,peer=0x22c08080,"Signal Catcher"]: reacting to signal 3
I/art: Wrote stack traces to '/data/anr/traces.txt'
I think that's because of either a memory leak or simply an out-of-memory problem.
Some possible solutions:
Don't use Base64 to encode video for sending over the network (even Wi-Fi), as it increases the amount of data by roughly a third, which is not good for the battery and could kill or hang your process/service.
Avoid reading a file that is still being written to, as it can and will slow down the IO.
If you still need to send data from such a file, use something like the following algorithm (a rough sketch follows the list):
get access to the file (for example with a buffered input stream);
read part of the file into a buffer;
do as little work with it as possible; for example, send the buffer to the server in a separate thread with HttpURLConnection. You can find an example here.
Keep an eye on the memory you use, otherwise the system will try to kill your process.
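A rough sketch of that algorithm: read only the bytes appended since the last pass and hand each chunk to a worker thread for upload. The helper name, chunk size and ExecutorService are illustrative, not a complete implementation:

private long sendNewBytes(File video, long alreadySent, java.util.concurrent.ExecutorService uploader)
        throws IOException {
    RandomAccessFile raf = new RandomAccessFile(video, "r");
    try {
        raf.seek(alreadySent);                   // skip what was already sent
        byte[] buffer = new byte[64 * 1024];     // modest buffer to keep memory low
        int read;
        while ((read = raf.read(buffer)) > 0) {
            final byte[] chunk = java.util.Arrays.copyOf(buffer, read);
            uploader.execute(new Runnable() {    // upload off the recording thread
                @Override
                public void run() {
                    // e.g. POST 'chunk' with HttpURLConnection (omitted here)
                }
            });
            alreadySent += read;
        }
        return alreadySent;                      // pass this back in on the next call
    } finally {
        raf.close();
    }
}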

Reading a >4GB file in Java

I have a mainframe data file which is greater than 4 GB. I need to read and process the data in 500-byte records. I have tried using FileChannel; however, I am getting an error with the message Integer.Max_VALUE exceeded.
public void getFileContent(String fileName) {
RandomAccessFile aFile = null;
FileChannel inChannel = null;
try {
aFile = new RandomAccessFile(Paths.get(fileName).toFile(), "r");
inChannel = aFile.getChannel();
ByteBuffer buffer = ByteBuffer.allocate(500 * 100000);
while (inChannel.read(buffer) > 0) {
buffer.flip();
for (int i = 0; i < buffer.limit(); i++) {
byte[] data = new byte[500];
buffer.get(data);
processData(new String(data));
buffer.clear();
}
}
} catch (Exception ex) {
// TODO
} finally {
try {
inChannel.close();
aFile.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
Can you help me out with a solution?
The worst problem with your code is the
catch (Exception ex) {
// TODO
}
part, which implies that you won’t notice any exceptions thrown by your code. Since there is nothing in the JRE that prints an "Integer.Max_VALUE exceeded" message, that problem must be connected to your processData method.
It might be worth noting that this method will be invoked way too often with repeated data.
Your loop
for (int i = 0; i < buffer.limit(); i++) {
implies that you iterate as many times as there are bytes within the buffer, up to 500 * 100000 times. You are extracting 500 bytes from the buffer in each iteration, processing a total of up to 500 * 500 * 100000 bytes after each read, but since you have a misplaced buffer.clear(); at the end of the loop body, you will never experience a BufferUnderflowException. Instead, you will invoke processData up to 500 * 100000 times, each time with the first 500 bytes of the buffer.
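For reference, a corrected byte-oriented loop would have to step through whole 500-byte records and only compact the buffer once it has been drained, roughly like this (a sketch only):

while (inChannel.read(buffer) > 0) {
    buffer.flip();
    while (buffer.remaining() >= 500) {
        byte[] data = new byte[500];
        buffer.get(data);                  // advances the position by 500 bytes
        processData(new String(data));
    }
    buffer.compact();                      // keep any partial record for the next read
}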
But the whole conversion from bytes to a String is unnecessarily verbose and contains unnecessary copy operations. Instead of implementing this yourself, you can and should just use a Reader.
Besides that, your code makes a strange detour. It starts with a Java 7 API, Paths.get, only to convert the result to a legacy File object and create a legacy RandomAccessFile, just to eventually acquire a FileChannel. If you have a Path and want a FileChannel, you should open it directly via FileChannel.open. And, of course, use a try(…) { … } statement to ensure proper closing.
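That direct route would look roughly like this (a sketch, in case you really do want a FileChannel):

try (FileChannel ch = FileChannel.open(Paths.get(fileName), StandardOpenOption.READ)) {
    ByteBuffer buffer = ByteBuffer.allocate(500 * 100000);
    while (ch.read(buffer) > 0) {
        buffer.flip();
        // ... consume whole 500-byte records here ...
        buffer.compact();
    }
}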
But, as said, if you want to process the contents as Strings, you surely want to use a Reader instead:
public void getFileContent(String fileName) {
try( Reader reader=Files.newBufferedReader(Paths.get(fileName)) ) {
CharBuffer buffer = CharBuffer.allocate(500 * 100000);
while(reader.read(buffer) > 0) {
buffer.flip();
while(buffer.remaining()>500) {
processData(buffer.slice().limit(500).toString());
buffer.position(buffer.position()+500);
}
buffer.compact();
}
// there might be a remaining chunk of less than 500 characters
if(buffer.position()>0) {
processData(buffer.flip().toString());
}
} catch(Exception ex) {
// the *minimum* to do:
ex.printStackTrace();
// TODO real exception handling
}
}
There is no problem with processing files >4GB; I just tested it with an 8 GB file. Note that the code above uses the UTF-8 encoding. If you want to retain the behavior of your original code of using whatever happens to be your system’s default encoding, you may create the Reader using
Files.newBufferedReader(Paths.get(fileName), Charset.defaultCharset())
instead.

Java: How to efficiently create multiple nested zip files?

I am trying to create several zip files in a multi-threaded environment (actually, I tried to write about 400 zip files using a FixedThreadPool ExecutorService serving 16 threads). Each of these zip files may contain thousands of other zip files.
Unfortunately, after about two minutes my Java process (jdk1.8.0_60_x64 on 64-bit Windows) seems to run into a memory leak. While the heap (according to Java Mission Control) only uses about 1 GB (actually between 500 MB and 1 GB), the Java process in total uses about 40 GB of machine memory (a lot of native memory seems to be in use). This number keeps increasing, and after another while the process/my system practically stops working (I do not have that much memory).
After some research I found out it is possible to simulate the behavior using a rather small main method:
public static void main(String[] args) throws Throwable {
for (int k = 0; k < 16; k++) {
new Thread(Integer.toString(k)) {
@Override
public void run() {
try {
long bytes = 0;
ZipOutputStream zos = new ZipOutputStream(new BufferedOutputStream(new FileOutputStream(new File(getName() + ".tmp"))));
zos.setLevel(Deflater.NO_COMPRESSION);
Random rand = new SecureRandom();
for (int i = 0; i < 65535; i++) {
zos.putNextEntry(new ZipEntry("" + i));
ZipOutputStream inner = new ZipOutputStream(zos);
for (int j = 0; j < 10; j++) {
byte[] b = new byte[512];
bytes += b.length;
rand.nextBytes(b);
inner.putNextEntry(new ZipEntry("" + j));
inner.write(b);
inner.closeEntry();
}
inner.finish();
inner.flush();
zos.closeEntry();
zos.flush();
if (i % 1000 == 0) {
System.err.println(getName() + ": " + i + " (" + bytes + ") bytes");
}
}
zos.flush();
zos.close();
}
catch (Exception e) {
e.printStackTrace();
}
}
}.start();
}
}
Is there anything wrong with my code? Probably.
Is it a bad idea to use I/O operations in this many threads? I'm really not sure about it (actually I would like to gain performance, not lose it). But on the other hand, if I leave out all the zip stuff and just write to the FileOutputStream in even more threads, no such problems occur. Does the overhead of the zip entries increase the size that much?
Is there anything wrong with the usage of my inner ZipOutputStream? As far as I understand, I must not call close() on it, as that would close the outer stream zos as well. Instead I am calling finish().

Limit Android Filesize

Background
I'm keeping a relatively large text file in Android storage and appending to it periodically, while limiting the file to some arbitrary size (say 2 MB).
Hopefully I'm missing a function somewhere, or hopefully there is a better way to do this process.
Currently, when file a goes over that arbitrary size, I create a temporary file b, copy the relevant portion of file a (more or less the substring of file a starting at byte xxx, where xxx is the number of bytes by which file a would be too large if I wrote the next bit of data to the log) plus the current data, and then overwrite file a with file b.
This is obviously terribly inefficient...
Another solution that I'm not terribly fond of is to keep two files, and toggle between the two of them, clearing the next when the current is full, and switching to that file for output.
However, it would be suuuuuper handy if I could just do something like this
File A = new File("output");
A.chip(500);
or maybe
A.subfile(500,A.length()-500);
TLDR;
Is there a function or perhaps library available for Android that can remove a portion of a file?
Did you already take a look at RandomAccessFile? Though you cannot remove portions of a file, you can seek to any position within the file and even set its length. So if you detect that your file has grown too large, just grab the relevant portion and jump back to the beginning. Set the length to 0 and write the new data.
EDIT:
I wrote a small demo. It limits the file size to 10 bytes. If you pass in the values 10 to 15 as strings, separated by commas, then after 10,11,12 the file is written from the beginning again, so after 15 it reads 13,14,15.
public class MainActivity extends Activity {
private static final String TAG = MainActivity.class.getSimpleName();
private static final long MAX = 10;
private static final String FILE_TXT = "file.txt";
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
for (int i = 10; i <= 15; i++) {
if (i > 10) {
writeToFile(",");
}
writeToFile(Integer.toString(i));
}
}
private void writeToFile(String text) {
try {
File f = new File(getFilesDir(), FILE_TXT);
RandomAccessFile file = new RandomAccessFile(f, "rw");
long currentLength = file.length();
if (currentLength + text.length() > MAX) {
file.setLength(0);
}
file.seek(file.length());
file.write(text.getBytes());
file.close();
} catch (IOException e) {
Log.e(TAG, "writeToFile()", e);
}
printFileContents();
}
private void printFileContents() {
StringBuilder sb = new StringBuilder();
try {
FileInputStream fin = openFileInput(FILE_TXT);
int ch;
while ((ch = fin.read()) != -1) {
sb.append((char) ch);
}
fin.close();
} catch (IOException e) {
Log.e(TAG, "printFileContents()", e);
}
Log.d(TAG, "current content: " + sb.toString());
}
}

Java match/exceed performance of readline

For my application, I had to write a custom "readline" method, since I wanted to detect and preserve the newline endings in an ASCII text file. The Java readLine() method does not tell you which newline sequence (\r, \n, \r\n) or EOF was encountered, so I cannot write the exact same newline sequence when writing the modified file.
Here is an SSCCE of my test example.
public class TestLineIO {
public static java.util.ArrayList<String> readLineArrayFromFile1(java.io.File file) {
java.util.ArrayList<String> lineArray = new java.util.ArrayList<String>();
try {
java.io.BufferedReader br = new java.io.BufferedReader(new java.io.FileReader(file));
String strLine;
while ((strLine = br.readLine()) != null) {
lineArray.add(strLine);
}
br.close();
} catch (java.io.IOException e) {
System.err.println("Could not read file");
System.err.println(e);
}
lineArray.trimToSize();
return lineArray;
}
public static boolean writeLineArrayToFile1(java.util.ArrayList<String> lineArray, java.io.File file) {
try {
java.io.BufferedWriter out = new java.io.BufferedWriter(new java.io.FileWriter(file));
int size = lineArray.size();
for (int i = 0; i < size; i++) {
out.write(lineArray.get(i));
out.newLine();
}
out.close();
} catch (java.io.IOException e) {
System.err.println("Could not write file");
System.err.println(e);
return false;
}
return true;
}
public static java.util.ArrayList<String> readLineArrayFromFile2(java.io.File file) {
java.util.ArrayList<String> lineArray = new java.util.ArrayList<String>();
try {
java.io.FileInputStream stream = new java.io.FileInputStream(file);
try {
java.nio.channels.FileChannel fc = stream.getChannel();
java.nio.MappedByteBuffer bb = fc.map(java.nio.channels.FileChannel.MapMode.READ_ONLY, 0, fc.size());
char[] fileArray = java.nio.charset.Charset.defaultCharset().decode(bb).array();
if (fileArray == null || fileArray.length == 0) {
return lineArray;
}
int length = fileArray.length;
int start = 0;
int index = 0;
while (index < length) {
if (fileArray[index] == '\n') {
lineArray.add(new String(fileArray, start, index - start + 1));
start = index + 1;
} else if (fileArray[index] == '\r') {
if (index == length - 1) { //last character in the file
lineArray.add(new String(fileArray, start, length - start));
start = length;
break;
} else {
if (fileArray[index + 1] == '\n') {
lineArray.add(new String(fileArray, start, index - start + 2));
start = index + 2;
index++;
} else {
lineArray.add(new String(fileArray, start, index - start + 1));
start = index + 1;
}
}
}
index++;
}
if (start < length) {
lineArray.add(new String(fileArray, start, length - start));
}
} finally {
stream.close();
}
} catch (java.io.IOException e) {
System.err.println("Could not read file");
System.err.println(e);
e.printStackTrace();
return lineArray;
}
lineArray.trimToSize();
return lineArray;
}
public static boolean writeLineArrayToFile2(java.util.ArrayList<String> lineArray, java.io.File file) {
try {
java.io.BufferedWriter out = new java.io.BufferedWriter(new java.io.FileWriter(file));
int size = lineArray.size();
for (int i = 0; i < size; i++) {
out.write(lineArray.get(i));
}
out.close();
} catch (java.io.IOException e) {
System.err.println("Could not write file");
System.err.println(e);
return false;
}
return true;
}
public static void main(String[] args) {
System.out.println("Begin");
String fileName = "test.txt";
long start = 0;
long stop = 0;
start = java.util.Calendar.getInstance().getTimeInMillis();
java.io.File f = new java.io.File(fileName);
java.util.ArrayList<String> javaLineArray = readLineArrayFromFile1(f);
stop = java.util.Calendar.getInstance().getTimeInMillis();
System.out.println("Total time = " + (stop - start) + " ms");
java.io.File oj = new java.io.File(fileName + "_readline.txt");
writeLineArrayToFile1(javaLineArray, oj);
start = java.util.Calendar.getInstance().getTimeInMillis();
java.util.ArrayList<String> myLineArray = readLineArrayFromFile2(f);
stop = java.util.Calendar.getInstance().getTimeInMillis();
System.out.println("Total time = " + (stop - start) + " ms");
java.io.File om = new java.io.File(fileName + "_custom.txt");
writeLineArrayToFile2(myLineArray, om);
System.out.println("End");
}
}
Version 1 uses readLine(), whereas version 2 is my version, which preserves newline characters.
On a text file with about 500K lines, version 1 takes about 380 ms, whereas version 2 takes 1074 ms.
How can I speed up the performance of version 2?
I checked Google guava and apache-commons libraries but cannot find a suitable replacement for "readLine()" that will tell which newline character was encountered when reading a text file.
Whenever the issue regards a program's speed, the main thing you should keep in mind is that, for any continuous process within that program, the speed is nearly always limited by one of two things: CPU (processing power) or IO (memory allocation and transfer speed).
Usually either your CPU is faster than your IO, or the contrary. Because of this, your program's speed-limit is almost always dictated by one of them, and it's usually easy to know which:
A program that does a lot of calculations but makes only a few, small operations with files, is almost certainly CPU-bound.
A program that reads a lot of data from files, or writes a lot of data to them, but is not very demanding towards processing, is almost certainly IO-bound.
Things are fairly straightforward when trying to improve a CPU-bound program's speed: it mostly comes down to achieving the same goal or effect while performing fewer operations.
That does not make the process any easier, though. In fact, it's usually much harder to optimize CPU-bound programs than IO-bound ones, because each CPU-related operation is usually unique and has to be revised individually.
Things are not so straightforward with IO-bound programs, although they are generally easier to optimize once you have the experience. There is a lot more to consider when dealing with IO-bound processes.
I'll be using Hard-Disk Drives (HDDs) as the basis, since the characteristics I'll mention affect HDDs the strongest (because they are mechanical), but you should keep in mind that many of the same concepts apply, to some extent, to almost every memory-storage hardware, including Solid-State Drives (SSDs) and even RAM!
These are the main performance characteristics of most memory-storage hardware:
Access time: Also known as response time, it is the time it takes before the hardware can actually transfer data.
For mechanical hardware such as HDDs, this is mostly related to the mechanical nature of the drive, in other words, its rotating disks and moving "heads". As such, the access time of mechanical drives can vary significantly from one drive to another.
For circuital hardware such as SSDs and RAM, this time is not dependent on moving parts, but rather electrical connections, so the access time is very quick and consistent, and you shouldn't worry about it.
Seek time: The time it takes for the hardware to seek (reach) the correct position within its internal subdivisions, in order to read from or write to addresses in that section.
For mechanical drives, mainly rotary ones, the seek time measures the time it takes the head assembly on the actuator arm to travel to the track of the disk where the data will be read from or written to.
Average seek time ranges from 3 ms (~) for high-end server drives, to 15 ms (~) for mobile drives, with the most common desktop drives typically having a seek time around 9 ms (~).
With RAM and SSDs, there are no moving parts, so a measurement of the seek time is only testing the electronic circuits, and preparing a particular location on the memory in the device for the operation.
Typical SSDs will have a seek time between 0.08 to 0.16 ms (~), with RAM being even faster.
Command-Processing time: Also known as command overhead, it is the time it takes for the drive's electronics to set up the necessary communication between the various internal components, so it can read or write the data.
This is in the range of 0.003 ms (~) for both, mechanical and circuital devices, and is usually ignored in benchmarks.
Settle time: It is the time it takes for the heads to settle on the target track and stop vibrating, so that they do not read or write off-track.
This amount is usually very small (typically less than 0.1 ms), and typically included in benchmarks as part of the seek time.
Data-Transfer rate: Also called throughput, it covers both: The internal rate, which is the time it takes to move data between the disk surface and the controller on the drive. And the external rate, which is the time to move data between the controller on the drive and an external component in the host system. It has a few sub-factors within:
Media rate: Speed at which the drive can read bits from the media. In other words, the actual read/write speed.
Sector overhead: Additional time (bytes) needed for control structures and other information necessary to manage the drive, locate and validate data and perform other support functions.
Allocation speed: Similar to sector overhead, it's the time taken for the drive to determine the slots that will be written to, and to register them in its address dictionary. Only needed for write operations.
Head-Switch time: Time required to electrically switch from one head to another; Only applies to multi-head drives and is about 1 to 2 ms.
Cylinder-switch time: Time required to move to an adjacent track; the name cylinder is used because typically all the tracks of a drive with more than one head or data surface are read before moving the actuator, implying the image of a circle or cylinder rather than a track. This time is exclusive to rotary mechanical drives, and is typically about 2 to 3 ms.
This means that the main performance issues regarding IO are caused by going back and forth between IO and processing, an issue that can be enormously diminished by using buffers and by processing and reading/writing bigger chunks of data, rather than going byte by byte.
As you can also see, although many of the speed characteristics are still present, RAM and SSDs do not have the same internal limits of HDDs, so their internal and external transfer rates often reach the maximum capabilities of the drive-to-host interface.
Chunk approach example:
This example will create a Test folder on the desktop and generate a Test.txt file within it.
The file is generated with a specified number of lines, each line containing the word "Test" repeated a specific number of times (for file-size purposes). Each line is ended by "\r", "\n" or "\r\n", sequentially.
It's pointless to accumulate the results of each chunk in memory, as doing so would eventually lead to the whole file ending up in memory, which is nearly the same problem as not using chunks to begin with.
As such, an output file is created in the same Test folder, to which the result of every chunk is stored at, once that chunk is finished.
The base file is read using buffers, and those buffers are additionally used as the chunks.
The process here is simply printing a textual version of the line-separator ("\\r", "\\n" or "\\r\\n"), followed by ": ", followed by the line contents; But for the last line, "EOF" is used instead.
To actually operate with chunks, it's probably easier to manage with a class-based approach, rather than a purely function-based one.
Anyways, here goes the code:
public static void main(String[] args) throws FileNotFoundException, IOException {
File file = new File(TEST_FOLDER, "Test.txt");
//These settings create a 122 MB file.
generateTestFile(file, 500000, 50);
long clock = System.nanoTime();
processChunks(file, 8 * (int) Math.pow(1024, 2));
clock = System.nanoTime() - clock;
float millis = clock / 1000000f;
float seconds = millis / 1000f;
System.out.printf(""
+ "%12d nanos\n"
+ "%12.3f millis\n"
+ "%12.3f seconds\n",
clock, millis, seconds);
}
public static File prepareResultFile(File source) {
String ofn = source.getName(); //Original File Name.
int extPos = ofn.lastIndexOf('.'); //Extension index.
String ext = ofn.substring(extPos); //Get extension.
ofn = ofn.substring(0, extPos); //Get name without extension reusing 'ofn'.
return new File(source.getParentFile(), ofn + "_Result" + ext);
}
public static void processChunks(File file, int buffSize)
throws FileNotFoundException, IOException {
//No need for buffers bigger than the file itself.
if (file.length() < buffSize) {
buffSize = (int)file.length();
}
byte[] buffer = new byte[buffSize];
BufferedInputStream bis = new BufferedInputStream(new FileInputStream(file), buffSize);
BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(
prepareResultFile(file)), buffSize);
StringBuilder sb = new StringBuilder();
while (bis.read(buffer) > (-1)) {
//Check if a "\r\n" was split between chunks.
boolean skipFirst = false;
if (sb.length() > 0 && sb.charAt(sb.length() - 1) == '\r') {
if (buffer[0] == '\n') {
bos.write(("\\r\\n: " + sb.toString() + System.lineSeparator()).getBytes());
sb = new StringBuilder();
skipFirst = true;
}
}
for (int i = skipFirst ? 1 : 0; i < buffer.length; i++) {
if (buffer[i] == '\r') {
if (i + 1 < buffer.length) {
if (buffer[i + 1] == '\n') {
bos.write(("\\r\\n: " + sb.toString() + System.lineSeparator()).getBytes());
i++; //Skip '\n'.
} else {
bos.write(("\\r: " + sb.toString() + System.lineSeparator()).getBytes());
}
sb = new StringBuilder(); //Reset accumulator.
} else {
//A "\r\n" might be split between two chunks.
}
} else if (buffer[i] == '\n') {
bos.write(("\\n: " + sb.toString() + System.lineSeparator()).getBytes());
sb = new StringBuilder(); //Reset accumulator.
} else {
sb.append((char) buffer[i]);
}
}
}
bos.write(("EOF: " + sb.toString()).getBytes());
bos.flush();
bos.close();
bis.close();
System.out.println("Finished!");
}
public static boolean generateTestFile(File file, int lines, int elements)
throws IOException {
String[] lineBreakers = {"\r", "\n", "\r\n"};
BufferedOutputStream bos = null;
try {
bos = new BufferedOutputStream(new FileOutputStream(file));
for (int i = 0; i < lines; i++) {
for (int ii = 1; ii < elements; ii++) {
bos.write("test ".getBytes());
}
bos.write("test".getBytes());
bos.write(lineBreakers[i % 3].getBytes());
}
bos.flush();
System.out.printf("LOG: Test file \"%s\" created.\n", file.getName());
return true;
} catch (IOException ex) {
System.err.println("ERR: Could not write file.");
throw ex;
} finally {
try {
bos.close();
} catch (IOException ex) {
System.err.println("WRN: Could not close stream.");
Logger.getLogger(Q_13458142_v2.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
I don't know what IDE you are using, but if it's NetBeans, make a memory-profile of your code and compare to a profile of this one. You should notice a big difference in the amount of memory needed during processing.
Here, the chunk approach's memory usage, which includes not only the chunk itself but also the program's own variables and structures, does not go over 40 MB even though we are dealing with a file bigger than 100 MB, as the memory profile shows.
It also spends very little time in GC, mostly less than 5% at any given point.
The second version doesn't seem to use a BufferedReader or any other form of buffering. That might be the cause of the slowdown.
Since you seem to read the whole file into memory anyway, you could perhaps read it as one big string (through a buffer) and then parse it in memory to analyze the line endings.
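A rough sketch of that idea, assuming the file fits comfortably in memory (the helper name and buffer size are illustrative):

static String readAll(java.io.File file) throws java.io.IOException {
    // Read the whole file through a buffered reader into one String; the result
    // can then be scanned for '\r', '\n' or "\r\n" to split it into lines.
    StringBuilder sb = new StringBuilder((int) file.length());
    java.io.Reader in = new java.io.BufferedReader(new java.io.FileReader(file), 64 * 1024);
    try {
        char[] buf = new char[64 * 1024];
        int n;
        while ((n = in.read(buf)) > 0) {
            sb.append(buf, 0, n);
        }
    } finally {
        in.close();
    }
    return sb.toString();
}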
You are doubling the out statements (one for the line and one for the newline):
Can you try the following (use lineSeparator() to get the line separator and append it before writing):
out.write(lineArray.get(i)+System.lineSeparator());
Don't reinvent the wheel.
Check the BufferedReader#readLine() code
Copy, paste, and make the changes you need to keep the line separator inside the line.
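A simplified sketch of that idea (reading char by char for clarity; the real BufferedReader.readLine() works on its internal buffer, which is what makes it fast). It assumes the Reader supports mark/reset, which BufferedReader does:

static String readLineKeepEnding(java.io.Reader in) throws java.io.IOException {
    StringBuilder sb = new StringBuilder();
    int c;
    while ((c = in.read()) != -1) {
        sb.append((char) c);
        if (c == '\n') {
            break;                      // line ended with a lone "\n"
        }
        if (c == '\r') {
            in.mark(1);                 // peek at the next char: "\r\n" vs a lone "\r"
            int next = in.read();
            if (next == '\n') {
                sb.append('\n');
            } else if (next != -1) {
                in.reset();             // lone "\r": push the peeked char back
            }
            break;
        }
    }
    return sb.length() == 0 ? null : sb.toString(); // null at EOF
}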
