Android ByteArrayBuffer holds more bytes than capacity - java

I found the following code in a project. It works and can read files bigger than 20 MB, but from how the code is set up it should fail after 5000 bytes. Why does it work? The docs for ByteArrayBuffer say nothing about this. I have verified that the read loop iterates over all bytes of each file.
http://developer.android.com/reference/org/apache/http/util/ByteArrayBuffer.html
URLConnection ucon = url.openConnection();
ucon.setReadTimeout(10000); // enables SocketTimeoutException on stalled reads
InputStream is = ucon.getInputStream();
long expectedFileSize = ucon.getContentLength();
Log.d("Downloader", "expected file size: " + expectedFileSize);
BufferedInputStream bis = new BufferedInputStream(is);
// Read bytes into the buffer until there is nothing more to read (-1).
// This code can read 20MB and bigger mp3 files ... why?!?
ByteArrayBuffer baf = new ByteArrayBuffer(5000);
int current;
while ((current = bis.read()) != -1) {
    baf.append((byte) current);
}
FileOutputStream fos = new FileOutputStream(file);
fos.write(baf.toByteArray());
fos.flush();
fos.close();

5000 is just an initial capacity. The buffer resizes itself automatically once it reaches that limit.
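ByteArrayBuffer is an Android/Apache class, but java.io.ByteArrayOutputStream from the plain JDK grows the same way, so the behavior can be sketched without Android (the 16-byte capacity here is an arbitrary choice for illustration):

```java
import java.io.ByteArrayOutputStream;

public class GrowDemo {
    public static void main(String[] args) {
        // Deliberately tiny initial capacity.
        ByteArrayOutputStream buf = new ByteArrayOutputStream(16);
        // Append far more than 16 bytes; the backing array is
        // reallocated with a larger one whenever the current one fills up.
        for (int i = 0; i < 100_000; i++) {
            buf.write(i & 0xFF);
        }
        System.out.println(buf.size()); // prints 100000
    }
}
```

The capacity argument only sizes the first backing array; appends keep succeeding after that, so the only hard limit is available heap.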

Related

how does the increment of buffer affect the download speed?

I have the following code to download from the internet
URLConnection ucon = url.openConnection();
InputStream is = ucon.getInputStream();
BufferedInputStream inStream = new BufferedInputStream(is, 8192); // this is value one
FileOutputStream outStream = new FileOutputStream(new File(location));
byte[] buff = new byte[8192]; // this is value two
int len;
while ((len = inStream.read(buff)) != -1) {
    outStream.write(buff, 0, len);
}
My question is: what happens if I change the buffer values, value one and value two?
How does the download speed change? Should they be the same value? If they are not the same, what should each of them indicate?
I have read the following post :
How do you determine the ideal buffer size when using FileInputStream?
but I did not really understand it. Can someone explain it for me, please?
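For what it's worth, the two values are independent: value one sizes BufferedInputStream's internal prefetch buffer, value two sizes the array your loop copies through, and they don't have to match. A rough sketch using in-memory streams as stand-ins for the connection and the output file:

```java
import java.io.*;

public class CopyBufferDemo {
    // value one (8192): how much BufferedInputStream prefetches per underlying read.
    // value two (4096): how many bytes each loop iteration moves to the output.
    static long copy(InputStream in, OutputStream out) throws IOException {
        BufferedInputStream inStream = new BufferedInputStream(in, 8192);
        byte[] buff = new byte[4096];
        long total = 0;
        int len;
        while ((len = inStream.read(buff)) != -1) {
            out.write(buff, 0, len);
            total += len;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[100_000]; // stand-in for a download payload
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        System.out.println(copy(new ByteArrayInputStream(data), sink)); // prints 100000
    }
}
```

In practice the network is almost always the bottleneck, so anything from roughly 4 KB to 64 KB for either value tends to perform much the same; measuring with your own files is the only reliable answer.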

Java: why is the file size different from the operating system's?

I download a file from a website and check its size (the same size I see in the operating system, in bytes).
int sizeBefore = connection.getContentLength();
BufferedInputStream bufferedInputStream = new BufferedInputStream(connection.getInputStream());
File destFile = new File(destFileName);
BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(
        new FileOutputStream(destFile));
int i;
while ((i = bufferedInputStream.read()) != -1) {
    bufferedOutputStream.write(i);
}
long sizeAfter = destFile.length();
bufferedOutputStream.flush();
bufferedInputStream.close();
if (sizeAfter == sizeBefore) {
    log.debug("Downloaded file correct");
}
Then I tried to check the stored file another way too (NIO):
long size = Files.size(destFile.toPath());
The result differs from the size the operating system reports. Why?
Binary prefixes:
http://en.wikipedia.org/wiki/Binary_prefix#Adoption_by_IEC_and_NIST
Windows counts 1024 bytes in a kilobyte (2^10), while Linux counts 1000 bytes in a kilobyte. This carries over to MB, GB, etc.
The lines
long sizeAfter = destFile.length();
bufferedOutputStream.flush();
bufferedInputStream.close();
should be
bufferedOutputStream.close(); // Close the file. Flushes too.
bufferedInputStream.close();
long sizeAfter = destFile.length(); // Check its size on disk.
In particular, a BufferedOutputStream writes its buffer only when it is entirely filled; the last, partially filled buffer is usually written only when close() calls flush().
You check the file size before closing the stream. If you do it after closing the streams, you will get the same size the operating system reports:
connection.connect();
int sizeBefore = connection.getContentLength();
BufferedInputStream bufferedInputStream = new BufferedInputStream(connection.getInputStream());
File destFile = new File(destFileName);
BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(
        new FileOutputStream(destFile));
int i;
while ((i = bufferedInputStream.read()) != -1) {
    bufferedOutputStream.write(i);
}
bufferedOutputStream.close(); // flushes the last buffer and releases the file
bufferedInputStream.close();
long sizeAfter = destFile.length();
if (sizeAfter == sizeBefore) {
    log.info("Downloaded correct");
}
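On Java 7+, the close-before-measure ordering cannot be gotten wrong if try-with-resources manages the streams. A minimal sketch, with the download source taken as a plain InputStream so it runs standalone (on a real download it would be connection.getInputStream()):

```java
import java.io.*;

public class DownloadCheck {
    // Copies 'raw' to destFileName and returns the on-disk size.
    // try-with-resources closes (and thereby flushes) both streams
    // before length() is read, so the sizes can be compared safely.
    static long download(InputStream raw, String destFileName) throws IOException {
        File destFile = new File(destFileName);
        try (BufferedInputStream in = new BufferedInputStream(raw);
             BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(destFile))) {
            int i;
            while ((i = in.read()) != -1) {
                out.write(i);
            }
        } // both streams are closed here, before the size check
        return destFile.length();
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[12345]; // stand-in for downloaded content
        File tmp = File.createTempFile("dl", ".bin");
        long sizeAfter = download(new ByteArrayInputStream(payload), tmp.getPath());
        System.out.println(sizeAfter == payload.length); // prints true
        tmp.delete();
    }
}
```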

ftp csv file download with java

I successfully download a CSV file stored on an FTP server using the code below.
URL url = new URL(ftpUrl);
URLConnection conn = url.openConnection();
InputStream inputStream = conn.getInputStream();
long filesize = conn.getContentLength();
byte[] buffer = new byte[4096];
int bytesRead = -1;
while ((bytesRead = inputStream.read(buffer)) != -1) {
    String str = new String(buffer, "UTF-8");
    out.println(str);
}
The CSV file has a size of 1276 KB and about 20,000 rows. The problem is that the generated CSV file has some lines that are either blank or missing information. The damaged rows occur about every 100 records. I tried to fix it by increasing the buffer size, but damaged rows still exist.
Any help is going to be appreciated :)
Sure! You have to tell the String constructor how many bytes are valid: unless the file size is an exact multiple of 4096, the tail of the buffer still holds leftover bytes from the previous read.
You must use this overload instead:
String str = new String(buffer, 0, bytesRead, "utf-8");
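A small self-contained demonstration of the stale-tail effect this fixes (the letter payload is invented for illustration):

```java
import java.io.*;

public class StaleTailDemo {
    public static void main(String[] args) throws IOException {
        byte[] buffer = new byte[8];
        InputStream in = new ByteArrayInputStream("AAAAAAAABBB".getBytes("UTF-8"));
        int bytesRead = in.read(buffer);  // first read fills all 8 bytes: "AAAAAAAA"
        bytesRead = in.read(buffer);      // second read gets only 3 bytes: "BBB" at the front
        String wrong = new String(buffer, "UTF-8");               // "BBBAAAAA" - stale tail
        String right = new String(buffer, 0, bytesRead, "UTF-8"); // "BBB"
        System.out.println(wrong + " vs " + right); // prints BBBAAAAA vs BBB
    }
}
```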

avoiding garbage data while reading data using a byte buffer

I am trying to write a program to transfer a file between client and server using Java TCP sockets. I am using a buffer size of 64K, but the problem I am facing is that TCP sometimes fails to deliver the whole 64K in one go and sends the remaining part (for example 32K) in another read.
Therefore garbage data (spaces and so on) is taken up by the buffer on the reading side to fill out the 64K, and this unnecessary data makes the file useless on the receiving side.
Is there any solution to overcome this problem?
I am using the TCP protocol; this code is used to send data to the client.
Server-side code
File transferFile = new File("Document.txt");
byte[] bytearray = new byte[1024];
int byRead = 0;
FileInputStream fin = new FileInputStream(transferFile);
BufferedInputStream bin = new BufferedInputStream(fin);
OutputStream os = socket.getOutputStream();
while (byRead > -1) {
    byRead = bin.read(bytearray, 0, bytearray.length);
    os.write(bytearray, 0, bytearray.length);
    os.flush();
}
Client-side code
byte[] bytearray = new byte[1024];
InputStream is = socket.getInputStream();
FileOutputStream fos = new FileOutputStream("C:\\Users\\NetBeansProjects\\" + filename);
BufferedOutputStream bos = new BufferedOutputStream(fos);
bytesRead = is.read(bytearray, 0, bytearray.length);
currentTot = bytesRead;
System.out.println("Data is being read ...");
do {
    bytesRead = is.read(bytearray, 0, bytearray.length);
    if (bytesRead == 0) continue;
    if (bytesRead >= 0) currentTot += bytesRead;
    bos.write(bytearray, 0, bytearray.length);
} while (bytesRead > -1);
Here I tried to skip the loop iteration when the read returns nothing by using a continue statement, but it is not working.
bos.write(bytearray,0,bytearray.length);
This should be
bos.write(bytearray,0,bytesRead);
The region of the buffer after 'bytesRead' is undisturbed by the read. It isn't 'garbage'; it's just whatever was there before.
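With that one-line fix, the whole client loop collapses to the standard copy pattern; a sketch against in-memory streams so it is runnable as-is (on the real client, is comes from the socket and bos wraps the FileOutputStream):

```java
import java.io.*;

public class FixedClientLoop {
    static void copy(InputStream is, OutputStream bos) throws IOException {
        byte[] bytearray = new byte[1024];
        int bytesRead;
        while ((bytesRead = is.read(bytearray)) != -1) {
            // Write exactly what this read produced - never bytearray.length.
            bos.write(bytearray, 0, bytesRead);
        }
        bos.flush();
    }

    public static void main(String[] args) throws IOException {
        // 3000 is not a multiple of 1024, so the last read is short.
        byte[] data = new byte[3000];
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        copy(new ByteArrayInputStream(data), sink);
        System.out.println(sink.size()); // prints 3000
    }
}
```

Note that the server loop quoted in the question has the same class of problem: it writes bytearray.length bytes even on the final short read, so the same bytesRead-based write belongs there too.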
Use CLIENT-side code like the below to write only the bytes actually read, without garbage:
int availableByte = socketInputStream.available();
if (availableByte > 0) {
    byte[] buffer = new byte[availableByte];
    int bytesRead = socketInputStream.read(buffer);
    FileOutputStream fileOutputStream = new FileOutputStream(FilePath, true);
    OutputStreamWriter outputStreamWriter = new OutputStreamWriter(fileOutputStream);
    BufferedWriter bufferedWriter = new BufferedWriter(outputStreamWriter);
    // buffer.toString() would print the array reference, not its contents
    bufferedWriter.write(new String(buffer, 0, bytesRead));
    bufferedWriter.close();
}

Stream file from URL to File without storing it in the memory

I want to download a file from a URL and store it in the file system. However, I have a memory limitation and I don't want to hold the whole file in memory first. I am not a Java expert and I am a bit lost among all the classes: InputStream, BufferedReader, FileOutputStream, etc. Could you help me, please?
For now I have:
URLConnection ucon = url.openConnection();
ucon.connect();
InputStream is = ucon.getInputStream();
// Create a reader for the input stream.
BufferedReader br = new BufferedReader(new InputStreamReader(is));
// ?
FileOutputStream fos = context.openFileOutput(FILENAME, Context.MODE_PRIVATE);
// Here the content can be too big for the memory...
fos.write(content.getBytes());
fos.close();
Please, could you give me some clues? I was also thinking of reading it chunk by chunk, but I am not sure what would be easiest in Java...
You can use Apache Commons:
org.apache.commons.io.FileUtils.copyURLToFile(URL, File)
I guess it may not work on Android, though.
I use this code:
InputStream input = connection.getInputStream();
byte[] buffer = new byte[4096];
int cnt = -1;
OutputStream output = new FileOutputStream(file);
while ((cnt = input.read(buffer)) != -1) {
    output.write(buffer, 0, cnt);
}
output.close();
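If Java 7+ is available (API level 26+ on Android), java.nio.file.Files.copy does the same chunked copy internally, so nothing larger than an internal buffer is ever held in memory. A sketch with the source as a plain InputStream (on a real download it would be url.openConnection().getInputStream()):

```java
import java.io.*;
import java.nio.file.*;

public class StreamToFile {
    // Streams 'in' straight to 'target' without holding the payload in memory;
    // returns the number of bytes copied.
    static long save(InputStream in, Path target) throws IOException {
        return Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("dl", ".bin");
        long copied = save(new ByteArrayInputStream(new byte[5000]), tmp);
        System.out.println(copied); // prints 5000
        Files.delete(tmp);
    }
}
```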
