How to write file data correctly? - java

My application is unable to transfer data over a socket connection and write it to a file properly. Files over about 65,535 bytes get corrupted and are no longer recognized by the programs designed to run them.
I have been able to send small .doc and .txt files successfully, but .mp3, .wmv, .m4a, .avi and just about anything else does not work. Neither do larger docs.
I have looked all over the internet for a solution to this problem. I have repeatedly tweaked the I/O code to fix it, but it still doesn't work! Here is the I/O code in the superclass that handles sending and receiving files. If you need any more information or other parts of the code, let me know.
protected void sendFile() throws IOException {
    byte[] bytes = new byte[(int) file.length()];
    buffin = new BufferedInputStream(new FileInputStream(file));
    int bytesRead = buffin.read(bytes,0,bytes.length);
    System.out.println(bytesRead);
    out = sock.getOutputStream();
    out.write(bytes,0,fileBytes);
    out.flush();
    out.close();
}
protected void receiveFile() throws IOException {
    byte[] bytes = new byte[fileBytes];
    in = sock.getInputStream();
    for(int i=0;i<fileBytes;i++) {
        in.read(bytes);
    }
    fos = new FileOutputStream("/Datawire/"+fileName);
    buffout = new BufferedOutputStream(fos);
    buffout.write(bytes,0,fileBytes);
    buffout.flush();
    buffout.close();
}
UPDATED CODE (that works):
protected void sendFile() throws IOException {
    if((file.length())<63000) {
        byte[] bytes = new byte[(int)file.length()];
        buffin = new BufferedInputStream(new FileInputStream(file));
        buffin.read(bytes,0,bytes.length);
        out = sock.getOutputStream();
        out.write(bytes,0,bytes.length);
        out.close();
    } else {
        byte[] bytes = new byte[32000];
        buffin = new BufferedInputStream(new FileInputStream(file));
        out = sock.getOutputStream();
        int bytesRead;
        while((bytesRead = buffin.read(bytes))>0) {
            out.write(bytes,0,bytesRead);
        }
        out.close();
    }
}
protected void receiveFile() throws IOException {
    if(fileBytes<63000) {
        byte[] bytes = new byte[32000];
        in = sock.getInputStream();
        System.out.println(in.available());
        in.read(bytes,0,fileBytes);
        fos = new FileOutputStream("/Datawire/"+fileName);
        buffout = new BufferedOutputStream(fos);
        buffout.write(bytes,0,bytes.length);
        buffout.close();
    } else {
        byte[] bytes = new byte[16000];
        in = sock.getInputStream();
        fos = new FileOutputStream("/Datawire/"+fileName);
        buffout = new BufferedOutputStream(fos);
        int bytesRead;
        while((bytesRead = in.read(bytes))>0) {
            buffout.write(bytes,0,bytesRead);
        }
        buffout.close();
    }
}

The issue is that you are only sending chunks of the file. That is, you are only ever sending at most 64K of it; if the file is larger than 64K, the other end will never see the rest.
You want to continuously read from the BufferedInputStream until read() returns -1 (end of stream).
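For example, here is a minimal sketch of what the two methods could look like with such a loop. It assumes the same sock, file and fileName fields as in the code above, and that the sender closes its side of the socket when it is done so the receiver eventually sees end of stream:
protected void sendFile() throws IOException {
    InputStream fileIn = new BufferedInputStream(new FileInputStream(file));
    OutputStream netOut = sock.getOutputStream();
    byte[] buffer = new byte[8192];
    int count;
    while ((count = fileIn.read(buffer)) > 0) {   // keep reading until end of file
        netOut.write(buffer, 0, count);           // write only the bytes actually read
    }
    netOut.flush();
    fileIn.close();
}
protected void receiveFile() throws IOException {
    InputStream netIn = sock.getInputStream();
    OutputStream fileOut = new BufferedOutputStream(new FileOutputStream("/Datawire/" + fileName));
    byte[] buffer = new byte[8192];
    int count;
    while ((count = netIn.read(buffer)) > 0) {    // -1 means no more data
        fileOut.write(buffer, 0, count);
    }
    fileOut.close();
}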

Your code is completely wrong. This is how to copy a stream in Java:
int count;
byte[] buffer = new byte[8192]; // more if you like but no need for it to be the entire file size
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
You should use this both when sending the file and when receiving the file. At present your sending method hopes that the entire file fits into memory; fits into Integer.MAX_VALUE bytes; and is read in one chunk by the read method, without even checking the result. You can't assume any of those things. Your receive method is complete rubbish: it just keeps overwriting the same array, again without checking any read() results.
EDIT: Your revised code is just as bad, or worse. You are calling read() to check for EOS and then throwing that byte away, and then calling read() again and throwing away the read count it returns. You pointlessly have a different path for files < 64000, or 63000, or whatever it is, which has zero benefit except to give you two code paths to test, or possibly four, instead of one. The network only gives you 1460 bytes at a time at best anyway, so what is the point? You already have (a) a BufferedInputStream with a default buffer size of 8192, and (b) my code that uses a byte[] buffer of any size you like. My code above works for any amount of data in two lines of executable code. Yours is 20. QED.

I suggest that you use a good library for reading and writing the file contents as well as the socket reads and writes, for example Apache Commons IO. If you insist on writing the code yourself, work in smaller chunks rather than the whole file at once.
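For instance, with Commons IO the whole copy collapses to a single call. A sketch, assuming the sock and file fields from the question:
import org.apache.commons.io.IOUtils;

// IOUtils.copy handles the read/write loop and the partial-read bookkeeping internally.
try (InputStream in = new FileInputStream(file);
     OutputStream out = sock.getOutputStream()) {
    IOUtils.copy(in, out);
}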

You have to consider that InputStream.read returns the number of bytes read which may be less than the total number of bytes in the file.
You would probably be better off just letting something like CopyUtils.copy take care of this for you.
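If you do need the whole file in a single array yourself, the loop has to account for those short reads. A sketch, where expected and in are placeholder names for the byte count you were told to expect and the socket's input stream:
byte[] data = new byte[expected];
int offset = 0;
while (offset < expected) {
    // a single read() may return fewer bytes than requested
    int n = in.read(data, offset, expected - offset);
    if (n < 0) {
        throw new EOFException("stream ended after " + offset + " bytes");
    }
    offset += n;
}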

You need to loop until bytesRead < 0. You also need to make sure that fileBytes is >= the size of the transferred file.
protected void receiveFile() throws IOException {
    byte[] bytes = new byte[fileBytes];
    InputStream is = sock.getInputStream();
    FileOutputStream fos = new FileOutputStream("/Datawire/"+fileName);
    BufferedOutputStream bos = new BufferedOutputStream(fos);
    int bytesRead = is.read(bytes,0,bytes.length);
    int current = bytesRead;
    do {
        bytesRead = is.read(bytes, current, (bytes.length-current));
        if(bytesRead >= 0) current += bytesRead;
    } while(bytesRead > -1);
    bos.write(bytes, 0, current);
    bos.flush();
    bos.close();
}

Related

How does InputStream really work while reading a file from a socket in Java?

I have a simple program which gets a BufferedInputStream from a URL, and I have seen that while reading from the underlying stream, the read(bytes) calls go from BufferedInputStream down into FileInputStream. I convinced myself that this is because, at the other end of the socket, it is actually a file, and maybe that is why it ends up in FileInputStream (please let me know if my assumption about this is correct).
When the read happens in FileInputStream's read() method, the "path" variable is set to the location of the class file from which the read call is being invoked, which is very confusing to me, as I was expecting the actual URL location of the file I am downloading.
Please help me understand these things and how read() actually works on a remote file.
URL url = new URL("some url for downloading a file");
BufferedInputStream bis = new BufferedInputStream(url.openStream());
FileOutputStream fis = new FileOutputStream(file);
int size = 65536;
byte[] buffer = new byte[size];
int count;
while ((count = bis.read(buffer, 0, size)) != -1) {
    fis.write(buffer, 0, count);
}
fis.close();
bis.close();

Too large files when downloading pictograms

I'm trying to download some images provided by a hosting service. This is the method I use:
public static void downloadImage(String imageLink, File f) throws IOException
{
    URL url = new URL(imageLink);
    byte[] buffer = new byte[1024];
    BufferedInputStream in = new BufferedInputStream(url.openStream(), buffer.length);
    BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(f), buffer.length);
    while (in.read(buffer) > 0)
        out.write(buffer);
    out.flush();
    out.close();
    in.close();
}
However, the files turn out too big. 5 MB for an 80x60 JPG is too much in my opinion.
What could be the cause of this?
You are doing things wrong here: read() returns the number of bytes that were actually read; thus you have to write exactly that number of bytes from your buffer array into your output stream.
Your code is corrupting your output by simply writing out the whole buffer array ... which mostly consists of 0s!
Instead do something like:
int bytesRead;
while ((bytesRead = in.read(buffer)) > 0) {
    byte[] outBuffer = new byte[bytesRead];
    // copy only the bytes that were actually read into the smaller array
    System.arraycopy(buffer, 0, outBuffer, 0, bytesRead);
    out.write(outBuffer);
}
(this is meant as inspiration to get you going rather than a polished solution)
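A more direct variant of the same idea skips the extra copy and simply writes the count that read() returned:
int bytesRead;
while ((bytesRead = in.read(buffer)) > 0) {
    out.write(buffer, 0, bytesRead);   // write only the bytes actually read
}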

avoiding garbage data while reading data using a byte buffer

I am trying to write a program to transfer a file between a client and a server using Java TCP sockets. I am using a buffer size of 64K, but the problem I am facing is that TCP sometimes fails to send the whole 64K and sends the remaining part, for example 32K, in another go.
Because of that, garbage data (spaces or whatever was already in the buffer) is taken in on the reading side to fill up the 64K, and this unnecessary data makes the file useless on the receiving side.
Is there any solution to overcome this problem?
I am using the TCP protocol. This is the code used to send data to the client:
Server-side code
File transferFile = new File ("Document.txt");
byte [] bytearray = new byte [1024];
int byRead=0;
FileInputStream fin = new FileInputStream(transferFile);
BufferedInputStream bin = new BufferedInputStream(fin);
OutputStream os = socket.getOutputStream();
while(byRead>-1) {
    byRead=bin.read(bytearray,0,bytearray.length);
    os.write(bytearray,0,bytearray.length);
    os.flush();
}
Client-side code
byte [] bytearray = new byte [1024];
InputStream is = socket.getInputStream();
FileOutputStream fos = new FileOutputStream("C:\\Users\\NetBeansProjects\\"+filename);
BufferedOutputStream bos = new BufferedOutputStream(fos);
bytesRead = is.read(bytearray,0,bytearray.length);
currentTot = bytesRead; System.out.println("Data is being read ...");
do {
    bytesRead = is.read(bytearray, 0, (bytearray.length));
    if(bytesRead == 0) continue;
    if(bytesRead >= 0) currentTot += bytesRead;
    bos.write(bytearray,0,bytearray.length);
} while(bytesRead > -1);
Here I tried to skip the loop iteration when nothing was read, using the continue statement, but it is not working.
bos.write(bytearray,0,bytearray.length);
This should be
bos.write(bytearray,0,bytesRead);
The region after 'bytesRead' in the buffer is undisturbed by the read. It isn't 'garbage'. It's just whatever was there before.
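Put together, the whole client-side loop with that fix applied might look like this (a sketch using the question's own variables):
byte[] bytearray = new byte[1024];
InputStream is = socket.getInputStream();
BufferedOutputStream bos = new BufferedOutputStream(
        new FileOutputStream("C:\\Users\\NetBeansProjects\\" + filename));
int bytesRead;
while ((bytesRead = is.read(bytearray)) > 0) {
    bos.write(bytearray, 0, bytesRead);   // write exactly what was read, nothing more
}
bos.close();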
Use client-side code like the one below to write only the bytes that are actually available, without garbage:
int availableByte = socketInputStream.available();
if (availableByte > 0) {
    byte[] buffer = new byte[availableByte];
    int bytesRead = socketInputStream.read(buffer);
    FileOutputStream fileOutputStream = new FileOutputStream(FilePath, true);
    fileOutputStream.write(buffer, 0, bytesRead);
    fileOutputStream.close();
}

Why does the file get corrupted on the client side in a Java socket?

I have written Java code which sends a .exe file from the server to the client using FileInputStream and BufferedInputStream, but the file gets corrupted on the client side.
What could be the reason?
command1= ServerFrame.msg1+".exe";
File p=new File(command1);
FileInputStream f=new FileInputStream(p);
BufferedInputStream bis=new BufferedInputStream(f);
Integer d=bis.available();
int d1=d;
byte b[]=new byte[d];
bis.read(b,0,d1);
System.out.println(d1);
dos=new DataOutputStream(s.getOutputStream());
BufferedOutputStream bos=new BufferedOutputStream(s.getOutputStream());
dos.writeUTF(d.toString()); // sending length in long
bos.write(b,0,d1); // sending the bytes
bos.flush();
bis.close();
bos.close();
dos.close();
I suppose that s is your socket. There are a few things that can be wrong in your code:
bis.available() returns the number of bytes that can be read without blocking, not the total size of the file; you should use a loop to read the file
you wrap the same output stream in two different streams and write to both of them without flushing; also, why are you writing UTF?
Here is what you intend to do:
private void copy(InputStream in, OutputStream out) throws IOException {
    byte[] buf = new byte[0x1000];
    int r;
    while ((r = in.read(buf)) >= 0) {
        out.write(buf, 0, r);
    }
}
InputStream in = new BufferedInputStream(new FileInputStream(path));
OutputStream out = new BufferedOutputStream(s.getOutputStream());
copy(in, out);
in.close();
out.close();
bis.available() returns the bytes available for reading; it may not be the full content size. You have to read in a loop until you reach EOF.
In case someone is stuck with the same problem, the buffer size is the culprit in this case:
Integer d=bis.available();
byte b[]=new byte[d];
It should be smaller; try 1024 or something like that:
byte b[]=new byte[1024];
Hope this helps.

how to tune BufferedInputStream read()?

I am reading a BLOB column from an Oracle database, then writing it to a file as follows:
public static int execute(String filename, BLOB blob)
{
    int success = 1;
    try
    {
        File blobFile = new File(filename);
        FileOutputStream outStream = new FileOutputStream(blobFile);
        BufferedInputStream inStream = new BufferedInputStream(blob.getBinaryStream());
        int length = -1;
        int size = blob.getBufferSize();
        byte[] buffer = new byte[size];
        while ((length = inStream.read(buffer)) != -1)
        {
            outStream.write(buffer, 0, length);
            outStream.flush();
        }
        inStream.close();
        outStream.close();
    }
    catch (Exception e)
    {
        e.printStackTrace();
        System.out.println("ERROR(img_exportBlob) Unable to export:"+filename);
        success = 0;
    }
    return success;
}
The file size is around 3 MB and it takes 40-50 seconds to read the buffer. It's actually 3D image data. So, is there any way I can reduce this time?
Given that the blob already has the concept of a buffer, it's possible that you're actually harming performance by using the BufferedInputStream at all - it may be making smaller read() calls, making more network calls than necessary.
Try getting rid of the BufferedInputStream completely, just reading directly from the blob's binary stream. It's only a thought, but worth a try. Oh, and you don't need to flush the output stream every time you write.
(As an aside, you should be closing the streams in finally blocks - otherwise you'll leak handles if anything throws an exception.)
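For example, a sketch of the body of the try block that reads straight from the BLOB's binary stream and closes in finally (the names are taken from the code above):
InputStream inStream = blob.getBinaryStream();            // no BufferedInputStream wrapper
FileOutputStream outStream = new FileOutputStream(filename);
try
{
    byte[] buffer = new byte[blob.getBufferSize()];
    int length;
    while ((length = inStream.read(buffer)) != -1)
    {
        outStream.write(buffer, 0, length);               // no flush() on every iteration
    }
}
finally
{
    // closing here releases the handles even if read/write throws
    inStream.close();
    outStream.close();
}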
