I am currently writing a client-server program that would allow me to upload a file from the client to the server. However, when I try this the file becomes corrupt and it appears not all the bytes are being transferred. Can someone tell me why this is happening? Thanks.
Here is part of the client code:
System.out.println("What file would you like to upload?");
String file=in.next();//get file name
outToServer.writeUTF(file);//send file name to server
File test= new File(file);//create file
byte[] bits = new byte[(int) test.length()]; //byte array to store file
FileInputStream fis= new FileInputStream(test); //read in file
//write bytes into array
int size=(int) test.length();//size of array
outToServer.write(size);//send size of array to Server
fis.read(bits);//read in byte values
fis.close();//close stream
outToServer.write(bits, 0, size);//writes bytes out to server
And here is the server code:
String filename= inFromClient.readUTF();//read in file name that is being uploaded
int size=inFromClient.read(); //read in size of file
byte[] bots=new byte[size]; //create array
inFromClient.read(bots); //read in bytes
FileOutputStream fos=new FileOutputStream(filename);
fos.write(bots);
fos.flush();
fos.close();
String complete="Upload Complete.";
outToClient.writeUTF(complete);
Try using Java 7's Files.copy().
On the client side:
final Path source = Paths.get(file);
Files.copy(source, outToServer);
On the server side:
final Path destination = Paths.get(filename);
Files.copy(inFromClient, destination);
See the javadoc for Files.
Usual mistake. You're assuming that read() fills the buffer. It isn't obliged to do that. See the Javadoc.
The canonical way to copy streams in Java is as follows:
byte[] buffer = new byte[8192];
int count;
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
Use this at both ends. You don't need a buffer the size of the file either. This works for any byte array with one or more elements.
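Applied to the code in the question, a minimal sketch of how both ends could look (this assumes outToServer and inFromClient are the question's DataOutputStream/DataInputStream; the 8192-byte buffer size is arbitrary):
// Client (sketch): send the name, then the length as a long, then the raw bytes.
outToServer.writeUTF(file);
outToServer.writeLong(test.length());       // not write(size), which sends only one byte
try (FileInputStream fis = new FileInputStream(test)) {
    byte[] buffer = new byte[8192];
    int count;
    while ((count = fis.read(buffer)) > 0) {
        outToServer.write(buffer, 0, count);
    }
}
outToServer.flush();

// Server (sketch): read the name and length, then loop until that many bytes have arrived.
String filename = inFromClient.readUTF();
long remaining = inFromClient.readLong();
try (FileOutputStream fos = new FileOutputStream(filename)) {
    byte[] buffer = new byte[8192];
    while (remaining > 0) {
        int count = inFromClient.read(buffer, 0, (int) Math.min(buffer.length, remaining));
        if (count < 0) break;                // peer closed the connection early
        fos.write(buffer, 0, count);
        remaining -= count;
    }
}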
Related
Requirement: compress a byte[] to get another byte[] using java.util.zip.ZipOutputStream, BUT without using any files on disk or in memory (like here: https://stackoverflow.com/a/18406927/9132186). Is this even possible?
All the examples I found online read from a file (.txt) and write to a file (.zip). ZipOutputStream needs a ZipEntry to work with, and that ZipEntry needs a file.
However, my use case is as follows: I need to compress a chunk (say 10MB) of a file at a time using the zip format and append all these compressed chunks to make a .zip file. But when I unzip the .zip file, it is corrupted.
I am using in-memory files as suggested in https://stackoverflow.com/a/18406927/9132186 to avoid files on disk, but I need a solution without these files as well.
public void testZipBytes() throws IOException {
String infile = "test.txt";
FileInputStream in = new FileInputStream(infile);
String outfile = "test.txt.zip";
FileOutputStream out = new FileOutputStream(outfile);
byte[] buf = new byte[10];
int len;
while ((len = in.read(buf)) > 0) {
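// Note: this passes the whole 10-byte buf even when len < buf.length, so the last chunk
// zips stale bytes; also, each call produces a complete zip archive, and concatenating
// those archives does not yield one valid zip (see the answer below).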
out.write(zipBytes(buf));
}
in.close();
out.close();
}
// ACTUAL function that compresses byte[]
public static class MemoryFile {
public String fileName;
public byte[] contents;
}
public byte[] zipBytesMemoryFileWORKS(byte[] input) {
MemoryFile memoryFile = new MemoryFile();
memoryFile.fileName = "try.txt";
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ZipOutputStream zos = new ZipOutputStream(baos);
ZipEntry entry = new ZipEntry(memoryFile.fileName);
entry.setSize(input.length);
zos.putNextEntry(entry);
zos.write(input);
zos.finish();
zos.closeEntry();
zos.close();
return baos.toByteArray();
}
Scenario 1:
If test.txt has a small amount of data (fewer than 10 bytes), like "this", then unzip test.txt.zip yields try.txt with "this" in it.
Scenario 2:
If test.txt has a larger amount of data (more than 10 bytes), like "this is a test for zip output stream and it is not working", then unzip test.txt.zip yields try.txt with broken pieces of data, and it is incomplete.
(The 10 bytes is the buffer size in testZipBytes and is the amount of data compressed at a time by zipBytes.)
Expected (or rather desired):
1. unzip test.txt.zip does not use the "try.txt" filename I gave in the MemoryFile, but rather unzips to the filename test.txt itself.
2. The unzipped data is not broken and yields the input data as is.
3. I have done the same with GZIPOutputStream and it works perfectly fine.
Requirement: compress a byte[] to get another byte[] using java.util.zip.ZipOutputStream, BUT without using any files on disk or in memory (like here: https://stackoverflow.com/a/18406927/9132186). Is this even possible?
Yes, you've already done it. You don't actually need MemoryFile in your example; just delete it from your implementation and write ZipEntry entry = new ZipEntry("try.txt") instead.
But you can't concatenate the zips of 10MB chunks of a file and get a valid zip file for the combined file. Zipping doesn't work like that. You could have a solution which minimizes how much is in memory at once, perhaps. But breaking the original file up into chunks seems unworkable.
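A minimal sketch of what this answer describes: compressing a byte[] to a byte[] entirely in memory, with no MemoryFile, and with the entry name ("try.txt" here) being an arbitrary label stored inside the archive. It assumes java.io and java.util.zip imports:
public static byte[] zipBytes(byte[] input) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try (ZipOutputStream zos = new ZipOutputStream(baos)) {
        zos.putNextEntry(new ZipEntry("try.txt"));  // entry name is arbitrary
        zos.write(input);
        zos.closeEntry();
    }                                               // close() also finishes the zip
    return baos.toByteArray();                      // a complete, self-contained zip archive
}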
try
{
FileInputStream fis=new FileInputStream(new File("Binary.txt"));
byte[] infoBin=new byte[fis.available()];
fis.read(infoBin);
for (byte b : infoBin)
{
String bin=Integer.toBinaryString(b);
}
}
How to read a file, convert the file contents into binary, and then write the binary to a new file using Java?
After the binary conversion, I don't know how to write the string bin to the new file.
//reading from file
byte[] array = Files.readAllBytes(Paths.get("Binary.txt"));
//saving to a new file (the output name is arbitrary, as long as it isn't the source file)
FileOutputStream fos = new FileOutputStream("BinaryCopy.txt");
fos.write(array);
fos.close();
How to read a file and convert that file contents into binary
They already are binary.
then write the binary to a new file using java
There's no need to waste memory, or assume the file fits into memory, or assume the file size fits into an int. Memorise the following loop for copying between streams in Java:
int count;
byte[] buffer = new byte[8192];
while ((count = in.read(buffer)) > 0)
{
out.write(buffer, 0, count);
}
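Put together for this question, a minimal sketch (the output name NewBinary.txt is just an example):
byte[] buffer = new byte[8192];
int count;
try (FileInputStream in = new FileInputStream("Binary.txt");
     FileOutputStream out = new FileOutputStream("NewBinary.txt")) {  // example output name
    while ((count = in.read(buffer)) > 0) {
        out.write(buffer, 0, count);                                   // bytes copied as-is
    }
}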
This question already has answers here:
Java multiple file transfer over socket
(3 answers)
Closed 5 years ago.
I have the following code which transfers a file via sockets. How do I send the file name?
Socket socket = new Socket("localhost", port);//machine name, port number
File file = new File(fileName);
// Get the size of the file
long length = file.length();
if (length > Integer.MAX_VALUE)
{
System.out.println("File is too large.");
}
byte[] bytes = new byte[(int) length];
FileInputStream fis = new FileInputStream(file);
BufferedInputStream bis = new BufferedInputStream(fis);
BufferedOutputStream out = new BufferedOutputStream(socket.getOutputStream());
int count;
while ((count = bis.read(bytes)) > 0)
{
out.write(bytes, 0, count);
}
out.flush();
out.close();
fis.close();
bis.close();
socket.close();
You can invent your own protocol for your socket. If all you need is a filename and data, DataOutputStream.writeUTF is easiest:
BufferedOutputStream out = new BufferedOutputStream(socket.getOutputStream());
try (DataOutputStream d = new DataOutputStream(out)) {
d.writeUTF(fileName);
Files.copy(file.toPath(), d);
}
The peer must use the same protocol, of course:
BufferedInputStream in = new BufferedInputStream(socket.getInputStream());
try (DataInputStream d = new DataInputStream(in)) {
String fileName = d.readUTF();
Files.copy(d, Paths.get(fileName));
}
Use a character that can never be in a file name - such as a null (0x00, \0, whatever you want to call it). Then send a 64 bit integer that indicates how long, in bytes, the file is (make sure you don't run into buffer overflows, little endian/big endian issues, etc... just test all the edge cases). Then send the file data. The receiving socket will then know which part is the file name, the file length and the file data, and will even be ready for the next file name if you want to send another.
(if file names can be arbitrary characters including control characters, ouch! Maybe send a 64 bit integer length of file name, the file name, a 64 bit integer length of file data, the file data, repeat ad infinitum?)
EDIT: To send a 64 bit integer over a socket, send its constituent bytes in a specific order, and make sure sender and receiver agree on the order. One example of how to do this is How to convert a Java Long to byte[] for Cassandra?
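A short sketch of one way to do that in Java, assuming `out` and `in` are the socket's OutputStream/InputStream (placeholder names); note that DataOutputStream.writeLong/readLong already use big-endian order and do the same thing:
// Sending side: encode the long as 8 big-endian bytes.
long length = file.length();
byte[] lengthBytes = ByteBuffer.allocate(8).putLong(length).array();
out.write(lengthBytes);

// Receiving side: read exactly 8 bytes, then decode them in the same order.
byte[] eight = new byte[8];
new DataInputStream(in).readFully(eight);
long fileLength = ByteBuffer.wrap(eight).getLong();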
I tried to wrap a buffer, which caused a malformed-UTF exception, and putting it in a try-with-resources closed the underlying socket stream and caused a connection reset exception. The following code worked for me:
Client
DataOutputStream d = new DataOutputStream(out);
d.writeUTF(filename);
d.writeLong(length);
Server
DataInputStream d = new DataInputStream(in);
filename = d.readUTF();
fileLength = d.readLong();
I need help with my homework; any help will be much appreciated. I can send small files without a problem, but when I try to send, let's say, a 1GB file, the byte array causes an OutOfMemoryError, so I need a better solution to send a file from the server to the client. How can I improve this code to send big files?
Server Code:
FileInputStream fis = new FileInputStream(file);
byte[] fileByte = new byte[fis.available()]; //This causes the problem.
bytesRead = fis.read(fileByte);
oos = new ObjectOutputStream(sock.getOutputStream());
oos.writeObject(fileByte);
Client Code:
ois = new ObjectInputStream(sock.getInputStream());
byte[] fileBytes = (byte[]) ois.readObject();
fos = new FileOutputStream(file);
fos.write(fileBytes);
Don't read the whole file into memory, use a small buffer and write while you are reading the file:
BufferedOutputStream bos = new BufferedOutputStream(sock.getOutputStream());
File file = new File("asd");
FileInputStream fis = new FileInputStream(file);
BufferedInputStream bis = new BufferedInputStream(fis);
byte[] buffer = new byte[1024*1024*10];
int n = -1;
while((n = bis.read(buffer))!=-1) {
bos.write(buffer, 0, n);
}
Use Buffered* streams to optimize reading from and writing to the underlying streams.
Just split the array to smaller chunks so that you don't need to allocate any big array.
For example, you could split the array into 16KB chunks, e.g. new byte[16384], and send them one by one. On the receiving side you would have to wait until a chunk can be fully read, then store it somewhere and start with the next chunk.
But if you are not able to allocate a whole array of the size you need on server side you won't be able to store all the data that you are going to receive anyway.
You could also compress the data before sending it to save bandwidth (and time), take a look at ZipOutputStream and ZipInputStream.
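A rough sketch of that compression idea, wrapping the socket streams in zip streams (sock comes from the code above; the file path and entry name are placeholders):
// Sending side: compress while writing to the socket.
try (ZipOutputStream zos = new ZipOutputStream(sock.getOutputStream());
     FileInputStream fis = new FileInputStream("bigfile.dat")) {   // placeholder path
    zos.putNextEntry(new ZipEntry("bigfile.dat"));
    byte[] buffer = new byte[8192];
    int n;
    while ((n = fis.read(buffer)) > 0) {
        zos.write(buffer, 0, n);
    }
    zos.closeEntry();
    zos.finish();
}

// Receiving side: decompress while reading from the socket.
try (ZipInputStream zis = new ZipInputStream(sock.getInputStream());
     FileOutputStream fos = new FileOutputStream("bigfile.dat")) {
    zis.getNextEntry();                                            // position at the entry's data
    byte[] buffer = new byte[8192];
    int n;
    while ((n = zis.read(buffer)) > 0) {
        fos.write(buffer, 0, n);
    }
}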
Here's how I solved it:
Client Code:
bis = new BufferedInputStream(sock.getInputStream());
fos = new FileOutputStream(file);
int n;
byte[] buffer = new byte[8192];
while ((n = bis.read(buffer)) > 0) {
    fos.write(buffer, 0, n);
}
Server Code:
bos = new BufferedOutputStream(sock.getOutputStream());
FileInputStream fis = new FileInputStream(file);
BufferedInputStream bis = new BufferedInputStream(fis);
int n;
byte[] buffer = new byte[8192];
while ((n = bis.read(buffer)) > -1) {
    bos.write(buffer, 0, n);
}
Depending on whether or not you have to write the code yourself, there are existing libraries which solve this problem, e.g. rmiio. If you are not using RMI, just plain java serialization, you can use the DirectRemoteInputStream, which is kind of like a Serializable InputStream. (this library also has support for things like auto-magically compressing the data).
Actually, if you are only sending file data, you would be better off ditching the Object streams and using DataInput/DataOutput streams. First write an integer indicating the file length, then copy the bytes directly to the stream. On the receiving side, read the integer file length, then read exactly that many bytes.
When you copy the data between streams, use a small, fixed-size byte[] to move chunks of data between the input and output streams in a loop. There are numerous examples of how to do this correctly available online (e.g. #ErikFWinter's answer).
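A minimal sketch of that length-prefixed protocol; dataOut/dataIn are assumed to be DataOutputStream/DataInputStream wrapped around the socket streams, and path and "received.dat" are placeholders:
// Sender: the length first, then the raw bytes (Files.copy streams the file without loading it all).
dataOut.writeLong(Files.size(path));
Files.copy(path, dataOut);
dataOut.flush();

// Receiver: read the length, then read exactly that many bytes in fixed-size chunks.
long remaining = dataIn.readLong();
byte[] buffer = new byte[8192];
try (FileOutputStream fos = new FileOutputStream("received.dat")) {
    while (remaining > 0) {
        int chunk = (int) Math.min(buffer.length, remaining);
        dataIn.readFully(buffer, 0, chunk);   // blocks until exactly chunk bytes have arrived
        fos.write(buffer, 0, chunk);
        remaining -= chunk;
    }
}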
I'm creating a TFTP server. I've got it transferring files fine, but most of the files won't open when they arrive at the other end. This is because the output of the ArrayList I'm using to store file bytes from every packet received adds a load of bytes to the start of the file, e.g. "¬í sr java.util.ArrayListxÒ™Ça I sizexp w ur [B¬óøTà xp ü!". The reason for using the List in the first place is that the server I'm creating has no way to tell the size of the file being received. Therefore, as far as I can tell, I can't use a byte[], as this needs to be initialised with a set length. Is there any way round this?
WRQ WRQ = new WRQ();
ACK ACK = new ACK();
DatagramPacket outPacket;
byte[] bytes;
byte[] fileOut;
List fileBytes = new ArrayList();
outPacket = WRQ.firstPacket(packet);
socket.send(outPacket);
socket.receive(packet);
while (packet.getLength() == 516){
bytes = WRQ.doWRQ(packet);
fileBytes.add(bytes);
outPacket = ACK.doACK(packet);
socket.send(outPacket);
socket.receive(packet);
}
bytes = WRQ.doWRQ(packet);
fileBytes.add(bytes);
outPacket = ACK.doACK(packet);
socket.send(outPacket);
ObjectOutputStream os;
ByteArrayOutputStream byteStream = new ByteArrayOutputStream(5000);
os = new ObjectOutputStream(new BufferedOutputStream(byteStream));
os.flush();
os.writeObject(fileBytes);
os.flush();
byte[] outFile = byteStream.toByteArray();
os.close();
FileOutputStream foStream = new FileOutputStream(filename);
foStream.write(outFile);
foStream.close();
You store byte arrays in an ArrayList, and then you write the whole ArrayList to a ByteArrayOutputStream wrapped in an ObjectOutputStream, using the writeObject() method.
This uses the native Object serialization mechanism to save the ArrayList object. It doesn't write every byte array in the list one after the other. To make it clear: it writes the class name, and the internal structure of the ArrayList, using the object serialization protocol.
You don't need an ArrayList. Write directly to a ByteArrayOutputStream, or even directly to a FileOutputStream. As it is, you're trying to:
write bytes to a list,
write the bytes in the list to a byte array,
write the byte array to a file.
It would be much more straightforward (and efficient) to write directly to the output file, wrapped in a BufferedOutputStream for buffering.
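A sketch of the receive loop rewritten that way, reusing the WRQ/ACK helpers, socket, packet, and filename from the question (their exact behaviour is assumed; IOException handling omitted):
try (BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(filename))) {
    socket.send(WRQ.firstPacket(packet));
    socket.receive(packet);
    while (packet.getLength() == 516) {        // a full data packet means more blocks follow
        out.write(WRQ.doWRQ(packet));          // write this block straight to the file
        socket.send(ACK.doACK(packet));
        socket.receive(packet);
    }
    out.write(WRQ.doWRQ(packet));              // last (short) data packet
    socket.send(ACK.doACK(packet));
}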