Incomplete File Copy Java NIO

I'm having a problem reading my file. I'm quite new to NIO, too. The actual size of the file I want to send to the server is almost 900 MB, but only 3 MB is received.
The server-side code for reading:
private void read(SelectionKey key) throws IOException {
    SocketChannel socket = (SocketChannel) key.channel();
    RandomAccessFile aFile = null;
    ByteBuffer buffer = ByteBuffer.allocate(300000000);
    try {
        aFile = new RandomAccessFile("D:/test2/test.rar", "rw");
        FileChannel inChannel = aFile.getChannel();
        while (socket.read(buffer) > 0) {
            buffer.flip();
            inChannel.write(buffer);
            buffer.compact();
        }
        System.out.println("End of file reached..");
    } catch (Exception e) {
        e.printStackTrace();
    }
}
This is my code for the write method of the client side:
private void write(SelectionKey key) throws IOException {
    SocketChannel socket = (SocketChannel) key.channel();
    RandomAccessFile aFile = null;
    try {
        File f = new File("D:/test.rar");
        aFile = new RandomAccessFile(f, "r");
        ByteBuffer buffer = ByteBuffer.allocate(300000000);
        FileChannel inChannel = aFile.getChannel();
        while (inChannel.read(buffer) > 0) {
            buffer.flip();
            socket.write(buffer);
            buffer.compact();
        }
        aFile.close();
        inChannel.close();
        key.interestOps(SelectionKey.OP_READ);
    } catch (Exception e) {
        e.printStackTrace();
    }
}

You're opening a new file every time the socket channel becomes readable. Every time a TCP segment arrives, you recreate the target file and therefore throw away whatever was received before.
The simple fix would be to open the file for append on every OP_READ, but it would remain ridiculously inefficient. You should open the target file as soon as you know what it is, and close it when you read end of stream from the sender, or when you've read the entire contents if that isn't signalled by end of stream. You haven't disclosed your application protocol, so I can't be more specific.
read() returns zero when there is no data available to be read without blocking. You're treating that as an end of file. It isn't.
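A minimal sketch of that idea, assuming the destination file is known when the connection is accepted and that its FileChannel has been opened once and attached to the SelectionKey (both assumptions, since the protocol isn't shown):

private void read(SelectionKey key) throws IOException {
    SocketChannel socket = (SocketChannel) key.channel();
    // Opened once, e.g. right after accept(), and stashed on the key; not reopened per OP_READ.
    FileChannel outChannel = (FileChannel) key.attachment();
    ByteBuffer buffer = ByteBuffer.allocate(8192);

    int count;
    while ((count = socket.read(buffer)) > 0) {
        buffer.flip();
        while (buffer.hasRemaining()) {
            outChannel.write(buffer);   // drain everything that was just read into the file
        }
        buffer.clear();
    }
    if (count == -1) {
        // End of stream: the sender has finished, so this is the place to close the file.
        outChannel.close();
        socket.close();
        System.out.println("End of file reached..");
    }
}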
The canonical way to write between channels is as follows:
while (in.read(buffer) > 0 || buffer.position() > 0)
{
    buffer.flip();
    out.write(buffer);
    buffer.compact();
}
However if the target is a non-blocking socket channel this gets considerably more complex: you have to manipulate whether you're selected for OP_WRITE or not depending on whether or not the last write() returned zero. You will find this explained in a large number of posts here, many by me.
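A rough sketch of that bookkeeping, assuming the unwritten data is kept in a ByteBuffer attached to the key (the attachment is an assumption, not something from the question's code):

private void writePending(SelectionKey key) throws IOException {
    SocketChannel channel = (SocketChannel) key.channel();
    ByteBuffer pending = (ByteBuffer) key.attachment();   // data still waiting to be sent

    channel.write(pending);   // may write fewer bytes than remain, or none at all

    if (pending.hasRemaining()) {
        // The socket's send buffer is full: ask the selector to tell us when the channel is writable again.
        key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
    } else {
        // Everything went out: stop selecting for OP_WRITE, or the selector will spin,
        // because a socket with free send-buffer space is almost always writable.
        key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
    }
}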
I have never seen any cogent reason for non-blocking I/O on the client side, unless it connects to multiple servers (a web crawler, for example). I would use blocking mode or java.net.Socket at the client, which obviates the write complexity referred to above.
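For comparison, a blocking client can simply stream the file with ordinary java.io streams, with no selector and no OP_WRITE juggling. A sketch (host, port and path are placeholders):

try (Socket socket = new Socket("localhost", 9000);
     FileInputStream fileIn = new FileInputStream("D:/test.rar");
     OutputStream out = socket.getOutputStream()) {
    byte[] buffer = new byte[8192];
    int count;
    while ((count = fileIn.read(buffer)) != -1) {
        out.write(buffer, 0, count);   // blocks until the bytes have been handed to the OS
    }
    out.flush();
}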
NB you don't need to close both the RandomAccessFile and the FileChannel that was derived from it. Either will do.

Related

Reading multiple files in loop from FTP server using Apache Commons Net FTPClient

I have a list of files that need to be read from an FTP server.
I have a method readFile(String path, FTPClient client) which reads and prints the file.
public byte[] readFile(String path, FTPClient client) {
    InputStream inStream = null;
    ByteArrayOutputStream os = null;
    byte[] finalBytes = new byte[0];
    int reply;
    int len;
    byte[] buffer = new byte[1024];
    try {
        os = new ByteArrayOutputStream();
        inStream = client.retrieveFileStream(path);
        reply = client.getReplyCode();
        log.warn("In getFTPfilebytes() :: Reply code -" + reply);
        while ((len = inStream.read(buffer)) != -1) {
            // write bytes from the buffer into output stream
            os.write(buffer, 0, len);
        }
        finalBytes = os.toByteArray();
        if (inStream == null) {
            throw new Exception("File not found");
        }
        inStream.close();
    } catch (Exception e) {
    } finally {
        try { inStream.close(); } catch (Exception e) {}
    }
    return finalBytes;
}
I am calling the above method in a loop over a list of file paths.
Issue: in the loop, only the first file is read properly. After that, the file is not read and an exception is thrown. inStream is null for the second iteration/second file. Also, while processing the first file, the reply code after retrieveFileStream is 125 (Data connection already open; transfer starting).
In the second iteration it gives 200 (The requested action has been successfully completed).
I am not able to understand what is wrong here.
Am I not closing the input stream connection properly?
You have to call FTPClient.completePendingCommand and close the input stream, as the documentation for FTPClient.retrieveFileStream says:
Returns an InputStream from which a named file from the server can be read. If the current file type is ASCII, the returned InputStream will convert line separators in the file to the local representation. You must close the InputStream when you finish reading from it. The InputStream itself will take care of closing the parent data connection socket upon being closed.
To finalize the file transfer you must call completePendingCommand and check its return value to verify success. If this is not done, subsequent commands may behave unexpectedly.
inStream = client.retrieveFileStream(path);
try {
    while ((len = inStream.read(buffer)) != -1) {
        // write bytes from the buffer into output stream
        os.write(buffer, 0, len);
    }
    finalBytes = os.toByteArray();
} finally {
    inStream.close();
    if (!client.completePendingCommand()) {
        // error
    }
}
By the way, there are better ways of copying from an InputStream to an OutputStream:
Easy way to write contents of a Java InputStream to an OutputStream
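For instance, on Java 9 and later, InputStream.transferTo does the copy loop for you; a sketch using the variables from the question:

inStream = client.retrieveFileStream(path);
try {
    inStream.transferTo(os);   // Java 9+: copies everything from the data connection into os
    finalBytes = os.toByteArray();
} finally {
    inStream.close();
    if (!client.completePendingCommand()) {
        // error
    }
}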
According to the documentation of the FTPClient.retrieveFileStream() method,
You must close the InputStream when you finish reading from it. The InputStream itself will take care of closing the parent data connection socket upon being closed.
When you close the stream, your client connection will be closed too. So instead of using the same client over and over, you need to create a new client connection for each file.
I don't see the output stream being closed properly.
Is finalBytes 0 bytes?
Where did you define the buffer variable?
Please log the path so that we can see whether it is correct. I guess the stream that is not closed properly is causing the issue.

while downloading a file from a socket method download doesn't finish operating

I am writing a program that moves files through a socket using the following methods, but when I download a file from the socket, the download method in the client program never finishes and never exits its loop, even though the upload method on the server side has finished.
Here is my download method in the client:
public synchronized void downloadFile(String url) throws IOException {
    try (FileOutputStream fileOutputStream = new FileOutputStream(url)) {
        int countedBytes;
        byte[] buffer = new byte[kbBlockSize];
        while ((countedBytes = input.read(buffer)) > 0)
            fileOutputStream.write(buffer, 0, countedBytes); //end while
        fileOutputStream.flush();
    } //end try with resources block
} //end method downloadFile
And here is the upload method on the server side:
public synchronized void uploadFile(String url) throws IOException {
    try (FileInputStream fileInputStream = new FileInputStream(url)) {
        int countedBytes;
        byte[] buffer = new byte[kbBlockSize];
        while ((countedBytes = fileInputStream.read(buffer)) > 0)
            output.write(buffer, 0, countedBytes); //end while
        output.flush();
    } //end try with resources block
} //end method uploadFile
But even after the upload method has finished working, the download method doesn't.
I'd be thankful if anyone could help.
input.read(buffer) will always return more than 0 unless the input stream is closed.
It will block until it has at least one byte or reaches EOF.
So either the server must close the stream, or it must send the download size separately, like some kind of header.
For example, in HTTP one can set Content-Length or use chunked encoding to signal the data size.
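A sketch of that header idea for this pair of methods, assuming the existing streams are wrapped in a DataOutputStream on the server and a DataInputStream in the client (dataOut and dataIn are illustrative names, not variables from the question):

// Server side: announce the size, then send the bytes.
File file = new File(url);
dataOut.writeLong(file.length());
try (FileInputStream fileInputStream = new FileInputStream(file)) {
    byte[] buffer = new byte[kbBlockSize];
    int countedBytes;
    while ((countedBytes = fileInputStream.read(buffer)) > 0)
        dataOut.write(buffer, 0, countedBytes);
}
dataOut.flush();

// Client side: read exactly as many bytes as were announced, then stop.
long remaining = dataIn.readLong();
try (FileOutputStream fileOutputStream = new FileOutputStream(url)) {
    byte[] buffer = new byte[kbBlockSize];
    while (remaining > 0) {
        int countedBytes = dataIn.read(buffer, 0, (int) Math.min(buffer.length, remaining));
        if (countedBytes == -1)
            throw new EOFException("Connection closed before the whole file arrived");
        fileOutputStream.write(buffer, 0, countedBytes);
        remaining -= countedBytes;
    }
}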
Why do you want to create a new protocol instead of using something like HTTP or FTP?

How to read a file from server multiple times with objectinputstream in Java

ObjectOutputStream oos = new ObjectOutputStream(c.getOutputStream());
ObjectInputStream ois = new ObjectInputStream(c.getInputStream());
File file = new File("lol.txt");
if (!file.exists()) {
    file.createNewFile();
}
byte[] textBytes;
while ((textBytes = (byte[]) ois.readObject()) != null) {
    Files.write(file.toPath(), textBytes);
}
//do stuff...
byte[] textBytes;
while ((textBytes = (byte[]) ois.readObject()) != null) {
    Files.write(file.toPath(), textBytes);
}
How can I read a file on a server multiple times? Should this code work? Will it not get stuck in the first loop check?
The server is writing it to the client like this.
byte[] fileBytes = Files.readAllBytes(fp.toPath());
oos.writeObject(fileBytes);
oos.flush();
A repeatable read on a Socket's InputStream is not possible, because it is a buffer based, blocking implementation, using a descriptor to indicate where the current position in the buffer is.
When you read from the InputStream, the descriptor moves by the number of bytes you have read. It's not possible to rewind the descriptor to the previous position, because the read bytes may already have been overwritten by newly received bytes.
Your server-client communication must work this way
(I use the following abbreviations: S -> the server, C -> the client):
C: Request file from S
S: Send file to C
C: Request file from S
S: Send file to C
(and so on)
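In code, one way to express that with the streams from the question is to send an explicit request before every read, instead of looping on readObject() (the "GET" request string is an illustrative convention, not part of the original code):

// Client side: one request, one response, each time the file is wanted.
oos.writeObject("GET lol.txt");
oos.flush();
byte[] textBytes = (byte[]) ois.readObject();
Files.write(file.toPath(), textBytes);

//do stuff...

// Ask again instead of trying to re-read the same stream.
oos.writeObject("GET lol.txt");
oos.flush();
textBytes = (byte[]) ois.readObject();
Files.write(file.toPath(), textBytes);

On the server, each request would then be answered with the same readAllBytes/writeObject code shown in the question.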

Jar File is corrupt after sending over socket

So I am building a program which needs an auto-updating feature built into it. As I finished up and tested it out, it seems that when I send the jar file over the socket and write it to the newly made jar file, it is missing 5 KB (every time, even when the size changes) and becomes corrupt.
Here is my code:
package server.update;

import java.io.*;
import java.net.Socket;

public class UpdateThread extends Thread
{
    BufferedInputStream input; //not used
    BufferedInputStream fileInput;
    BufferedOutputStream output;

    public UpdateThread(Socket client) throws IOException
    {
        super("UpdateThread");
        output = new BufferedOutputStream(client.getOutputStream());
        input = new BufferedInputStream(client.getInputStream());
    }

    public void run()
    {
        try
        {
            File perm = new File(System.getProperty("user.dir") + "/GameClient.jar");
            //fileInput = new BufferedInputStream(new FileInputStream(perm));
            fileInput = new BufferedInputStream(new FileInputStream(perm));
            byte[] buffer = new byte[1024];
            int numRead;
            while ((numRead = fileInput.read(buffer)) != -1)
                output.write(buffer, 0, numRead);
            fileInput.close();
            input.close();
            output.close();
            this.interrupt();
        }
        catch (Exception e)
        { e.printStackTrace(); }
    }
}
This is the class that waits for a connection from the client and then pushes the update to it as soon as it connects. The File perm is the jar file that I want to send over, and for whatever reason it seems to either miss the last 5 bytes or the client doesn't read the last 5 (I don't know which). Here is the client's code for receiving the information:
public void getUpdate(String ip) throws UnknownHostException, IOException
{
    System.out.println("Connecting to update socket");
    update = new Socket(ip, 10004);
    BufferedInputStream is = new BufferedInputStream(update.getInputStream());
    BufferedOutputStream os = new BufferedOutputStream(update.getOutputStream());
    System.out.println("Cleaning GameClient.jar file");
    File updated = new File(System.getProperty("user.dir") + "/GameClient.jar");
    if (updated.exists())
        updated.delete();
    updated.createNewFile();
    BufferedOutputStream osf = new BufferedOutputStream(new FileOutputStream(updated));
    System.out.println("Writing to GameClient.jar");
    byte[] buffer = new byte[1024];
    int numRead = 0;
    while ((numRead = is.read(buffer)) != -1)
        osf.write(buffer, 0, numRead);
    System.out.println("Finished updating...");
    is.close();
    os.close();
    update.close();
    osf.close();
}
Any help is appreciated. Thanks!
You have too many closes. Remove update.close() and is.close(). These both close the socket, which prevents the buffered stream 'osf' from being auto-flushed when closed. Closing either the input stream or the output stream or a socket closes the other stream and the socket. You should therefore only close the outermost output stream you have wrapped around the socket, in this case osf, and maybe the socket itself in a finally block to be sure.
Thanks to MDR for the answer, it worked!!
I had to change the following lines of code in the UpdateThread class:
Before:
fileInput.close();
input.close();
output.close();
this.interrupt();
After:
fileInput.close();
output.flush();
output.close();
input.close();
this.interrupt();
You must flush the stream before closing. I also switched the order, because if you close the input stream attached to the socket it will close the socket, and the code would then never get to flushing and closing the output stream.
Thanks again!
Have you considered using an http library to delegate all of the connection handling and reading/writing to known working code? You're reinventing a lot of wheels here. Additionally at some point you're going to want to ensure the content you're receiving is authentic and undamaged (you're doing that by loading the class, which is somewhat dangerous, especially when you're exchanging data in cleartext!) Again, using a library and its methods would allow you to choose HTTPS, allowing TLS to do much of your work.
I'd also suggest that your server tell the client some metadata in advance, regardless- perhaps the content length and possibly a hash or checksum so the client can detect failures in the transfer.
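For example, a SHA-256 check on the client side needs only standard-library classes; a sketch (how the expected value reaches the client is up to your protocol, so verifyDownload and its expected parameter are illustrative names, not part of the original code):

// Verify the downloaded jar against a checksum the server announced beforehand.
static void verifyDownload(File jar, byte[] expected) throws IOException, NoSuchAlgorithmException {
    byte[] actual = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(jar.toPath()));
    if (!MessageDigest.isEqual(expected, actual)) {
        throw new IOException(jar + " is damaged or incomplete");
    }
}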
This question seems to have answers relevant to your situation as well. Good luck!

How can I make sure I received whole file through socket stream?

OK, so I'm making a Java program that has a server and a client, and I'm sending a zip file from the server to the client. I've almost got sending the file down, but on receiving I've found some inconsistency: my code isn't always getting the full archive. I'm guessing it terminates before the BufferedReader has the full thing. Here's the code for the client:
public void run(String[] args) {
    try {
        clientSocket = new Socket("jacob-custom-pc", 4444);
        out = new PrintWriter(clientSocket.getOutputStream(), true);
        in = new BufferedInputStream(clientSocket.getInputStream());
        BufferedReader inRead = new BufferedReader(new InputStreamReader(in));
        int size = 0;
        while (true) {
            if (in.available() > 0) {
                byte[] array = new byte[in.available()];
                in.read(array);
                System.out.println(array.length);
                System.out.println("recieved file!");
                FileOutputStream fileOut = new FileOutputStream("out.zip");
                fileOut.write(array);
                fileOut.close();
                break;
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
        System.exit(-1);
    }
}
So how can I be sure the full archive is there before it writes the file?
On the sending side, write the file size before you start writing the file. On the reading side, read the file size so you know how many bytes to expect. Then call read until you have received everything you expect. With network sockets it may take more than one call to read to get everything that was sent. This is especially true as your data gets larger.
HTTP sends a Content-Length: x header (in bytes) followed by a newline. This is elegant; it might throw a TimeoutException if the connection is broken.
You are using a TCP socket. The ZIP file is probably larger than the network MTU, so it will be split up into multiple packets and reassembled at the other side. Still, something like this might happen:
client connects
server starts sending. The ZIP file is bigger than the MTU and therefore split up into multiple packets.
client busy-waits in the while (true) until it gets the first packets.
client notices that data has arrived (in.available() > 0)
client reads all available data, writes it to the file and exits
the last packets arrive
So as you can see: Unless the client machine is crazily slow and the network is crazily fast and has a huge MTU, your code simply won't receive the entire file by design. That's how you built it.
A different approach: Prefix the data with the length.
Socket clientSocket = new Socket("jacob-custom-pc", 4444);
DataInputStream dataReader = new DataInputStream(clientSocket.getInputStream());
FileOutputStream out = new FileOutputStream("out.zip");

long size = dataReader.readLong();
long chunks = size / 1024;
int lastChunk = (int) (size - (chunks * 1024));
byte[] buf = new byte[1024];

for (long i = 0; i < chunks; i++) {
    dataReader.readFully(buf);       // readFully blocks until the whole 1024-byte chunk has arrived
    out.write(buf);
}
dataReader.readFully(buf, 0, lastChunk);
out.write(buf, 0, lastChunk);
How can I make sure I received whole file through socket stream?
By fixing your code. You are using InputStream.available() as a test for end of stream. That's not what it's for. Change your copy loop to this, which is also a whole lot simpler:
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
Use with any buffer size greater than zero, typically 8192.
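Applied to the client in the question, that loop would look roughly like this (a sketch; it still relies on the server closing its side of the connection, or on a length prefix as described above, to know when the file is complete):

try (InputStream in = new BufferedInputStream(clientSocket.getInputStream());
     FileOutputStream out = new FileOutputStream("out.zip")) {
    byte[] buffer = new byte[8192];
    int count;
    while ((count = in.read(buffer)) > 0) {
        out.write(buffer, 0, count);   // copy until end of stream
    }
}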
in.available() just tells you whether there is data that can be consumed by in.read() without blocking (waiting) at the moment; it does not signal the end of the stream. Data may arrive at your PC at any time, with the next TCP/IP packet. Normally you never use in.available(); in.read() suffices for reading the stream entirely. The pattern for reading an input stream is:
byte[] buf = new byte[8192];
int size;
while ((size = in.read(buf)) != -1)
    process(buf, size);   // handle the bytes that were just read
// end of stream has been reached
This way you will read the stream entirely, until its end.
Update: if you want to read multiple files, then chunk your stream into "packets" and prefix every one with an integer size. You then read until size bytes have been received, instead of until in.read() returns -1.
Update 2: in any case, never use in.available() for demarcating chunks of data. If you do, you imply that there is a time delay between incoming data pieces, which you can rely on only in real-time systems. Windows, Java, and TCP/IP are all layers incompatible with real-time guarantees.
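The "read until size bytes have been received" part can be a small helper like this (a sketch; DataInputStream.readFully does the same job):

// Read exactly 'size' bytes, looping because a single read() may return fewer bytes than requested.
static byte[] readChunk(InputStream in, int size) throws IOException {
    byte[] chunk = new byte[size];
    int off = 0;
    while (off < size) {
        int count = in.read(chunk, off, size - off);
        if (count == -1)
            throw new EOFException("Stream ended before the whole chunk arrived");
        off += count;
    }
    return chunk;
}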
