I would like to send an image file from a Java server to an Android app using this code.
Server (Java):
File file = new File("./clique.jpg");
FileInputStream stream = new FileInputStream(file);
DataOutputStream writer = new DataOutputStream(socket.getOutputStream());
byte[] contextB = new byte[4096];
int n;
int i = 0;
while ((n = stream.read(contextB)) != -1) {
    writer.write(contextB, 0, n);
    writer.flush();
    System.out.println(n);
    i += n;
}
writer.flush();
stream.close();
Android app:
DataInputStream reader = new DataInputStream(socket.getInputStream());
byte[] buffer = new byte[4096];
ByteArrayOutputStream content = new ByteArrayOutputStream();
int n;
int i = 0;
reader = new DataInputStream(socket.getInputStream());
while ((n = reader.read(buffer)) != null) {
    content.write(buffer, 0, n);
    content.flush();
}
Utility.CreateImageFile(content.toByteArray());
What I noticed is that in the Android app, n (the number of bytes read) is not 4096, even though the server sends blocks of 4096 bytes. Also, I never get n = -1 (end of stream); the read blocks until I close the app, and only then do I get n = -1.
Regarding the number of bytes you read at a time: it has nothing to do with the number of bytes you write. It depends very much on network conditions and will vary with every chunk; basically, you get however many bytes happened to arrive in the short period between your reads.
Regarding the end of stream: in your server code you forgot to close the output stream (you only close stream, which is the input stream). You should also close the writer, which in turn closes the underlying output stream.
Two comments:
1) I would really recommend wrapping the raw socket streams in buffered streams (BufferedInputStream/BufferedOutputStream); the code gets cleaner and you do not have to create and manage buffers yourself.
2) Use try {} finally {} and close your streams in the finally clause. This is the best practice that ensures the streams are closed and resources freed even if something goes wrong while reading or writing; a sketch follows below.
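Something like this (untested sketch of both points together; socket and the file path are taken from the question):
// Sketch only: send one file over an already-connected socket, using a
// buffered stream and closing everything in a finally block.
FileInputStream fileIn = null;
BufferedOutputStream netOut = null;
try {
    fileIn = new FileInputStream(new File("./clique.jpg"));
    netOut = new BufferedOutputStream(socket.getOutputStream());
    byte[] chunk = new byte[4096];
    int n;
    while ((n = fileIn.read(chunk)) != -1) {
        netOut.write(chunk, 0, n);
    }
} finally {
    if (netOut != null) netOut.close(); // flushes and closes the socket's output stream
    if (fileIn != null) fileIn.close();
}
Closing netOut is what finally lets the client's read() return -1.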
You have a problem in your Android code:
while ((n = reader.read(buffer)) != null) {
n is a primitive int and can never be null; compare against -1 instead, since read() returns -1 at end of stream.
Use writer.close() instead of writer.flush() after the loop on the server.
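For reference, a sketch of the corrected client loop, keeping the question's Utility.CreateImageFile helper:
// Sketch only: read until the server closes its side of the connection.
DataInputStream reader = new DataInputStream(socket.getInputStream());
ByteArrayOutputStream content = new ByteArrayOutputStream();
byte[] buffer = new byte[4096];
int n;
while ((n = reader.read(buffer)) != -1) {
    content.write(buffer, 0, n);
}
Utility.CreateImageFile(content.toByteArray());
The loop still only terminates once the server closes (or shuts down) its output stream, which is exactly the close that was missing above.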
I am working on a client/server transfer protocol in Java. The client is sending a simple text file, and everything goes across the wire fine in Wireshark, but once it gets to the server side, the first two letters are missing from the text file. I believe it is overwriting the first buffer for some reason.
My goal is to make a while loop that reads bytes into the buffer and then increments a count so that the next set of bytes is placed after the ones already written.
Here is the server code that I currently have:
int bytesRead;
int current = 0;
InputStream in = s.getInputStream();
// Instantiating a new output stream object
OutputStream output = new FileOutputStream(myFile);
PrintStream stream = new PrintStream(output);
// Receive file 1024 bytes at a time
byte[] buffer = new byte[1024];
while ((bytesRead = in.read(buffer)) != -1) {
    output.write(buffer, 0, bytesRead);
    System.out.println(output.toString());
}
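As a sketch of the offset-tracking loop described above (not a fix for the missing bytes), assuming the total size expectedSize is known in advance, for example because the client sends it first; in and output are the streams from the code above:
// Sketch only: fill one buffer at increasing offsets; expectedSize is hypothetical.
byte[] fileBytes = new byte[expectedSize];
int current = 0;
while (current < expectedSize) {
    int bytesRead = in.read(fileBytes, current, expectedSize - current);
    if (bytesRead == -1) {
        break; // connection closed before the expected number of bytes arrived
    }
    current += bytesRead;
}
output.write(fileBytes, 0, current);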
I have a problem sending a large string through a socket from the server to an Android client. The string is about 10 MB.
The code for writing data to the socket is this:
int socketTimeout = 200;
socket = new Socket(client.getHost(), client.getPort());
socket.setSoTimeout(socketTimeout);
OutputStreamWriter oos=new OutputStreamWriter(socket.getOutputStream());
String d = data.getData().toString() + "\n";
oos.write(d);
oos.flush();
The code for reading data from the socket is this:
Socket s = params[0];
InputStream is = null;
try {
    is = s.getInputStream();
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    int nRead;
    byte[] data = new byte[32768];
    while ((nRead = is.read(data, 0, data.length)) != -1) {
        baos.write(data, 0, nRead);
    }
    return new String(baos.toByteArray());
}
The problem occurs at the line where I read from the input stream: I get an OutOfMemoryError. I tried different ways of reading a string from a stream: BufferedInputStream, InputStreamReader, IOUtils, StringBuilder, BufferedReader, etc., and all of them give an OutOfMemory error when the string is large. I tested with smaller data, around 100 KB, and it works perfectly.
The exception I get on the server side is "Connection reset by peer: socket write error."
You can read byte by byte (or, better, chunk by chunk) in the client and write to a file as you go; that way you are not holding the whole string in memory.
Then, of course, read that file by tokens or lines, not the whole string at once.
By storing it in a ByteArrayOutputStream (or similar), you are coming up against the maximum heap size for the JVM in Android. This is a different size depending on the device. See: Android heap size on different phones/devices and OS versions
As has already been suggested, you should consider using a file stream to write the received data to disk.
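A minimal sketch of that suggestion on the Android side, streaming the socket straight to a file; the file name received.txt is made up for illustration, and getCacheDir() assumes this runs inside a Context such as an Activity:
// Sketch only: copy the socket stream to disk so the 10 MB string never
// has to fit into a single in-memory byte array.
InputStream is = s.getInputStream();
File target = new File(getCacheDir(), "received.txt"); // hypothetical target file
FileOutputStream fout = new FileOutputStream(target);
try {
    byte[] data = new byte[8192];
    int nRead;
    while ((nRead = is.read(data)) != -1) {
        fout.write(data, 0, nRead);
    }
} finally {
    fout.close();
}
Afterwards the file can be read line by line or token by token, as suggested above.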
OK, so I'm making a Java program with a server and a client, and I'm sending a ZIP file from the server to the client. Sending is almost working, but on the receiving side I've found some inconsistency: my code isn't always getting the full archive. I'm guessing it terminates before the BufferedReader has the full thing. Here's the code for the client:
public void run(String[] args) {
    try {
        clientSocket = new Socket("jacob-custom-pc", 4444);
        out = new PrintWriter(clientSocket.getOutputStream(), true);
        in = new BufferedInputStream(clientSocket.getInputStream());
        BufferedReader inRead = new BufferedReader(new InputStreamReader(in));
        int size = 0;
        while (true) {
            if (in.available() > 0) {
                byte[] array = new byte[in.available()];
                in.read(array);
                System.out.println(array.length);
                System.out.println("received file!");
                FileOutputStream fileOut = new FileOutputStream("out.zip");
                fileOut.write(array);
                fileOut.close();
                break;
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
        System.exit(-1);
    }
}
So how can I be sure the full archive is there before it writes the file?
On the sending side, write the file size before you start writing the file. On the reading side, read the file size so you know how many bytes to expect, then call read until you have received everything. With network sockets it may take more than one call to read to get everything that was sent; this is especially true as your data gets larger.
HTTP does the same thing with its Content-Length: x header, a byte count followed by a newline. It is an elegant approach, and it lets you raise a timeout exception if the connection breaks before all the bytes arrive.
You are using a TCP socket. The ZIP file is probably larger than the network MTU, so it will be split up into multiple packets and reassembled at the other side. Still, something like this might happen:
client connects
server starts sending. The ZIP file is bigger than the MTU and therefore split up into multiple packets.
client busy-waits in the while (true) until it gets the first packets.
client notices that data has arrived (in.available() > 0)
client reads all available data, writes it to the file and exits
the last packets arrive
So as you can see: Unless the client machine is crazily slow and the network is crazily fast and has a huge MTU, your code simply won't receive the entire file by design. That's how you built it.
A different approach: Prefix the data with the length.
Socket clientSocket = new Socket("jacob-custom-pc", 4444);
DataInputStream dataReader = new DataInputStream(clientSocket.getInputStream());
FileOutputStream out = new FileOutputStream("out.zip");
long size = dataReader.readLong();
long chunks = size / 1024;
int lastChunk = (int) (size - (chunks * 1024));
byte[] buf = new byte[1024];
for (long i = 0; i < chunks; i++) {
    dataReader.readFully(buf);   // readFully: a plain read() might return fewer than 1024 bytes
    out.write(buf);
}
dataReader.readFully(buf, 0, lastChunk);
out.write(buf, 0, lastChunk);
And the server uses DataOutputStream to send the size of the file before the actual file. I didn't test this, but it should work.
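A matching sketch of the sending (server) side; socket here is assumed to be the Socket returned by ServerSocket.accept(), and the file name is made up:
// Sketch only: write the length first, then the raw bytes.
DataOutputStream dataWriter = new DataOutputStream(socket.getOutputStream());
File zipFile = new File("archive.zip");          // hypothetical file
dataWriter.writeLong(zipFile.length());          // the size the receiver will readLong()
FileInputStream fileIn = new FileInputStream(zipFile);
byte[] buf = new byte[1024];
int n;
while ((n = fileIn.read(buf)) != -1) {
    dataWriter.write(buf, 0, n);
}
dataWriter.flush();
fileIn.close();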
How can I make sure I received the whole file through the socket stream?
By fixing your code. You are using InputStream.available() as a test for end of stream. That's not what it's for. Change your copy loop to this, which is also a whole lot simpler:
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
Use with any buffer size greater than zero, typically 8192.
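As a self-contained sketch, that loop packaged as a small helper method:
// Sketch only: copy everything from in to out until end of stream.
static void copy(InputStream in, OutputStream out) throws IOException {
    byte[] buffer = new byte[8192]; // any size greater than zero works
    int count;
    while ((count = in.read(buffer)) > 0) {
        out.write(buffer, 0, count);
    }
}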
in.available() just tells you how much data can be consumed by in.read() right now without blocking (waiting); it does not signal the end of the stream. More data may arrive at your PC at any time, with the next TCP/IP packet. Normally you never use in.available(); in.read() is all you need to read the stream in its entirety. The pattern for reading an input stream is:
byte[] buf = new byte[8192];
int size;
while ((size = in.read(buf)) != -1)
    process(buf, size);
// end of stream has been reached
This way you will read the stream entirely, until its end.
Update: if you want to read multiple files, chunk your stream into "packets" and prefix each one with an integer size. You then read until size bytes have been received, instead of reading until in.read() returns -1.
Update 2: in any case, never use in.available() to demarcate the chunks of data. Doing so assumes there is a reliable time gap between incoming pieces of data, which you can only rely on in real-time systems; Windows, Java, and TCP/IP are all layers that make no real-time guarantees.
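A sketch of that length-prefixed reading, assuming the sender writes an int size before each chunk and process() is the method from the pattern above:
// Sketch only: read one length-prefixed chunk exactly.
DataInputStream din = new DataInputStream(in);
int size = din.readInt();        // size prefix written by the sender
byte[] chunk = new byte[size];
din.readFully(chunk);            // blocks until exactly `size` bytes have arrived
process(chunk, size);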
I am trying to send two images, with a gap of 5 seconds between them, from an Android phone (client) to a PC (server).
I am using an InputStream to receive them.
ServerSocket servsock = new ServerSocket(27508);
Socket sock = servsock.accept();
System.out.println("connection accepted ");
int count;
FileOutputStream fos = null;
BufferedOutputStream bos = null;
InputStream is = null;
is = sock.getInputStream();
int bufferSize = sock.getReceiveBufferSize();
byte[] bytes = new byte[bufferSize];
System.out.println("Here1");
fos = new FileOutputStream("D:\\fypimages\\image" + imgNum + ".jpeg");
bos = new BufferedOutputStream(fos);
imgNum++;
while ((count = is.read(bytes)) > 0)
{
    bos.write(bytes, 0, count);
    System.out.println("count: " + count);
}
bos.flush();
bytes = new byte[bufferSize];
System.out.println("Here2");
fos = new FileOutputStream("D:\\fypimages\\image" + imgNum + ".jpeg");
bos = new BufferedOutputStream(fos);
imgNum++;
while ((count = is.read(bytes)) > 0)
{
    bos.write(bytes, 0, count);
    System.out.println("count: " + count);
}
bos.flush();
System.out.println("Here3");
The problem is that is.read(bytes) blocks only for the first image; then the program terminates, and it does not block for the second image.
I know read returns -1 when the first image has been received completely, but how do I make it work for the second image?
If read returns -1, it means the other side closed the connection. But your basic problem seems to be that you're not handling the connection as a stream: a data stream has no inherent "packages", so there is no built-in way to distinguish one image from the next.
You can proceed in at least 3 different ways:
Add your own simple protocol. For example, on the sending side: write the number of bytes in the image, then the image bytes, then the number of bytes in the next image, then the next image, and so on, without closing the connection. On the receiving side, loop: first read the number of bytes, then read that many bytes of image data.
Write one image per connection, then close the connection and create a new connection for the next image.
Because the data consists of JPEG images, write all of them as one data stream and, on the receiving side, parse the JPEG format to find the image boundaries.
The first choice is the most efficient, and it is easily extended to deliver an image name or other extra data along with the image length. The second is fine, and the simplest and most robust (no need to worry about byte order or about the sender and receiver getting out of sync), as long as there aren't too many images; with hundreds of images, reconnecting will slow things down a bit. The third choice is probably not the way to go with JPEGs; it is only listed as a possibility.
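A rough sketch of the first option; names such as imageFiles, serverIp, and the two-image count are illustrative assumptions, while sock and the output path on the PC side come from the question:
// Sender sketch (Android side): length prefix, then the raw bytes, per image.
Socket phoneSocket = new Socket(serverIp, 27508);       // serverIp is hypothetical
DataOutputStream out = new DataOutputStream(phoneSocket.getOutputStream());
for (File imageFile : imageFiles) {                     // imageFiles is hypothetical
    out.writeInt((int) imageFile.length());             // announce how many bytes follow
    FileInputStream fin = new FileInputStream(imageFile);
    byte[] buf = new byte[4096];
    int n;
    while ((n = fin.read(buf)) != -1) {
        out.write(buf, 0, n);
    }
    fin.close();
}
out.flush();

// Receiver sketch (PC side): read the length, then exactly that many bytes.
DataInputStream in = new DataInputStream(sock.getInputStream());
for (int imgNum = 0; imgNum < 2; imgNum++) {            // two images in this example
    int length = in.readInt();
    byte[] imageBytes = new byte[length];
    in.readFully(imageBytes);
    FileOutputStream fos = new FileOutputStream("D:\\fypimages\\image" + imgNum + ".jpeg");
    fos.write(imageBytes);
    fos.close();
}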
I'm having a problem in my server: after I send a file of X bytes, I send a string saying the file is over and another file is coming, like:
FILE: a SIZE: Y\r\n
send Y bytes
FILE a FINISHED\r\n
FILE b SIZE: Z\r\n
send Z byes
FILE b FINISHED\r\n
FILES FINISHED\r\n
On my client it is not received properly.
I use readLine() to get the command lines after reading Y or Z bytes from the socket.
With one file it works fine; with multiple files it rarely works (and I don't know how it even worked once or twice).
Here is some code I use to transfer the binary data:
public static void readInputStreamToFile(InputStream is, FileOutputStream fout,
        long size, int bufferSize) throws Exception
{
    byte[] buffer = new byte[bufferSize];
    long curRead = 0;
    long totalRead = 0;
    long sizeToRead = size;
    while (totalRead < sizeToRead)
    {
        if (totalRead + buffer.length <= sizeToRead)
        {
            curRead = is.read(buffer);
        }
        else
        {
            curRead = is.read(buffer, 0, (int) (sizeToRead - totalRead));
        }
        if (curRead == -1)
        {
            // guard against the stream ending before `size` bytes arrived
            throw new EOFException("Stream ended after " + totalRead + " of " + size + " bytes");
        }
        totalRead = totalRead + curRead;
        fout.write(buffer, 0, (int) curRead);
    }
}
public static void writeFileInputStreamToOutputStream(FileInputStream in, OutputStream out, int bufferSize) throws Exception
{
    byte[] buffer = new byte[bufferSize];
    int count = 0;
    while ((count = in.read(buffer)) != -1)
    {
        out.write(buffer, 0, count);
    }
}
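For example, a hedged sketch of how the sending side might combine a header line with writeFileInputStreamToOutputStream; files, socket, and the ISO-8859-1 charset (matching the client's parser) are assumptions:
// Sender sketch: one "FILE: name SIZE: n\r\n" header, then exactly n raw bytes per file.
OutputStream out = socket.getOutputStream();
for (File f : files) {                                   // `files` is hypothetical
    String header = "FILE: " + f.getName() + " SIZE: " + f.length() + "\r\n";
    out.write(header.getBytes("ISO-8859-1"));
    FileInputStream fin = new FileInputStream(f);
    writeFileInputStreamToOutputStream(fin, out, 8192);  // helper above
    fin.close();
    out.write(("FILE " + f.getName() + " FINISHED\r\n").getBytes("ISO-8859-1"));
}
out.write("FILES FINISHED\r\n".getBytes("ISO-8859-1"));
out.flush();
On the client, a line parser (like the one shown further down) reads the SIZE line, then readInputStreamToFile(is, fout, size, 8192) consumes exactly that many bytes, so the next line read starts cleanly at the FINISHED line.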
Just for the record, I could solve it by replacing readLine() with this code:
ByteArrayOutputStream ba = new ByteArrayOutputStream();
int ch;
while (true)
{
    ch = is.read();
    if (ch == -1)
        throw new IOException("Connection closed");
    if (ch == 13)
    {
        ch = is.read();
        if (ch == 10)
            return new String(ba.toByteArray(), "ISO-8859-1");
        else
            ba.write(13);
    }
    ba.write(ch);
}
PS: "is" is my input stream from socket: socket.getInputStream();
still I dont know if its the best implementation to do, im tryinf to figure out
There are no readLine() calls in the code here, but to answer your question: yes, calling BufferedReader.readLine() might very well leave data in its internal buffer. It is buffering the input.
If you wrap one of your InputStreams in a BufferedReader, you can't really get sane behavior if you read from the BufferedReader and then later read from the InputStream directly.
You could read bytes from your InputStream and parse out a text line yourself by looking for a \r\n pair. When you get a line saying "FILE: a SIZE: Y\r\n", you proceed as usual, except that the buffer you used to parse lines might already contain the first few bytes of the file, so write those bytes out first.
Or use the idea behind FTP: one TCP connection for commands and one for the actual transfer, reading from the command stream with BufferedReader.readLine() and reading the data, as you already do, with an InputStream.
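A sketch of that second idea on the client; the host name and ports are invented, and readInputStreamToFile is the helper from the question:
// Sketch only: one socket for text commands, one for raw file data.
Socket commandSocket = new Socket("server.example", 5000);   // hypothetical host/ports
Socket dataSocket = new Socket("server.example", 5001);
BufferedReader commands = new BufferedReader(
        new InputStreamReader(commandSocket.getInputStream(), "ISO-8859-1"));
InputStream data = dataSocket.getInputStream();

String line = commands.readLine();                 // e.g. "FILE: a SIZE: 1234"
long size = Long.parseLong(line.substring(line.lastIndexOf(' ') + 1));
FileOutputStream fout = new FileOutputStream("a"); // a real client would take the name from the header
readInputStreamToFile(data, fout, size, 8192);
fout.close();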
Yes, the main point of a BufferedReader is to buffer the data. It is reading input from its underlying Reader in bigger chunks to avoid having multiple small reads.
That it has a readLine() method is just a nice bonus which is made easily possible by the buffering.
You may want to use a DataInputStream (on top of a BufferedInputStream) and its readLine() method (note that it is deprecated) if you really have to mix text and binary data over the same connection, and read both the text and the binary data from that same DataInputStream. (But take care about the encoding here.)
Call flush() on the OutputStream after you've written data that you want to be certain has been sent. So essentially at the end of each file call flush().
I guess you must flush your output stream to make sure any buffered bytes are actually sent down the stream. Closing the stream has the same effect.
The Javadocs for flush say:
Flushes this output stream and forces any buffered output bytes to be written out. The general contract of flush is that calling it is an indication that, if any bytes previously written have been buffered by the implementation of the output stream, such bytes should immediately be written to their intended destination.