Java: sending a file through a socket

I am writing a client-server application which sends an .xml file from the client to the server. I have a problem with sending large data. I noticed that the server receives at most 1460 bytes. When I send a file larger than 1460 bytes, the server gets only the first 1460 bytes and nothing more, so I end up with an incomplete file. Here is my code:
client send:
public void sendToServer(File file) throws Exception
{
    OutputStream output = sk.getOutputStream();
    FileInputStream fileInputStream = new FileInputStream(file);
    byte[] buffer = new byte[1024*1024];
    int bytesRead = 0;
    while((bytesRead = fileInputStream.read(buffer))>0)
    {
        output.write(buffer,0,bytesRead);
    }
    fileInputStream.close();
}
server get:
public File getFile(String name) throws Exception
{
    File file = null;
    InputStream input = sk.getInputStream();
    file = new File("C://protokolPliki/" + name);
    FileOutputStream out = new FileOutputStream(file);
    byte[] buffer = new byte[1024*1024];
    int bytesReceived = 0;
    while((bytesReceived = input.read(buffer))>0) {
        out.write(buffer,0,bytesReceived);
        System.out.println(bytesReceived);
        break;
    }
    return file;
}
Does anyone know what is wrong with this code? Thanks for any help.
EDIT:
Nothing helps :(. I googled around and I think it may be connected with the TCP MSS, which is equal to 1460 bytes.

Make sure you call flush() on the streams.
A passerby asks: isn't close() enough?
You linked to the docs for Writer, and the documentation for its close() method states:
Closes the stream, flushing it first.
So you are partly right. OTOH, the OP is clearly using an OutputStream, and the docs for its close() state:
Closes this output stream and releases any system resources associated with this stream. The general contract of close is that it closes the output stream. A closed stream cannot perform output operations and cannot be reopened.
The close method of OutputStream does nothing.
(Emphasis mine.)
So to sum up: no, calling close() on a plain OutputStream will have no effect, and might as well be removed by the compiler.
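For illustration, here is a minimal sketch (not the OP's code; "sk" and "data" are made-up names) of where flush() actually matters, namely when a buffered wrapper sits between you and the socket:
// Hypothetical sketch: "sk" is a connected Socket, "data" is a byte[] to send.
OutputStream raw = sk.getOutputStream();
BufferedOutputStream buffered = new BufferedOutputStream(raw);
buffered.write(data);   // bytes may still sit in the in-memory buffer
buffered.flush();       // pushes the buffered bytes out to the socket
buffered.close();       // flushes, then closes the underlying socket stream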

Although not related to your question, the API documentation says FileInputStream.read returns -1 at end of file. You should use >= 0 in the while loop condition.

The MTU (Maximum Transmission Unit) for Ethernet is around 1500 bytes. Consider sending the file in chunks (e.g. one line at a time or 1024 bytes at a time).
See if using 1024 instead of 1024 * 1024 for the byte buffer solves your problem.

In the code executed on the server side, there is a break statement in the while loop, so the code in the loop only gets executed once. Remove the break statement and the code should work just fine.
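For reference, a corrected receive loop for the server method from the question might look like this (a sketch only, reusing the question's sk and name; error handling omitted):
public File getFile(String name) throws Exception
{
    File file = new File("C://protokolPliki/" + name);
    InputStream input = sk.getInputStream();
    FileOutputStream out = new FileOutputStream(file);
    byte[] buffer = new byte[8192];
    int bytesReceived;
    while ((bytesReceived = input.read(buffer)) != -1) { // -1 means the client closed its side
        out.write(buffer, 0, bytesReceived);
    }
    out.close();
    return file;
}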

Related

How to continue downloading file after disconnection?

I have a simple Java server using sockets.
The server reads from the client the URL of a file which needs to be downloaded.
FileOutputStream outStream = new FileOutputStream(SERVER_PATH + file.getName());
BufferedOutputStream out = new BufferedOutputStream(outStream);
byte[] buf = new byte[BATCH];
int read = 0;
while ((read = in.read(buf, 0, BATCH)) >= 0) {
    out.write(buf, 0, read);
}
How can I continue downloading the file after a disconnection?
Your question is a little ambiguous!
After looking at the code, it looks like you are reading from a file on the client machine and writing it to a file on the server.
Assuming this situation,
The points that can help you resolve this are:
1. There will be an IOException if the connection is lost. That means you have to handle the exception and reconnect to the socket, maybe after waiting for some time (!!)
2. Then you need to open the server file in append mode and continue with out.write, since the data already written is not lost with the disconnection (see the sketch below).
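A rough sketch of point 2, reusing the variables from the question (SERVER_PATH, BATCH, file, in); whether the client re-sends from the right offset is assumed to be handled elsewhere:
// After reconnecting, reopen the partially written file in append mode.
File partial = new File(SERVER_PATH + file.getName());
long alreadyWritten = partial.length(); // could be sent to the client as the resume offset
FileOutputStream outStream = new FileOutputStream(partial, true); // true = append
BufferedOutputStream out = new BufferedOutputStream(outStream);
byte[] buf = new byte[BATCH];
int read;
while ((read = in.read(buf, 0, BATCH)) >= 0) {
    out.write(buf, 0, read);
}
out.flush();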
Thanks, Sunil

Transmiting/receiving compressed data with sockets: how to properly receive the data sent from the client

I have developed a client-server chat using the Sockets and it works great, but when I try to transmit data with Deflate compression it doesn't work: the output is "empty" (actually it's not empty, but I'll explain below).
The compression/decompression part is 100% working (I have already tested it), so the problem must be elsewhere in the transmission/receiving part.
I send the message from the client to the server using these methods:
// streamOut is an instance of DataOutputStream
// message is a String
if (zip) { // zip is a boolean variable: true means that compression is active
    streamOut.write(Zip.compress(message)); // Zip.compress(String) returns a byte[] array of the compressed "message"
} else {
    // if compression isn't active, the client sends the not compressed message to the server (and this works great)
    streamOut.writeUTF(message);
}
streamOut.flush();
And I receive the message from the client on the server side using this code:
// streamIn is an instance of DataInputStream
if (server.zip) { // same as before: true = compression is active
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    byte[] buf = new byte[512];
    int n;
    while ((n = streamIn.read(buf)) > 0) {
        bos.write(buf, 0, n);
    }
    byte[] output = bos.toByteArray();
    System.out.println("output: " + Zip.decompress(output)); // Zip.decompress(byte[]) returns a String of the decompressed byte[] array received
} else {
    System.out.println("output: " + streamIn.readUTF()); // this works great
}
Debugging my program a little, I've discovered that the while loop never ends, so:
byte[] output = bos.toByteArray();
System.out.println("output: " + Zip.decompress(output));
is never called.
If I put those 2 lines of code in the while loop (after bos.write()), then all works fine (it prints the message sent from the client)! But I don't think that's the solution, because the byte[] array received may vary in size. Because of this I assumed that the problem is in the receiving part (the client is actually able to send data).
So my problem became the while loop in the receiving part. I tried with:
while ((n = streamIn.read(buf)) != -1) {
and even with the condition != 0, but it's the same as before: the loop never ends, so the output part is never called.
-1 is only returned when the socket is closed or broken. You could close the socket after sending your zipped content, and your code would start working. But I suspect you want to keep the socket open for more (future) chat messages. So you need some other way of letting the client know when a discrete message has been fully transmitted. Like Patrick suggested, you could transmit the message length before each zipped payload.
You might be able to leverage something in the deflate format itself, though. I think it has a last-block-in-stream marker. If you're using java.util.zip.Inflater have a look at Inflater.finished().
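A sketch of that idea, assuming Zip.compress produces raw deflate output from a java.util.zip.Deflater (streamIn is the DataInputStream from the question; error handling is reduced to the checked exceptions):
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

// Sketch: inflate one deflate-compressed message and stop at the final deflate
// block instead of waiting for the socket to close.
static String readCompressedMessage(DataInputStream streamIn)
        throws IOException, DataFormatException {
    Inflater inflater = new Inflater();
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    byte[] inBuf = new byte[512];
    byte[] outBuf = new byte[512];
    while (!inflater.finished()) {
        if (inflater.needsInput()) {
            int n = streamIn.read(inBuf);
            if (n < 0) {
                break; // peer closed the connection before the stream ended
            }
            inflater.setInput(inBuf, 0, n);
        }
        int produced = inflater.inflate(outBuf);
        if (produced == 0 && inflater.needsDictionary()) {
            break; // preset dictionaries are not handled in this sketch
        }
        bos.write(outBuf, 0, produced);
    }
    inflater.end();
    return new String(bos.toByteArray(), "UTF-8");
}
Note that a fixed-size read may still pull in bytes belonging to the next chat message, so the length prefix suggested in the other answers remains the more robust framing.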
The read function will not return a -1 until the stream is closed. What you can do is calculate the number of bytes that should be sent from the server to the client, and then read that number of bytes on the client side.
Calculating the number of bytes is as easy as sending the length of the byte array returned from the Zip.compress function before the actual message, and then use the readInt function to get that number.
Using this algorithm makes sure that you read the correct number of bytes before decompressing, so even if the client actually reads 0 bytes it will continue to read until it receives all bytes it wants. You can do a streamIn.read(buf, 0, Math.min(bytesLeft, buf.length)) to only read as many bytes you want.
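A sketch of that length-prefix approach, reusing streamOut, streamIn, message, and the Zip helper from the question:
// Sender: write the length, then the compressed payload.
byte[] payload = Zip.compress(message);
streamOut.writeInt(payload.length);
streamOut.write(payload);
streamOut.flush();

// Receiver: read the length, then exactly that many bytes.
int length = streamIn.readInt();
byte[] received = new byte[length];
streamIn.readFully(received); // blocks until all "length" bytes have arrived
System.out.println("output: " + Zip.decompress(received));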
Your problem is the way you are working with the stream. You must send some metadata so your client knows what to expect. Ideally you would create a protocol/state machine to read the stream. For your example, as a quick and dirty solution, send something like the data size or a termination sequence.
Example of solution:
Server: send the "data size" before the compressed data.
Client: wait for the "data size" value, then loop until that many bytes have been read. Something like:
int dataRead = 0;
while (dataRead < dataExpected) {
    int n = streamIn.read(buf, 0, Math.min(buf.length, dataExpected - dataRead));
    if (n < 0) break; // connection closed before all expected bytes arrived
    bos.write(buf, 0, n);
    dataRead += n;
}
Of course you need to read dataExpected first, with similar code.
Tip: You could also use UDP if you don't mind possibly losing data. It's easier to program with datagrams...
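If you went down that road, the datagram version is roughly this (a sketch using java.net.DatagramSocket; the host, port, and the Zip helper are placeholders, and delivery is not guaranteed):
// Each send() is one self-contained message, so no framing is needed.
DatagramSocket udp = new DatagramSocket();
byte[] payload = Zip.compress(message);
DatagramPacket packet = new DatagramPacket(
        payload, payload.length, InetAddress.getByName("example.host"), 4444);
udp.send(packet);
udp.close();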

Android - BufferedOutputStream doesn't flush

I have a problem with a BufferedOutputStream. I want to send a kml file from an Android device to a Java server through a socket connection.
(The connection is OK; I am already able to exchange data with a PrintWriter in another part of my program.)
To send my kml file, I fill the buffer. But when I flush() it, nothing happens.
int lu = inFile.read();
while (lu != -1) {
    out.write(lu);
    lu = inFile.read();
}
out.flush();
inFile.close();
inFile is my stream used to read the kml file
out is my BufferedOutputStream using the OutputStream of my socket
I don't close my out object because I don't use it just once, and this is the problem...
The close() method sends the buffer's data but closes the socket too.
The flush() method does not send the buffer's data.
I want to flush the buffer without closing my socket.
I also tried to use mySocket.shutdownOutput();
int lu = inFile.read();
while (lu != -1) {
    out.write(lu);
    lu = inFile.read();
}
out.flush();
mySocket.shutdownOutput();
inFile.close();
This method closes my stream and keeps my socket open, which is what I want.
But when I try to open a new output stream, I get the exception java.net.SocketException: Socket output is shutdown.
So, how can I flush my buffer without closing my socket or being left unable to open a new output stream?
Socket.close() and Socket.shutdownOutput() both send an EOS to the peer, on which he should close the socket, and after which you can no longer write to the socket, because you've closed it in that direction.
So if you need to continue writing to the socket you cannot use either of these methods.
Probably what you are searching for is a way to delimit application protocol messages. There are at least three techniques:
Send a length word prior to each message.
Send an out-of-band delimiter after each message, i.e. a byte or byte sequence that cannot occur in a message. The STX/ETX protocol, with escapes, is an example of this.
Use a self-describing message format such as Object Serialization or XML.
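A sketch of the first technique for the kml case, keeping the socket open between files (mySocket, kmlFile, and the receiving clientSocket are hypothetical names; DataOutputStream/DataInputStream wrap the socket streams):
// Sender: announce the file size, then stream exactly that many bytes.
DataOutputStream out = new DataOutputStream(new BufferedOutputStream(mySocket.getOutputStream()));
out.writeLong(kmlFile.length());
FileInputStream inFile = new FileInputStream(kmlFile);
byte[] buf = new byte[8192];
int n;
while ((n = inFile.read(buf)) != -1) {
    out.write(buf, 0, n);
}
inFile.close();
out.flush(); // the socket stays open for the next message

// Receiver: read exactly the announced number of bytes; no EOF is needed.
DataInputStream in = new DataInputStream(new BufferedInputStream(clientSocket.getInputStream()));
long remaining = in.readLong();
FileOutputStream fileOut = new FileOutputStream("received.kml");
byte[] rbuf = new byte[8192];
while (remaining > 0) {
    int read = in.read(rbuf, 0, (int) Math.min(rbuf.length, remaining));
    if (read < 0) break; // connection dropped early
    fileOut.write(rbuf, 0, read);
    remaining -= read;
}
fileOut.close();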

InputStream not receiving EOF

I am attempting to send an image from my Android device to my computer via a socket. The problem is that the input stream on my computer reads in every single byte except the last set of them. I have tried trimming the byte array down and sending it, and I've manually written -1 to the output stream multiple times, but the input stream never reads -1. It just hangs waiting for data. I've also tried not closing the stream or sockets to see if it was some sort of timing issue, but that didn't work either.
Client side (Android Phone)
// This has to be an objectoutput stream because I write objects to it first
InputStream is = ...; // an image's input stream on Android
ObjectOutputStream objectOutputStream = new ObjectOutputStream(socket.getOutputStream());
objectOutputStream.writeObject(object);
objectOutputStream.flush();
byte[] b = new byte[socket.getSendBufferSize()];
int read = 0;
while ((read = is.read(b)) != -1) {
    objectOutputStream.write(b, 0, read);
    objectOutputStream.flush();
    b = new byte[socket.getSendBufferSize()];
}
// Tried manually writing -1 and flushing here
objectOutputStream.close();
is.close();
socket.close();
Server side (computer): This bit of code takes place after the ObjectInputStream reads in the objects sent. It only starts to read when the file starts to be sent.
File loc = ...; // location where the file is stored on the computer
loc.createNewFile();
FileOutputStream os = new FileOutputStream(loc);
Socket gSocket = ...; // the socket
ObjectInputStream gInputStream = ...; // ObjectInputStream created from the socket's input stream, already used to read in the previous objects
byte[] b = new byte[gSocket.getReceiveBufferSize()];
int read = 0;
while ((read = gInputStream.read(b)) != -1) {
    os.write(b, 0, read);
    os.flush();
    b = new byte[gSocket.getReceiveBufferSize()];
}
os.close();
This code never reads -1 even if I write -1 directly and flush the stream. The outcome is java.net.SocketException: Connection reset when the stream or socket from the Android device is closed. The picture is almost completely sent, but the very last pixels of the picture are gray. I even tried using the socket's input/output streams directly instead of the already created ObjectInputStream/ObjectOutputStream, and it still doesn't work.
Firstly, I think you misunderstood the meaning of EOF (-1). It doesn't mean the sender wrote a -1; it means the sender closed its side of the connection.
I think your main problem, though, is that both the server and the client are reading in a loop, and neither gets to the point where they close the stream. They are deadlocked - both are waiting for the other one to close first.
If you know that you have no more data to write then just close the stream.
Since you're already using ObjectInputStream and ObjectOutputStream, you can use their respective readObject and writeObject methods to read/write entire objects at a time. Maybe you could send/receive the entire byte array as an object?
On your android:
1) byte[] imageBytes = ...; // contains the Image
2) objectOutputStream.writeObject(imageBytes);
On your computer:
1) byte[] imageBytes = (byte[])readObject();
2) get image from imageBytes
Of course, you'll have to use readObject from within a thread since it'll block.
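Fleshed out a little, and reusing the names from the question (is, objectOutputStream, gInputStream, loc), that approach could look like this sketch; exception handling is left out:
// Android side: read the image fully into memory, then send it as a single object.
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] chunk = new byte[8192];
int n;
while ((n = is.read(chunk)) != -1) {
    baos.write(chunk, 0, n);
}
objectOutputStream.writeObject(baos.toByteArray());
objectOutputStream.flush();

// Computer side: a single readObject() call returns the complete image.
byte[] imageBytes = (byte[]) gInputStream.readObject();
FileOutputStream os = new FileOutputStream(loc);
os.write(imageBytes);
os.close();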
You are writing byte[] arrays as objects, but reading bytes. You should be reading objects and casting them to byte[]. EOS will cause an EOFException to be thrown.

Using ServletOutputStream to write very large files in a Java servlet without memory issues

I am using IBM WebSphere Application Server v6 and Java 1.4 and am trying to write large CSV files to the ServletOutputStream for a user to download. Files range from 50-750 MB at the moment.
The smaller files aren't causing too much of a problem but with the larger files it appears that it is being written into the heap which is then causing an OutOfMemory error and bringing down the entire server.
These files can only be served out to authenticated users over HTTPS which is why I am serving them through a Servlet instead of just sticking them in Apache.
The code I am using is (some fluff removed around this):
resp.setHeader("Content-length", "" + fileLength);
resp.setContentType("application/vnd.ms-excel");
resp.setHeader("Content-Disposition","attachment; filename=\"export.csv\"");
FileInputStream inputStream = null;
try
{
inputStream = new FileInputStream(path);
byte[] buffer = new byte[1024];
int bytesRead = 0;
do
{
bytesRead = inputStream.read(buffer, offset, buffer.length);
resp.getOutputStream().write(buffer, 0, bytesRead);
}
while (bytesRead == buffer.length);
resp.getOutputStream().flush();
}
finally
{
if(inputStream != null)
inputStream.close();
}
The FileInputStream doesn't seem to be causing a problem: if I write to another file or just remove the write completely, the memory usage doesn't appear to be an issue.
What I am thinking is that the resp.getOutputStream().write is being stored in memory until the data can be sent through to the client. So the entire file might be read and stored in the resp.getOutputStream() causing my memory issues and crashing!
I have tried Buffering these streams and also tried using Channels from java.nio, none of which seems to make any bit of difference to my memory issues. I have also flushed the OutputStream once per iteration of the loop and after the loop, which didn't help.
The average decent servletcontainer itself flushes the stream by default every ~2KB. You should really not have the need to explicitly call flush() on the OutputStream of the HttpServletResponse at intervals when sequentially streaming data from one and the same source. In, for example, Tomcat (and WebSphere!) this is configurable as the bufferSize attribute of the HTTP connector.
The average decent servletcontainer also just streams the data in chunks if the content length is unknown beforehand (as per the Servlet API specification!) and if the client supports HTTP 1.1.
The problem symptoms at least indicate that the servletcontainer is buffering the entire stream in memory before flushing. This can mean that the content length header is not set and/or the servletcontainer does not support chunked encoding and/or the client side does not support chunked encoding (i.e. it is using HTTP 1.0).
To fix the one or other, just set the content length beforehand:
response.setContentLengthLong(new File(path).length());
Or when you're not on Servlet 3.1 yet:
response.setHeader("Content-Length", String.valueOf(new File(path).length()));
Does flush work on the output stream?
Really I wanted to comment that you should use the three-arg form of write, as the buffer is not necessarily fully read (particularly at the end of the file(!)). Also a try/finally would be in order unless you want your server to die unexpectedly.
I have used a class that wraps the outputstream to make it reusable in other contexts. It has worked well for me in getting data to the browser faster, but I haven't looked at the memory implications. (please pardon my antiquated m_ variable naming)
import java.io.IOException;
import java.io.OutputStream;

public class AutoFlushOutputStream extends OutputStream {

    protected long m_count = 0;
    protected long m_limit = 4096;
    protected OutputStream m_out;

    public AutoFlushOutputStream(OutputStream out) {
        m_out = out;
    }

    public AutoFlushOutputStream(OutputStream out, long limit) {
        m_out = out;
        m_limit = limit;
    }

    public void write(int b) throws IOException {
        if (m_out != null) {
            m_out.write(b);
            m_count++;
            if (m_limit > 0 && m_count >= m_limit) {
                m_out.flush();
                m_count = 0;
            }
        }
    }
}
I'm also not sure if flush() on ServletOutputStream works in this case, but ServletResponse.flushBuffer() should send the response to the client (at least per 2.3 servlet spec).
ServletResponse.setBufferSize() sounds promising, too.
So, following your scenario, shouldn't you be flush()ing inside that while loop (on every iteration), instead of outside of it? I would try that, with a bit larger buffer though.
Kevin's class should close the m_out field if it's not null in its close() method; we don't want to leak things, do we?
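For completeness, the missing overrides might look something like this (a sketch, not part of Kevin's original class):
public void flush() throws IOException {
    if (m_out != null) {
        m_out.flush();
        m_count = 0;
    }
}

public void close() throws IOException {
    if (m_out != null) {
        m_out.flush();
        m_out.close();
        m_out = null;
    }
}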
As well as the ServletOutputStream.flush() method, the HttpServletResponse.flushBuffer() method may also flush the buffers. However, it appears to be an implementation-specific detail whether these operations have any effect, or whether HTTP content-length support is interfering. Remember, specifying Content-Length is optional in HTTP 1.0, so things should just stream out if you flush them. But I don't see that happening here.
The while condition does not work; you need to check for -1 before using it. And please use a temporary variable for the output stream, it's nicer to read and it saves calling getOutputStream() repeatedly.
OutputStream outStream = resp.getOutputStream();
while (true) {
    int bytesRead = inputStream.read(buffer);
    if (bytesRead < 0)
        break;
    outStream.write(buffer, 0, bytesRead);
}
inputStream.close();
outStream.close();
Unrelated to your memory problems, the while loop should be:
while (bytesRead > 0);
Your code has an infinite loop:
do
{
    bytesRead = inputStream.read(buffer, offset, buffer.length);
    resp.getOutputStream().write(buffer, 0, bytesRead);
}
while (bytesRead == buffer.length);
offset has the same value throughout the loop, so if initially offset = 0, it will remain so in every iteration, which causes the infinite loop and leads to the OOM error.
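A corrected version of that copy loop (a sketch, keeping the question's buffer, inputStream, and resp) checks the return value before writing, so -1 never reaches write():
int bytesRead;
while ((bytesRead = inputStream.read(buffer)) > 0) {
    resp.getOutputStream().write(buffer, 0, bytesRead);
}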
IBM WebSphere Application Server uses asynchronous data transfer for servlets by default. That means that it buffers the response. If you have problems with large data and OutOfMemory exceptions, try changing the settings on WAS to use synchronous mode.
Setting the WebSphere Application Server WebContainer to synchronous mode
You must also take care to load the data in chunks and flush them.
A sample for loading from a large file:
ServletOutputStream os = response.getOutputStream();
FileInputStream fis = new FileInputStream(file);
try {
    int buffSize = 1024;
    byte[] buffer = new byte[buffSize];
    int len;
    while ((len = fis.read(buffer)) != -1) {
        os.write(buffer, 0, len);
        os.flush();
        response.flushBuffer();
    }
} finally {
    os.close();
}
