My Android smartphone is the TCP client and a chipKIT WF32 WiFi module is my TCP server.
int bytesRead;
InputStream inputStream = socket.getInputStream();
while ((bytesRead = inputStream.read(buffer)) != -1) {
    byteArrayOutputStream.write(buffer, 0, bytesRead);
    response += byteArrayOutputStream.toString("UTF-8");
}
The above code reads data from the stream and copies it into the buffer. If no data is coming, it blocks. But sometimes I get -1. Can anyone explain the reason for getting -1? The documentation says it means "end of stream is reached", but can you explain what that means? Thank you.
In the case of a socket, it means that the peer closed its end of the connection, or at least shut it down for output.
NB
response += byteArrayOutputStream.toString("UTF-8");
should be outside the loop. I've told you that before, in another of your numerous threads on this topic.
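As a hedged sketch (reusing the variable names from the snippet above and assuming the socket is already connected), the corrected pattern accumulates everything in the ByteArrayOutputStream and converts it to a String only once, after end of stream:
byte[] buffer = new byte[1024];
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
InputStream inputStream = socket.getInputStream();
int bytesRead;
// read() blocks until data arrives; it returns -1 only when the peer
// closes (or shuts down) its side of the connection
while ((bytesRead = inputStream.read(buffer)) != -1) {
    byteArrayOutputStream.write(buffer, 0, bytesRead);
}
// convert the accumulated bytes exactly once, after the loop ends
String response = byteArrayOutputStream.toString("UTF-8");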
How does the InputStream.read(byte[]) method know that the "end of stream" has been reached, and return -1?
What are all the conditions under which it returns -1?
How can you detect the "end of stream" without first sending an integer containing the total number of bytes to read?
Example of use:
InputStream input = socket.getInputStream();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
for (int size; (size = input.read(buffer)) != -1; ) {
    baos.write(buffer, 0, size);
}
InputStream is an abstract type with many implementations. A FileInputStream, for example, will return -1 if you have reached the end of the file. If it's a TCP socket, it will return -1 if the connection has been closed. It is implementation-dependent how end-of-stream is determined.
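For illustration only, here is a throwaway sketch (the file name demo.txt is hypothetical) showing a FileInputStream returning the byte count while data remains and -1 once the end of the file is reached:
import java.io.FileInputStream;
import java.io.IOException;

public class EofDemo {
    public static void main(String[] args) throws IOException {
        byte[] buffer = new byte[64];
        try (FileInputStream in = new FileInputStream("demo.txt")) { // hypothetical file
            int n;
            while ((n = in.read(buffer)) != -1) {
                System.out.println("read " + n + " bytes");
            }
            // the loop only exits when read() returns -1: end of stream
            System.out.println("end of stream reached");
        }
    }
}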
It does not.
When you try to read n bytes from a socket, the call may return before n bytes are ready, and the number of bytes actually read is returned. How does read() decide when to return? Based on a timeout. The timeout value appears as SO_TIMEOUT in AbstractPlainSocketImpl.java. The real read happens in native code, probably written in C, and SO_TIMEOUT defaults to 0, which means the read blocks indefinitely. However, you can set the timeout with Socket.setSoTimeout(millis).
SocketInputStream.java
n = socketRead(fd, b, off, length, timeout);
if (n > 0) {
    return n;
}
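If you do want a read to give up after some time, the documented way is Socket.setSoTimeout(int). This is only a sketch, assuming the socket, inputStream, and buffer variables from the earlier snippets; SocketTimeoutException comes from java.net:
socket.setSoTimeout(5000); // SO_TIMEOUT in milliseconds; 0 (the default) means block forever
try {
    int n = inputStream.read(buffer); // now blocks for at most about 5 seconds
    // handle n bytes here, or n == -1 if the peer closed the connection
} catch (SocketTimeoutException e) {
    // no data arrived within the timeout; the socket is still usable
}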
If you observe the HTTP protocol, the client and server use the Content-Length header to indicate to each other when a request or response body ends and when a new one begins. The ordering of the received bytes is taken care of by the TCP layer.
A socket stream does not have an end-of-stream check like feof with files. It's a two-way communication channel for reading and writing. However, you can check whether bytes are available to read. A TCP connection stays live until either the client or the server chooses to close it.
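One common way to give a TCP stream message boundaries, in the same spirit as the Content-Length coordination described above, is to prefix each message with its length. This is a generic sketch, not code from the question:
// sender: write the length first, then the payload
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
out.writeInt(payload.length);
out.write(payload);
out.flush();

// receiver: read the length, then exactly that many bytes
DataInputStream in = new DataInputStream(socket.getInputStream());
int length = in.readInt();
byte[] message = new byte[length];
in.readFully(message); // blocks until all 'length' bytes have arrived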
I am using a DataInputStream to get an int and a long sent from the Android client to this Java desktop server. After that, a PDF file is received from the Android client, for a total of 3 files sent by the client to the server. The problem is when sending in the other direction.
I have to close both the input and output streams right after the while loop. If I don't do that, the PDF file is corrupted, and the program gets stuck on the while loop instead of continuing to the end of the thread.
But if I have to close the input and output streams, the socket gets closed as well. How do I reopen the same socket?
I need to reopen the same socket because the server has to send a message back to the Android client, confirming that the PDF file was safely received.
There are multiple Android clients connected to the same single Java server, so I imagine the same socket is needed to send a message back to the client; without the socket it would be difficult to determine which client to send the message to.
byte[] buffer = new byte[fileSizeFromClient];
while ((count = dis.read(buffer)) > 0) {
    bos.write(buffer, 0, count);
}
dis.close(); // closes DataInputStream dis
bos.close(); // closes BufferedOutputStream bos
EDIT:
Code from the client:
dos.writeInt((int) length); // sends the length (file size in bytes) to the server
dos.writeLong(serial);      // sends the serial number to the server
int count = 0; // number of bytes
while ((count = bis.read(bytes)) > 0) {
    dos.write(bytes, 0, count);
}
dos.close(); // need to close the output stream or an incomplete file is sent to the server
             // and the server hangs, stuck on the while loop;
             // if dos is closed then the server receives an error-free file
No. You can't reopen a socket; you have to make a new one. But you don't have to close your socket once you're done with your file transfer. The server can still reuse it to send your message reply. Since you've already sent the file size, your server can use that to know when your client is done sending the complete file. After that, your server can send your reply to the client.
Try this for your current loop.
int bytesRead = 0;
// check the running total before reading, so the loop exits without one
// extra blocking read() after the announced file size has arrived
while (bytesRead < fileSizeFromClient && (count = dis.read(buffer)) > 0) {
    bytesRead += count;
    bos.write(buffer, 0, count);
}
bos.close();
// don't close the input stream
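Once the copy loop finishes, the server can reply on the very same socket. A hedged sketch, assuming the server still holds the client's socket and the fileSizeFromClient value from above (the variable names are illustrative):
// reply to the client on the same connection once the whole file has arrived
DataOutputStream reply = new DataOutputStream(socket.getOutputStream());
reply.writeUTF("RECEIVED " + fileSizeFromClient + " bytes"); // confirmation message
reply.flush();
// the client picks this up by calling readUTF() on its DataInputStream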
I have put together the following code, which seems to be working well:
void pipe(InputStream is, OutputStream os) throws IOException {
    try {
        try {
            byte[] buf = new byte[1024 * 16];
            int len, available = is.available();
            while ((len = is.read(buf, 0, available > 0 ? available : 1)) != -1) {
                os.write(buf, 0, len);
                available = is.available();
                if (available <= 0)
                    os.flush();
            }
        } finally {
            try {
                os.flush();
            } finally {
                os.close();
            }
        }
    } finally {
        is.close();
    }
}
In the past, I found that if I call is.read(buf), then, even if data was available, it would block waiting for more data until the buffer was full. This was an echo server for TCP data, so my requirement was for there to be an immediate flush as soon as new data arrived.
My first solution was the inefficient one-at-a-time is.read(). Later, when that was not good enough, I was looking at the available methods and found is.available(). The API states:
A single read or skip of this many bytes will not block.
So I have a pretty good solution now, but the one thing that looks bad to me is how I am handling cases where is.available() == 0. In this case, I simply read a single byte as a way to wait until new data is available.
What would be the recommended way to transfer data from an InputStream to an OutputStream, with immediate flush as data arrives? Is the above code really the right way, or should I change it, or use some brand new code? Perhaps I should be using some of the newer async routines, or maybe there is a built-in Java method for this?
In the past, I found that if I call is.read(buf), then, even if data was available, it would block waiting for more data until the buffer was full.
No you didn't. TCP doesn't work that way; sockets don't work that way; and Java sockets don't work that way. You are mistaken.
It's a lot simpler than you think:
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
out.close();
in.close();
There is no buffering on socket input/output streams so this will write everything that's read as soon as it has been read.
Calling available() in this circumstance, or indeed almost any circumstance, is a complete waste of time.
This is also the way to copy any kind of input stream to any kind of output stream in Java.
If you want non-blocking code, use NIO. But I don't see that you really do.
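As a side note, on Java 9 and later there is a built-in method that performs exactly this copy (it does not add any flushing of its own, which is fine here since socket streams are unbuffered as noted above):
// Java 9+: copies until end of stream; neither stream is closed for you
in.transferTo(out);
out.close();
in.close();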
I am attempting to send an image from my Android device to my computer via a socket. The problem is that the input stream on my computer reads every byte except the last set of them. I have tried trimming the byte array down and sending it, and I've manually written -1 to the output stream multiple times, but the input stream never reads -1. It just hangs waiting for data. I've also tried not closing the stream or sockets to see if it was some sort of timing issue, but that didn't work either.
Client side (Android Phone)
// This has to be an ObjectOutputStream because I write objects to it first
InputStream is = ...; // the image's InputStream on Android
ObjectOutputStream objectOutputStream = new ObjectOutputStream(socket.getOutputStream());
objectOutputStream.writeObject(object);
objectOutputStream.flush();

byte[] b = new byte[socket.getSendBufferSize()];
int read = 0;
while ((read = is.read(b)) != -1) {
    objectOutputStream.write(b, 0, read);
    objectOutputStream.flush();
    b = new byte[socket.getSendBufferSize()];
}
// Tried manually writing -1 and flushing here
objectOutputStream.close();
is.close();
socket.close();
Server Side (Computer). This bit of code runs after the ObjectInputStream has read in the objects that were sent. It only starts to read when the file starts to be sent.
File loc = ...; // location where the file is stored on the computer
loc.createNewFile();
FileOutputStream os = new FileOutputStream(loc);
Socket gSocket = ...; // the socket
// ObjectInputStream created from the socket's input stream, already used to read in the previous objects
ObjectInputStream gInputStream = ...;

byte[] b = new byte[gSocket.getReceiveBufferSize()];
int read = 0;
while ((read = gInputStream.read(b)) != -1) {
    os.write(b, 0, read);
    os.flush();
    b = new byte[gSocket.getReceiveBufferSize()];
}
os.close();
This code never reads -1, even if I write -1 directly and flush the stream. The outcome is a java.net.SocketException: Connection reset when the stream or socket on the Android device is closed. The picture is almost completely sent, but the very last pixels of the picture are gray. I also tried using the socket's input/output streams directly instead of the already created ObjectInputStream/ObjectOutputStream, and it still doesn't work.
Firstly, I think you misunderstood the meaning of EOF (-1). It doesn't mean the server wrote a -1, it means the server closed the stream.
I think your main problem though is that both the server and the client are reading in a loop, and neither get to the point where they close the stream. They are deadlocked - both are waiting for the other one to close first.
If you know that you have no more data to write then just close the stream.
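If the sender still needs to read a reply afterwards, a hedged alternative to closing the whole stream is Socket.shutdownOutput(), which delivers end of stream to the peer while leaving the input side usable. A sketch, assuming the client's socket from the question:
// client side: signal end of stream without closing the connection
socket.shutdownOutput(); // the server's read() now returns -1
// the input side is still open, so the client can wait for a reply
int status = socket.getInputStream().read();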
Since you're already using ObjectInputStream and ObjectOutputStream, you can use their respective readObject and writeObject methods to read/write entire objects at a time. Maybe you could send/receive the entire byte array as an object?
On your Android:
    byte[] imageBytes = ...; // contains the image
    objectOutputStream.writeObject(imageBytes);
On your computer:
    byte[] imageBytes = (byte[]) readObject();
    // get the image back from imageBytes
Of course, you'll have to use readObject from within a thread since it'll block.
You are writing byte[] arrays as objects, but reading bytes. You should be reading objects and casting them to byte[]. End of stream will cause an EOFException to be thrown.
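A hedged sketch of that object-based exchange, reusing the stream names from the question (objectOutputStream on the phone, gInputStream on the computer); readObject() also declares ClassNotFoundException, and an EOFException signals that the peer closed its stream:
// Android side: send the whole image as one object
objectOutputStream.writeObject(imageBytes); // byte[] is Serializable
objectOutputStream.flush();

// computer side: read objects until the client closes its stream
try {
    byte[] imageBytes = (byte[]) gInputStream.readObject();
    // write imageBytes to the file here
} catch (EOFException e) {
    // the client closed the connection: no more objects will arrive
} catch (ClassNotFoundException e) {
    // unexpected object type on the stream
}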
I am using IBM WebSphere Application Server v6 and Java 1.4, and am trying to write large CSV files to the ServletOutputStream for a user to download. Files range from 50 to 750 MB at the moment.
The smaller files aren't causing much of a problem, but the larger files appear to be written into the heap, which then causes an OutOfMemoryError and brings down the entire server.
These files can only be served out to authenticated users over HTTPS which is why I am serving them through a Servlet instead of just sticking them in Apache.
The code I am using is (some fluff removed around this):
resp.setHeader("Content-length", "" + fileLength);
resp.setContentType("application/vnd.ms-excel");
resp.setHeader("Content-Disposition","attachment; filename=\"export.csv\"");
FileInputStream inputStream = null;
try
{
inputStream = new FileInputStream(path);
byte[] buffer = new byte[1024];
int bytesRead = 0;
do
{
bytesRead = inputStream.read(buffer, offset, buffer.length);
resp.getOutputStream().write(buffer, 0, bytesRead);
}
while (bytesRead == buffer.length);
resp.getOutputStream().flush();
}
finally
{
if(inputStream != null)
inputStream.close();
}
The FileInputStream doesn't seem to be causing a problem as if I write to another file or just remove the write completely the memory usage doesn't appear to be a problem.
What I am thinking is that the resp.getOutputStream().write is being stored in memory until the data can be sent through to the client. So the entire file might be read and stored in the resp.getOutputStream() causing my memory issues and crashing!
I have tried Buffering these streams and also tried using Channels from java.nio, none of which seems to make any bit of difference to my memory issues. I have also flushed the OutputStream once per iteration of the loop and after the loop, which didn't help.
The average decent servlet container itself flushes the stream by default every ~2KB. You should really not need to explicitly call flush() on the OutputStream of the HttpServletResponse at intervals when sequentially streaming data from one and the same source. In Tomcat (and WebSphere!), for example, this is configurable as the bufferSize attribute of the HTTP connector.
The average decent servlet container also just streams the data in chunks if the content length is unknown beforehand (as per the Servlet API specification!) and if the client supports HTTP 1.1.
The problem symptoms at least indicate that the servlet container is buffering the entire stream in memory before flushing. This can mean that the content length header is not set and/or the servlet container does not support chunked encoding and/or the client side does not support chunked encoding (i.e. it is using HTTP 1.0).
To fix the one or other, just set the content length beforehand:
response.setContentLengthLong(new File(path).length());
Or when you're not on Servlet 3.1 yet:
response.setHeader("Content-Length", String.valueOf(new File(path).length()));
Does flush work on the output stream?
Really, I wanted to comment that you should use the three-arg form of write, as the buffer is not necessarily filled completely (particularly at the end of the file!). Also, a try/finally would be in order unless you want your server to die unexpectedly.
I have used a class that wraps the outputstream to make it reusable in other contexts. It has worked well for me in getting data to the browser faster, but I haven't looked at the memory implications. (please pardon my antiquated m_ variable naming)
import java.io.IOException;
import java.io.OutputStream;

public class AutoFlushOutputStream extends OutputStream {

    protected long m_count = 0;
    protected long m_limit = 4096;
    protected OutputStream m_out;

    public AutoFlushOutputStream(OutputStream out) {
        m_out = out;
    }

    public AutoFlushOutputStream(OutputStream out, long limit) {
        m_out = out;
        m_limit = limit;
    }

    public void write(int b) throws IOException {
        if (m_out != null) {
            m_out.write(b);
            m_count++;
            if (m_limit > 0 && m_count >= m_limit) {
                m_out.flush();
                m_count = 0;
            }
        }
    }
}
I'm also not sure if flush() on ServletOutputStream works in this case, but ServletResponse.flushBuffer() should send the response to the client (at least per 2.3 servlet spec).
ServletResponse.setBufferSize() sounds promising, too.
So, following your scenario, shouldn't you be flush()ing inside that while loop (on every iteration) instead of outside it? I would try that, with a somewhat larger buffer though.
Kevin's class should close the m_out field (if it's not null) in its close() method; we don't want to leak things, do we?
As well as the ServletOutputStream.flush() method, the HttpServletResponse.flushBuffer() operation may also flush the buffers. However, it appears to be an implementation-specific detail whether these operations have any effect, or whether HTTP content-length support is interfering. Remember, specifying Content-Length is optional in HTTP 1.0, so things should just stream out if you flush, but I don't see that happening here.
The while condition does not work; you need to check for -1 before using the result. And please use a temporary variable for the output stream, it's nicer to read and it saves calling getOutputStream() repeatedly.
OutputStream outStream = resp.getOutputStream();
while (true) {
    int bytesRead = inputStream.read(buffer);
    if (bytesRead < 0)
        break;
    outStream.write(buffer, 0, bytesRead);
}
inputStream.close();
outStream.close();
Unrelated to your memory problems, the while loop condition should be:
while (bytesRead > 0);
Your code has an infinite loop.
do
{
    bytesRead = inputStream.read(buffer, offset, buffer.length);
    resp.getOutputStream().write(buffer, 0, bytesRead);
}
while (bytesRead == buffer.length);
offset has the same value throughout the loop, so if initially offset = 0, it will remain 0 in every iteration, which can cause an infinite loop and lead to the OOM error.
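Putting those fixes together, a hedged rewrite of the question's do/while (keeping the question's variable names, dropping the offset argument) reads into the start of the buffer each time and stops on -1 rather than comparing against buffer.length:
OutputStream out = resp.getOutputStream();
byte[] buffer = new byte[8192];
int bytesRead;
// read() returns -1 at end of file; a short read does not mean the file is done,
// so the bytesRead == buffer.length test is not a reliable stopping condition
while ((bytesRead = inputStream.read(buffer)) != -1) {
    out.write(buffer, 0, bytesRead);
}
out.flush();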
IBM WebSphere Application Server uses asynchronous data transfer for servlets by default. That means it buffers the response. If you have problems with large data and OutOfMemory exceptions, try changing the WAS settings to use synchronous mode.
Setting the WebSphere Application Server WebContainer to synchronous mode
You must also take care to load the data in chunks and flush them.
Sample for streaming from a large file:
ServletOutputStream os = response.getOutputStream();
FileInputStream fis = new FileInputStream(file);
try {
    int buffSize = 1024;
    byte[] buffer = new byte[buffSize];
    int len;
    while ((len = fis.read(buffer)) != -1) {
        os.write(buffer, 0, len);
        os.flush();
        response.flushBuffer();
    }
} finally {
    // close the file as well, so the descriptor is not leaked
    try {
        fis.close();
    } finally {
        os.close();
    }
}