I'm using an InputStream buffer as follows. I'd like to know when it is actually buffering (filling itself with data). I'm feeding it with an internet stream. I put log statements before and after len = in.read(buffer), but they are logged at the same time (so the buffering does not seem to happen there).
conn = new URL(StringUrls[0]).openConnection();
conn.setReadTimeout(5000);
conn.setConnectTimeout(5000);
in = conn.getInputStream();
int len=-1;
buffer = new byte[1024];
Log.v("buffer", "buffering...");
len = in.read(buffer);
Log.v("buffer", "buffered");
http://docs.oracle.com/javase/7/docs/api/java/io/InputStream.html#read(byte[]) states:
This method blocks until input data is available, end of file is detected, or an exception is thrown.
In other words, the buffer is filled as soon as there is data to fill it with.
When you open the connection, data is already available to be read. That is why the InputStream does not have to wait for anything.
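If you want to see where the blocking actually happens, one option is to time each read() call; this is a rough sketch that reuses the question's in, buffer, and log tag:
long start = System.nanoTime();
int len = in.read(buffer);   // blocks here until at least one byte has arrived (or EOF / exception)
long elapsedMs = (System.nanoTime() - start) / 1_000_000;
Log.v("buffer", "read " + len + " bytes in " + elapsedMs + " ms");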
I have a list of files that need to be read from an FTP server.
I have a method readFile(String path, FTPClient client) which reads and prints the file.
public byte[] readFile(String path, FTPClient client) {
    InputStream inStream = null;
    ByteArrayOutputStream os = null;
    byte[] finalBytes = new byte[0];
    int reply;
    int len;
    byte[] buffer = new byte[1024];
    try {
        os = new ByteArrayOutputStream();
        inStream = client.retrieveFileStream(path);
        reply = client.getReplyCode();
        log.warn("In getFTPfilebytes() :: Reply code -" + reply);
        while ((len = inStream.read(buffer)) != -1) {
            // write bytes from the buffer into output stream
            os.write(buffer, 0, len);
        }
        finalBytes = os.toByteArray();
        if (inStream == null) {
            throw new Exception("File not found");
        }
        inStream.close();
    } catch (Exception e) {
    } finally {
        try { inStream.close(); } catch (Exception e) {}
    }
    return finalBytes;
}
I am calling the above method in a loop over a list of file paths.
Issue: in the loop, only the first file is read properly. After that, the file is not read and an exception is thrown; inStream is null for the second iteration/second file. Also, while reading the first file, the reply code after retrieveFileStream is 125 (Data connection already open; transfer starting.).
In the second iteration it gives 200 (The requested action has been successfully completed.).
I am not able to understand what is wrong here.
Am I not closing the InputStream connection properly?
You have to call FTPClient.completePendingCommand and close the input stream, as the documentation for FTPClient.retrieveFileStream says:
Returns an InputStream from which a named file from the server can be read. If the current file type is ASCII, the returned InputStream will convert line separators in the file to the local representation. You must close the InputStream when you finish reading from it. The InputStream itself will take care of closing the parent data connection socket upon being closed.
To finalize the file transfer you must call completePendingCommand and check its return value to verify success. If this is not done, subsequent commands may behave unexpectedly.
inStream = client.retrieveFileStream(path);
try {
    while ((len = inStream.read(buffer)) != -1) {
        // write bytes from the buffer into output stream
        os.write(buffer, 0, len);
    }
    finalBytes = os.toByteArray();
} finally {
    inStream.close();
    if (!client.completePendingCommand()) {
        // error
    }
}
Btw, there are better ways to copy from an InputStream to an OutputStream:
Easy way to write contents of a Java InputStream to an OutputStream
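For instance, on Java 9 or newer the copy loop above can be replaced with a single call; a sketch using the inStream and os variables from the question's method:
// Java 9+: copies everything from inStream into the ByteArrayOutputStream.
inStream.transferTo(os);
finalBytes = os.toByteArray();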
According to the documentation of the FTPClient.retrieveFileStream() method,
You must close the InputStream when you finish reading from it. The InputStream itself will take care of closing the parent data connection socket upon being closed.
When you close the stream, your client connection will be closed too. So instead of using the same client over and over, you need to create a new client connection for each file.
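If you follow this suggestion, a per-file connection might look roughly like the sketch below. The host name, credentials, and passive mode are assumptions, not part of the original question; readFile is the method from the question and paths is its list of file paths.
for (String path : paths) {
    FTPClient client = new FTPClient();
    try {
        client.connect("ftp.example.com");        // placeholder host
        client.login("user", "password");         // placeholder credentials
        client.enterLocalPassiveMode();
        byte[] bytes = readFile(path, client);
        // ... use bytes ...
    } catch (IOException e) {
        // log / handle
    } finally {
        try {
            if (client.isConnected()) {
                client.logout();
                client.disconnect();
            }
        } catch (IOException ignored) {
        }
    }
}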
Also, I see that the output stream is not closed properly.
Is finalBytes 0 bytes?
Where did you define the buffer variable?
Please log the path so that we can see whether it is correct or not. I guess the stream that is not closed properly is causing the issue.
I have an InputStream from a ProcessBuilder that actually reads the stdout stream.
Question: how can I know the size of that in-memory InputStream, so I can write it to an HttpResponse header?
InputStream is = process.getInputStream();
InputStreamReader isr = new InputStreamReader(is);
BufferedReader br = new BufferedReader(isr);
OutputStream out = response.getOutputStream();
int bytes;
while ((bytes = br.read()) != -1) {
    out.write(bytes);
}
//how can I know the size of the inmemory stream/file written?
//response.setContentLength((int) pdfFile.length());
There is no such thing as the size of an input stream. Consider a program which never exits, or a socket peer which never stops sending. And you don't need to know it in order to write the response: the Content-Length header is managed automatically for you.
Try this
InputStream is = process.getInputStream();
ByteArrayOutputStream os = new ByteArrayOutputStream();
int b;
while ((b = is.read()) != -1) {
    os.write(b);
}
response.setContentLength(os.size());
response.getOutputStream().write(os.toByteArray());
If you really want to set the content length header, you'll need to read the entire stream before writing to the response OutputStream
ByteArrayOutputStream out = new ByteArrayOutputStream();
byte[] bytes = new byte[1024];
int count;
while ((count = in.read(bytes)) > 0) {
    out.write(bytes, 0, count);
}
response.setContentLength(out.size());
out.writeTo(response.getOutputStream());
Note: with this approach you have now read the entire stream into memory; this will have an impact on available memory and likely won't scale well.
import org.apache.commons.io.IOUtils;

byte[] bytes = IOUtils.toByteArray(inputStream);
log.message("bytes.length " + bytes.length);
// some byte-range limit; you can use any limit you need
if (bytes.length > 400000) {
    throw new Exception("File size is larger than the allowed limit");
}
An InputStream inherently doesn't have a size. It could conceivably keep delivering bytes forever. Or the producing end could end the stream without warning.
If you must find out the length, then you have to read to the end, counting the bytes, and report the length when you finish.
You're worrying about HTTP's Content-Length header, and you've got a point. The original version of HTTP was not designed for large, dynamically generated content. The protocol inherently expects you to know the size of the content before you start writing it; yet how is that possible if it's, for example, an ongoing chat or the output of a video camera?
The solution is HTTP's chunked transfer encoding. Here you don't set a Content-Length header. You set Transfer-Encoding: chunked, then write the content as chunks, each of which has its own size header.
The HTTP RFC has the details on this, or https://en.wikipedia.org/wiki/Chunked_transfer_encoding is a slightly friendlier read.
However most HTTP APIs hide this detail from you. Unless you are developing a web library from scratch (perhaps for academic reasons), you shouldn't have to think about Content-Length or Transfer-Encoding.
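For what it's worth, with the Servlet API that usually amounts to just streaming the data and not setting Content-Length at all. A minimal sketch, assuming process and response are in scope as in the question:
InputStream is = process.getInputStream();
OutputStream out = response.getOutputStream();
byte[] buf = new byte[8192];
int n;
// No Content-Length is set; the container will normally fall back to
// Transfer-Encoding: chunked when the length is unknown.
while ((n = is.read(buf)) != -1) {
    out.write(buf, 0, n);
}
out.flush();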
I have a problem with sending a large string through a socket from a server to an Android client.
The string is about 10 MB.
Code for writing data to socket is this:
int socketTimeout = 200;
socket = new Socket(client.getHost(), client.getPort());
socket.setSoTimeout(socketTimeout);
OutputStreamWriter oos=new OutputStreamWriter(socket.getOutputStream());
String d = data.getData().toString() + "\n";
oos.write(d);
oos.flush();
Code for reading data from socket is this:
Socket s = params[0];
InputStream is = null;
try {
    is = s.getInputStream();
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    int nRead;
    byte[] data = new byte[32768];
    while ((nRead = is.read(data, 0, data.length)) != -1) {
        baos.write(data, 0, nRead);
    }
    return new String(baos.toByteArray());
}
The problem comes at the line where I'm reading from the InputStream, where I get an OutOfMemoryError. I tried different examples of reading a string from a stream, with BufferedInputStream, InputStreamReader, IOUtils, StringBuilder, BufferedReader, etc., and all of them give me an out-of-memory error when the string is large. I tested with smaller data, around 100 K, and it works perfectly.
The exception that I get on the server side is "Connection reset by peer: socket write error".
You can read byte by byte in the client and write to a file byte by byte; that way you are not holding the whole string in memory.
And then, of course, read that file by tokens or lines, not the whole string at once.
By storing it in a ByteArrayOutputStream (or similar), you are coming up against the maximum heap size for the JVM in Android. This is a different size depending on the device. See: Android heap size on different phones/devices and OS versions
As has already been suggested, you should consider using a file stream to write the received data to disk.
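A minimal sketch of that suggestion, reusing the question's socket s; the file name and the context variable are assumptions (on Android you would typically put the file somewhere under getFilesDir()):
InputStream is = s.getInputStream();
// Copy the socket data to disk in chunks so the whole 10 MB string
// never has to sit on the heap at once.
File target = new File(context.getFilesDir(), "received_data.txt");  // context is assumed to be in scope
try (FileOutputStream fos = new FileOutputStream(target)) {
    byte[] data = new byte[32768];
    int nRead;
    while ((nRead = is.read(data, 0, data.length)) != -1) {
        fos.write(data, 0, nRead);
    }
}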
I need to derive one InputStream from another. For example, InputStream A is 1024 bytes and I need an InputStream B covering A from the 150th byte to the end, i.e. from a certain offset to a certain end. I searched Google and Stack Overflow... Is there any solution available in Java?
You can use the method "skip" to skip the first 150 bytes.
Here is an example:
byte[] buf = {1,2,3,4,5,6,7,8,9};
InputStream is1 = new ByteArrayInputStream(buf);
long skip = is1.skip(5);
System.out.println(is1.read());
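If you also need to cut the stream off at a certain end, not just skip the start, one sketch is to skip to the offset and then copy only a fixed number of bytes into a new in-memory stream. The source array, offset, and length here are hypothetical example values:
byte[] source = new byte[1024];                  // stands in for the question's 1024-byte stream
InputStream a = new ByteArrayInputStream(source);

int offset = 150;                                // hypothetical start
int length = 300;                                // hypothetical window size

long skipped = a.skip(offset);                   // note: skip() may skip fewer bytes than requested
byte[] window = new byte[length];
int total = 0;
int n;
while (total < length && (n = a.read(window, total, length - total)) != -1) {
    total += n;
}
InputStream b = new ByteArrayInputStream(window, 0, total);  // "sub-stream" of A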
If you know that you have a FileInputStream, you can use FileChannel.position() to set where in the file that stream will read from.
FileInputStream in = new FileInputStream("whatever");
FileChannel channel = in.getChannel();
channel.position(10);
This will not work with other types of streams.
I would like to send an image file from a Java server to an Android app using this code:
Server (Java):
File file = new File("./clique.jpg");
FileInputStream stream = new FileInputStream(file);
DataOutputStream writer = new DataOutputStream(socket.getOutputStream());
byte[] contextB = new byte[4096];
int n;
int i = 0;
while ((n = stream.read(contextB)) != -1) {
    writer.write(contextB, 0, n);
    writer.flush();
    System.out.println(n);
    i += n;
}
writer.flush();
stream.close();
Android app:
DataInputStream reader = new DataInputStream(socket.getInputStream());
byte[] buffer = new byte[4096];
ByteArrayOutputStream content = new ByteArrayOutputStream();
int n;
int i = 0;
reader = new DataInputStream(socket.getInputStream());
while ((n = reader.read(buffer)) != null) {
    content.write(buffer, 0, n);
    content.flush();
}
Utility.CreateImageFile(content.toByteArray());
What I noticed is that in the Android app, n (the number of bytes read) is not 4096, even though the server sends blocks of 4096 bytes. Also, I never get n = -1 (the end of stream); the read blocks until I close the app, and only then do I get n = -1.
Regarding the number of bytes you read at a time: it has nothing to do with the number of bytes you write. It depends very much on the network conditions and will vary with every chunk (you basically get as many bytes as managed to arrive in the short period between your reads).
Regarding the end of stream: in your server code you have forgotten to close the output stream (you only close the input stream; you should also close the writer, which in turn will close the underlying output stream).
Two comments:
1) I would really recommend wrapping your reads and writes in buffered streams (or readers/writers for text); the code will be nicer and you will not have to create and manage the buffers yourself.
2) Use try {} finally and close your streams in the finally clause; that is the best practice and makes sure the streams are closed and the resources freed even if something goes wrong while reading or writing. A combined sketch follows below.
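Putting both comments together, the server side could be restructured roughly like this. It is only a sketch; a buffered byte stream is used here (rather than a Reader/Writer) because the payload is an image:
FileInputStream stream = null;
BufferedOutputStream writer = null;
try {
    stream = new FileInputStream("./clique.jpg");
    writer = new BufferedOutputStream(socket.getOutputStream());
    byte[] chunk = new byte[4096];
    int n;
    while ((n = stream.read(chunk)) != -1) {
        writer.write(chunk, 0, n);
    }
    writer.flush();
} finally {
    // Closing the writer closes the socket's output stream as well,
    // which is what lets the client finally see read() return -1.
    if (writer != null) try { writer.close(); } catch (IOException ignored) {}
    if (stream != null) try { stream.close(); } catch (IOException ignored) {}
}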
You have a problem in your Android code:
while ( (n=reader.read(buffer)) != null) {
n is an int, so it can never be null (that comparison will not even compile); test against -1 instead.
Use writer.close() instead of writer.flush() after your loop on the server.
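With both fixes in place, the client loop from the question would look something like this sketch:
DataInputStream reader = new DataInputStream(socket.getInputStream());
ByteArrayOutputStream content = new ByteArrayOutputStream();
byte[] buffer = new byte[4096];
int n;
// read() returns an int: the number of bytes read, or -1 at end of stream.
while ((n = reader.read(buffer)) != -1) {
    content.write(buffer, 0, n);
}
Utility.CreateImageFile(content.toByteArray());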