Easy way to write contents of a Java InputStream to an OutputStream
I was surprised to find today that I couldn't track down any simple way to write the contents of an InputStream to an OutputStream in Java. Obviously, the byte buffer code isn't difficult to write, but I suspect I'm just missing something which would make my life easier (and the code clearer).
So, given an InputStream in and an OutputStream out, is there a simpler way to write the following?
byte[] buffer = new byte[1024];
int len = in.read(buffer);
while (len != -1) {
    out.write(buffer, 0, len);
    len = in.read(buffer);
}
As WMR mentioned, org.apache.commons.io.IOUtils from Apache has a method called copy(InputStream,OutputStream) which does exactly what you're looking for.
So, you have:
InputStream in;
OutputStream out;
IOUtils.copy(in,out);
in.close();
out.close();
...in your code.
Is there a reason you're avoiding IOUtils?
If you are using Java 7, Files (in the standard library) is the best approach:
/* You can get Path from file also: file.toPath() */
Files.copy(InputStream in, Path target)
Files.copy(Path source, OutputStream out)
Edit: Of course this is only useful when one end of the copy is a file. Use file.toPath() to get a Path from a File.
To write into an existing file (e.g. one created with File.createTempFile()), you'll need to pass the REPLACE_EXISTING copy option (otherwise FileAlreadyExistsException is thrown):
Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING)
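A minimal round-trip sketch of both Files.copy overloads, assuming Java 7+. The temp file is created by Files.createTempFile, which is exactly why REPLACE_EXISTING is needed as described above:

```java
import java.io.*;
import java.nio.file.*;

public class FilesCopyDemo {
    public static void main(String[] args) throws IOException {
        // createTempFile already creates the file on disk
        Path target = Files.createTempFile("copy-demo", ".bin");
        byte[] data = {1, 2, 3, 4};

        try (InputStream in = new ByteArrayInputStream(data)) {
            // Without REPLACE_EXISTING this would throw FileAlreadyExistsException
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }

        // Copy back out of the file into an OutputStream
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        Files.copy(target, out);

        if (!java.util.Arrays.equals(out.toByteArray(), data))
            throw new AssertionError("round trip failed");
        Files.deleteIfExists(target);
    }
}
```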
Java 9
Since Java 9, InputStream provides a method called transferTo with the following signature:
public long transferTo(OutputStream out) throws IOException
As the documentation states:
Reads all bytes from this input stream and writes the bytes to the
given output stream in the order that they are read. On return, this
input stream will be at end of stream. This method does not close
either stream.
This method may block indefinitely reading from the
input stream, or writing to the output stream. The behavior for the
case where the input and/or output stream is asynchronously closed, or
the thread interrupted during the transfer, is highly input and output
stream specific, and therefore not specified
So in order to write contents of a Java InputStream to an OutputStream, you can write:
input.transferTo(output);
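A short, self-contained sketch of transferTo, assuming Java 9+ (the stream contents are made up for the demo):

```java
import java.io.*;

public class TransferDemo {
    public static void main(String[] args) throws IOException {
        byte[] data = "hello stream".getBytes();
        InputStream in = new ByteArrayInputStream(data);
        ByteArrayOutputStream out = new ByteArrayOutputStream();

        // Copies everything and returns the number of bytes transferred;
        // note that it closes neither stream.
        long n = in.transferTo(out);

        if (n != data.length || !new String(out.toByteArray()).equals("hello stream"))
            throw new AssertionError("copy failed");
    }
}
```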
I think this will work, but make sure to test it... a minor "improvement", but it might come at some cost in readability.
byte[] buffer = new byte[1024];
int len;
while ((len = in.read(buffer)) != -1) {
    out.write(buffer, 0, len);
}
Using Guava's ByteStreams.copy():
ByteStreams.copy(inputStream, outputStream);
Simple Function
If you only need this for writing an InputStream to a File then you can use this simple function:
private void copyInputStreamToFile(InputStream in, File file) {
    // try-with-resources closes the output stream even if the copy fails
    try (OutputStream out = new FileOutputStream(file)) {
        byte[] buf = new byte[1024];
        int len;
        while ((len = in.read(buf)) > 0) {
            out.write(buf, 0, len);
        }
        in.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
For those who use Spring framework there is a useful StreamUtils class:
StreamUtils.copy(in, out);
The above does not close the streams. If you want the streams closed after the copy, use FileCopyUtils class instead:
FileCopyUtils.copy(in, out);
The JDK uses the same code, so it seems there is no "easier" way without clunky third-party libraries (which probably don't do anything different anyway). The following is directly copied from java.nio.file.Files.java:
// buffer size used for reading and writing
private static final int BUFFER_SIZE = 8192;

/**
 * Reads all bytes from an input stream and writes them to an output stream.
 */
private static long copy(InputStream source, OutputStream sink) throws IOException {
    long nread = 0L;
    byte[] buf = new byte[BUFFER_SIZE];
    int n;
    while ((n = source.read(buf)) > 0) {
        sink.write(buf, 0, n);
        nread += n;
    }
    return nread;
}
PipedInputStream and PipedOutputStream should only be used when you have multiple threads, as noted by the Javadoc.
Also, note that input streams and output streams do not wrap thread interruptions in IOExceptions, so you should consider incorporating an interruption policy into your code:

byte[] buffer = new byte[1024];
int len = in.read(buffer);
while (len != -1) {
    out.write(buffer, 0, len);
    len = in.read(buffer);
    if (Thread.interrupted()) {
        throw new InterruptedException();
    }
}

This would be a useful addition if you expect to copy large volumes of data, or data from streams that get stuck for an intolerably long time.
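A sketch of that interruption policy wrapped into a reusable method; the method name and buffer size are my own choices, not from the original answer:

```java
import java.io.*;

public class InterruptibleCopy {
    // Check the thread's interrupt flag between chunks and abort the copy
    // if it is set. Thread.interrupted() also clears the flag.
    static long copy(InputStream in, OutputStream out)
            throws IOException, InterruptedException {
        byte[] buffer = new byte[1024];
        long total = 0;
        int len;
        while ((len = in.read(buffer)) != -1) {
            out.write(buffer, 0, len);
            total += len;
            if (Thread.interrupted())
                throw new InterruptedException("copy interrupted after " + total + " bytes");
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        long n = copy(new ByteArrayInputStream(new byte[5000]), out);
        if (n != 5000 || out.size() != 5000)
            throw new AssertionError("copy failed");
    }
}
```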
There's no way to do this much more easily with JDK methods, but as Apocalisp has already noted, you're not the only one with this idea: you could use IOUtils from Jakarta Commons IO. It also has a lot of other useful things that IMO should actually be part of the JDK...
Java 7 and try-with-resources give a simplified and readable version:

try (InputStream inputStream = new FileInputStream("C:\\mov.mp4");
     OutputStream outputStream = new FileOutputStream("D:\\mov.mp4")) {
    byte[] buffer = new byte[10 * 1024];
    for (int length; (length = inputStream.read(buffer)) != -1; ) {
        outputStream.write(buffer, 0, length);
    }
} catch (FileNotFoundException exception) {
    exception.printStackTrace();
} catch (IOException ioException) {
    ioException.printStackTrace();
}
Here's how I do it with a simple for loop:

private void copy(final InputStream in, final OutputStream out)
        throws IOException {
    final byte[] b = new byte[8192];
    for (int r; (r = in.read(b)) != -1; ) {
        out.write(b, 0, r);
    }
}
Use Commons Net's Util class:
import org.apache.commons.net.io.Util;
...
Util.copyStream(in, out);
I use BufferedInputStream and BufferedOutputStream to keep the buffering semantics out of the copy code:

try (OutputStream out = new BufferedOutputStream(...);
     InputStream in = new BufferedInputStream(...)) {
    int ch;
    while ((ch = in.read()) != -1) {
        out.write(ch);
    }
}
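A runnable sketch of this buffered byte-at-a-time approach, using in-memory streams in place of the elided ... sources (those are my substitution for the demo):

```java
import java.io.*;

public class BufferedCopyDemo {
    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10_000];
        new java.util.Random(1).nextBytes(data);
        ByteArrayOutputStream sink = new ByteArrayOutputStream();

        // The Buffered* wrappers do the chunked I/O, so the loop itself
        // can stay a trivial byte-at-a-time copy.
        try (InputStream in = new BufferedInputStream(new ByteArrayInputStream(data));
             OutputStream out = new BufferedOutputStream(sink)) {
            int ch;
            while ((ch = in.read()) != -1)
                out.write(ch);
        } // closing the BufferedOutputStream flushes it

        if (!java.util.Arrays.equals(sink.toByteArray(), data))
            throw new AssertionError("copy failed");
    }
}
```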
An IMHO more minimal snippet (that also more narrowly scopes the length variable):

byte[] buffer = new byte[2048];
for (int n = in.read(buffer); n >= 0; n = in.read(buffer))
    out.write(buffer, 0, n);
As a side note, I don't understand why more people don't use a for loop, instead opting for a while with an assign-and-test expression that is regarded by some as "poor" style.
This is my best shot! Do not use inputStream.transferTo(...) because it is too generic; your code's performance will be better if you control your buffer size.

public static void transfer(InputStream in, OutputStream out, int bufferSize) throws IOException {
    byte[] buffer = new byte[bufferSize]; // Your buffer size.
    int n;
    while ((n = in.read(buffer)) > 0)
        out.write(buffer, 0, n);
}

I use it with this (improvable) method when I know the size of the stream in advance:

public static void transfer(int size, InputStream in, OutputStream out) throws IOException {
    transfer(in, out,
            size > 0xFFFF ? 0xFFFF   // 16 bits, 65535
            : size > 0xFFF ? 0xFFF   // 12 bits, 4095
            : size < 0xFF ? 0xFF     // 8 bits, 255
            : size
    );
}
I think it's better to use a large buffer, because most files are larger than 1024 bytes. It's also good practice to check that the number of bytes read is positive.

byte[] buffer = new byte[4096];
int n;
while ((n = in.read(buffer)) > 0) {
    out.write(buffer, 0, n);
}
out.close();
Not very readable, but effective: it has no dependencies and runs with any Java version.

byte[] buffer = new byte[1024];
for (int n; (n = inputStream.read(buffer)) != -1; outputStream.write(buffer, 0, n));
PipedInputStream and PipedOutputStream may be of some use, as you can connect one to the other.
Another possible candidate are the Guava I/O utilities:
http://code.google.com/p/guava-libraries/wiki/IOExplained
I thought I'd use these since Guava is already immensely useful in my project, rather than adding yet another library for one function.
I used the ByteStreamsKt.copyTo(src, dst, buffer.length) method.
Here is my code
public static void replaceCurrentDb(Context context, Uri newDbUri) {
    try {
        File currentDb = context.getDatabasePath(DATABASE_NAME);
        if (currentDb.exists()) {
            InputStream src = context.getContentResolver().openInputStream(newDbUri);
            FileOutputStream dst = new FileOutputStream(currentDb);
            final byte[] buffer = new byte[8 * 1024];
            ByteStreamsKt.copyTo(src, dst, buffer.length);
            src.close();
            dst.close();
            Toast.makeText(context, "SUCCESS! Your selected file is set as current menu.", Toast.LENGTH_LONG).show();
        } else {
            Log.e("DOWNLOAD:::: Database", " fail, database not found");
        }
    } catch (IOException e) {
        Toast.makeText(context, "Data Download FAIL.", Toast.LENGTH_LONG).show();
        Log.e("DOWNLOAD FAIL!!!", "fail, reason:", e);
    }
}
public static boolean copyFile(InputStream inputStream, OutputStream out) {
    byte[] buf = new byte[1024];
    int len;
    long startTime = System.currentTimeMillis();
    try {
        while ((len = inputStream.read(buf)) != -1) {
            out.write(buf, 0, len);
        }
        long endTime = System.currentTimeMillis() - startTime;
        Log.v("", "Time taken to transfer all bytes is: " + endTime);
        out.close();
        inputStream.close();
    } catch (IOException e) {
        return false;
    }
    return true;
}
Try Cactoos:
new LengthOf(new TeeInput(input, output)).value();
More details here: http://www.yegor256.com/2017/06/22/object-oriented-input-output-in-cactoos.html
You can use this method:

public static void copyStream(InputStream is, OutputStream os) throws IOException {
    final int bufferSize = 1024;
    byte[] bytes = new byte[bufferSize];
    int count;
    while ((count = is.read(bytes, 0, bufferSize)) != -1) {
        os.write(bytes, 0, count);
    }
}
Related
Does not closing a FileOutputStream mean nothing is written to the file?
I have a function which writes the given input stream to a given output stream. Code below.

static void copyStream(InputStream is, OutputStream os) throws IOException {
    byte[] buffer = new byte[4096];
    int len;
    while ((len = is.read(buffer)) != -1) {
        os.write(buffer, 0, len);
    }
}

The above function is called from this function:

public static void copyFile(File srcFile, File destFile) throws IOException {
    FileInputStream fis = new FileInputStream(srcFile);
    try {
        FileOutputStream fos = new FileOutputStream(destFile);
        try {
            copyStream(fis, fos);
        } finally {
            if (fos != null) fos.close();
        }
    } finally {
        if (fis != null) fis.close();
    }
}

In this function, I am writing 4 KB at once. I use this function to copy images. Occasionally I see that the destination file is not created, which causes an exception when that file is read for further processing. I am guessing the culprit is not closing the resources. Is my hypothesis good? What are the reasons my function might fail? Please help.
Assuming the given InputStream and OutputStream are set up correctly, add os.flush(); at the end. And of course both streams should be closed in the caller as well. As an alternative, you could use Apache IO utils: org.apache.commons.io.IOUtils.copy(InputStream input, OutputStream output).
Yes, you absolutely must close your destination file to ensure that all caches from the JVM through to the OS are flushed and the file is ready for a reader to consume.

Copying large files the way you are doing is concise in code but inefficient in operation. Consider upgrading your code to use the more efficient NIO methods, documented in a blog post. In case that blog disappears, here's the code.

Utility class:

public final class ChannelTools {
    public static void fastChannelCopy(final ReadableByteChannel src, final WritableByteChannel dest) throws IOException {
        final ByteBuffer buffer = ByteBuffer.allocateDirect(16 * 1024);
        while (src.read(buffer) != -1) {
            // prepare the buffer to be drained
            buffer.flip();
            // write to the channel, may block
            dest.write(buffer);
            // If partial transfer, shift remainder down
            // If buffer is empty, same as doing clear()
            buffer.compact();
        }
        // EOF will leave buffer in fill state
        buffer.flip();
        // make sure the buffer is fully drained
        while (buffer.hasRemaining()) {
            dest.write(buffer);
        }
    }
}

Usage example with your InputStream and OutputStream:

// allocate the streams ... only for example
final InputStream input = new FileInputStream(inputFile);
final OutputStream output = new FileOutputStream(outputFile);

// get a channel from each stream
final ReadableByteChannel inputChannel = Channels.newChannel(input);
final WritableByteChannel outputChannel = Channels.newChannel(output);

// copy the channels
ChannelTools.fastChannelCopy(inputChannel, outputChannel);

// close the channels
inputChannel.close();
outputChannel.close();

There is also a more concise method documented on Wikipedia that achieves the same thing with less code:

// Getting file channels
FileChannel in = new FileInputStream(source).getChannel();
FileChannel out = new FileOutputStream(target).getChannel();

// The JVM does its best to do this as native I/O operations.
in.transferTo(0, in.size(), out);

// Closing file channels will close corresponding stream objects as well.
out.close();
in.close();
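A self-contained sketch of the channel-based copy described above, verified against in-memory streams; the class and method names here are mine, not from the blog:

```java
import java.io.*;
import java.nio.ByteBuffer;
import java.nio.channels.*;

public class ChannelCopyDemo {
    // Same drain/compact pattern as the utility class above.
    static void copy(ReadableByteChannel src, WritableByteChannel dest) throws IOException {
        ByteBuffer buffer = ByteBuffer.allocateDirect(16 * 1024);
        while (src.read(buffer) != -1) {
            buffer.flip();      // drain what was just read
            dest.write(buffer);
            buffer.compact();   // keep any bytes a partial write left behind
        }
        buffer.flip();
        while (buffer.hasRemaining())
            dest.write(buffer); // drain the tail
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[100_000];
        new java.util.Random(42).nextBytes(data);
        ByteArrayOutputStream out = new ByteArrayOutputStream();

        // Channels.newChannel adapts plain streams to channels
        copy(Channels.newChannel(new ByteArrayInputStream(data)),
             Channels.newChannel(out));

        if (!java.util.Arrays.equals(out.toByteArray(), data))
            throw new AssertionError("copy failed");
    }
}
```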
Files too large when downloading pictograms
I'm trying to download some images provided by a hoster. This is the method I use:

public static void downloadImage(String imageLink, File f) throws IOException {
    URL url = new URL(imageLink);
    byte[] buffer = new byte[1024];
    BufferedInputStream in = new BufferedInputStream(url.openStream(), buffer.length);
    BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(f), buffer.length);
    while (in.read(buffer) > 0)
        out.write(buffer);
    out.flush();
    out.close();
    in.close();
}

However, the files turn out too big. 5 MB for an 80x60 JPG is too much in my opinion. What could be the cause of this?
You are doing things wrong here: read() returns the number of bytes that were actually read, so you have to write exactly that number of bytes from your buffer array into your output stream. Your code is corrupting your output by simply writing out the whole buffer array ... which mostly consists of 0s!

Instead do something like:

int bytesRead;
while ((bytesRead = in.read(buffer)) > 0) {
    byte[] outBuffer = new byte[bytesRead];
    // ... then use System.arraycopy to move bytesRead bytes into outBuffer
    out.write(outBuffer);
}

(this is meant as inspiration to get you going, more pseudocode than real code)
Is it possible to read images without ImageIO?
I am trying to read an image and deliver it through a Java socket, but some bits do not fit. When viewing the result in a diff tool I realized that all numbers bigger than 127 were truncated. So I wanted to just convert it to a char[] array and return that instead. Now I'm getting a completely different image, perhaps due to char's size.

try (PrintWriter out = new PrintWriter(this.socket.getOutputStream(), true);
     BufferedInputStream in = new BufferedInputStream(new FileInputStream(filename), BUFSIZ)) {
    byte[] buffer = new byte[BUFSIZ];
    while (in.read(buffer) != -1) {
        response.append(new String(buffer));
        out.print(response.toString());
        response.setLength(0);
    }
} catch (IOException e) {
    System.err.println(e.getMessage());
}

This is my reading and delivering code. I've read many times to use ImageIO, but I want to do it without, since I don't know whether the data is an image or not. (And what about other file types like executables?) So, is there any way to convert the data to something like an unsigned byte that will be delivered correctly to the client? Do I have to use something different than read() to achieve that?
Writers are for character data. Use the OutputStream. And you're making the usual mistake of assuming that read() filled the buffer. The following loop will copy anything correctly. Memorize it.

int count;
byte[] buffer = new byte[8192];
while ((count = in.read(buffer)) > 0) {
    out.write(buffer, 0, count);
}
Repeat after me: a char is not a byte, and it's not a code point. Repeat after me: a Writer is not an OutputStream.

try (OutputStream out = this.socket.getOutputStream();
     BufferedInputStream in = new BufferedInputStream(new FileInputStream(filename), BUFSIZ)) {
    byte[] buffer = new byte[BUFSIZ];
    int len;
    while ((len = in.read(buffer)) != -1) {
        out.write(buffer, 0, len);
    }
} catch (IOException e) {
    System.err.println(e.getMessage());
}

(this is from memory, check the args for write())
Connecting an input stream to an output stream
Update, in Java 9: https://docs.oracle.com/javase/9/docs/api/java/io/InputStream.html#transferTo-java.io.OutputStream-

I saw some similar, but not-quite-what-I-need threads. I have a server which will basically take input from one client, client A, and forward it, byte for byte, to another client, client B. I'd like to connect my input stream of client A with my output stream of client B. Is that possible? What are ways to do that?

Also, these clients are sending each other messages which are somewhat time sensitive, so buffering won't do. I do not want a buffer of, say, 500 where a client sends 499 bytes and then my server holds off on forwarding them because it hasn't received the last byte to fill the buffer.

Right now, I am parsing each message to find its length, then reading length bytes, then forwarding them. I figured (and tested) this would be better than reading a byte and forwarding a byte over and over, because that would be very slow. I also did not want to use a buffer or a timer for the reason I stated in my last paragraph: I do not want messages waiting a really long time to get through simply because the buffer isn't full.

What's a good way to do this?
Just because you use a buffer doesn't mean the stream has to fill that buffer. In other words, this should be okay:

public static void copyStream(InputStream input, OutputStream output) throws IOException {
    byte[] buffer = new byte[1024]; // Adjust if you want
    int bytesRead;
    while ((bytesRead = input.read(buffer)) != -1) {
        output.write(buffer, 0, bytesRead);
    }
}

That should work fine - basically the read call will block until there's some data available, but it won't wait until it's all available to fill the buffer. (I suppose it could, and I believe FileInputStream usually will fill the buffer, but a stream attached to a socket is more likely to give you the data immediately.) I think it's worth at least trying this simple solution first.
How about just using

void feedInputToOutput(InputStream in, OutputStream out) throws IOException {
    IOUtils.copy(in, out);
}

and be done with it? It's from the Jakarta Apache Commons I/O library, which is used by a huge number of projects already, so you probably already have the jar in your classpath.
JDK 9 has added InputStream#transferTo(OutputStream out) for this functionality.
For completeness, Guava also has a handy utility for this:

ByteStreams.copy(input, output);
You can use a circular buffer.

Code:

// buffer all data in a circular buffer of infinite size
CircularByteBuffer cbb = new CircularByteBuffer(CircularByteBuffer.INFINITE_SIZE);
class1.putDataOnOutputStream(cbb.getOutputStream());
class2.processDataFromInputStream(cbb.getInputStream());

Maven dependency:

<dependency>
  <groupId>org.ostermiller</groupId>
  <artifactId>utils</artifactId>
  <version>1.07.00</version>
</dependency>

More details: http://ostermiller.org/utils/CircularBuffer.html
An asynchronous way to achieve it:

void inputStreamToOutputStream(final InputStream inputStream, final OutputStream out) {
    Thread t = new Thread(new Runnable() {
        public void run() {
            try {
                int d;
                while ((d = inputStream.read()) != -1) {
                    out.write(d);
                }
            } catch (IOException ex) {
                // TODO make a callback on exception.
            }
        }
    });
    t.setDaemon(true);
    t.start();
}
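Piped streams, mentioned earlier in this thread, give another way to connect an InputStream to an OutputStream across two threads. A minimal sketch (the payload string is made up for the demo):

```java
import java.io.*;

public class PipeDemo {
    public static void main(String[] args) throws Exception {
        // Whatever the producer writes to src becomes readable from sink
        PipedOutputStream src = new PipedOutputStream();
        PipedInputStream sink = new PipedInputStream(src);

        // Piped streams require a second thread, per the Javadoc
        Thread producer = new Thread(() -> {
            try (OutputStream out = src) {
                out.write("via pipe".getBytes());
            } catch (IOException ignored) {
            }
        });
        producer.start();

        // Consumer side: ordinary copy loop until EOF
        ByteArrayOutputStream result = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        int n;
        while ((n = sink.read(buf)) != -1)
            result.write(buf, 0, n);
        producer.join();

        if (!result.toString().equals("via pipe"))
            throw new AssertionError("pipe copy failed");
    }
}
```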
BUFFER_SIZE is the size of chunks to read in. It should be > 1 KB and < 10 MB.

private static final int BUFFER_SIZE = 2 * 1024 * 1024;

private void copy(InputStream input, OutputStream output) throws IOException {
    try {
        byte[] buffer = new byte[BUFFER_SIZE];
        int bytesRead = input.read(buffer);
        while (bytesRead != -1) {
            output.write(buffer, 0, bytesRead);
            bytesRead = input.read(buffer);
        }
        // If needed, close streams.
    } finally {
        input.close();
        output.close();
    }
}
Use org.apache.commons.io.IOUtils:

InputStream inStream = new ...
OutputStream outStream = new ...
IOUtils.copy(inStream, outStream);

or copyLarge for sizes > 2 GB.
This is a Scala version that is clean and fast (no stack overflow):

import scala.annotation.tailrec
import java.io._

implicit class InputStreamOps(in: InputStream) {
  def >(out: OutputStream): Unit = pipeTo(out)

  def pipeTo(out: OutputStream, bufferSize: Int = 1 << 10): Unit =
    pipeTo(out, Array.ofDim[Byte](bufferSize))

  @tailrec final def pipeTo(out: OutputStream, buffer: Array[Byte]): Unit =
    in.read(buffer) match {
      case n if n > 0 =>
        out.write(buffer, 0, n)
        pipeTo(out, buffer)
      case _ =>
        in.close()
        out.close()
    }
}

This enables use of the > symbol, e.g. inputstream > outputstream, and you can also pass in custom buffers/sizes.
In case you are into functional style, this is a function written in Scala showing how you could copy an input stream to an output stream using only vals (and not vars):

def copyInputToOutputFunctional(inputStream: InputStream, outputStream: OutputStream, bufferSize: Int) {
  val buffer = new Array[Byte](bufferSize);
  def recurse() {
    val len = inputStream.read(buffer);
    if (len > 0) {
      outputStream.write(buffer.take(len));
      recurse();
    }
  }
  recurse();
}

Note that this is not recommended for use in a Java application with little memory available, because with a recursive function you could easily get a StackOverflowError.