I was surprised to find today that I couldn't track down any simple way to write the contents of an InputStream to an OutputStream in Java. Obviously, the byte buffer code isn't difficult to write, but I suspect I'm just missing something which would make my life easier (and the code clearer).
So, given an InputStream in and an OutputStream out, is there a simpler way to write the following?
byte[] buffer = new byte[1024];
int len = in.read(buffer);
while (len != -1) {
out.write(buffer, 0, len);
len = in.read(buffer);
}
As WMR mentioned, org.apache.commons.io.IOUtils from Apache has a method called copy(InputStream,OutputStream) which does exactly what you're looking for.
So, you have:
InputStream in;
OutputStream out;
IOUtils.copy(in,out);
in.close();
out.close();
...in your code.
Is there a reason you're avoiding IOUtils?
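If you also want the streams closed when the copy fails, the same call fits naturally in try-with-resources (Java 7+). A minimal sketch, with placeholder file names:

try (InputStream in = new FileInputStream("in.bin");
     OutputStream out = new FileOutputStream("out.bin")) {
    IOUtils.copy(in, out); // copies everything; closing is handled by try-with-resources
}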
If you are using Java 7, Files (in the standard library) is the best approach:
/* You can get Path from file also: file.toPath() */
Files.copy(InputStream in, Path target)
Files.copy(Path source, OutputStream out)
Edit: Of course this is only useful when one of the streams comes from a file. Use file.toPath() to get a Path from a File.
To write into an existing file (e.g. one created with File.createTempFile()), you'll need to pass the REPLACE_EXISTING copy option (otherwise FileAlreadyExistsException is thrown):
Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING)
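For example, a minimal sketch that copies a stream into a freshly created temp file (the source path is a placeholder):

Path target = Files.createTempFile("copy-demo", ".bin");
try (InputStream in = Files.newInputStream(Paths.get("source.bin"))) {
    // REPLACE_EXISTING because createTempFile already created the file
    Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
}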
Java 9
Since Java 9, InputStream provides a method called transferTo with the following signature:
public long transferTo(OutputStream out) throws IOException
As the documentation states:
Reads all bytes from this input stream and writes the bytes to the given output stream in the order that they are read. On return, this input stream will be at end of stream. This method does not close either stream.

This method may block indefinitely reading from the input stream, or writing to the output stream. The behavior for the case where the input and/or output stream is asynchronously closed, or the thread interrupted during the transfer, is highly input and output stream specific, and therefore not specified.
So in order to write the contents of a Java InputStream to an OutputStream, you can write:
input.transferTo(output);
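For example, a minimal try-with-resources sketch (the file names are placeholders):

try (InputStream in = new FileInputStream("in.bin");
     OutputStream out = new FileOutputStream("out.bin")) {
    long copied = in.transferTo(out); // Java 9+, returns the number of bytes transferred
}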
I think this will work, but make sure to test it... a minor "improvement", though it may come at a slight cost to readability.
byte[] buffer = new byte[1024];
int len;
while ((len = in.read(buffer)) != -1) {
out.write(buffer, 0, len);
}
Using Guava's ByteStreams.copy():
ByteStreams.copy(inputStream, outputStream);
Simple Function
If you only need this for writing an InputStream to a File then you can use this simple function:
private void copyInputStreamToFile(InputStream in, File file) {
    // try-with-resources closes both streams, even if the copy fails
    try (InputStream is = in;
         OutputStream out = new FileOutputStream(file)) {
        byte[] buf = new byte[1024];
        int len;
        while ((len = is.read(buf)) != -1) {
            out.write(buf, 0, len);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
For those who use the Spring framework, there is a useful StreamUtils class:
StreamUtils.copy(in, out);
The above does not close the streams. If you want the streams closed after the copy, use the FileCopyUtils class instead:
FileCopyUtils.copy(in, out);
The JDK uses the same code, so it seems there is no "easier" way without clunky third-party libraries (which probably don't do anything different anyway). The following is copied directly from java.nio.file.Files.java:
// buffer size used for reading and writing
private static final int BUFFER_SIZE = 8192;
/**
* Reads all bytes from an input stream and writes them to an output stream.
*/
private static long copy(InputStream source, OutputStream sink) throws IOException {
long nread = 0L;
byte[] buf = new byte[BUFFER_SIZE];
int n;
while ((n = source.read(buf)) > 0) {
sink.write(buf, 0, n);
nread += n;
}
return nread;
}
PipedInputStream and PipedOutputStream should only be used when you have multiple threads, as noted by the Javadoc.
Also, note that input streams and output streams do not wrap any thread interruptions with IOExceptions... So, you should consider incorporating an interruption policy into your code:
byte[] buffer = new byte[1024];
int len = in.read(buffer);
while (len != -1) {
out.write(buffer, 0, len);
len = in.read(buffer);
if (Thread.interrupted()) {
throw new InterruptedException();
}
}
This would be a useful addition if you expect to use this API to copy large volumes of data, or data from streams that get stuck for an intolerably long time.
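One way to package that policy, as a sketch: surface the interruption as InterruptedIOException (a JDK subclass of IOException), so the method's signature stays plain IOException. The method name here is made up for illustration:

private static long copyInterruptibly(InputStream in, OutputStream out) throws IOException {
    byte[] buffer = new byte[1024];
    long total = 0;
    int len;
    while ((len = in.read(buffer)) != -1) {
        out.write(buffer, 0, len);
        total += len;
        if (Thread.interrupted()) {
            Thread.currentThread().interrupt(); // restore the flag for callers
            throw new InterruptedIOException("stream copy interrupted");
        }
    }
    return total;
}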
There's no way to do this much more easily with JDK methods, but as Apocalisp has already noted, you're not the only one with this idea: you could use IOUtils from Jakarta Commons IO, which also has a lot of other useful things that IMO should actually be part of the JDK...
Using Java 7 and try-with-resources gives a simplified and readable version:
try(InputStream inputStream = new FileInputStream("C:\\mov.mp4");
OutputStream outputStream = new FileOutputStream("D:\\mov.mp4")) {
byte[] buffer = new byte[10*1024];
for (int length; (length = inputStream.read(buffer)) != -1; ) {
outputStream.write(buffer, 0, length);
}
} catch (FileNotFoundException exception) {
exception.printStackTrace();
} catch (IOException ioException) {
ioException.printStackTrace();
}
Here is how I do it, with the simplest for loop:
private void copy(final InputStream in, final OutputStream out)
throws IOException {
final byte[] b = new byte[8192];
for (int r; (r = in.read(b)) != -1;) {
out.write(b, 0, r);
}
}
Use Commons Net's Util class:
import org.apache.commons.net.io.Util;
...
Util.copyStream(in, out);
I use BufferedInputStream and BufferedOutputStream to keep the buffering semantics out of the code:
try (OutputStream out = new BufferedOutputStream(...);
InputStream in = new BufferedInputStream(...)) {
int ch;
while ((ch = in.read()) != -1) {
out.write(ch);
}
}
An IMHO more minimal snippet (that also scopes the length variable more narrowly):
byte[] buffer = new byte[2048];
for (int n = in.read(buffer); n >= 0; n = in.read(buffer))
out.write(buffer, 0, n);
As a side note, I don't understand why more people don't use a for loop, instead opting for a while with an assign-and-test expression that is regarded by some as "poor" style.
This is my best shot!!
Do not use inputStream.transferTo(...) because it is too generic; your code's performance will be better if you control your buffer memory.
public static void transfer(InputStream in, OutputStream out, int buffer) throws IOException {
byte[] read = new byte[buffer]; // Your buffer size.
while (0 < (buffer = in.read(read)))
out.write(read, 0, buffer);
}
I use it with this (improvable) method when I know in advance the size of the stream.
public static void transfer(int size, InputStream in, OutputStream out) throws IOException {
transfer(in, out,
size > 0xFFFF ? 0xFFFF // 16 bits, 65,535
: size > 0xFFF ? 0xFFF // 12 bits, 4,095
: size < 0xFF ? 0xFF // 8 bits, 255
: size
);
}
I think it's better to use a large buffer, because most files are larger than 1024 bytes. It's also good practice to check that the number of bytes read is positive:
byte[] buffer = new byte[4096];
int n;
while ((n = in.read(buffer)) > 0) {
out.write(buffer, 0, n);
}
out.close();
Not very readable, but effective: no dependencies, and it runs with any Java version.
byte[] buffer=new byte[1024];
for(int n; (n=inputStream.read(buffer))!=-1; outputStream.write(buffer,0,n));
PipedInputStream and PipedOutputStream may be of some use, as you can connect one to the other.
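A minimal sketch of that connection, assuming the producer runs on its own thread (as the Javadoc requires):

PipedOutputStream source = new PipedOutputStream();
PipedInputStream sink = new PipedInputStream(source); // connect the two ends
new Thread(() -> {
    try {
        source.write(new byte[] {1, 2, 3});
        source.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}).start();
// sink.read(...) on this thread sees the bytes as they arrive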
Another possible candidate are the Guava I/O utilities:
http://code.google.com/p/guava-libraries/wiki/IOExplained
I thought I'd use these since Guava is already immensely useful in my project, rather than adding yet another library for one function.
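Besides ByteStreams.copy() shown above, Guava's source/sink abstractions open and close the streams for you. A sketch, with placeholder file names:

// com.google.common.io.Files, not java.nio.file.Files
Files.asByteSource(new File("in.bin"))
     .copyTo(Files.asByteSink(new File("out.bin")));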
I used the ByteStreamsKt.copyTo(src, dst, buffer.length) method.
Here is my code:
public static void replaceCurrentDb(Context context, Uri newDbUri) {
try {
File currentDb = context.getDatabasePath(DATABASE_NAME);
if (currentDb.exists()) {
InputStream src = context.getContentResolver().openInputStream(newDbUri);
FileOutputStream dst = new FileOutputStream(currentDb);
final byte[] buffer = new byte[8 * 1024];
ByteStreamsKt.copyTo(src, dst, buffer.length);
src.close();
dst.close();
Toast.makeText(context, "SUCCESS! Your selected file is set as current menu.", Toast.LENGTH_LONG).show();
}
else
Log.e("DOWNLOAD:::: Database", " fail, database not found");
}
catch (IOException e) {
Toast.makeText(context, "Data Download FAIL.", Toast.LENGTH_LONG).show();
Log.e("DOWNLOAD FAIL!!!", "fail, reason:", e);
}
}
public static boolean copyFile(InputStream inputStream, OutputStream out) {
byte[] buf = new byte[1024];
int len;
long startTime=System.currentTimeMillis();
try {
while ((len = inputStream.read(buf)) != -1) {
out.write(buf, 0, len);
}
long endTime=System.currentTimeMillis()-startTime;
Log.v("","Time taken to transfer all bytes is : "+endTime);
out.close();
inputStream.close();
} catch (IOException e) {
return false;
}
return true;
}
Try Cactoos:
new LengthOf(new TeeInput(input, output)).value();
More details here: http://www.yegor256.com/2017/06/22/object-oriented-input-output-in-cactoos.html
You can use this method:
public static void copyStream(InputStream is, OutputStream os)
{
final int buffer_size=1024;
try
{
byte[] bytes=new byte[buffer_size];
for(;;)
{
int count=is.read(bytes, 0, buffer_size);
if(count==-1)
break;
os.write(bytes, 0, count);
}
}
catch (IOException ex)
{
ex.printStackTrace(); // don't silently swallow the failure
}
}
My Java program implements a server that should get a very large file, compressed using gzip, from a client over websockets and should check for some bytes pattern in the file content.
The client sends the file chunks embedded inside a proprietary protocol, so I get message after message from the client, parse each message, and extract the gzipped file content.
I can't hold the whole file in the program memory so I'm trying to decompress each chunk, process the data and continue to the next chunk.
I'm using the following code:
public static String gzipDecompress(byte[] compressed) throws IOException {
String uncompressed;
try (
ByteArrayInputStream bis = new ByteArrayInputStream(compressed);
GZIPInputStream gis = new GZIPInputStream(bis);
Reader reader = new InputStreamReader(gis);
Writer writer = new StringWriter()
) {
char[] buffer = new char[10240];
for (int length = 0; (length = reader.read(buffer)) > 0; ) {
writer.write(buffer, 0, length);
}
uncompressed = writer.toString();
}
return uncompressed;
}
But I'm getting the following exception when calling the function with the first compressed chunk:
java.io.EOFException: Unexpected end of ZLIB input stream
at java.util.zip.InflaterInputStream.fill(InflaterInputStream.java:240)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:158)
at java.util.zip.GZIPInputStream.read(GZIPInputStream.java:117)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.Reader.read(Reader.java:140)
It's important to mention that I'm not skipping any chunk and trying to decompress the chunks sequentially.
What am I missing?
The problem is that you are handling those chunks manually.
The correct way is to obtain some InputStream, wrap it with GZIPInputStream, and then read the data:
InputStream is = // obtain the original gzip stream
GZIPInputStream gis = new GZIPInputStream(is);
Reader reader = new InputStreamReader(gis);
//... proceed reading and so on
GZIPInputStream works in stream fashion, so if you only ask 10kb at a time from your reader, the overall memory footprint will be low regardless of the size of the initial GZIP file.
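For instance, a sketch that decompresses in 10 KB slices without ever materializing the whole file; processChunk is a hypothetical placeholder for your pattern check:

static void scanGzipStream(InputStream rawIn) throws IOException {
    try (InputStream gz = new GZIPInputStream(rawIn)) {
        byte[] buf = new byte[10 * 1024];
        for (int n; (n = gz.read(buf)) != -1; ) {
            processChunk(buf, n); // hypothetical: scan this slice for the byte pattern
        }
    }
}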
Update after the question was updated
A possible solution for your situation is to write an InputStream implementation that streams bytes that are being put to it in chunks by your client protocol handler.
Here is a prototype:
public class ProtocolDataInputStream extends InputStream {
private BlockingQueue<byte[]> nextChunks = new ArrayBlockingQueue<byte[]>(100);
private byte[] currentChunk = null;
private int currentChunkOffset = 0;
private boolean noMoreChunks = false;
@Override
public synchronized int read() throws IOException {
boolean takeNextChunk = currentChunk == null || currentChunkOffset >= currentChunk.length;
if (takeNextChunk) {
if (noMoreChunks) {
// stream is exhausted
return -1;
}
try {
currentChunk = nextChunks.take();
} catch (InterruptedException e) {
// convert to an IOException subclass so the signature stays read() throws IOException
throw new InterruptedIOException();
}
currentChunkOffset = 0;
}
// mask so the byte is returned as 0..255, per the InputStream contract
return currentChunk[currentChunkOffset++] & 0xFF;
}
@Override
public synchronized int available() throws IOException {
if (currentChunk == null) {
return 0;
} else {
return currentChunk.length - currentChunkOffset;
}
}
public synchronized void addChunk(byte[] chunk, boolean chunkIsLast) {
nextChunks.add(chunk);
if (chunkIsLast) {
noMoreChunks = true;
}
}
}
Your client protocol handler adds byte chunks using addChunk(), while your decompressing code pulls the data out of this stream (via Reader).
Please note that this code has some issues:
The queue being used has a limited size. If addChunk() is called too frequently, the queue may fill up, which will block addChunk(). This may or may not be desirable.
Only the read() method is implemented, for illustration purposes. For performance, it is better to implement read(byte[]) in the same manner.
Conservative synchronization is used, under the assumption that the reader (the decompressor) and the writer (the protocol handler calling addChunk()) are different threads.
InterruptedException from take() is simply converted to an InterruptedIOException above, to avoid too much detail.
If your decompressor and addChunk() execute in the same thread (in the same loop), then you could try to use the InputStream.available() method when pulling using InputStream or Reader.ready() when pulling with a Reader.
An arbitrary sequence of bytes from a gzipped stream is not valid standalone gzip data. One way or another, you must concatenate all the byte chunks.
The easiest way is to accumulate them all with a simple pipe:
import java.io.IOException;
import java.io.InputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.util.zip.GZIPInputStream;
public class ChunkInflater {
private final PipedOutputStream pipe;
private final InputStream stream;
public ChunkInflater()
throws IOException {
pipe = new PipedOutputStream();
stream = new GZIPInputStream(new PipedInputStream(pipe));
}
public InputStream getInputStream() {
return stream;
}
public void addChunk(byte[] compressedChunk)
throws IOException {
pipe.write(compressedChunk);
}
}
Now you have an InputStream you can read in whatever increments you desire. For instance:
ChunkInflater inflater = new ChunkInflater();
Callable<Void> chunkReader = new Callable<Void>() {
@Override
public Void call()
throws IOException {
byte[] chunk;
while ((chunk = readChunkFromSource()) != null) {
inflater.addChunk(chunk);
}
return null;
}
};
ExecutorService executor = Executors.newSingleThreadExecutor();
executor.submit(chunkReader);
executor.shutdown();
Reader reader = new InputStreamReader(inflater.getInputStream());
// read text here
I am converting some code from the HttpClient 3.x library over to the HttpComponents 4.x library. The old code contains a check to make sure that the response is not over a certain size. This is fairly easy to do in HttpClient 3.x, since you can get back a stream from the response using the getResponseBodyAsStream() method and determine when the size has been exceeded. I can't find a similar way in HttpComponents.
Here's the old code as an example of what I'm trying to do:
private static final long RESPONSE_SIZE_LIMIT = 1024 * 1024 * 10;
private static final int READ_BUFFER_SIZE = 16384;
private static ByteArrayOutputStream readResponseBody(HttpMethodBase method)
throws IOException {
int len;
byte buff[] = new byte[READ_BUFFER_SIZE];
ByteArrayOutputStream out = null;
InputStream in = null;
long byteCount = 0;
in = method.getResponseBodyAsStream();
out = new ByteArrayOutputStream(READ_BUFFER_SIZE);
while ((len = in.read(buff)) != -1 && byteCount <= RESPONSE_SIZE_LIMIT) {
byteCount += len;
out.write(buff, 0, len);
}
if (byteCount >= RESPONSE_SIZE_LIMIT) {
throw new IOException(
"Size limited exceeded reading from HTTP input stream");
}
return (out);
}
You can use HttpEntity.getContent() to get an InputStream to read from yourself.
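A sketch of the same size check against HttpComponents 4.3+ (HttpClients, CloseableHttpResponse and HttpEntity are the 4.x counterparts; the URL is a placeholder and RESPONSE_SIZE_LIMIT is the constant from the old code):

try (CloseableHttpClient client = HttpClients.createDefault();
     CloseableHttpResponse response = client.execute(new HttpGet("http://example.com/"))) {
    try (InputStream in = response.getEntity().getContent()) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buff = new byte[16384];
        long byteCount = 0;
        for (int len; (len = in.read(buff)) != -1; ) {
            byteCount += len;
            if (byteCount > RESPONSE_SIZE_LIMIT) {
                throw new IOException("Size limit exceeded reading from HTTP input stream");
            }
            out.write(buff, 0, len);
        }
    }
}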
We have a class which wraps BouncyCastle (actually SpongyCastle for Android) Blowfish to encrypt data to stream:
public class BlowfishOutputStream extends OutputStream
{
private final OutputStream os;
private final PaddedBufferedBlockCipher bufferedCipher;
Our original code encrypted the whole byte array before writing it to the output stream in a single operation:
public void write(byte[] raw, int offset, int length) throws IOException
{
byte[] out = new byte[bufferedCipher.getOutputSize(length)];
int result = this.bufferedCipher.processBytes(raw, 0, length, out, 0);
if (result > 0)
{
this.os.write(out, 0, result);
}
}
When sending images (i.e. a large amount of data at once) this results in two copies being retained in memory at once.
The following code is meant to be equivalent, but it is not, and I do not know why. I can verify that data is being sent (the sum of c2 equals the length), but an intermediate process when it is received on our server discards the image before we get to see what arrives. All I know at this stage is that when the initial code is used, the response is received and the included images can be extracted; when the replacement code is used, the response is received (and accepted) but the images do not appear to be extracted.
public void write(byte[] raw, int offset, int length) throws IOException
{
// write to the output stream as we encrypt, not all at once.
final byte[] inBuffer = new byte[Constants.ByteBufferSize];
final byte[] outBuffer = new byte[Constants.ByteBufferSize];
ByteArrayInputStream bis = new ByteArrayInputStream(raw);
// read into inBuffer, encrypt into outBuffer and write to output stream
for (int len; (len = bis.read(inBuffer)) != -1;)
{
int c2 = this.bufferedCipher.processBytes(inBuffer, 0, len, outBuffer, 0);
this.os.write(outBuffer, 0, c2);
}
}
Note that the issue is not due to a missing call to doFinal, as this is called when the stream is closed.
public void close() throws IOException
{
byte[] out = new byte[bufferedCipher.getOutputSize(0)];
int result = this.bufferedCipher.doFinal(out, 0);
if (result > 0)
{
this.os.write(out, 0, result);
}
*nb try/catch omitted*
}
Confirmed, although ironically the issue was not with the images but with earlier data: that code was writing the complete raw byte array and not just the range specified. The equivalent code for encrypting the byte array on the fly is:
@Override
public void write(byte[] raw, int offset, int length) throws IOException
{
// write to the stream as we encrypt, not all at once.
final byte[] inBuffer = new byte[Constants.ByteBufferSize];
final byte[] outBuffer = new byte[Constants.ByteBufferSize];
int readStart = offset;
final int readEnd = offset + length; // the requested range ends at offset + length, not at length
// read into inBuffer, encrypt into outBuffer and write to output stream
while (readStart < readEnd)
{
int readAmount = Math.min(readEnd - readStart, inBuffer.length);
System.arraycopy(raw, readStart, inBuffer, 0, readAmount);
readStart += readAmount;
int c2 = this.bufferedCipher.processBytes(inBuffer, 0, readAmount, outBuffer, 0);
this.os.write(outBuffer, 0, c2);
}
}