How to measure upload bitrate using Java + Google Data API

I'm writing a Java client application which uses the Google Data API to upload things to YouTube. I'm wondering how I would go about tracking the progress of an upload. Using the Google Data API library, I simply call service.insert() to insert a new video, which blocks until the upload is complete.
Has anyone else come up with a solution to monitor the status of the upload and count the bytes as they are sent?
Thanks for any ideas
Link:
http://code.google.com/apis/youtube/2.0/developers_guide_java.html#Direct_Upload

Extend com.google.gdata.data.media.MediaSource's writeTo() so that it keeps a running count of the bytes read:
public static void writeTo(MediaSource source, OutputStream outputStream)
        throws IOException {
    InputStream sourceStream = source.getInputStream();
    BufferedOutputStream bos = new BufferedOutputStream(outputStream);
    BufferedInputStream bis = new BufferedInputStream(sourceStream);
    long byteCounter = 0L;
    try {
        byte[] buf = new byte[2048]; // Transfer in 2k chunks
        int bytesRead = 0;
        while ((bytesRead = bis.read(buf, 0, buf.length)) >= 0) {
            byteCounter += bytesRead; // running total of bytes sent so far
            bos.write(buf, 0, bytesRead);
        }
        bos.flush();
    } finally {
        bis.close();
    }
}
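Another option (a sketch of my own, not something the GData library provides) is to wrap whichever OutputStream the upload is actually written to in a counting filter and report progress through a callback. The class name ProgressOutputStream and the LongConsumer listener below are illustrative choices, and the lambda-friendly callback requires Java 8+:

import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.function.LongConsumer;

/** Wraps any OutputStream and reports a running total of bytes written to a callback. */
class ProgressOutputStream extends FilterOutputStream {
    private final LongConsumer progressListener; // invoked after every write with the new total
    private long bytesWritten = 0L;

    ProgressOutputStream(OutputStream out, LongConsumer progressListener) {
        super(out);
        this.progressListener = progressListener;
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        bytesWritten++;
        progressListener.accept(bytesWritten);
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len); // bypass FilterOutputStream's byte-at-a-time default
        bytesWritten += len;
        progressListener.accept(bytesWritten);
    }
}

Upload bitrate is then just the change in bytesWritten divided by the elapsed time between two callback invocations.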

Related

How to write FileInputStream to ServletOutputStream [duplicate]


Too large files when downloading pictograms

I'm trying to download some images provided by a hoster. This is the method I use:
public static void downloadImage(String imageLink, File f) throws IOException
{
    URL url = new URL(imageLink);
    byte[] buffer = new byte[1024];
    BufferedInputStream in = new BufferedInputStream(url.openStream(), buffer.length);
    BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(f), buffer.length);
    while (in.read(buffer) > 0)
        out.write(buffer);
    out.flush();
    out.close();
    in.close();
}
However, the files turn out too big: 5 MB for an 80x60 JPG is too much in my opinion.
What could be the cause of this?
You are doing things wrong here: read() returns the number of bytes that were actually read, so you have to write exactly that number of bytes from your buffer array into your output stream.
Your code is corrupting your output by always writing out the full buffer array ... which mostly consists of 0s!
Instead do something like:
int bytesRead;
while ((bytesRead = in.read(buffer)) > 0) {
    byte[] outBuffer = new byte[bytesRead];
    System.arraycopy(buffer, 0, outBuffer, 0, bytesRead);
    out.write(outBuffer);
}
(This is meant as inspiration to get you going; simply calling out.write(buffer, 0, bytesRead) avoids the extra copy entirely.)
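For completeness, here is a corrected version of the method from the question (a sketch of mine; it keeps the original structure but only writes the bytes that were actually read, and uses try-with-resources so the streams are closed even on error):

public static void downloadImage(String imageLink, File f) throws IOException {
    URL url = new URL(imageLink);
    byte[] buffer = new byte[1024];
    try (InputStream in = new BufferedInputStream(url.openStream(), buffer.length);
         OutputStream out = new BufferedOutputStream(new FileOutputStream(f), buffer.length)) {
        int bytesRead;
        while ((bytesRead = in.read(buffer)) != -1) {
            out.write(buffer, 0, bytesRead); // write only what was actually read
        }
    }
}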

InputStream - Dealing with network changes

I'm downloading an attachment using the JavaMail API, and whenever there is a small change in network state my app gets stuck and I have to restart it; it doesn't even crash.
This is the code snippet:
InputStream is = bodyPart.getInputStream();
String fileName = MimeUtility.decodeText(bodyPart.getFileName());
// Downloading the file
File f = new File(Constants.getPath() + fileName);
try {
FileOutputStream fos;
fos = new FileOutputStream(f);
byte[] buf = new byte[8*1024];
int bytesRead;
while ((bytesRead = is.read(buf)) != -1) {
fos.write(buf, 0, bytesRead);
}
fos.close();
}
What is the best way to deal with this issue? Thanks.
Your application is stuck because the read blocks forever. The solution is to set a read timeout (see the sketch below); if the timeout expires, a SocketTimeoutException will be thrown.
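For the JavaMail case specifically, the timeouts are set through session properties. A minimal sketch, using the property names documented for the IMAP provider; the prefix depends on your protocol (e.g. "mail.imaps." for IMAP over SSL, "mail.pop3." for POP3), values are in milliseconds, and 10000 is just an illustrative choice:

import java.util.Properties;
import javax.mail.Session;

Properties props = new Properties();
props.setProperty("mail.imap.connectiontimeout", "10000"); // time allowed to establish the connection
props.setProperty("mail.imap.timeout", "10000");           // time allowed for a blocking socket read
Session session = Session.getInstance(props);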

How to write file data correctly?

My application is unable to transfer data over a socket connection and write it to a file properly. Files over about 65,535 bytes get corrupted and are no longer recognized by the programs designed to run them.
I have been able to send small .doc and .txt files successfully, but .mp3, .wmv, .m4a, .avi and just about anything else does not work. Neither do larger docs.
I have looked all over the internet for a solution to this problem and have repeatedly tweaked the I/O code, but it still doesn't work. Here is the I/O code in the superclass that handles sending and receiving files. If you need any more information or other parts of the code, let me know.
protected void sendFile() throws IOException {
    byte[] bytes = new byte[(int) file.length()];
    buffin = new BufferedInputStream(new FileInputStream(file));
    int bytesRead = buffin.read(bytes,0,bytes.length);
    System.out.println(bytesRead);
    out = sock.getOutputStream();
    out.write(bytes,0,fileBytes);
    out.flush();
    out.close();
}
protected void receiveFile() throws IOException {
    byte[] bytes = new byte[fileBytes];
    in = sock.getInputStream();
    for(int i=0;i<fileBytes;i++) {
        in.read(bytes);
    }
    fos = new FileOutputStream("/Datawire/"+fileName);
    buffout = new BufferedOutputStream(fos);
    buffout.write(bytes,0,fileBytes);
    buffout.flush();
    buffout.close();
}
UPDATED CODE (that works):
protected void sendFile() throws IOException {
    if((file.length())<63000) {
        byte[] bytes = new byte[(int)file.length()];
        buffin = new BufferedInputStream(new FileInputStream(file));
        buffin.read(bytes,0,bytes.length);
        out = sock.getOutputStream();
        out.write(bytes,0,bytes.length);
        out.close();
    } else {
        byte[] bytes = new byte[32000];
        buffin = new BufferedInputStream(new FileInputStream(file));
        out = sock.getOutputStream();
        int bytesRead;
        while((bytesRead = buffin.read(bytes))>0) {
            out.write(bytes,0,bytesRead);
        }
        out.close();
    }
}
protected void receiveFile() throws IOException {
    if(fileBytes<63000) {
        byte[] bytes = new byte[32000];
        in = sock.getInputStream();
        System.out.println(in.available());
        in.read(bytes,0,fileBytes);
        fos = new FileOutputStream("/Datawire/"+fileName);
        buffout = new BufferedOutputStream(fos);
        buffout.write(bytes,0,bytes.length);
        buffout.close();
    } else {
        byte[] bytes = new byte[16000];
        in = sock.getInputStream();
        fos = new FileOutputStream("/Datawire/"+fileName);
        buffout = new BufferedOutputStream(fos);
        int bytesRead;
        while((bytesRead = in.read(bytes))>0) {
            buffout.write(bytes,0,bytesRead);
        }
        buffout.close();
    }
}
The issue is that you are only sending chunks of it; that is, you only ever send 64k of the file. If the file is larger than 64k, the other end will never see the rest.
You want to continuously read from the BufferedInputStream until read() returns -1.
Your code is completely wrong. This is how to copy a stream in Java:
int count;
byte[] buffer = new byte[8192]; // more if you like but no need for it to be the entire file size
while ((count = in.read(buffer)) > 0)
{
out.write(buffer, 0, count);
}
You should use this both when sending the file and when receiving the file. At present your sending method hopes that the entire file fits into memory; fits into INTEGER_MAX bytes; and is read in one chunk by the read method, without even checking the result. You can't assume any of those things. Your receive method is complete rubbish: it just keeps overwriting the same array, again without checking any read() results.
EDIT: Your revised code is just as bad, or worse. You are calling read() to check for EOS and then throwing that byte away, and then calling read() again and throwing away the read count it returns. You pointlessly have a different path for files < 64000, or 63000, or whatever it is, that has zero benefit except to give you two code paths to test, or possibly four, instead of one. The network only gives you 1460 bytes at a time at best anyway so what is the point? You already have (a) a BufferedInputStream with a default buffersize of 8192, and (b) my code that uses a byte[] buffer of any size you like. My code above works for any amount of data in two lines of executable code. Yours is 20. QED.
I suggest that you use a good library for reading and writing file contents as well as for socket read/write, for example Apache Commons IO. If you insist on writing the code yourself, do it in smaller chunks rather than the whole file at once.
You have to consider that InputStream.read returns the number of bytes read which may be less than the total number of bytes in the file.
You would probably be better off just letting something like CopyUtils.copy take care of this for you.
You need to loop until bytesRead < 0. You also need to make sure that fileBytes is >= the size of the transferred file.
protected void receiveFile() throws IOException {
    byte[] bytes = new byte[fileBytes];
    InputStream is = sock.getInputStream();
    FileOutputStream fos = new FileOutputStream("/Datawire/"+fileName);
    BufferedOutputStream bos = new BufferedOutputStream(fos);
    int bytesRead = is.read(bytes,0,bytes.length);
    int current = bytesRead;
    do {
        bytesRead = is.read(bytes, current, (bytes.length-current));
        if(bytesRead >= 0) current += bytesRead;
    } while(bytesRead > -1);
    bos.write(bytes, 0, current);
    bos.flush();
    bos.close();
}

Easy way to write contents of a Java InputStream to an OutputStream

I was surprised to find today that I couldn't track down any simple way to write the contents of an InputStream to an OutputStream in Java. Obviously, the byte buffer code isn't difficult to write, but I suspect I'm just missing something which would make my life easier (and the code clearer).
So, given an InputStream in and an OutputStream out, is there a simpler way to write the following?
byte[] buffer = new byte[1024];
int len = in.read(buffer);
while (len != -1) {
out.write(buffer, 0, len);
len = in.read(buffer);
}
As WMR mentioned, org.apache.commons.io.IOUtils from Apache has a method called copy(InputStream,OutputStream) which does exactly what you're looking for.
So, you have:
InputStream in;
OutputStream out;
IOUtils.copy(in,out);
in.close();
out.close();
...in your code.
Is there a reason you're avoiding IOUtils?
If you are using Java 7, Files (in the standard library) is the best approach:
/* You can get Path from file also: file.toPath() */
Files.copy(InputStream in, Path target)
Files.copy(Path source, OutputStream out)
Edit: Of course this is only useful when one side of the copy is a file. Use file.toPath() to get a Path from a File.
To write into an existing file (e.g. one created with File.createTempFile()), you'll need to pass the REPLACE_EXISTING copy option (otherwise FileAlreadyExistsException is thrown):
Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING)
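For reference, a minimal usage sketch of the two overloads; the wrapper method, the file name and the variable names are just illustrative:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

static void copyExamples(InputStream in, OutputStream out) throws IOException {
    Path target = Paths.get("download.tmp"); // placeholder file name

    // InputStream -> file, overwriting the file if it already exists
    Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);

    // file -> OutputStream
    Files.copy(target, out);
}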
Java 9
Since Java 9, InputStream provides a method called transferTo with the following signature:
public long transferTo(OutputStream out) throws IOException
As the documentation states, transferTo:
Reads all bytes from this input stream and writes the bytes to the given output stream in the order that they are read. On return, this input stream will be at end of stream. This method does not close either stream.
This method may block indefinitely reading from the input stream, or writing to the output stream. The behavior for the case where the input and/or output stream is asynchronously closed, or the thread interrupted during the transfer, is highly input and output stream specific, and therefore not specified.
So in order to write contents of a Java InputStream to an OutputStream, you can write:
input.transferTo(output);
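A minimal usage sketch, assuming Java 9+; the helper method and file names are placeholders of mine:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

static void copyWithTransferTo() throws IOException {
    try (InputStream in = Files.newInputStream(Paths.get("input.bin"));
         OutputStream out = Files.newOutputStream(Paths.get("output.bin"))) {
        long bytesCopied = in.transferTo(out); // reads to end of stream; closes neither stream
        System.out.println("Copied " + bytesCopied + " bytes");
    }
}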
I think this will work, but make sure to test it... It's a minor "improvement", but it might come at a slight cost to readability.
byte[] buffer = new byte[1024];
int len;
while ((len = in.read(buffer)) != -1) {
out.write(buffer, 0, len);
}
Using Guava's ByteStreams.copy():
ByteStreams.copy(inputStream, outputStream);
Simple Function
If you only need this for writing an InputStream to a File then you can use this simple function:
private void copyInputStreamToFile(InputStream in, File file) {
    // try-with-resources closes both streams, even if the copy fails part-way
    try (InputStream src = in;
         OutputStream out = new FileOutputStream(file)) {
        byte[] buf = new byte[1024];
        int len;
        while ((len = src.read(buf)) > 0) {
            out.write(buf, 0, len);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
For those who use Spring framework there is a useful StreamUtils class:
StreamUtils.copy(in, out);
The above does not close the streams. If you want the streams closed after the copy, use FileCopyUtils class instead:
FileCopyUtils.copy(in, out);
The JDK uses the same code so it seems like there is no "easier" way without clunky third party libraries (which probably don't do anything different anyway). The following is directly copied from java.nio.file.Files.java:
// buffer size used for reading and writing
private static final int BUFFER_SIZE = 8192;
/**
* Reads all bytes from an input stream and writes them to an output stream.
*/
private static long copy(InputStream source, OutputStream sink) throws IOException {
long nread = 0L;
byte[] buf = new byte[BUFFER_SIZE];
int n;
while ((n = source.read(buf)) > 0) {
sink.write(buf, 0, n);
nread += n;
}
return nread;
}
PipedInputStream and PipedOutputStream should only be used when you have multiple threads, as noted by the Javadoc.
Also, note that input streams and output streams do not wrap any thread interruptions with IOExceptions... So, you should consider incorporating an interruption policy into your code:
byte[] buffer = new byte[1024];
int len = in.read(buffer);
while (len != -1) {
out.write(buffer, 0, len);
len = in.read(buffer);
if (Thread.interrupted()) {
throw new InterruptedException();
}
}
This would be a useful addition if you expect to use this API for copying large volumes of data, or data from streams that get stuck for an intolerably long time.
There's no way to do this much more easily with JDK methods, but as Apocalisp has already noted, you're not the only one with this idea: you could use IOUtils from Jakarta Commons IO, which also has a lot of other useful things that IMO should actually be part of the JDK...
Using Java 7 and try-with-resources gives a simplified and readable version:
try(InputStream inputStream = new FileInputStream("C:\\mov.mp4");
OutputStream outputStream = new FileOutputStream("D:\\mov.mp4")) {
byte[] buffer = new byte[10*1024];
for (int length; (length = inputStream.read(buffer)) != -1; ) {
outputStream.write(buffer, 0, length);
}
} catch (FileNotFoundException exception) {
exception.printStackTrace();
} catch (IOException ioException) {
ioException.printStackTrace();
}
Here is how I do it with a simple for loop:
private void copy(final InputStream in, final OutputStream out)
throws IOException {
final byte[] b = new byte[8192];
for (int r; (r = in.read(b)) != -1;) {
out.write(b, 0, r);
}
}
Use Commons Net's Util class:
import org.apache.commons.net.io.Util;
...
Util.copyStream(in, out);
I use BufferedInputStream and BufferedOutputStream to keep the buffering details out of the copy loop:
try (OutputStream out = new BufferedOutputStream(...);
     InputStream in = new BufferedInputStream(...)) {
    int ch;
    while ((ch = in.read()) != -1) {
        out.write(ch);
    }
}
An IMHO more minimal snippet (that also scopes the length variable more narrowly):
byte[] buffer = new byte[2048];
for (int n = in.read(buffer); n >= 0; n = in.read(buffer))
out.write(buffer, 0, n);
As a side note, I don't understand why more people don't use a for loop, instead opting for a while with an assign-and-test expression that is regarded by some as "poor" style.
This is my best shot!!
Don't use inputStream.transferTo(...) because it is too generic; your code's performance will be better if you control your buffer memory.
public static void transfer(InputStream in, OutputStream out, int buffer) throws IOException {
byte[] read = new byte[buffer]; // Your buffer size.
while (0 < (buffer = in.read(read)))
out.write(read, 0, buffer);
}
I use it with this (improvable) method when I know in advance the size of the stream.
public static void transfer(int size, InputStream in, OutputStream out) throws IOException {
    transfer(in, out,
        size > 0xFFFF ? 0xFFFF // 65,535
        : size > 0xFFF ? 0xFFF // 4,095
        : size < 0xFF ? 0xFF   // 255
        : size
    );
}
I think it's better to use a larger buffer, because most files are bigger than 1024 bytes. It's also good practice to check that the number of bytes read is positive.
byte[] buffer = new byte[4096];
int n;
while ((n = in.read(buffer)) > 0) {
out.write(buffer, 0, n);
}
out.close();
Not very readable, but effective: it has no dependencies and runs with any Java version.
byte[] buffer=new byte[1024];
for(int n; (n=inputStream.read(buffer))!=-1; outputStream.write(buffer,0,n));
PipedInputStream and PipedOutputStream may be of some use, as you can connect one to the other.
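A minimal sketch of connecting the two (my own illustration, Java 8+ for the lambda): one thread writes into the PipedOutputStream while another thread reads from the connected PipedInputStream.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.nio.charset.StandardCharsets;

public class PipeDemo {
    public static void main(String[] args) throws IOException {
        PipedOutputStream pipeOut = new PipedOutputStream();
        PipedInputStream pipeIn = new PipedInputStream(pipeOut); // connected pair

        // Producer thread writes into one end of the pipe.
        Thread producer = new Thread(() -> {
            try (OutputStream out = pipeOut) {
                out.write("hello from the producer".getBytes(StandardCharsets.UTF_8));
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        producer.start();

        // Consumer (this thread) reads from the other end until the producer closes it.
        try (InputStream in = pipeIn) {
            byte[] buffer = new byte[1024];
            int n;
            while ((n = in.read(buffer)) != -1) {
                System.out.write(buffer, 0, n);
            }
            System.out.flush();
        }
    }
}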
Another possible candidate is the Guava I/O utilities:
http://code.google.com/p/guava-libraries/wiki/IOExplained
I thought I'd use these since Guava is already immensely useful in my project, rather than adding yet another library for one function.
I used the ByteStreamsKt.copyTo(src, dst, buffer.length) method (from the Kotlin standard library).
Here is my code:
public static void replaceCurrentDb(Context context, Uri newDbUri) {
    try {
        File currentDb = context.getDatabasePath(DATABASE_NAME);
        if (currentDb.exists()) {
            InputStream src = context.getContentResolver().openInputStream(newDbUri);
            FileOutputStream dst = new FileOutputStream(currentDb);
            final byte[] buffer = new byte[8 * 1024];
            ByteStreamsKt.copyTo(src, dst, buffer.length);
            src.close();
            dst.close();
            Toast.makeText(context, "SUCCESS! Your selected file is set as current menu.", Toast.LENGTH_LONG).show();
        } else {
            Log.e("DOWNLOAD:::: Database", " fail, database not found");
        }
    } catch (IOException e) {
        Toast.makeText(context, "Data Download FAIL.", Toast.LENGTH_LONG).show();
        Log.e("DOWNLOAD FAIL!!!", "fail, reason:", e);
    }
}
public static boolean copyFile(InputStream inputStream, OutputStream out) {
byte buf[] = new byte[1024];
int len;
long startTime=System.currentTimeMillis();
try {
while ((len = inputStream.read(buf)) != -1) {
out.write(buf, 0, len);
}
long endTime=System.currentTimeMillis()-startTime;
Log.v("","Time taken to transfer all bytes is : "+endTime);
out.close();
inputStream.close();
} catch (IOException e) {
return false;
}
return true;
}
Try Cactoos:
new LengthOf(new TeeInput(input, output)).value();
More details here: http://www.yegor256.com/2017/06/22/object-oriented-input-output-in-cactoos.html
You can use this method:
public static void copyStream(InputStream is, OutputStream os) throws IOException {
    final int bufferSize = 1024;
    byte[] bytes = new byte[bufferSize];
    int count;
    // read() returns -1 at end of stream; propagate IOExceptions instead of swallowing them
    while ((count = is.read(bytes, 0, bufferSize)) != -1) {
        os.write(bytes, 0, count);
    }
}
