Why is my binary data bigger after getting it from the webserver? - java

I need to serve a binary file through a web service implemented in Python/Django. The problem is that when I compare the original file with the transferred file in vbindiff, I see trailing bytes on the transferred file, sadly rendering it useless.
The binary file is fetched and saved by a Java client with:
HttpURLConnection userdataConnection = null;
URL userdataUrl = null;
try {
userdataUrl = new URL("http://localhost:8000/app/vuforia/10");
userdataConnection = (HttpURLConnection) userdataUrl.openConnection();
userdataConnection.setRequestMethod("GET");
userdataConnection.setRequestProperty("Content-Type", "application/octet-stream");
userdataConnection.connect();
InputStream userdataStream = new BufferedInputStream(userdataConnection.getInputStream());
try (ByteArrayOutputStream fileStream = new ByteArrayOutputStream()) {
byte[] buffer = new byte[4094];
while (userdataStream.read(buffer) != -1) {
fileStream.write(buffer);
}
byte[] fileBytes = fileStream.toByteArray();
try (FileOutputStream fos = new FileOutputStream("./test.dat")) {
fos.write(fileBytes);
}
}
} catch (MalformedURLException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
I thought that HttpURLConnection.getInputStream only reads the body of the response, or doesn't it?
This is the code that serves the data on the backend,
in views.py:
if request.method == "GET":
all_data = VuforiaDatabase.objects.all()
data = all_data.get(id=version)
return FileResponse(data.get_dat_bytes())
in models.py:
def get_dat_bytes(self):
return self.dat_upload.open()
How do I go about transferring the binary data 1:1?

You’re ignoring the return value of InputStream.read.
From the documentation:
Returns:
the total number of bytes read into the buffer, or -1 if there is no more data because the end of the stream has been reached.
Your code is assuming that the buffer is filled with every call to userdataStream.read(buffer), instead of checking how many bytes were actually read into buffer.
You don't need to read from the InputStream yourself at all. Just use Files.copy:
Path file = Paths.get("./test.dat");
try (InputStream userdataStream = new BufferedInputStream(userdataConnection.getInputStream())) {
Files.copy(userdataStream, file, StandardCopyOption.REPLACE_EXISTING);
}

You always write a multiple of 4094 bytes, no matter how many bytes you actually read.
Don't do .write(buffer); write only the number of bytes you actually read. That is what userdataStream.read returns: it can return a number smaller than the buffer size, but still positive.
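For illustration, a minimal corrected version of the loop from the question (same userdataStream and fileStream; the buffer size is an arbitrary choice):
byte[] buffer = new byte[64 * 1024]; // any reasonable size
int bytesRead;
while ((bytesRead = userdataStream.read(buffer)) != -1) {
    fileStream.write(buffer, 0, bytesRead); // write only the bytes actually read
}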
If your project is using Apache Commons already, you can just use copyInputStreamToFile.
Note: 4K = 4096, not 4094, and it's a ridiculously small buffer unless you are talking to something like a smartcard. On a PC, use at least a few hundred KB.

Related

Extract tar.gz file in memory in Java

I'm using the Apache Compress library to read a .tar.gz file, something like this:
final TarArchiveInputStream tarIn = initializeTarArchiveStream(this.archiveFile);
try {
TarArchiveEntry tarEntry = tarIn.getNextTarEntry();
while (tarEntry != null) {
byte[] btoRead = new byte[1024];
BufferedOutputStream bout = new BufferedOutputStream(new FileOutputStream(destPath)); //<- I don't want this!
int len = 0;
while ((len = tarIn.read(btoRead)) != -1) {
bout.write(btoRead, 0, len);
}
bout.close();
tarEntry = tarIn.getNextTarEntry();
}
tarIn.close();
}
catch (IOException e) {
e.printStackTrace();
}
Is it possible not to extract this into a separate file, and read it in memory somehow? Maybe into a giant String or something?
You could replace the file stream with a ByteArrayOutputStream.
i.e. replace this:
BufferedOutputStream bout = new BufferedOutputStream(new FileOutputStream(destPath)); //<- I don't want this!
with this:
ByteArrayOutputStream bout = new ByteArrayOutputStream();
and then after closing bout, use bout.toByteArray() to get the bytes.
Is it possible not to extract this into a separate file, and read it in memory somehow? Maybe into a giant String or something?
Yes, sure.
Just replace the code in the inner loop that is opening files and writing to them with code that writes to a ByteArrayOutputStream ... or a series of such streams.
The natural representation of the data that you read from the TAR (like that) will be bytes / byte arrays. If the bytes are properly encoded characters, and you know the correct encoding, then you can convert them to strings. Otherwise, it is better to leave the data as bytes. (If you attempt to convert non-text data to strings, or if you convert using the wrong charset/encoding you are liable to mangle it ... irreversibly.)
Obviously, you are going to need to think through some of these issues yourself, but the basic idea should work ... provided you have enough heap space.
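For instance, a rough sketch of that idea (assuming Apache Commons Compress as in the question, a gzip-compressed tar on disk, and that the whole archive fits comfortably in memory), collecting each entry into a Map keyed by entry name:
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.zip.GZIPInputStream;

public class TarToMemory {
    public static Map<String, byte[]> readEntries(String archivePath) throws IOException {
        Map<String, byte[]> entries = new HashMap<>();
        try (TarArchiveInputStream tarIn = new TarArchiveInputStream(
                new GZIPInputStream(new FileInputStream(archivePath)))) {
            TarArchiveEntry entry;
            while ((entry = tarIn.getNextTarEntry()) != null) {
                if (entry.isDirectory()) {
                    continue; // directories have no content to buffer
                }
                ByteArrayOutputStream bout = new ByteArrayOutputStream();
                byte[] buffer = new byte[8192];
                int len;
                while ((len = tarIn.read(buffer)) != -1) {
                    bout.write(buffer, 0, len); // write only the bytes actually read
                }
                entries.put(entry.getName(), bout.toByteArray());
            }
        }
        return entries;
    }
}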
Copy the value of btoRead to a String like
String s = String.valueOf(byteVar);
and keep appending the byte values to the string until the end of the file is reached.

Read first bytes of a file

I need a very simple function that allows me to read the first 1k bytes of a file through FTP. I want to use it in MATLAB to read the first lines and, according to some parameters, eventually download only the files I really need. I found some examples online that unfortunately do not work. Here is the sample code where I'm trying to download a single file (I'm using the Apache libraries).
FTPClient client = new FTPClient();
FileOutputStream fos = null;
try {
client.connect("data.site.org");
// filename to be downloaded.
String filename = "filename.Z";
fos = new FileOutputStream(filename);
// Download file from FTP server
InputStream stream = client.retrieveFileStream("/pub/obs/2008/021/ab120210.08d.Z");
byte[] b = new byte[1024];
stream.read(b);
fos.write(b);
} catch (IOException e) {
e.printStackTrace();
} finally {
try {
if (fos != null) {
fos.close();
}
client.disconnect();
} catch (IOException e) {
e.printStackTrace();
}
}
The problem is that stream is returned empty. I know I'm passing the folder name in the wrong way, but I cannot understand how I should do it. I've tried it in many ways.
I've also tried with the URL's Java classes as:
URL url;
url = new URL("ftp://data.site.org/pub/obs/2008/021/ab120210.08d.Z");
URLConnection con = url.openConnection();
BufferedInputStream in =
new BufferedInputStream(con.getInputStream());
FileOutputStream out =
new FileOutputStream("C:\\filename.Z");
int i;
byte[] bytesIn = new byte[1024];
if ((i = in.read(bytesIn)) >= 0) {
out.write(bytesIn);
}
out.close();
in.close();
but it gives an error when I'm closing the InputStream in!
I'm definitely stuck. Some comments about this would be very useful!
Try this test:
InputStream is = new URL("ftp://test:test@ftp.secureftp-test.com/bookstore.xml").openStream();
byte[] a = new byte[1000];
int n = is.read(a);
is.close();
System.out.println(new String(a, 0, n));
it definitely works
In my experience, when you read bytes from a stream acquired from ftpClient.retrieveFileStream, it is not guaranteed that a single read fills up your byte buffer. You should either check the return value of stream.read(b) and loop based on it, or use a utility library to fill up the 1024-byte buffer:
InputStream stream = null;
try {
// Download file from FTP server
stream = client.retrieveFileStream("/pub/obs/2008/021/ab120210.08d.Z");
byte[] b = new byte[1024];
IOUtils.read(stream, b); // will call periodically stream.read() until it fills up your buffer or reaches end-of-file
fos.write(b);
} catch (IOException e) {
e.printStackTrace();
} finally {
IOUtils.closeQuietly(stream);
}
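If you would rather not pull in a library, a minimal sketch of the manual cycle described above (reusing stream and fos from the question):
byte[] b = new byte[1024];
int filled = 0;
while (filled < b.length) {
    int n = stream.read(b, filled, b.length - filled); // read into the remaining space
    if (n == -1) {
        break; // end of stream before 1024 bytes arrived
    }
    filled += n;
}
fos.write(b, 0, filled); // write only what was actually read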
I cannot understand why it doesn't work. I found this link where they used the Apache library to read 4096 bytes at a time. I read the first 1024 bytes and it eventually works; the only thing is that if completePendingCommand() is used, the program hangs forever. Thus I've removed it and everything works fine.

about download a file by java drive api

I use the "get" method from java drive api, and I can get the inputstream. but I cannt open the file when I use the inputstream to creat it. It likes the file is broken.
private static String fileurl = "C:\\googletest\\drive\\";
public static void newFile(String filetitle, InputStream stream) throws IOException {
String filepath = fileurl + filetitle;
BufferedInputStream bufferedInputStream=new BufferedInputStream(stream);
byte[] buffer = new byte[bufferedInputStream.available()];
File file = new File(filepath);
if (!file.exists()) {
file.getParentFile().mkdirs();
BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(new FileOutputStream(filepath));
while( bufferedInputStream.read(buffer) != -1) {
bufferedOutputStream.write(buffer);
}
bufferedOutputStream.flush();
bufferedOutputStream.close();
}
}
Firstly, C:\googletest\drive\ is not a URL. It is a file system pathname.
Next, the following probably does not do what you think it does:
byte[] buffer = new byte[bufferedInputStream.available()];
The problem is that the available() call can return zero ... for a non-empty stream. The value returned by available() is an estimate of how many bytes are currently available to read ... right now. That is not necessarily the stream length ... or anything related to it. And indeed the device drivers for some devices consistently return zero, even when there is data to be read.
Finally, this is wrong:
while( bufferedInputStream.read(buffer) != -1) {
bufferedOutputStream.write(buffer);
You are assuming that every read call that does not return -1 has filled the buffer. That is not so. Any one of the read calls could return with a partly full buffer. But then you write the entire buffer contents to the output stream ... including "junk" from previous reads.
Either or both of the 2nd and 3rd problems could lead to file corruption. In fact, the third one is likely to.
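A minimal corrected sketch of the copy loop (same bufferedInputStream and filepath as in the question; the buffer size is a fixed choice rather than the value of available()):
byte[] buffer = new byte[8192]; // fixed size; do not size the buffer with available()
int n;
try (BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(filepath))) {
    while ((n = bufferedInputStream.read(buffer)) != -1) {
        out.write(buffer, 0, n); // write exactly the number of bytes read
    }
}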

Java -- How to read an unknown number of bytes from an inputStream (socket/socketServer)?

Looking to read in some bytes over a socket using an inputStream. The bytes sent by the server may be of variable quantity, and the client doesn't know in advance the length of the byte array. How may this be accomplished?
byte b[];
sock.getInputStream().read(b);
This causes a 'might not be initialized' error from NetBeans. Help.
You need to expand the buffer as needed, by reading in chunks of bytes, 1024 at a time, as in this example code I wrote some time ago:
byte[] resultBuff = new byte[0];
byte[] buff = new byte[1024];
int k = -1;
while((k = sock.getInputStream().read(buff, 0, buff.length)) > -1) {
byte[] tbuff = new byte[resultBuff.length + k]; // temp buffer size = bytes already read + bytes last read
System.arraycopy(resultBuff, 0, tbuff, 0, resultBuff.length); // copy previous bytes
System.arraycopy(buff, 0, tbuff, resultBuff.length, k); // copy current lot
resultBuff = tbuff; // call the temp buffer as your result buff
}
System.out.println(resultBuff.length + " bytes read.");
return resultBuff;
Assuming the sender closes the stream at the end of the data:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] buf = new byte[4096];
while(true) {
int n = is.read(buf);
if( n < 0 ) break;
baos.write(buf,0,n);
}
byte data[] = baos.toByteArray();
Read an int, which is the size of the next segment of data being received. Create a buffer of that size, or use a roomy pre-existing buffer. Read into the buffer, making sure it is limited to the size you just read. Rinse and repeat :)
If you really don't know the size in advance as you said, read into an expanding ByteArrayOutputStream as the other answers have mentioned. However, the size method really is the most reliable.
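A minimal sketch of that size-header approach, assuming both ends agree that every message is preceded by a 4-byte length (sock is the socket from the question; payload is a hypothetical byte array):
// Sender: write the length, then the payload
DataOutputStream out = new DataOutputStream(sock.getOutputStream());
out.writeInt(payload.length);
out.write(payload);
out.flush();

// Receiver: read the length, then exactly that many bytes
DataInputStream in = new DataInputStream(sock.getInputStream());
int length = in.readInt();
byte[] data = new byte[length];
in.readFully(data); // blocks until all 'length' bytes have arrived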
Without re-inventing the wheel, using Apache Commons:
IOUtils.toByteArray(inputStream);
For example, complete code with error handling:
public static byte[] readInputStreamToByteArray(InputStream inputStream) {
if (inputStream == null) {
// normally, the caller should check for null after getting the InputStream object from a resource
throw new FileProcessingException("Cannot read from InputStream that is NULL. The resource requested by the caller may not exist or was not looked up correctly.");
}
try {
return IOUtils.toByteArray(inputStream);
} catch (IOException e) {
throw new FileProcessingException("Error reading input stream.", e);
} finally {
closeStream(inputStream);
}
}
private static void closeStream(Closeable closeable) {
try {
if (closeable != null) {
closeable.close();
}
} catch (Exception e) {
throw new FileProcessingException("IO Error closing a stream.", e);
}
}
Where FileProcessingException is your app-specific, meaningful runtime exception that will travel uninterrupted to your proper handler without polluting the code in between.
The simple answer is:
byte b[] = new byte[BIG_ENOUGH];
int nosRead = sock.getInputStream().read(b);
where BIG_ENOUGH is big enough.
But in general there is a big problem with this. A single read call is not guaranteed to return all that the other end has written.
If the nosRead value is BIG_ENOUGH, your application has no way of knowing for sure if there are more bytes to come; the other end may have sent exactly BIG_ENOUGH bytes ... or more than BIG_ENOUGH bytes. In the former case, your application will block (forever) if you try to read more. In the latter case, your application has to do (at least) another read to get the rest of the data.
If the nosRead value is less than BIG_ENOUGH, your application still doesn't know. It might have received everything there is, part of the data may have been delayed (due to network packet fragmentation, network packet loss, network partition, etc), or the other end might have blocked or crashed part way through sending the data.
The best answer is that EITHER your application needs to know beforehand how many bytes to expect, OR the application protocol needs to somehow tell the application how many bytes to expect or when all bytes have been sent.
Possible approaches are:
the application protocol uses fixed message sizes (not applicable to your example)
the application protocol message sizes are specified in message headers
the application protocol uses end-of-message markers
the application protocol is not message based, and the other end closes the connection to say that is the end.
Without one of these strategies, your application is left to guess, and is liable to get it wrong occasionally.
Then you use multiple read calls and (maybe) multiple buffers.
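For the end-of-message marker strategy, a rough sketch (assuming a hypothetical marker byte, here 0x00, that never occurs inside the payload):
InputStream in = new BufferedInputStream(sock.getInputStream());
ByteArrayOutputStream message = new ByteArrayOutputStream();
int b;
while ((b = in.read()) != -1) {
    if (b == 0x00) {
        break; // marker reached: this message is complete
    }
    message.write(b);
}
byte[] messageBytes = message.toByteArray();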
Stream all input data into an output stream. Here is a working example:
InputStream inputStream = null;
byte[] tempStorage = new byte[1024];//try to read 1Kb at time
int bLength;
try{
ByteArrayOutputStream outputByteArrayStream = new ByteArrayOutputStream();
if (fileName.startsWith("http"))
inputStream = new URL(fileName).openStream();
else
inputStream = new FileInputStream(fileName);
while ((bLength = inputStream.read(tempStorage)) != -1) {
outputByteArrayStream.write(tempStorage, 0, bLength);
}
outputByteArrayStream.flush();
//Here is the byte array at the end
byte[] finalByteArray = outputByteArrayStream.toByteArray();
outputByteArrayStream.close();
inputStream.close();
}catch(Exception e){
e.printStackTrace();
}finally{
// close in a finally block; close() itself can throw IOException, so guard it
if (inputStream != null) try { inputStream.close(); } catch (IOException ignored) {}
}
Either:
Have the sender close the socket after transferring the bytes. Then at the receiver just keep reading until EOS.
Have the sender prefix a length word as per Chris's suggestion, then read that many bytes.
Use a self-describing protocol such as XML, Serialization, ...
Use BufferedInputStream, and use the available() method, which returns the number of bytes available for reading, and then construct a byte[] with that size. Problem solved. :)
BufferedInputStream buf = new BufferedInputStream(is);
int size = buf.available();
Here is a simpler example using ByteArrayOutputStream...
socketInputStream = socket.getInputStream();
int expectedDataLength = 128; //todo - set accordingly/experiment. Does not have to be precise value.
ByteArrayOutputStream baos = new ByteArrayOutputStream(expectedDataLength);
byte[] chunk = new byte[expectedDataLength];
int numBytesJustRead;
while((numBytesJustRead = socketInputStream.read(chunk)) != -1) {
baos.write(chunk, 0, numBytesJustRead);
}
return baos.toString("UTF-8");
However, if the server does not return a -1, you will need to detect the end of the data some other way - e.g., maybe the returned content always ends with a certain marker (e.g., ""), or you could possibly solve it using socket.setSoTimeout(). (Mentioning this as it seems to be a common problem.)
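If you go the socket.setSoTimeout() route, a rough sketch (treating a read timeout as "no more data", which is only safe if the protocol allows that assumption; the 2000 ms value is arbitrary):
socket.setSoTimeout(2000); // a read that waits longer than 2 s throws SocketTimeoutException
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] chunk = new byte[4096];
try {
    int n;
    while ((n = socketInputStream.read(chunk)) != -1) {
        baos.write(chunk, 0, n);
    }
} catch (SocketTimeoutException e) {
    // nothing arrived within the timeout; assume the message is complete
}
byte[] data = baos.toByteArray();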
This is both a late answer and self-advertising, but anyone checking out this question may want to take a look here:
https://github.com/GregoryConrad/SmartSocket
This question is 7 years old, but I had a similar problem while making a NIO- and OIO-compatible system (client and server might be whatever they want, OIO or NIO).
This was quite the challenge, because of the blocking InputStreams.
I found a way which makes it possible, and I want to post it to help people with similar problems.
Reading a byte array of dynamic size is done here with a DataInputStream, which can simply be wrapped around the socketInputStream. Also, I do not want to introduce a specific communication protocol (like first sending the size of the bytes that will be sent), because I want to keep this as vanilla as possible. First off, I have a simple utility Buffer class, which looks like this:
import java.util.ArrayList;
import java.util.List;
public class Buffer {
private byte[] core;
private int capacity;
public Buffer(int size){
this.capacity = size;
clear();
}
public List<Byte> list() {
final List<Byte> result = new ArrayList<>();
for(byte b : core) {
result.add(b);
}
return result;
}
public void reallocate(int capacity) {
this.capacity = capacity;
}
public void teardown() {
this.core = null;
}
public void clear() {
core = new byte[capacity];
}
public byte[] array() {
return core;
}
}
This class only exists because of the dumb way byte <=> Byte autoboxing in Java works with this List. It is not really needed at all in this example, but I did not want to leave anything out of this explanation.
Next up, the two simple core methods. In those, a StringBuilder is used as a "callback". It will be filled with the result that has been read, and the number of bytes read will be returned. This might of course be done differently.
private int readNext(StringBuilder stringBuilder, Buffer buffer) throws IOException {
// Attempt to read up to the buffers size
int read = in.read(buffer.array());
// If EOF is reached (-1 read)
// we disconnect, because the
// other end disconnected.
if(read == -1) {
disconnect();
return -1;
}
// Add the read byte[] as
// a String to the stringBuilder.
stringBuilder.append(new String(buffer.array()).trim());
buffer.clear();
return read;
}
private Optional<String> readBlocking() throws IOException {
final Buffer buffer = new Buffer(256);
final StringBuilder stringBuilder = new StringBuilder();
// This call blocks. Therefor
// if we continue past this point
// we WILL have some sort of
// result. This might be -1, which
// means, EOF (disconnect.)
if(readNext(stringBuilder, buffer) == -1) {
return Optional.empty();
}
while(in.available() > 0) {
buffer.reallocate(in.available());
if(readNext(stringBuilder, buffer) == -1) {
return Optional.empty();
}
}
buffer.teardown();
return Optional.of(stringBuilder.toString());
}
The first method, readNext, will fill the buffer with bytes from the DataInputStream and return the number of bytes read this way.
In the second method, readBlocking, I utilize the blocking nature so I don't have to worry about consumer-producer problems. readBlocking simply blocks until a new byte array is received. Before we call this blocking method, we allocate a Buffer size. Note that I called reallocate after the first read (inside the while loop). This is not needed; you can safely delete that line and the code will still work. I did it because of the uniqueness of my problem.
The two things I did not explain in more detail are:
1. in (the DataInputStream, and the only tersely named variable here, sorry for that)
2. disconnect (your disconnect routine)
All in all, you can now use it this way:
// The in stream has to be an attribute, or a parameter to the readBlocking method
DataInputStream in = new DataInputStream(socket.getInputStream());
final Optional<String> rawDataOptional = readBlocking();
rawDataOptional.ifPresent(string -> threadPool.execute(() -> handle(string)));
This will provide you with a way of reading byte arrays of any shape or form over a socket (or any InputStream, really). Hope this helps!

Reading from a ZipInputStream into a ByteArrayOutputStream

I am trying to read a single file from a java.util.zip.ZipInputStream, and copy it into a java.io.ByteArrayOutputStream (so that I can then create a java.io.ByteArrayInputStream and hand that to a 3rd party library that will end up closing the stream, and I don't want my ZipInputStream getting closed).
I'm probably missing something basic here, but I never enter the while loop here:
ByteArrayOutputStream streamBuilder = new ByteArrayOutputStream();
int bytesRead;
byte[] tempBuffer = new byte[8192*2];
try {
while ((bytesRead = zipStream.read(tempBuffer)) != -1) {
streamBuilder.write(tempBuffer, 0, bytesRead);
}
} catch (IOException e) {
// ...
}
What am I missing that will allow me to copy the stream?
Edit:
I should have mentioned earlier that this ZipInputStream is not coming from a file, so I don't think I can use a ZipFile. It is coming from a file uploaded through a servlet.
Also, I have already called getNextEntry() on the ZipInputStream before getting to this snippet of code. If I don't try copying the file into another InputStream (via the OutputStream mentioned above), and just pass the ZipInputStream to my 3rd party library, the library closes the stream, and I can't do anything more, like dealing with the remaining files in the stream.
Your loop looks valid - what does the following code (just on its own) return?
zipStream.read(tempBuffer)
if it's returning -1, then the zipStream is closed before you get it, and all bets are off. It's time to use your debugger and make sure what's being passed to you is actually valid.
When you call getNextEntry(), does it return a value, and is the data in the entry meaningful (i.e. does getCompressedSize() return a valid value)? If you are just reading a Zip file that doesn't have read-ahead zip entries embedded, then ZipInputStream isn't going to work for you.
Some useful tidbits about the Zip format:
Each file embedded in a zip file has a header. This header can contain useful information (such as the compressed length of the stream, its offset in the file, CRC) - or it can contain some magic values that basically say 'The information isn't in the stream header, you have to check the Zip post-amble'.
Each zip file then has a table that is attached to the end of the file that contains all of the zip entries, along with the real data. The table at the end is mandatory, and the values in it must be correct. In contrast, the values embedded in the stream do not have to be provided.
If you use ZipFile, it reads the table at the end of the zip. If you use ZipInputStream, I suspect that getNextEntry() attempts to use the entries embedded in the stream. If those values aren't specified, then ZipInputStream has no idea how long the stream might be. The inflate algorithm is self terminating (you actually don't need to know the uncompressed length of the output stream in order to fully recover the output), but it's possible that the Java version of this reader doesn't handle this situation very well.
I will say that it's fairly unusual to have a servlet returning a ZipInputStream (it's much more common to receive an InflaterInputStream if you are going to be receiving compressed content).
You probably tried reading from a FileInputStream like this:
ZipInputStream in = new ZipInputStream(new FileInputStream(...));
This won’t work since a zip archive can contain multiple files and you need to specify which file to read.
You could use java.util.zip.ZipFile and a library such as IOUtils from Apache Commons IO or ByteStreams from Guava that assist you in copying the stream.
Example:
ByteArrayOutputStream out = new ByteArrayOutputStream();
try (ZipFile zipFile = new ZipFile("foo.zip")) {
ZipEntry zipEntry = zipFile.getEntry("fileInTheZip.txt");
try (InputStream in = zipFile.getInputStream(zipEntry)) {
IOUtils.copy(in, out);
}
}
I'd use IOUtils from the commons io project.
IOUtils.copy(zipStream, byteArrayOutputStream);
You're missing a call to
ZipEntry entry = (ZipEntry) zipStream.getNextEntry();
to position the stream at the first byte of the first entry.
ByteArrayOutputStream streamBuilder = new ByteArrayOutputStream();
int bytesRead;
byte[] tempBuffer = new byte[8192*2];
ZipEntry entry = (ZipEntry) zipStream.getNextEntry();
try {
while ( (bytesRead = zipStream.read(tempBuffer)) != -1 ){
streamBuilder.write(tempBuffer, 0, bytesRead);
}
} catch (IOException e) {
...
}
You could implement your own wrapper around the ZipInputStream that ignores close() and hand that off to the third-party library.
thirdPartyLib.handleZipData(new CloseIgnoringInputStream(zipStream));
class CloseIgnoringInputStream extends InputStream
{
private ZipInputStream stream;
public CloseIgnoringInputStream(ZipInputStream inStream)
{
stream = inStream;
}
public int read() throws IOException {
return stream.read();
}
public void close()
{
//ignore
}
public void reallyClose() throws IOException
{
stream.close();
}
}
I would call getNextEntry() on the ZipInputStream until it is at the entry you want (use ZipEntry.getName() etc.). Calling getNextEntry() will advance the "cursor" to the beginning of the entry that it returns. Then, use ZipEntry.getSize() to determine how many bytes you should read using zipInputStream.read().
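A minimal sketch of that approach, assuming a hypothetical uploadedInputStream from the servlet request and a hypothetical entry name "wanted/file.txt". It reads until the end of the entry rather than counting getSize() bytes, since getSize() can return -1 when the size is not recorded in the stream header:
ZipInputStream zipStream = new ZipInputStream(uploadedInputStream);
ByteArrayOutputStream streamBuilder = new ByteArrayOutputStream();
ZipEntry entry;
while ((entry = zipStream.getNextEntry()) != null) {
    if (!entry.getName().equals("wanted/file.txt")) {
        continue; // not the entry we want; move on
    }
    byte[] tempBuffer = new byte[16384];
    int bytesRead;
    while ((bytesRead = zipStream.read(tempBuffer)) != -1) {
        streamBuilder.write(tempBuffer, 0, bytesRead);
    }
    break; // found and copied the entry
}
ByteArrayInputStream forThirdParty = new ByteArrayInputStream(streamBuilder.toByteArray());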
It is unclear how you got the zipStream. It should work when you get it like this:
zipStream = zipFile.getInputStream(zipEntry)
If you are obtaining the ZipInputStream from a ZipFile, you can get one stream for the 3rd-party library, let it use it, and obtain another input stream for yourself using the code before.
Remember, an InputStream is a cursor. If you have the entire data (like a ZipFile), you can ask for N cursors over it.
A different case is if you only have a "GZip" InputStream, i.e. only a zipped byte stream. In that case your ByteArrayOutputStream buffer makes complete sense.
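For example, with a ZipFile you can hand one stream to the library and keep an independent one for yourself (a sketch, assuming a hypothetical archive.zip on disk and the thirdPartyLib from the earlier answer):
try (ZipFile zipFile = new ZipFile("archive.zip")) {
    ZipEntry entry = zipFile.getEntry("fileInTheZip.txt");
    try (InputStream forLibrary = zipFile.getInputStream(entry);
         InputStream forMe = zipFile.getInputStream(entry)) {
        thirdPartyLib.handleZipData(forLibrary); // the library may close its stream
        // forMe is a separate stream over the same entry and is unaffected
    }
}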
Please try the code below:
private static byte[] getZipArchiveContent(File zipName) throws WorkflowServiceBusinessException {
BufferedInputStream buffer = null;
FileInputStream fileStream = null;
ByteArrayOutputStream byteOut = null;
byte data[] = new byte[BUFFER];
try {
try {
fileStream = new FileInputStream(zipName);
buffer = new BufferedInputStream(fileStream);
byteOut = new ByteArrayOutputStream();
int count;
while((count = buffer.read(data, 0, BUFFER)) != -1) {
byteOut.write(data, 0, count);
}
} catch(Exception e) {
throw new WorkflowServiceBusinessException(e.getMessage(), e);
} finally {
if(null != fileStream) {
fileStream.close();
}
if(null != buffer) {
buffer.close();
}
if(null != byteOut) {
byteOut.close();
}
}
} catch(Exception e) {
throw new WorkflowServiceBusinessException(e.getMessage(), e);
}
return byteOut.toByteArray();
}
Check if the input stream is positioned at the beginning.
Otherwise, as an implementation note: I do not think that you need to write to the result stream while you are reading, unless you process this exact stream in another thread.
Just create a byte array, read the input stream into it, then create the output stream.
