I have to fill a byte[] in my Android application. Sometimes it is bigger than 4KB.
I initialize my byte[] like this:
int size = ReadTools.getPacketSize(ptr.dataInputStream);
byte[] myByteArray = new byte[size];
Here, size = 22625. I fill up my byte[] like this:
while (i != size) {
    myByteArray[i] = ptr.dataInputStream.readByte();
    i++;
}
But when I print the content of my byte[], I get a byte[] of size 4060.
Does Java split my byte[] if it is bigger than 4060? And if so, how can I get a byte[] larger than 4060?
Here is my full code:
public class ReadSocket extends Thread {

    DataInputStream inputStream;
    BufferedReader reader;
    GlobalContent ptr;

    public ReadSocket(DataInputStream inputStream, GlobalContent ptr)
    {
        this.inputStream = inputStream;
        this.ptr = ptr;
    }

    public void run() {
        int i = 0;
        int j = 0;
        try {
            ptr.StatusThreadReadSocket = 1;
            while (ptr.dataInputStream.available() == 0)
            {
                if (ptr.StatusThreadReadSocket == 0)
                {
                    ptr.dataInputStream.close();
                    break;
                }
            }
            if (ptr.StatusThreadReadSocket == 1)
            {
                int end = ReadTools.getPacketSize(ptr.dataInputStream);
                byte[] buffer = new byte[end];
                while (i != end) {
                    buffer[j] = ptr.dataInputStream.readByte();
                    i++;
                    j++;
                }
                ptr.StatusThreadReadSocket = 0;
            }
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
...
}
Java doesn't split anything. You should post the minimal code which reproduces your error, and tell us where ReadTools comes from.
There are two possibilities here:
ReadTools.getPacketSize() returns 4096
You inadvertently reassign myByteArray to another array
You should really post your full code and tell us what library you use. It likely has a method like
read(byte[] buffer, int offset, int length);
which will save you some typing and also give better performance if all you need is to bulk-read the content of the input into memory.
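In this case the stream is already a DataInputStream, which has exactly such a method: readFully() loops internally until the whole array is filled. A minimal sketch, assuming getPacketSize() really returns the full packet length:
int size = ReadTools.getPacketSize(ptr.dataInputStream);
byte[] myByteArray = new byte[size];
// readFully blocks until the array is completely filled,
// or throws EOFException if the stream ends first.
ptr.dataInputStream.readFully(myByteArray);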
I am trying to build an Android app that records PCM audio and exports it as a wav file.
It worked fine for 8-bit PCM, but when I switched to 16-bit PCM I got white noise.
I finally figured out it was the endianness of the byte array; but now, after converting from little endian to big endian, I get my audio crystal clear, but reversed!
Here is how I call the method:
byte[] inputByteArray = convertLittleEndianToBig(readToByte(input));
and then that byte[] is appended to my .wav header here:
OutputStream os;
os = new FileOutputStream(output);
BufferedOutputStream bos = new BufferedOutputStream(os);
DataOutputStream outFile = new DataOutputStream(bos);
// Adding header here...
outFile.write(inputByteArray);
convertLittleEndianToBig():
public static byte[] convertLittleEndianToBig(byte[] value) {
    final int length = value.length;
    byte[] res = new byte[length];
    for (int i = 0; i < length; i++) {
        res[length - i - 1] = value[i];
    }
    return res;
}
and.... readToByte():
public static byte[] readToByte(File file) throws IOException, FileNotFoundException {
    if (file.length() < MAX_FILE_SIZE && file.length() != 0L) {
        ByteArrayOutputStream ous = null;
        InputStream ios = null;
        try {
            byte[] buffer = new byte[4096];
            ous = new ByteArrayOutputStream();
            ios = new FileInputStream(file);
            int read = 0;
            while ((read = ios.read(buffer)) != -1) {
                ous.write(buffer, 0, read);
            }
        } finally {
            try {
                if (ous != null)
                    ous.close();
            } catch (IOException e) {
            }
            try {
                if (ios != null)
                    ios.close();
            } catch (IOException e) {
            }
        }
        return ous.toByteArray();
    } else {
        return new byte[0];
    }
}
So weird that the audio sounds exactly right, but backwards.
If I remove the call to "convertLittleEndianToBig()" I am back to white noise static.
Thanks for any help. This is my first real project.
I'm an idiot - 16 bits != a byte.
I was reversing the byte array when I should have been reversing a short array.
I ended up replacing LittleEndianToBig with:
public static short[] convertLittleBytesToBigShorts(byte[] value) {
    short[] shorts = new short[value.length / 2];
    ByteBuffer.wrap(value).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts);
    return shorts;
}
and the write command with:
for (int i = 0; i < inputByteArray.length; i++) {
    outFile.writeShort(inputByteArray[i]);
}
I'll clean it up, but that was the issue. My audio is correct now.
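For reference, DataOutputStream.writeShort() always writes big-endian, so the loop above writes the samples big-endian one at a time. A bulk-write sketch with an explicit byte order (my own illustration, reusing the short[] inputByteArray from above) could look like:
// Collect all samples into one byte[] with a chosen byte order,
// then write them in a single call instead of a writeShort() loop.
ByteBuffer byteBuffer = ByteBuffer.allocate(inputByteArray.length * 2)
        .order(ByteOrder.BIG_ENDIAN); // same order writeShort() uses
byteBuffer.asShortBuffer().put(inputByteArray);
outFile.write(byteBuffer.array());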
When experimenting with ZLib compression, I have run across a strange problem. Decompressing a zlib-compressed byte array with random data fails reproducibly if the source array is at least 32752 bytes long. Here's a little program that reproduces the problem; you can see it in action on IDEOne. The compression and decompression methods are standard code picked up from tutorials.
public class ZlibMain {

    private static byte[] compress(final byte[] data) {
        final Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        final byte[] bytesCompressed = new byte[Short.MAX_VALUE];
        final int numberOfBytesAfterCompression = deflater.deflate(bytesCompressed);
        final byte[] returnValues = new byte[numberOfBytesAfterCompression];
        System.arraycopy(bytesCompressed, 0, returnValues, 0, numberOfBytesAfterCompression);
        return returnValues;
    }

    private static byte[] decompress(final byte[] data) {
        final Inflater inflater = new Inflater();
        inflater.setInput(data);
        try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream(data.length)) {
            final byte[] buffer = new byte[Math.max(1024, data.length / 10)];
            while (!inflater.finished()) {
                final int count = inflater.inflate(buffer);
                outputStream.write(buffer, 0, count);
            }
            outputStream.close();
            final byte[] output = outputStream.toByteArray();
            return output;
        } catch (DataFormatException | IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(final String[] args) {
        roundTrip(100);
        roundTrip(1000);
        roundTrip(10000);
        roundTrip(20000);
        roundTrip(30000);
        roundTrip(32000);
        for (int i = 32700; i < 33000; i++) {
            if (!roundTrip(i)) break;
        }
    }

    private static boolean roundTrip(final int i) {
        System.out.printf("Starting round trip with size %d: ", i);
        final byte[] data = new byte[i];
        for (int j = 0; j < data.length; j++) {
            data[j] = (byte) j;
        }
        shuffleArray(data);
        final byte[] compressed = compress(data);
        try {
            final byte[] decompressed = CompletableFuture.supplyAsync(() -> decompress(compressed))
                    .get(2, TimeUnit.SECONDS);
            System.out.printf("Success (%s)%n", Arrays.equals(data, decompressed) ? "matching" : "non-matching");
            return true;
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            System.out.println("Failure!");
            return false;
        }
    }

    // Implementing Fisher–Yates shuffle
    // source: https://stackoverflow.com/a/1520212/342852
    static void shuffleArray(byte[] ar) {
        Random rnd = ThreadLocalRandom.current();
        for (int i = ar.length - 1; i > 0; i--) {
            int index = rnd.nextInt(i + 1);
            // Simple swap
            byte a = ar[index];
            ar[index] = ar[i];
            ar[i] = a;
        }
    }
}
Is this a known bug in ZLib? Or do I have an error in my compress / decompress routines?
It is an error in the logic of the compress/decompress methods; I am not that deep into the implementations, but with debugging I found the following:
When the buffer of 32752 bytes is compressed, the deflater.deflate() method returns a value of 32767, which is the size to which you initialized the buffer in the line:
final byte[] bytesCompressed = new byte[Short.MAX_VALUE];
If you increase the buffer size, for example to
final byte[] bytesCompressed = new byte[4 * Short.MAX_VALUE];
then you will see that the input of 32752 bytes is actually deflated to 32768 bytes. So in your code, the compressed data does not contain all the data that should be in there.
When you then try to decompress, the inflater.inflate() method returns zero, which indicates that more input data is needed. But as you only check for inflater.finished(), you end up in an endless loop.
So you can either increase the buffer size on compressing, but that probably just postpones the problem to bigger inputs, or you should rewrite the compress/decompress logic to process your data in chunks.
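As an aside, the endless loop itself can be guarded against. A sketch of a decompress() that fails fast on truncated input instead of spinning (same structure as the question's method, plus a needsInput() check):
private static byte[] decompress(final byte[] data) {
    final Inflater inflater = new Inflater();
    inflater.setInput(data);
    try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream(data.length)) {
        final byte[] buffer = new byte[1024];
        while (!inflater.finished()) {
            final int count = inflater.inflate(buffer);
            // inflate() returning 0 while more input is needed means the
            // compressed data was truncated: bail out instead of looping forever.
            if (count == 0 && inflater.needsInput()) {
                throw new DataFormatException("Truncated zlib input");
            }
            outputStream.write(buffer, 0, count);
        }
        return outputStream.toByteArray();
    } catch (DataFormatException | IOException e) {
        throw new RuntimeException(e);
    }
}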
Apparently the compress() method was faulty.
This one works:
public static byte[] compress(final byte[] data) {
    try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream(data.length)) {
        final Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        final byte[] buffer = new byte[1024];
        while (!deflater.finished()) {
            final int count = deflater.deflate(buffer);
            outputStream.write(buffer, 0, count);
        }
        final byte[] output = outputStream.toByteArray();
        return output;
    } catch (IOException e) {
        throw new IllegalStateException(e);
    }
}
While transferring images over the network using sockets, I ran into a (for me) strange issue:
When I wrote images to the OutputStream of one socket with ImageIO.write() and read the same images from the InputStream of the other socket with ImageIO.read(), I noticed that 16 bytes more per image were sent than read.
To be able to send multiple images in a row, I had to read these bytes after every call to ImageIO.read(); otherwise the next call returned null because the input could not be parsed.
Does anybody know why this is so and what these bytes are?
I have extracted the issue into this piece of code:
public class Test implements Runnable
{
    public static final int COUNT = 5;

    public void run()
    {
        try (ServerSocket server = new ServerSocket(3040))
        {
            Socket client = server.accept();
            for (int i = 0; i < COUNT; i++)
            {
                final BufferedImage image = readImage(client.getInputStream());
                System.out.println(image);
            }
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }

    private BufferedImage readImage(InputStream stream) throws IOException
    {
        BufferedImage image = ImageIO.read(stream);
        dontKnowWhy(stream);
        return image;
    }

    private void dontKnowWhy(InputStream stream) throws IOException
    {
        stream.read(new byte[16]);
    }

    public static void main(String... args)
    {
        new Thread(new Test()).start();
        try (Socket server = new Socket("localhost", 3040))
        {
            for (int i = 0; i < COUNT; i++)
            {
                BufferedImage image = new BufferedImage(300, 300, BufferedImage.TYPE_INT_ARGB);
                int[] vals = new int[image.getWidth() * image.getHeight()];
                Arrays.fill(vals, new Random().nextInt()); // create random image
                image.setRGB(0, 0, image.getWidth(), image.getHeight(), vals, 0, 1);
                ImageIO.write(image, "png", server.getOutputStream()); // send image to server
                long time = System.currentTimeMillis();
                while (time + 1000 > System.currentTimeMillis()); // wait a second
            }
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
I am glad about any answers; thanks in advance!
The "extra" bytes you see, is not read, simply because they are not needed to correctly decode the image (they are, however, most likely needed to form a fully compliant file in the chosen file format, so they are not just random "garbage" bytes).
For any given ImageIO plugin, the number of bytes left in the stream after a read may be 0, 16or any other number. It might depend on the format, the writer that wrote it, the reader, the number of images in the input, the metadata in the file, etc. In other words, relying on this behavior would be an error.
The easies way to fix this, is to prepend each image with a byte count, containing the length of the output image. This typically means you need to buffer the response on the client, to either a ByteArrayOutputStream (in-memory) or a FileOutputStream (disk).
The client then needs to read the byte count for the image, and make sure you skip any remaining bytes after the read. This can be accomplished by wrapping the input (see FilterInputStream) and keep track of the byte count internally.
(You can also read all the bytes up front, and wrapping them in a ByteArrayInputStream, before passing the data to ImageIO.read(), which is simpler but does more in-memory buffering).
After this, the client is ready do start over, by reading a new byte count, and a new image.
Another approach if you'd like less buffering on the server, could be to implement something like HTTP chunked transfer encoding, where you have multiple smaller blocks (chunks) sent to the client for each image, each prepended with its own byte count. You would need to handle the last chunk of each image especially, or insert special delimiter chunks to mark end of stream or start of a new stream.
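A hypothetical sketch of that chunked idea (my own illustration, not part of the answer's code below): a small OutputStream wrapper that emits each buffer-full with an int length prefix, and writes a zero-length chunk as the end-of-image marker on close().
private static final class ChunkedOutputStream extends OutputStream {
    private final DataOutputStream out;
    private final byte[] chunk = new byte[8192];
    private int pos;

    ChunkedOutputStream(final DataOutputStream out) {
        this.out = out;
    }

    @Override
    public void write(int b) throws IOException {
        chunk[pos++] = (byte) b;
        if (pos == chunk.length) {
            flushChunk();
        }
    }

    private void flushChunk() throws IOException {
        out.writeInt(pos);        // chunk length prefix
        out.write(chunk, 0, pos); // chunk payload
        pos = 0;
    }

    @Override
    public void close() throws IOException {
        if (pos > 0) {
            flushChunk();
        }
        out.writeInt(0); // zero-length chunk marks the end of this image
    }
}
The sender would then call ImageIO.write(image, "png", chunkedStream) followed by chunkedStream.close(); the receiver reads length-prefixed chunks until it sees the zero marker.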
Code below implements the buffering approach on the server, while using direct reading on the client.
Server:
DataOutputStream stream = new DataOutputStream(server.getOutputStream());
ByteArrayOutputStream buffer = new ByteArrayOutputStream();

for (...) {
    buffer.reset();
    ImageIO.write(image, "png", buffer);

    stream.writeInt(buffer.size());
    buffer.writeTo(stream); // Send image to client
}
Client:
DataInputStream stream = new DataInputStream(client.getInputStream());
for (...) {
int size = stream.readInt();
try (InputStream imageData = new SubStream(stream, size)) {
return ImageIO.read(imageData);
}
// Note: imageData implicitly closed using try-with-resources
}
...
// Util class
private static final class SubStream extends FilterInputStream {
    private final long length;
    private long pos;

    public SubStream(final InputStream stream, final long length) {
        super(stream);
        this.length = length;
    }

    @Override
    public boolean markSupported() {
        return false;
    }

    @Override
    public int available() throws IOException {
        return (int) Math.min(super.available(), length - pos);
    }

    @Override
    public int read() throws IOException {
        if (pos++ >= length) {
            return -1;
        }
        return super.read();
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        if (pos >= length) {
            return -1;
        }
        int count = super.read(b, off, (int) Math.min(len, length - pos));
        if (count < 0) {
            return -1;
        }
        pos += count;
        return count;
    }

    @Override
    public long skip(long n) throws IOException {
        if (pos >= length) {
            return -1;
        }
        long skipped = super.skip(Math.min(n, length - pos));
        if (skipped < 0) {
            return -1;
        }
        pos += skipped;
        return skipped;
    }

    @Override
    public void close() throws IOException {
        // Don't close wrapped stream, just consume any bytes left
        while (pos < length) {
            skip(length - pos);
        }
    }
}
Is there a way to read all InputStream values at once, without using some Apache IO lib?
I am reading an IR signal from the InputStream and saving it into a byte[] array. While debugging, I noticed that it only works if I put a delay there, so that all bytes arrive at once before I process them.
Is there a smarter way to do it?
CODE:
public void run() {
    Log.i(TAG, "BEGIN mConnectedThread");
    byte[] buffer = new byte[100];
    int numberOfBytes;
    removeSharedPrefs("mSharedPrefs");

    // Keep listening to the InputStream while connected
    while (true) {
        try {
            // Read from the InputStream
            numberOfBytes = mmInStream.read(buffer);
            Thread.sleep(700); // If I stop it here for a while, all works fine, because the array is fully populated
            if (numberOfBytes > 90) {
                // GET AXIS VALUES FROM THE SHARED PREFS
                String[] refValues = loadArray("gestureBuffer", context);
                if (refValues != null && refValues.length > 90) {
                    int incorrectPoints;
                    if ((incorrectPoints = checkIfGesureIsSameAsPrevious(buffer, refValues, numberOfBytes)) < 5) {
                        // Correct
                    } else {
                        // Incorrect
                    }
                }
                saveArray(buffer, numberOfBytes);
            } else {
                System.out.println("Transmission of the data was corrupted.");
            }
            buffer = new byte[100];

            // Send the obtained bytes to the UI Activity
            mHandler.obtainMessage(Constants.MESSAGE_READ, numberOfBytes, -1, buffer)
                    .sendToTarget();
        } catch (IOException e) {
            Log.e(TAG, "disconnected", e);
            connectionLost();
            // Start the service over to restart listening mode
            BluetoothChatService.this.start();
            break;
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Edit:
My old answer is wrong, see EJP's comment! Please don't use it. The behaviour of ByteChannels depends on whether the InputStream is blocking or not.
So this is why I would suggest you just copy IOUtils.read from Apache Commons:
public static int read(final InputStream input, final byte[] buffer) throws IOException {
    int remaining = buffer.length;
    while (remaining > 0) {
        final int location = buffer.length - remaining;
        final int count = input.read(buffer, location, remaining);
        if (count == -1) { // EOF
            break;
        }
        remaining -= count;
    }
    return buffer.length - remaining;
}
Old answer:
You can use ByteChannels and read into a ByteBuffer:
ReadableByteChannel c = Channels.newChannel(inputstream);
ByteBuffer buf = ByteBuffer.allocate(numBytesExpected);
int numBytesActuallyRead = c.read(buf);
This read method attempts to read as many bytes as there is remaining space in the buffer. If the stream ends before the buffer is completely filled, the number of bytes actually read is returned. See the JavaDoc.
The following code doesn't work to download a file (by the way, clen is the file's length):
int pos = 0, total_pos = 0;
byte[] buffer = new byte[BUFFER_SIZE];
while (pos != -1) {
    pos = in.read(buffer, 0, BUFFER_SIZE);
    total_pos += pos;
    out.write(buffer);
    setProgress((int) (total_pos * 100 / clen));
}
...but this works fine:
int buf;
while ((buf = in.read()) != -1)
    out.write(buf);
I'm wondering why that is, even though the second code segment works quickly. On that note, is there any particular reason to use a byte[] buffer, since it doesn't seem to be faster and BufferedInputStream already uses a buffer of its own?
Here's how it should be done.
public static void copyStream(InputStream is, OutputStream os)
{
    byte[] buff = new byte[4096];
    int count;
    try {
        while ((count = is.read(buff)) > 0)
            os.write(buff, 0, count);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            if (is != null)
                is.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        try {
            if (os != null)
                os.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
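A usage sketch for the download case (the URL and file name are hypothetical; copyStream closes both streams when it is done):
InputStream is = new URL("http://example.com/file.bin").openStream();
OutputStream os = new FileOutputStream("file.bin");
copyStream(is, os); // copies in 4096-byte blocks, then closes both streams
Note that this version doesn't report progress; for that, see the counted loop in the next answer.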
I've tried to make the minimum changes necessary to your code to get it working. st0le did a good job of providing a neater version of stream copying.
public class Test {
    private static final String FORMAT = "UTF-8";
    private static final int BUFFER_SIZE = 10; // for demonstration purposes.

    public static void main(String[] args) throws Exception {
        String string = "This is a test of the public broadcast system";
        int clen = string.length();
        ByteArrayInputStream in = new ByteArrayInputStream(string.getBytes(FORMAT));
        OutputStream out = System.out;

        int pos = 0, total_pos = 0;
        byte[] buffer = new byte[BUFFER_SIZE];
        while (pos != -1) {
            pos = in.read(buffer, 0, BUFFER_SIZE);
            if (pos > 0) {
                total_pos += pos;
                out.write(buffer, 0, pos);
                setProgress((int) (total_pos * 100 / clen));
            }
        }
    }

    private static void setProgress(int i) {
    }
}
You were ignoring the value of pos when you were writing out the buffer to the output stream.
You also need to re-check the value of pos because the read may have just reached the end of the file. You don't increment total_pos in that case (although you should probably report that you are 100% complete).
Be sure to handle your resources correctly with close()s in the appropriate places.
-edit-
The general reason for using an array as a buffer is so that the output stream can do as much work as it can with a larger set of data.
Writing to a console, there might not be much of a delay, but the target might be a network socket or some other slow device. As the JavaDoc states:
The write method of OutputStream calls the write method of one argument on each of the bytes to be written out. Subclasses are encouraged to override this method and provide a more efficient implementation.
The benefit of using it when you are already using a Buffered Input/Output Stream is probably minimal.