I have the following snippet of code. The marked line is causing a BufferUnderflowException. I read the documentation on the exception but still do not understand what exactly it means. I use the .rewind() method, which I was under the impression mitigates the issue.
Can anyone please enlighten me on the topic or cause of my error?
Bitmap cameraBaseSized = BitmapFactory.decodeFile(cameraPath, opts);
Bitmap canvasBlendSized = BitmapFactory.decodeFile(canvasPath, options);
Bitmap result = cameraBaseSized.copy(Config.ARGB_8888, true);
IntBuffer buffBase = IntBuffer.allocate(cameraBaseSized.getWidth()
        * cameraBaseSized.getHeight());
cameraBaseSized.copyPixelsToBuffer(buffBase);
buffBase.rewind();
IntBuffer buffBlend = IntBuffer.allocate(canvasBlendSized.getWidth()
        * canvasBlendSized.getHeight());
canvasBlendSized.copyPixelsToBuffer(buffBlend);
buffBlend.rewind();
IntBuffer buffOut = IntBuffer.allocate(cameraBaseSized.getWidth()
        * cameraBaseSized.getHeight());
buffOut.rewind();
while (buffOut.position() < buffOut.limit()) {
    int filterInt = buffBlend.get(); // BUFFERUNDERFLOW EXCEPTION
    int srcInt = buffBase.get();
    int redValueFilter = Color.red(filterInt);
    int greenValueFilter = Color.green(filterInt);
    int blueValueFilter = Color.blue(filterInt);
    int redValueSrc = Color.red(srcInt);
    int greenValueSrc = Color.green(srcInt);
    int blueValueSrc = Color.blue(srcInt);
    int redValueFinal = multiply(redValueFilter, redValueSrc);
    int greenValueFinal = multiply(greenValueFilter, greenValueSrc);
    int blueValueFinal = multiply(blueValueFilter, blueValueSrc);
    int pixel = Color.argb(255, redValueFinal, greenValueFinal, blueValueFinal);
    buffOut.put(pixel);
}
buffOut.rewind();
result.copyPixelsFromBuffer(buffOut);
And the exception snippet:
11-29 14:41:57.347: E/AndroidRuntime(2166): Caused by: java.nio.BufferUnderflowException
11-29 14:41:57.347: E/AndroidRuntime(2166): at java.nio.IntArrayBuffer.get(IntArrayBuffer.java:55)
I would also like to add that this is happening only on specific devices, particularly Samsung flavors.
Maybe this test will help:
ByteBuffer b = ByteBuffer.allocate(1);
b.get();
b.get();
After allocation there is 1 byte in the buffer; the first get() reads this byte and the buffer reaches its limit. The second get() is illegal: there is nothing left to read, so you get a BufferUnderflowException.
This code does not fail:
ByteBuffer b = ByteBuffer.allocate(1);
b.get();
b.rewind();
b.get();
You read the byte, but it is still in the buffer, so you can rewind and read it again.
Since it is happening on certain devices, it could be possible that you are not getting the pixel format you are expecting.
Also, if buffBlend is for any reason shorter than buffOut (which could potentially be caused by the bitmaps being different formats), you will get this exception when you try to get() past the end of it.
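One defensive option, purely as a sketch on my part (it assumes you want both bitmaps as ARGB_8888, which may not match your decode options), is to normalize the configs before copying pixels, so that each pixel is exactly one int and both buffers end up the same size:

// make sure every pixel is 4 bytes, so width*height ints is the right buffer size
if (cameraBaseSized.getConfig() != Bitmap.Config.ARGB_8888)
    cameraBaseSized = cameraBaseSized.copy(Bitmap.Config.ARGB_8888, false);
if (canvasBlendSized.getConfig() != Bitmap.Config.ARGB_8888)
    canvasBlendSized = canvasBlendSized.copy(Bitmap.Config.ARGB_8888, false);

If one of the decoded bitmaps comes back as RGB_565 (2 bytes per pixel), copyPixelsToBuffer would only half-fill the IntBuffer, which matches the symptom of running out of data partway through the loop.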
EDIT:
You could change int filterInt = buffBlend.get(); to
int filterInt = 0;
if (buffBlend.position() < buffBlend.limit())
    filterInt = buffBlend.get();
If you do that for both of your buffers, that should protect you against the exception, and blend with black when one image is bigger than the other.
If someone is still interested:
Short solution: call buffBlend.position(0); before executing the while loop. Do the same for all your IntBuffer objects.
Background:
When you put an element into an IntBuffer, the index/position of the IntBuffer is incremented. And when you call the .get() method, the incrementing continues. That's the problem. Let's say you put 3 int values into an IntBuffer. After that you call .get() and you get the BufferUnderflowException. This happens because after the three puts the position/index already sits at the limit (3), so a get() has nothing left to read in an IntBuffer with a capacity of 3.
Solution: before calling the first get(), execute intBufferObject.position(0); that is how you get the index/position back to 0.
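A tiny illustration of that position bookkeeping (my own example, not the asker's code):

IntBuffer ib = IntBuffer.allocate(3);
ib.put(1).put(2).put(3); // position is now 3, which equals the limit
ib.position(0);          // back to the start; rewind() does the same here
int first = ib.get();    // reads 1 instead of throwing BufferUnderflowException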
See the get() and put() method documentation for more details:
https://docs.oracle.com/javase/7/docs/api/java/nio/IntBuffer.html#get()
Related
Here is a version of the implementation of the function Atomic::cmpxchg used for CAS:
jbyte Atomic::cmpxchg(jbyte exchange_value, volatile jbyte* dest, jbyte compare_value) {
  assert(sizeof(jbyte) == 1, "assumption.");
  uintptr_t dest_addr = (uintptr_t)dest;
  // look here
  uintptr_t offset = dest_addr % sizeof(jint);
  volatile jint* dest_int = (volatile jint*)(dest_addr - offset);
  jint cur = *dest_int;
  jbyte* cur_as_bytes = (jbyte*)(&cur);
  jint new_val = cur;
  jbyte* new_val_as_bytes = (jbyte*)(&new_val);
  // ... and here
  new_val_as_bytes[offset] = exchange_value;
  while (cur_as_bytes[offset] == compare_value) {
    jint res = cmpxchg(new_val, dest_int, cur);
    if (res == cur) break;
    cur = res;
    new_val = cur;
    new_val_as_bytes[offset] = exchange_value;
  }
  return cur_as_bytes[offset];
}
In the code above, I want to know what the use of offset actually is. I think we could simply and directly compare cur_as_bytes and compare_value, without any offset. So why do we need it, and how does it work? Is it for alignment? Thanks.
Yes, it is for alignment. The posted code implements a single-byte compare-exchange, using an already existing int-based compare-exchange.
This gives a few problems that the code needs to solve:
The int-based compare-exchange is restricted to reading int-aligned values, which means that you have to work out which of the (4?) bytes of the int you actually want to change. After all, the other bytes in the int must be unaffected.
When you then actually do the compare-exchange, it is only a failure if the single byte you are trying to alter has been changed behind your back. If any of the other bytes in the int have changed, then that is only a failure for the int-cmpxchg, but not a failure for the byte-cmpxchg.
The part before the while-loop handles the first part of that, by creating an int-aligned pointer that the int-value can be read from, and then setting up the "int we expect to see" and "int we want to change to" values.
The loop then handles the second part, where the algorithm attempts the int-cmpxchg, and then retries any failures as long as it is one of the other bytes that have been changed from expected.
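As a small worked example of the alignment arithmetic (a made-up address of my own, assuming a 4-byte jint):

long destAddr = 0x1003L;              // address of the single byte we want to CAS
long offset = destAddr % 4;           // 3: which byte inside its containing int this is
long alignedAddr = destAddr - offset; // 0x1000: the int-aligned address the int-cmpxchg operates on

The int at alignedAddr is read whole, only the byte at index offset inside it is replaced with exchange_value, and the int-cmpxchg then swaps the whole int back in.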
I am writing some code that intends to take a Wave file and write it out to an AudioTrack in stream mode. This is a minimum viable test to get AudioTrack stream mode working.
But once I write some buffer of audio to the AudioTrack, and subsequently call play(), the method getPlaybackHeadPosition() continually returns 0.
EDIT: If I ignore my available frames check and just continually write buffers to the AudioTrack, the write method returns 0 (after the first buffer write), indicating that it simply did not write any more audio. So it seems that the AudioTrack just doesn't want to start playing.
My code is properly priming the AudioTrack. The play method is not throwing any exceptions, so I am not sure what is going wrong.
When stepping through the code, everything on my end is exactly how I anticipate it, so I am thinking somehow I have the AudioTrack configured wrong.
I am running on an emulator, but I don't think that should be an issue.
The WavFile class I am using is a vetted class that I have up and running reliably in lots of Java projects, it is tested to work well.
Observe the following log statement, which is a snippet from the larger chunk of code. This log statement is never hit...
if (headPosition > 0)
    Log.e("headPosition is greater than zero!!");
..
public static void writeToAudioTrackStream(final WavFile wave)
{
    Log.e("writeToAudioTrackStream");
    Thread thread = new Thread()
    {
        public void run()
        {
            try {
                final float[] data = wave.getData();
                int format = -1;
                if (wave.getChannel() == 1)
                    format = AudioFormat.CHANNEL_OUT_MONO;
                else if (wave.getChannel() == 2)
                    format = AudioFormat.CHANNEL_OUT_STEREO;
                else
                    throw new RuntimeException("writeToAudioTrackStatic() - unsupported number of channels value = " + wave.getChannel());

                final int bufferSizeInFrames = 2048;
                final int bytesPerSmp = wave.getBytesPerSmp();
                final int bufferSizeInBytes = bufferSizeInFrames * bytesPerSmp * wave.getChannel();
                AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, wave.getSmpRate(),
                        format,
                        AudioFormat.ENCODING_PCM_FLOAT,
                        bufferSizeInBytes,
                        AudioTrack.MODE_STREAM);

                int index = 0;
                float[] buffer = new float[bufferSizeInFrames * wave.getChannel()];
                boolean started = false;
                int framesWritten = 0;

                while (index < data.length) {
                    // calculate the available space in the buffer
                    int headPosition = audioTrack.getPlaybackHeadPosition();
                    if (headPosition > 0)
                        Log.e("headPosition is greater than zero!!");
                    int framesInBuffer = framesWritten - headPosition;
                    int availableFrames = bufferSizeInFrames - framesInBuffer;

                    // once the buffer has no space, the prime is done, so start playing
                    if (availableFrames == 0) {
                        if (!started) {
                            audioTrack.play();
                            started = true;
                        }
                        continue;
                    }

                    int endOffset = availableFrames * wave.getChannel();
                    for (int i = 0; i < endOffset; i++)
                        buffer[i] = data[index + i];
                    int samplesWritten = audioTrack.write(buffer, 0, endOffset, AudioTrack.WRITE_BLOCKING);

                    // could return error values
                    if (samplesWritten < 0)
                        throw new RuntimeException("AudioTrack write error.");

                    framesWritten += samplesWritten / wave.getChannel();
                    index = endOffset;
                }
            }
            catch (Exception e) {
                Log.e(e.toString());
            }
        }
    };
    thread.start();
}
Per the documentation,
For portability, an application should prime the data path to the maximum allowed by writing data until the write() method returns a short transfer count. This allows play() to start immediately, and reduces the chance of underrun.
With a strict reading, this might be seen to contradict the earlier statement:
...you can optionally prime the data path prior to calling play(), by writing up to bufferSizeInBytes...
(emphasis mine), but the intent is clear enough: You're supposed to get a short write first.
This is just to get play started. Once that takes place, you can, in fact, use
getPlaybackHeadPosition() to determine when more space is available. I've used that technique successfully in my own code, on many different devices/API levels.
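A rough sketch of that priming-then-play pattern (my own code, not the asker's; it assumes a float[] buffer of samples and an audioTrack configured for ENCODING_PCM_FLOAT as in the question):

// keep writing non-blocking until the track reports a short or zero transfer,
// which means its internal buffer is primed, then start playback
int offset = 0;
while (offset < buffer.length) {
    int n = audioTrack.write(buffer, offset, buffer.length - offset, AudioTrack.WRITE_NON_BLOCKING);
    if (n < 0)
        throw new RuntimeException("AudioTrack write error: " + n);
    if (n < buffer.length - offset)
        break;          // short write: the internal buffer is full, priming is done
    offset += n;
}
audioTrack.play();      // getPlaybackHeadPosition() should start advancing from here

Once play() has been called this way, the head position starts moving and the "framesWritten - headPosition" bookkeeping from the question becomes usable.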
As an aside: You should be prepared for getPlaybackHeadPosition() to change only in large increments (if I remember correctly, it's getMinBufferSize()/2). This is the max resolution available from the system; onMarkerReached() cannot be used to do any better.
I have a method which takes a Partition enum as a parameter. This method will be called by multiple background threads (15 max) around the same time, each passing a different partition value. Here dataHoldersByPartition is a map of Partition to ConcurrentLinkedQueue<DataHolder>.
private final ImmutableMap<Partition, ConcurrentLinkedQueue<DataHolder>> dataHoldersByPartition;
//... some code to populate entry in `dataHoldersByPartition`
private void validateAndSend(final Partition partition) {
    ConcurrentLinkedQueue<DataHolder> dataHolders = dataHoldersByPartition.get(partition);
    Map<byte[], byte[]> clientKeyBytesAndProcessBytesHolder = new HashMap<>();
    int totalSize = 0;
    DataHolder dataHolder;
    while ((dataHolder = dataHolders.poll()) != null) {
        byte[] clientKeyBytes = dataHolder.getClientKey().getBytes(StandardCharsets.UTF_8);
        if (clientKeyBytes.length > 255)
            continue;
        byte[] processBytes = dataHolder.getProcessBytes();
        int clientKeyLength = clientKeyBytes.length;
        int processBytesLength = processBytes.length;
        int additionalLength = clientKeyLength + processBytesLength;
        if (totalSize + additionalLength > 50000) {
            Message message = new Message(clientKeyBytesAndProcessBytesHolder, partition);
            // here size of `message.serialize()` byte array should always be less than 50k at all cost
            sendToDatabase(message.getAddress(), message.serialize());
            clientKeyBytesAndProcessBytesHolder = new HashMap<>();
            totalSize = 0;
        }
        clientKeyBytesAndProcessBytesHolder.put(clientKeyBytes, processBytes);
        totalSize += additionalLength;
    }
    // calling again with remaining values only if clientKeyBytesAndProcessBytesHolder is not empty
    if (!clientKeyBytesAndProcessBytesHolder.isEmpty()) {
        Message message = new Message(clientKeyBytesAndProcessBytesHolder, partition);
        // here size of `message.serialize()` byte array should always be less than 50k at all cost
        sendToDatabase(message.getAddress(), message.serialize());
    }
}
And below is my Message class:
public final class Message {

    private final byte dataCenter;
    private final byte recordVersion;
    private final Map<byte[], byte[]> clientKeyBytesAndProcessBytesHolder;
    private final long address;
    private final long addressFrom;
    private final long addressOrigin;
    private final byte recordsPartition;
    private final byte replicated;

    public Message(Map<byte[], byte[]> clientKeyBytesAndProcessBytesHolder, Partition recordPartition) {
        this.clientKeyBytesAndProcessBytesHolder = clientKeyBytesAndProcessBytesHolder;
        this.recordsPartition = (byte) recordPartition.getPartition();
        this.dataCenter = Utils.CURRENT_LOCATION.get().datacenter();
        this.recordVersion = 1;
        this.replicated = 0;
        long packedAddress = new Data().packAddress();
        this.address = packedAddress;
        this.addressFrom = 0L;
        this.addressOrigin = packedAddress;
    }

    // Output of this method should always be less than 50k always
    public byte[] serialize() {
        int bufferCapacity = getBufferCapacity(clientKeyBytesAndProcessBytesHolder); // 36 + dataSize + 1 + 1 + keyLength + 8 + 2;
        ByteBuffer byteBuffer = ByteBuffer.allocate(bufferCapacity).order(ByteOrder.BIG_ENDIAN);
        // header layout
        byteBuffer.put(dataCenter).put(recordVersion).putInt(clientKeyBytesAndProcessBytesHolder.size())
                .putInt(bufferCapacity).putLong(address).putLong(addressFrom).putLong(addressOrigin)
                .put(recordsPartition).put(replicated);
        // now the data layout
        for (Map.Entry<byte[], byte[]> entry : clientKeyBytesAndProcessBytesHolder.entrySet()) {
            byte keyType = 0;
            byte[] key = entry.getKey();
            byte[] value = entry.getValue();
            byte keyLength = (byte) key.length;
            short valueLength = (short) value.length;
            ByteBuffer dataBuffer = ByteBuffer.wrap(value);
            long timestamp = valueLength > 10 ? dataBuffer.getLong(2) : System.currentTimeMillis();
            byteBuffer.put(keyType).put(keyLength).put(key).putLong(timestamp).putShort(valueLength)
                    .put(value);
        }
        return byteBuffer.array();
    }

    private int getBufferCapacity(Map<byte[], byte[]> clientKeyBytesAndProcessBytesHolder) {
        int size = 36;
        for (Entry<byte[], byte[]> entry : clientKeyBytesAndProcessBytesHolder.entrySet()) {
            size += 1 + 1 + 8 + 2;
            size += entry.getKey().length;
            size += entry.getValue().length;
        }
        return size;
    }

    // getters and to string method here
}
Basically, what I have to make sure is that whenever the sendToDatabase method is called, the size of the message.serialize() byte array is always less than 50k at all costs. My sendToDatabase method sends the byte array coming out of the serialize method. Because of that condition I am doing the validation below, plus a few other things. In the method, I iterate over the dataHolders queue and extract clientKeyBytes and processBytes from it. Here is the validation I am doing:
If the clientKeyBytes length is greater than 255 then I will skip it and continue iterating.
I will keep incrementing the totalSize variable, which is the sum of clientKeyLength and processBytesLength, and this totalSize should always be less than 50000 bytes.
As soon as it reaches the 50000 limit, I will send the clientKeyBytesAndProcessBytesHolder map to the sendToDatabase method and clear out the map, reset totalSize to 0 and start populating again.
If it doesn't reach that limit and dataHolders becomes empty, then it will send whatever it has.
I believe there is some bug in my current code because of which some records are not being sent properly, or are dropped somewhere because of my condition, and I am not able to figure it out. It looks like, to properly enforce this 50k condition, I may have to use the getBufferCapacity method to figure out the serialized size before calling the sendToDatabase method?
I checked your code; it looks good as per your logic. You say it should always send information that is less than 50K, but as written it will actually send up to 50K. To make it strictly less than 50K you have to change the if condition to if (totalSize + additionalLength >= 50000).
If your code is still not fulfilling your requirement, i.e. it sends information when totalSize + additionalLength is greater than 50k, I can advise a few things.
Since multiple threads call this method, you need to consider making two sections of your code synchronized.
One is the global variable, the dataHoldersByPartition container object. If multiple concurrent and parallel lookups happen on this container object, the outcome might not be correct. Check whether the container type is synchronized or not; if not, make the access a block like the one below:
synchronized (this) {
    ConcurrentLinkedQueue<DataHolder> dataHolders = dataHoldersByPartition.get(partition);
}
Now, I can give only two suggestions to fix this issue. One: instead of if (totalSize + additionalLength > 50000), check the size of the clientKeyBytesAndProcessBytesHolder object itself, something like if (sizeOf(clientKeyBytesAndProcessBytesHolder) >= 50000) (there is no sizeof in Java, so substitute an appropriate method to compute the size). Two: narrow down the area to check whether it is a side effect of multithreading or not. All these suggestions are to find the area where the problem actually is; the fix has to come from your end.
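For the size check, a minimal sketch of a helper (my own code, reusing the same 36-byte header and per-entry overhead as your getBufferCapacity) could be:

// Returns true if adding this key/value pair would push the serialized message past the limit.
// currentSerializedSize should start at 36 (the header) and be incremented by the same formula
// whenever a pair is actually added to the map.
static boolean wouldExceedLimit(int currentSerializedSize, byte[] key, byte[] value, int limit) {
    int perEntryOverhead = 1 + 1 + 8 + 2; // keyType + keyLength + timestamp + valueLength
    return currentSerializedSize + perEntryOverhead + key.length + value.length > limit;
}

Checking against the serialized size rather than the raw key/value lengths keeps the flush decision in sync with what serialize() actually produces.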
First, check whether your validateAndSend method exactly satisfies your requirement or not. For that, synchronize the whole validateAndSend method first and check whether everything is fine or you still get the same result. If you still get the same result, it is not because of multithreading; your code simply does not match the requirement. If it works fine, it is a multithreading problem. If synchronizing the whole method fixes your issue but degrades performance, remove the synchronization from the method and instead concentrate on each small block of code that might cause the issue, making it a synchronized block and removing it again if it still does not fix the issue. That way you finally locate the block of code which is actually creating the problem, and you leave that one synchronized to fix it.
For example, first attempt:
`private synchronized void validateAndSend`
Second attempt: remove the synchronized keyword from the method and instead do the step below:
synchronized (this) {
    Message message = new Message(clientKeyBytesAndProcessBytesHolder, partition);
    sendToDatabase(message.getAddress(), message.serialize());
}
If you think that I did not understand you correctly, please let me know.
In your validateAndSend I would put the whole data onto a queue and do the whole processing in a separate thread. Please consider the command pattern. That way all threads just put their load on the queue. The consumer thread has all the data and all the information in place, and can process it quite effectively. The only complicated part is sending a response/result back to the calling thread; since in your case that is not a problem, all the better. There are some more benefits to this pattern; please look at netflix/hystrix.
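As a sketch of what that could look like (my own names, using java.util.concurrent.LinkedBlockingQueue and the DataHolder type from the question):

// producer side: each caller just enqueues its work item instead of batching itself
final BlockingQueue<DataHolder> workQueue = new LinkedBlockingQueue<>();

// consumer side: a single thread drains the queue and does the batching/sending
Thread consumer = new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                DataHolder holder = workQueue.take(); // blocks until a producer enqueues work
                // batch 'holder' and call sendToDatabase here, just as validateAndSend does today
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // exit cleanly on shutdown
        }
    }
});
consumer.start();

With a single consumer, the 50k batching logic no longer runs concurrently, which removes the multithreading question from it entirely.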
I send a long number via UDP.
LinkedQueue Q = new LinkedQueue<ByteBuffer>();
while (this._run) {
    udp_socket.receive(packet);
    if (packet.getLength() > 0) {
        ByteBuffer bb = ByteBuffer.wrap(buf, 0, packet.getLength());
        Q.add(bb);
    }
}
// udp socket closed. I remove data from the Queue, but all ByteBuffers have the same value.
while (!Q.isEmpty()) {
    ByteBuffer b = Q.remove();
    b.getLong(); // same value
}
Why do I receive the same value? Any suggestions?
Does your byte buffer consist of just one long?
Probably not; my guess is that you put a bit more than just one long in there.
And that's why it gives you the same value for the first sizeof(long) bytes.
What you need to do is to keep calling .getLong() until you hit the end of the buffer.
See the docs.
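For example, the draining loop could look like this (a sketch, using the b from your second loop):

while (b.remaining() >= 8) { // 8 bytes per long
    long value = b.getLong();
    // handle each value here
}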
I'm trying to grab a set of mipmap levels and save them to a local cache file to avoid rebuilding them each time (and it is not practical to pre-generate them...)
I've got the mipmap levels into a set of bitmaps OK, and now want to write them to my cache file, but whatever variety of buffer I use (direct or not, setting the byte order or not), hasArray always comes back false on the IntBuffer. I must be doing something silly here, but I can't see the wood for the trees anymore.
I haven't been using Java for long, so this is probably a noob error ;)
Code looks like this:
int tsize = 256;
ByteBuffer xbb = ByteBuffer.allocate(tsize*tsize*4);
// or any other variety of create like wrap etc. makes no difference
xbb.order(ByteOrder.nativeOrder()); // in or out - makes no difference
IntBuffer ib = xbb.asIntBuffer();
for (int i = 0; i < tbm.length; i++) {
    // ib.array() throws an exception; ib.hasArray() returns false
    tbm[i].getPixels(ib.array(), 0, tsize, 0, 0, tsize, tsize);
    ou.write(xbb.array(), 0, tsize*tsize*4);
    tsize = tsize / 2;
}
See this link - it talks about the same problem. There are answers there; however, it concludes with:
In another post
http://forum.java.sun.com/thread.jsp?forum=4&thread=437539
it is suggested to implement a version of DataBuffer which is
backed by a (Mapped)ByteBuffer *gasp* ...
Note that forum.java.sun.com has since moved to Oracle.
I hope this helps. Anyhow, if you find better answer let me know too :-)
In my testing, changing the ByteBuffer into an IntBuffer using ByteBuffer.asIntBuffer() causes you to lose the backing array.
If you use IntBuffer.allocate(tsize*tsize) instead, you should be able to get the backing array.
ByteBuffer buf = ByteBuffer.allocate(10);
IntBuffer ib = buf.asIntBuffer();
System.out.printf("Buf %s, buf.hasArray: %s, ib.hasArray %s\n", ib, buf.hasArray(), ib.hasArray());
buf = ByteBuffer.allocateDirect(10);
ib = IntBuffer.allocate(10);
System.out.printf("Buf %s, buf.hasArray: %s, ib.hasArray %s\n", ib, buf.hasArray(), ib.hasArray());
Produces:
Buf java.nio.ByteBufferAsIntBufferB[pos=0 lim=2 cap=2], buf.hasArray: true, ib.hasArray false
Buf java.nio.HeapIntBuffer[pos=0 lim=10 cap=10], buf.hasArray: false, ib.hasArray true
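Applied to the loop in the question, that suggestion might look like this (a sketch only; tbm, ou and tsize are assumed to be as in the question):

int tsize = 256;
for (int i = 0; i < tbm.length; i++) {
    IntBuffer ib = IntBuffer.allocate(tsize * tsize); // heap buffer, so hasArray() is true
    tbm[i].getPixels(ib.array(), 0, tsize, 0, 0, tsize, tsize);

    // copy the ints into a byte[] so they can be written to the stream
    ByteBuffer bb = ByteBuffer.allocate(tsize * tsize * 4);
    bb.asIntBuffer().put(ib.array());
    ou.write(bb.array(), 0, tsize * tsize * 4);

    tsize = tsize / 2;
}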