I am trying to write a simple application that plays sound and can alter its volume at any time during playback. I do this by converting each pair of bytes in the sound's byte array into an int, multiplying that int by the volume factor, and then writing the result back as two bytes (i.e. one sample). However, this results in extreme distortion of the sound. Is it possible that I have got the bit shifting wrong? My sound format is:
.wav 44100.0hz, 16bit, little-endian
At the moment the byte array that I pass to the adjustVolume method represents a 10th of a second of audio data, i.e. sampleRate/10.
Is there something I am missing here that is causing it to distort rather than scale the volume properly? Have I got the writing of bytes back and forth wrong?
private byte[] adjustVolume(byte[] audioSamples, double volume) {
    byte[] array = new byte[audioSamples.length];
    for (int i = 0; i < array.length; i += 2) {
        // convert byte pair to int
        int audioSample = (int) (((audioSamples[i + 1] & 0xff) << 8) | (audioSamples[i] & 0xff));
        audioSample = (int) (audioSample * volume);
        // convert back
        array[i] = (byte) audioSample;
        array[i + 1] = (byte) (audioSample >> 16);
    }
    return array;
}
This code is based on Audio: Change Volume of samples in byte array, in which the asker is trying to do the same thing. However, having used the code from his question (which I think was not updated after he got his answer), I can't get it to work and I am not exactly sure what it is doing.
I suggest you wrap your byte array in a ByteBuffer (not forgetting to set its .order() to little endian), read a short, manipulate it, write it again.
Sample code:
// Necessary in order to convert negative shorts!
private static final int USHORT_MASK = (1 << 16) - 1;

final ByteBuffer buf = ByteBuffer.wrap(audioSamples)
    .order(ByteOrder.LITTLE_ENDIAN);
final ByteBuffer newBuf = ByteBuffer.allocate(audioSamples.length)
    .order(ByteOrder.LITTLE_ENDIAN);

int sample;

while (buf.hasRemaining()) {
    sample = (int) buf.getShort() & USHORT_MASK;
    sample *= volume;
    newBuf.putShort((short) (sample & USHORT_MASK));
}

return newBuf.array();
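For reference, here is a minimal self-contained sketch of that approach (my own variant, not the code above verbatim). One caveat worth flagging: 16-bit PCM samples are signed, so reading each value as a plain short and scaling it, rather than masking it to unsigned first, keeps negative samples intact; the clamp guards against overflow when volume > 1:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

private static byte[] adjustVolume(byte[] audioSamples, double volume) {
    ByteBuffer in = ByteBuffer.wrap(audioSamples).order(ByteOrder.LITTLE_ENDIAN);
    ByteBuffer out = ByteBuffer.allocate(audioSamples.length).order(ByteOrder.LITTLE_ENDIAN);
    while (in.hasRemaining()) {
        // read the signed 16-bit sample, scale it, clamp to the short range
        int scaled = (int) (in.getShort() * volume);
        scaled = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, scaled));
        out.putShort((short) scaled);
    }
    return out.array();
}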
Related
I'm working on drawing semi-transparent images on top of other images for a small 2D game. To blend the images I'm currently using the formula found here: https://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
My implementation of this is as follows:
private static int blend(int source, int dest, int trans)
{
    double alpha = ((double) trans / 255.0);

    int sourceRed = (source >> 16 & 0xff);
    int sourceGreen = (source >> 8 & 0xff);
    int sourceBlue = (source & 0xff);

    int destRed = (dest >> 16 & 0xff);
    int destGreen = (dest >> 8 & 0xff);
    int destBlue = (dest & 0xff);

    int blendedRed = (int) (alpha * sourceRed + (1.0 - alpha) * destRed);
    int blendedGreen = (int) (alpha * sourceGreen + (1.0 - alpha) * destGreen);
    int blendedBlue = (int) (alpha * sourceBlue + (1.0 - alpha) * destBlue);

    return (blendedRed << 16) + (blendedGreen << 8) + blendedBlue;
}
Now, it works fine, but it has pretty high overhead since it's called for every single pixel, every single frame. I get a performance drop of around 30% FPS compared to simply rendering the image without blending.
I just wanted to know if anyone can think of a better way to optimise this code, as I'm probably doing too many bit operations.
Not a Java coder (so read with prejudice), but you are doing some things really wrong (from my C++ and low-level gfx perspective):
mixing integers and floating point
That requires conversions, which are sometimes really costly. It's much better to use integer weights (alpha) in the range <0..255> and then just divide by 255 or bit-shift by 8. That would most likely be much faster.
bitshifting/masking to obtain bytes
Yes, it's fine, but there are simpler and faster methods, simply by using:
enum {
    _b = 0, // db
    _g = 1,
    _r = 2,
    _a = 3,
};

union color
{
    DWORD dd;   // 1x32-bit unsigned int
    BYTE db[4]; // 4x8-bit unsigned int
};
color col;
col.dd = some_rgba_color;
r = col.db[_r];   // get red channel
col.db[_b] = 5;   // set blue channel
Decent compilers can optimize some parts of your code to this internally on their own, but I doubt they can do it everywhere...
You can also use pointers instead of a union in the same way.
function overhead
You've got a function blending a single pixel, which means it will be called a lot. It's usually much faster to blend a region (rectangle) per call than to call stuff on a per-pixel basis, because you trash the stack that way. To limit this you can try these (for functions that are called massively):
Recode your app so you can blend regions instead of pixels, causing far fewer function calls.
Lower the stack trashing by reducing the operands, return values, and internal variables of the called function, to limit the amount of RAM being allocated/freed/overwritten/copied on each call. For example, use static or global variables for things that rarely change: the alpha will most likely not change much, or you can encode the alpha in the color directly instead of passing it as an operand.
Use inline functions or macros like #define to place the source code directly at the call site instead of making a function call.
For starters, I would try to recode your function body to something like this:
enum {
    _b = 0, // db
    _g = 1,
    _r = 2,
    _a = 3,
};

union color
{
    unsigned int  dd;    // 1x32-bit unsigned int
    unsigned char db[4]; // 4x8-bit unsigned int
};

static unsigned int blend(unsigned int src, unsigned int dst, unsigned int alpha)
{
    unsigned int i, a, _alpha = 255 - alpha;
    color s, d;
    s.dd = src;
    d.dd = dst;
    for (i = 0; i < 3; i++)
    {
        // weighted sum of the channel, then divide by 256 via the shift
        a = (((unsigned int)(s.db[i])) * alpha) + (((unsigned int)(d.db[i])) * _alpha);
        a >>= 8;
        d.db[i] = a;
    }
    return d.dd;
}
However, if you want true speed, use the GPU (OpenGL blending).
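Since the question is Java (which has no unions), a rough Java translation of the integer-only idea might look like this sketch; note that >> 8 divides by 256 rather than 255, a common approximation that errs slightly dark:

// integer-only blend: alpha in 0..255, colors packed as 0xRRGGBB
private static int blendInt(int src, int dst, int alpha) {
    int inv = 255 - alpha;
    int r = (((src >> 16) & 0xff) * alpha + ((dst >> 16) & 0xff) * inv) >> 8;
    int g = (((src >> 8) & 0xff) * alpha + ((dst >> 8) & 0xff) * inv) >> 8;
    int b = ((src & 0xff) * alpha + (dst & 0xff) * inv) >> 8;
    return (r << 16) | (g << 8) | b;
}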
I am trying to port some existing C# code that uses BitConverter to Java. I have found various other threads, but then happened upon a GitHub class that appears to do the trick. However, the ToUInt16 output does not match my C# code. The ToInt16 and ToInt32 appear to return the same values. Can you help me understand what is wrong with this implementation (or possibly what I am doing wrong)?
Code Ref: Java BitConverter
ToUInt16:
public static int ToUInt16(byte[] bytes, int offset) {
    int result = (int)bytes[offset+1] & 0xff;
    result |= ((int)bytes[offset] & 0xff) << 8;
    if (Lysis.bDebug)
        System.out.println(result & 0xffff);
    return result & 0xffff;
}
ToUInt32:
public static long ToUInt32(byte[] bytes, int offset) {
    long result = (int)bytes[offset] & 0xff;
    result |= ((int)bytes[offset+1] & 0xff) << 8;
    result |= ((int)bytes[offset+2] & 0xff) << 16;
    result |= ((int)bytes[offset+3] & 0xff) << 24;
    if (Lysis.bDebug)
        System.out.println(result & 0xFFFFFFFFL);
    return result & 0xFFFFFFFFL;
}
My code snippet:
byte[] byteArray = from some byte array
int offset = currentOffset
int msTime = BitConverter.ToUInt16(byteArray, offset)
msTime does not match what is coming from C#
C# example (the string from the vendor gets converted using Convert.FromBase64String):
byte[] rawData = Convert.FromBase64String(vendorRawData);
byte[] sampleDataRaw = rawData;
Assert.AreEqual(15616, sampleDataRaw.Length);
//Show byte data for msTime
Console.WriteLine(sampleDataRaw[7]);
Console.WriteLine(sampleDataRaw[6]);
//Show byte data for dmWindow
Console.WriteLine(sampleDataRaw[14]);
Console.WriteLine(sampleDataRaw[13]);
var msTime = BitConverter.ToUInt16(sampleDataRaw, 6);
var dmWindow = BitConverter.ToUInt16(sampleDataRaw, 13);
Assert.AreEqual(399, msTime);
Assert.AreEqual(10, dmWindow);
C# Console Output for byte values:
1
143
0
10
Groovy example (the string from the vendor gets converted using Groovy's decodeBase64()):
def rawData = vendorRawData.decodeBase64()
def sampleDataRaw = rawData
Assert.assertEquals(15616, rawData.length)
//Show byte data for msTime
println sampleDataRaw[7]
println sampleDataRaw[6]
//Show byte data for dmWindow
println sampleDataRaw[14]
println sampleDataRaw[13]
def msTime = ToUInt16(sampleDataRaw, 6)
def dmWindow = ToUInt16(sampleDataRaw, 13)
Assert.assertEquals(399, msTime)
Assert.assertEquals(10, dmWindow)
Asserts fail with:
399 for msTime is actually 36609
10 for dmWindow is actually 2560
Groovy Output from Byte values in println
1
-113
0
10
There is a discrepancy between the two methods. The first one, ToUInt16, assumes big-endian byte order, i.e. the first byte is the most significant byte.
But ToUInt32 assumes little-endian byte order (a strange choice), so the first byte is the least significant.
A corrected implementation would look like:
public static long toUInt32(byte[] bytes, int offset) {
    long result = Byte.toUnsignedLong(bytes[offset+3]);
    result |= Byte.toUnsignedLong(bytes[offset+2]) << 8;
    result |= Byte.toUnsignedLong(bytes[offset+1]) << 16;
    result |= Byte.toUnsignedLong(bytes[offset]) << 24;
    return result;
}
Where the array indexing is 'reversed'.
(I also changed the hacky looking bitmasking to clearer calls to Byte.toUnsignedLong, which does the same thing.)
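For completeness, a little-endian toUInt16 in the same style (which is what C#'s BitConverter produces on little-endian platforms; this assumes Java 8+ for Byte.toUnsignedInt) might look like:

public static int toUInt16(byte[] bytes, int offset) {
    int result = Byte.toUnsignedInt(bytes[offset]);       // least significant byte first
    result |= Byte.toUnsignedInt(bytes[offset + 1]) << 8;
    return result;
}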
What I actually found was that ToInt16 gives me the results I wanted, not the ToUInt16 in the solution. I have checked quite a few results and they all match the .NET output.
The link from @pinkfloydx33, where I could see the source code, is actually what led me to try ToInt16 instead of ToUInt16.
I was trying to save some space in a program and needed to use byte. I came across code that looks like this:
private static final long MAX = 1000000000L;
private static final long SQRT_MAX = (long) Math.sqrt(MAX) + 1;
private static final int MEMORY_SIZE = (int) (MAX >> 4);
private static byte[] array = new byte[MEMORY_SIZE];

private boolean getbit(long i)
{
    byte block = array[(int) (i >> 4)];
    byte mask = (byte) (1 << ((i >> 1) & 7));
    return ((block & mask) != 0);
}
I can't understand what this means. In block, why are we using i >> 4? Should it not be i >> 3, since each byte is 8 bits? I also don't get what mask is doing.
I'm just getting started with bytes; any links to sources would be helpful.
Here is some context - Source Code
Regarding the lowest 8 bits of i, this is what I can gather (where the MSB is bit 7 and the LSB is bit 0):
The value in the top 4 bits of i is used as an index into array.
The value of block is set to the value located at that index in array.
The value in bits 1-3 of i is a bit index to be masked out (the function returns true if the bit at that index in block is 1).
Note: bit 0 of i seems to be unused.
I know that's not a specific answer, but I hope it helps to point you in the right direction.
I didn't look at the context source code though.
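That said, a plausible reading of the shifts (an assumption on my part, not confirmed against the linked source) is that the array stores one flag bit per odd number, which is a common trick in prime sieves. Under that assumption the indexing works out like this:

// i >> 1       : index of the odd number (bit 0 of i is dropped, hence unused)
// (i >> 1) >> 3: which byte holds that flag, i.e. i >> 4
// (i >> 1) & 7 : which bit within that byte
private static boolean getbit(long i) {
    byte block = array[(int) (i >> 4)];
    byte mask = (byte) (1 << ((i >> 1) & 7));
    return (block & mask) != 0;
}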
I was wondering if the solution documented here is still the way to go, or is there another way of getting an int from 4 bytes?
Thank you.
EDIT: I'm getting the byte[] from a socket's .read.
EDIT: int recvMsgSize = in.read(Data, 0, BufferSize); if recvMsgSize is -1, I know the connection has been dropped.
How do I detect this when I'm using DataInputStream instead of InputStream?
Thanks.
EDIT: Apologies for being a yoyo regarding accepting the right answer, but after mihi's updated final response, it would appear that the method is solid, cuts down extended coding, and is in my opinion best practice.
You have to be very careful with widening conversions and numeric promotion, but the code below converts 4 bytes into an int:
byte b1 = -1;
byte b2 = -2;
byte b3 = -3;
byte b4 = -4;
int i = ((0xFF & b1) << 24) | ((0xFF & b2) << 16) |
((0xFF & b3) << 8) | (0xFF & b4);
System.out.println(Integer.toHexString(i)); // prints "fffefdfc"
See also
Java code To convert byte to Hexadecimal
Pay attention to the need to mask with & 0xFF -- you'll probably end up doing a lot of this when working with byte, since all arithmetic operations promote to int (or long).
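For example, a small helper (a sketch, assuming big-endian byte order as in the snippet above) keeps the masking in one place:

// b[off] is the most significant byte
static int toInt(byte[] b, int off) {
    return ((b[off] & 0xFF) << 24) | ((b[off + 1] & 0xFF) << 16)
         | ((b[off + 2] & 0xFF) << 8) | (b[off + 3] & 0xFF);
}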
If you have them already in a byte[] array, you can use:
int result = ByteBuffer.wrap(bytes).getInt();
or, if you have Google's guava-libraries on your classpath, you have the shortcut:
int result = Ints.fromByteArray(array);
which has the advantage that you have similarly nice APIs for other types (Longs.fromByteArray, Shorts.fromByteArray, etc).
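One caveat: ByteBuffer defaults to big-endian byte order, so for little-endian input set the order before reading:

int result = ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN).getInt();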
Depending on where you get those 4 bytes from:
http://docs.oracle.com/javase/7/docs/api/java/io/DataInput.html#readInt()
http://docs.oracle.com/javase/7/docs/api/java/nio/ByteBuffer.html#getInt(int)
You can of course still do it manually, but in most cases using one of those is easier (if you have to convert a byte array with lots of bytes, you might want to wrap a ByteArrayInputStream in a DataInputStream, for example).
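For instance, a quick sketch of that wrapping (variable names are mine):

DataInputStream din = new DataInputStream(new ByteArrayInputStream(bytes));
int first = din.readInt();  // bytes 0..3, big-endian
int second = din.readInt(); // bytes 4..7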
Edit: If you need to change the endianness, you will have to use a ByteBuffer, or reverse the bytes yourself, or do the conversion yourself, as DataInput does not support changing the endianness.
Edit2: When you get them from the socket input stream, I'd wrap it in a DataInputStream and use that for reading all kinds of data. Especially since InputStream.read(byte[]) does not guarantee to fill the whole byte array... DataInputStream.readFully does.
DataInputStream in = new DataInputStream(socket.getInputStream());
byte aByte = in.readByte();
int anInt = in.readInt();
int anotherInt = in.readInt();
short andAShort = in.readShort(); // 11 bytes read :-)
byte[] lotOfBytes = new byte[anInt];
in.readFully(lotOfBytes);
Edit3: When reading multiple times from a stream, each read continues where you stopped, i.e. aByte will be byte 0, anInt bytes 1 to 4, anotherInt bytes 5 to 8, etc. readFully reads on after all that and blocks until it has read lotOfBytes.
When the stream ends (the connection drops) you get an EOFException instead of -1, so if you get -1, the int really was -1.
If you do not want to parse some bytes at all, you can skip() them. Parsing one byte in two different ways is not possible with DataInputStream (i.e. reading first an int from bytes 0 to 3, then one from bytes 2 to 5), but that is usually not needed either.
Example:
// read messages (length + data) until the stream ends:
while (true) {
    int messageLength;
    try {
        messageLength = in.readInt(); // bytes 0 to 3
    } catch (EOFException ex) {
        // connection dropped, so handle it, for example:
        return;
    }
    byte[] message = new byte[messageLength];
    in.readFully(message);
    // do something with the message
}
// all messages handled.
Hope this answers your additional questions.
A solution in functional style (just for variety; imho not very convenient to use):
private int addByte(int base, byte appendix) {
    // shift by a full byte (8 bits) and mask to avoid sign extension
    return (base << 8) + (appendix & 0xFF);
}

public void test() {
    byte b1 = 5, b2 = 5, b3 = 0, b4 = 1;
    int result = addByte(addByte(addByte(addByte(0, b1), b2), b3), b4);
}
As mihi said, it depends on where you are getting those bytes from, but this code might be of use:
int myNumber = (byteOne & 0xFF) |
               ((byteTwo & 0xFF) << 8) |
               ((byteThree & 0xFF) << 16) |
               ((byteFour & 0xFF) << 24);
(The & 0xFF masks keep negative bytes from sign-extending and clobbering the other bytes.)
I'm working with Java.
I have a byte array (8 bits in each position of the array), and what I need to do is put together two of the array's values to get a single value.
I'll try to explain myself better: I'm extracting audio data from an audio file. This data is stored in a byte array. Each audio sample has a size of 16 bits. If the array is:
byte[] audioData;
What I need is to combine samples audioData[0] and audioData[1] in order to get one audio sample.
Can anyone explain to me how to do this?
Thanks in advance.
I'm not a Java developer, so this could be completely off-base, but have you considered using a ByteBuffer?
Assume the LSB is at data[0]
int val;
val = (((int)data[0]) & 0x00FF) | ((int)data[1]<<8);
As suggested before, Java has classes to help you with this. You can wrap your array with a ByteBuffer and then get a ShortBuffer view of it, since each sample is 16 bits:
ByteBuffer bb = ByteBuffer.wrap(audioData);
// optional: bb.order(ByteOrder.BIG_ENDIAN) or bb.order(ByteOrder.LITTLE_ENDIAN)
ShortBuffer sb = bb.asShortBuffer();
short firstSample = sb.get(0);
ByteArrayInputStream b = new ByteArrayInputStream(audioData);
DataInputStream data = new DataInputStream(b);
short value = data.readShort();
The advantage of the above code is that you can keep reading the rest of 'data' in the same way.
A simpler solution for just two values might be:
short value = (short) ((audioData[0]<<8) | (audioData[1] & 0xff));
This simple solution extracts two bytes and pieces them together, with the first byte as the higher-order bits and the second byte as the lower-order bits (this is known as big-endian; if your byte array contained little-endian data, you would shift the second byte instead for 16-bit numbers; for little-endian 32-bit numbers you would have to reverse the order of all 4 bytes, because Java's integers follow big-endian ordering).
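For the little-endian case mentioned above, the same one-liner just swaps which byte gets shifted (a sketch, assuming the layout from the question):

// little-endian: audioData[0] is the low byte, audioData[1] the high byte
short value = (short) ((audioData[1] << 8) | (audioData[0] & 0xff));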
An easier way in Java to parse an array of bytes into bits is to use JBBP:
class Parsed { @Bin(type = BinType.BIT_ARRAY) byte[] bits; }
final Parsed parsed = JBBPParser.prepare("bit:1 [_] bits;").parse(theByteArray).mapTo(Parsed.class);
The code places the parsed bits of each byte, as eight bytes each, into the bits array of the Parsed class instance.
You can convert to a short (2 bytes) by or-ing the two bytes together, masking the low byte so its sign bits don't leak into the high byte:
short value = (short) ((audioData[0] & 0xff) | (audioData[1] << 8));
I suggest you take a look at Preon. In Preon, you would be able to say something like this:
class Sample {
    @BoundNumber(size = "16") // size of the sample in bits
    int value;
}

class AudioFile {
    @BoundList(size = "...") // number of samples
    Sample[] samples;
}

byte[] buffer = ...;
Codec<AudioFile> codec = Codecs.create(AudioFile.class);
AudioFile audioFile = codec.decode(buffer);
You can do it like this, with no libraries or external classes:
byte myByte = (byte) -128;
for (int i = 0; i < 8; i++) {
    // test the most significant bit, then shift left to move the next bit up
    boolean val = (myByte & 0x80) != 0;
    myByte = (byte) (myByte << 1);
    System.out.print(val ? 1 : 0);
}
System.out.println();
Or, to collect the bits into an array (least significant bit first):
byte myByte = 0x5B;
boolean[] bits = new boolean[8];
for (int i = 0; i < 8; i++) {
    bits[i] = (myByte & 1) == 1;
    myByte = (byte) (myByte >> 1);
}
The result is an array of booleans, where true represents a 1 bit and false a 0 bit. :)
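Alternatively, the standard library can format the bits directly; a one-liner sketch using Integer.toBinaryString:

byte myByte = 0x5B;
// the & 0xFF mask keeps negative bytes from sign-extending into a 32-bit string
String bits = String.format("%8s", Integer.toBinaryString(myByte & 0xFF)).replace(' ', '0');
System.out.println(bits); // prints 01011011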