SourceDataLine buffer size resulting in clicks and stops in playback - Java

I am trying to play back audio I create in realtime in my application with a SourceDataLine. When opening the SourceDataLine with sdl.open(format), it creates a default buffer of 32000 (my sample rate), so effectively one second. But since my application should be low-latency, I have tried to use a smaller buffer. (sdl.open(format, buffer);)
On generating sound, I use a buffer of 512 samples at the moment (I haven't figured out the best value there; if you have any insight, I would appreciate it).
Some pseudocode for my algorithm:
int pos = 0;
int max = 512;
byte sampleBuffer[] = new byte[max];
while (active) {
    sampleBuffer[pos++] = generateSample(); // actually I generate doubles and make bytes out of em later, but who cares
    if (pos == max) {
        sdl.write(sampleBuffer, 0, pos);
        pos = 0;
    }
}
When I try to use my own buffer size (I tried everything from max up to max * 2, max * 4, max * 8, and max * 16), I get a lot of clicks and noise.
If you guys have any insight on the right way to go here, I would really appreciate it. I don't know how much bigger my SourceDataLine buffer should be than the chunks that I write to the Line, if at all. Are there any tricks to getting this smooth? I am quite certain that my program generates the audio fast enough, so that should not be the problem.
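For reference, a minimal sketch of the open/write pattern described above. The format values are assumptions (16-bit, mono, signed, little-endian at 32000 Hz), and the factor of 4 between the line buffer and the write chunk is just an illustrative starting point:

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

public class LowLatencyPlayback {
    public static void main(String[] args) throws LineUnavailableException {
        AudioFormat format = new AudioFormat(32000f, 16, 1, true, false); // assumed format
        SourceDataLine sdl = AudioSystem.getSourceDataLine(format);

        int chunk = 512;              // bytes written per call, as in the question
        sdl.open(format, chunk * 4);  // line buffer a few chunks deep instead of a full second
        sdl.start();

        byte[] sampleBuffer = new byte[chunk];
        while (true) {
            fillWithSamples(sampleBuffer);     // stands in for the generateSample() loop above
            sdl.write(sampleBuffer, 0, chunk); // blocks once the line buffer is full
        }
    }

    static void fillWithSamples(byte[] buffer) {
        // placeholder for the question's sample generator
    }
}

The key property here is that sdl.write() blocks when the line buffer is full, so the generator loop is naturally paced by playback as long as each chunk is produced faster than it plays out.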

Related

How Buffer Streams work internally in Java

I'm reading about Buffer Streams. I searched about them and found many answers that cleared up my concepts, but I still have a few more questions.
After searching, I have come to know that a buffer is temporary memory (RAM) which helps the program read data quickly instead of going to the hard disk, and that when the buffer is empty, the native input API is called.
After reading a little more, I found this answer:
Reading data from disk byte-by-byte is very inefficient. One way to
speed it up is to use a buffer: instead of reading one byte at a time,
you read a few thousand bytes at once, and put them in a buffer, in
memory. Then you can look at the bytes in the buffer one by one.
I have two points of confusion:
1: How/who fills the data into the buffers? (How does the native API do it?) As the quote above says, who fills in those few thousand bytes at once? And won't it consume the same amount of time? Suppose I have 5 MB of data, and the 5 MB is loaded into the buffer in 5 seconds, and then the program uses this data from the buffer in another 5 seconds: 10 seconds total. But if I skip buffering, the program gets the data directly from the hard disk at 1 MB per 2 seconds, which is the same 10 seconds total. Please clear up this confusion.
2: The second one is how this line works:
BufferedReader inputStream = new BufferedReader(new FileReader("xanadu.txt"));
The way I'm thinking about it, FileReader writes data to a buffer, then BufferedReader reads data from that buffer memory? Please explain this as well.
Thanks.
As for the performance of using buffering during read/write, its impact is probably minimal since the OS will cache too; however, buffering will reduce the number of calls to the OS, which will have an impact.
When you add other operations on top, such as character encoding/decoding or compression/decompression, the impact is greater as those operations are more efficient when done in blocks.
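To make the reduced number of calls concrete, here is a minimal sketch (reusing the xanadu.txt file name from the question): every in.read() below is served from BufferedReader's internal char buffer (8192 chars by default in the OpenJDK implementation), and the underlying FileReader is only asked for a new block when that buffer runs dry.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class BufferedReadDemo {
    public static void main(String[] args) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader("xanadu.txt"))) {
            int c;
            long chars = 0;
            // Each single-character read() normally hits the in-memory buffer;
            // FileReader.read(char[], int, int) is called only to refill it.
            while ((c = in.read()) != -1) {
                chars++;
            }
            System.out.println("Read " + chars + " characters");
        }
    }
}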
Your second question said:
The way I'm thinking about it, FileReader writes data to a buffer, then BufferedReader reads data from that buffer memory? Please explain this as well.
I believe your thinking is wrong. Yes, technically the FileReader will write data to a buffer, but the buffer is not defined by the FileReader; it's defined by the caller of the FileReader.read(buffer) method.
The operation is initiated from outside, when some code calls BufferedReader.read() (any of the overloads). BufferedReader will then check its buffer, and if enough data is available in the buffer, it will return the data without involving the FileReader. If more data is needed, the BufferedReader will call the FileReader.read(buffer) method to get the next chunk of data.
It's a pull operation, not a push, meaning the data is pulled out of the readers by the caller.
All of this is done by a private method named fill(), which I include here for educational purposes; any Java IDE will let you look at the source code yourself:
private void fill() throws IOException {
    int dst;
    if (markedChar <= UNMARKED) {
        /* No mark */
        dst = 0;
    } else {
        /* Marked */
        int delta = nextChar - markedChar;
        if (delta >= readAheadLimit) {
            /* Gone past read-ahead limit: Invalidate mark */
            markedChar = INVALIDATED;
            readAheadLimit = 0;
            dst = 0;
        } else {
            if (readAheadLimit <= cb.length) {
                /* Shuffle in the current buffer */
                // here the already-read chars are copied within the in-memory buffer named cb
                System.arraycopy(cb, markedChar, cb, 0, delta);
                markedChar = 0;
                dst = delta;
            } else {
                /* Reallocate buffer to accommodate read-ahead limit */
                char ncb[] = new char[readAheadLimit];
                System.arraycopy(cb, markedChar, ncb, 0, delta);
                cb = ncb;
                markedChar = 0;
                dst = delta;
            }
            nextChar = nChars = delta;
        }
    }
    int n;
    do {
        n = in.read(cb, dst, cb.length - dst);
    } while (n == 0);
    if (n > 0) {
        nChars = dst + n;
        nextChar = dst;
    }
}

How to adjust microphone sensitivity while recording audio in android

I'm working on a voice recording app. In it, I have a Seekbar to change the input voice gain.
I couldn't find any way to adjust the input voice gain.
I am using the AudioRecord class to record voice.
recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
        RECORDER_SAMPLERATE, RECORDER_CHANNELS,
        RECORDER_AUDIO_ENCODING, bufferSize);
recorder.startRecording();
I've seen an app in the Google Play Store using this functionality.
As I understand it, you don't want any automatic adjustment, only manual adjustment from the UI. There is no built-in functionality for this in Android; instead you have to modify your data manually.
Suppose you use read(short[] audioData, int offsetInShorts, int sizeInShorts) for reading the stream. Then you should just do something like this:
float gain = getGain(); // taken from the UI control, perhaps in range from 0.0 to 2.0
int numRead = recorder.read(audioData, 0, SIZE);
if (numRead > 0) {
    for (int i = 0; i < numRead; ++i) {
        int amplified = (int) (audioData[i] * gain);
        // clamp to the short range so the scaled sample cannot wrap around
        audioData[i] = (short) Math.max(Math.min(amplified, Short.MAX_VALUE), Short.MIN_VALUE);
    }
}
The clamping with Math.min and Math.max prevents overflow in either direction when the gain is greater than 1.
Dynamic microphone sensitivity is not something the hardware or operating system is capable of, as it requires analysis of the recorded sound. You should implement your own algorithm to analyze the recorded sound and adjust (amplify or attenuate) the sound level on your own.
You can start by analyzing the last few seconds and finding a multiplier that is going to "balance" the average amplitude. The multiplier must be inversely proportional to the average amplitude to balance it.
PS: If you still want to do it, the mic levels are accessible when you have root access, but I am still not sure (and don't think it is possible) whether you can change the settings while recording. Hint: the "/system/etc/snd_soc_msm" file.
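A very rough sketch of that balancing idea (hypothetical helper; assumes 16-bit samples in a short[] block, and the target amplitude is an arbitrary example value): compute the average absolute amplitude of the last block and derive a gain inversely proportional to it, clamped so near-silence is not boosted into pure noise.

// Sketch only: derive a gain from the average amplitude of the previous block.
static float balanceGain(short[] block, int numRead, float targetAmplitude) {
    long sum = 0;
    for (int i = 0; i < numRead; i++) {
        sum += Math.abs(block[i]);
    }
    float average = numRead > 0 ? (float) sum / numRead : 0f;
    if (average < 1f) {
        return 1f;                              // (near-)silence: leave it alone
    }
    float gain = targetAmplitude / average;     // inversely proportional to loudness
    return Math.max(0.25f, Math.min(gain, 4f)); // keep the correction gentle
}

The returned gain would then be multiplied into each sample of the next block, clamped to the short range as in the other snippets here.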
Solution by OP.
I have done it using
final int USHORT_MASK = (1 << 16) - 1;

final ByteBuffer buf = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN);
final ByteBuffer newBuf = ByteBuffer.allocate(data.length).order(ByteOrder.LITTLE_ENDIAN);

int sample;
while (buf.hasRemaining()) {
    sample = (int) buf.getShort() & USHORT_MASK;
    sample *= db_value_global;
    newBuf.putShort((short) (sample & USHORT_MASK));
}
data = newBuf.array();
os.write(data);
This is a working implementation based on ByteBuffer for 16-bit audio. It's important to clamp the amplified value from both sides, since short is signed. It's also important to set the native byte order on the ByteBuffer, since audioRecord.read() returns bytes in native endianness.
You may also want to perform audioRecord.read() and the following code in a loop, calling data.clear() after each iteration.
double gain = 2.0;
ByteBuffer data = ByteBuffer.allocateDirect(SAMPLES_PER_FRAME).order(ByteOrder.nativeOrder());
int audioInputLengthBytes = audioRecord.read(data, SAMPLES_PER_FRAME);
ShortBuffer shortBuffer = data.asShortBuffer();
for (int i = 0; i < audioInputLengthBytes / 2; i++) { // /2 because we need the length in shorts
    short s = shortBuffer.get(i);
    int increased = (int) (s * gain);
    s = (short) Math.min(Math.max(increased, Short.MIN_VALUE), Short.MAX_VALUE);
    shortBuffer.put(i, s);
}

Java algorithm for normalizing audio

I'm trying to normalize an audio file of speech.
Specifically, where an audio file contains peaks in volume, I'm trying to level it out, so the quiet sections are louder, and the peaks are quieter.
I know very little about audio manipulation, beyond what I've learnt from working on this task. Also, my math is embarrassingly weak.
I've done some research, and the Xuggle site provides a sample which shows reducing the volume using the following code: (full version here)
@Override
public void onAudioSamples(IAudioSamplesEvent event)
{
    // get the raw audio bytes and adjust their values
    ShortBuffer buffer = event.getAudioSamples().getByteBuffer().asShortBuffer();
    for (int i = 0; i < buffer.limit(); ++i)
        buffer.put(i, (short) (buffer.get(i) * mVolume));
    super.onAudioSamples(event);
}
Here, they modify the bytes in getAudioSamples() by a constant, mVolume.
Building on this approach, I've attempted a normalisation that modifies the bytes in getAudioSamples() to a normalised value, considering the max/min in the file (see below for details). I have a simple filter to leave "silence" alone (i.e., anything below a threshold value).
I'm finding that the output file is very noisy (i.e., the quality is seriously degraded). I assume that the error is either in my normalisation algorithm or in the way I manipulate the bytes. However, I'm unsure of where to go next.
Here's an abridged version of what I'm currently doing.
Step 1: Find peaks in file:
Reads the full audio file, and finds the highest and lowest values of buffer.get() across all AudioSamples:
@Override
public void onAudioSamples(IAudioSamplesEvent event) {
    IAudioSamples audioSamples = event.getAudioSamples();
    ShortBuffer buffer = audioSamples.getByteBuffer().asShortBuffer();
    short min = Short.MAX_VALUE;
    short max = Short.MIN_VALUE;
    for (int i = 0; i < buffer.limit(); ++i) {
        short value = buffer.get(i);
        min = (short) Math.min(min, value);
        max = (short) Math.max(max, value);
    }
    // assignment of min/max omitted for brevity.
    super.onAudioSamples(event);
}
Step 2: Normalize all values:
In a loop similar to step 1, replace the buffer with normalized values, calling:
buffer.put(i, normalize(buffer.get(i)));
public short normalize(short value) {
    if (isBackgroundNoise(value))
        return value;

    short rawMin = // min from step 1
    short rawMax = // max from step 1
    short targetRangeMin = 1000;
    short targetRangeMax = 8000;

    int abs = Math.abs(value);
    double a = (abs - rawMin) * (targetRangeMax - targetRangeMin);
    double b = (rawMax - rawMin);
    double result = targetRangeMin + (a / b);

    // Copy the sign of value to result.
    result = Math.copySign(result, value);
    return (short) result;
}
Questions:
Is this a valid approach for attempting to normalize an audio file?
Is my math in normalize() valid?
Why would this cause the file to become noisy, where a similar approach in the demo code doesn't?
I don't think the concept of "minimum sample value" is very meaningful, since the sample value just represents the current "height" of the sound wave at a certain time instant. I.e. its absolute value will vary between the peak value of the audio clip and zero. Thus, having a targetRangeMin seems to be wrong and will probably cause some distortion of the waveform.
I think a better approach might be to have some sort of weight function that decreases the sample value based on its size. I.e. bigger values are decreased by a larger percentage than smaller values. This would also introduce some distortion, but probably not very noticeable.
Edit: here is a sample implementation of such a method:
public short normalize(short value) {
    short rawMax = // max from step 1
    short targetMax = 8000;

    // This is the maximum volume reduction
    double maxReduce = 1 - targetMax / (double) rawMax;

    int abs = Math.abs(value);
    double factor = (maxReduce * abs / (double) rawMax);
    return (short) Math.round((1 - factor) * value);
}
For reference, this is what your algorithm did to a sine curve with an amplitude of 10000:
This explains why the audio quality becomes much worse after being normalized.
This is the result after running with my suggested normalize method:
"normalization" of audio is the process of increasing the level of the audio such that the maximum is equal to some given value, usually the maximum possible value. Today, in another question, someone explained how to do this (see #1): audio volume normalization
However, you go on to say "Specifically, where an audio file contains peaks in volume, I'm trying to level it out, so the quiet sections are louder, and the peaks are quieter." This is called "compression" or "limiting" (not to be confused with the type of compression such as that used in encoding MP3s!). You can read more about that here: http://en.wikipedia.org/wiki/Dynamic_range_compression
A simple compressor is not particularly hard to implement, but you say your math "is embarrassingly weak." So you might want to find one that's already built. You might be able to find a compressor implemented in http://sox.sourceforge.net/ and convert that from C to Java. The only java implementation of compressor I know of who's source is available (and it's not very good) is in this book
As an alternative to solve your problem, you might be able to normalize your file in segments of say 1/2 a second each, and then connect the gain values you use for each segment using linear interpolation. You can read about linear interpolation for audio here: http://blog.bjornroche.com/2010/10/linear-interpolation-for-audio-in-c-c.html
I don't know if the source code is available for the Levelator, but that's something else you can try.
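As a rough sketch of the segmented idea mentioned above (hypothetical method name; assumes 16-bit samples in a short[] and a fixed segment length of roughly half a second): compute a peak-based gain per segment, then ramp linearly from one segment's gain to the next so the level changes don't produce audible steps. In practice you would also want a floor for near-silent segments so they aren't boosted into pure noise.

// Sketch only: per-segment peak normalization with linearly interpolated gains.
static void normalizeInSegments(short[] samples, int sampleRate, short targetPeak) {
    if (samples.length == 0) return;
    int segmentLength = sampleRate / 2;                            // roughly half a second per segment
    int numSegments = (samples.length + segmentLength - 1) / segmentLength;
    double[] gains = new double[numSegments + 1];
    for (int s = 0; s < numSegments; s++) {
        int start = s * segmentLength;
        int end = Math.min(start + segmentLength, samples.length);
        int peak = 1;                                              // avoid dividing by zero on silence
        for (int i = start; i < end; i++) {
            peak = Math.max(peak, Math.abs(samples[i]));
        }
        gains[s] = (double) targetPeak / peak;
    }
    gains[numSegments] = gains[numSegments - 1];                   // hold the last gain to the end
    for (int i = 0; i < samples.length; i++) {
        int s = i / segmentLength;
        double frac = (double) (i % segmentLength) / segmentLength;
        double g = gains[s] * (1 - frac) + gains[s + 1] * frac;    // linear interpolation between segment gains
        int v = (int) Math.round(samples[i] * g);
        samples[i] = (short) Math.max(Math.min(v, Short.MAX_VALUE), Short.MIN_VALUE);
    }
}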

Wav comparison, same file

I'm currently stumped. I've been looking around and experimenting with audio comparison. I've found quite a bit of material, and a ton of references to different libraries and methods to do it.
As of now, I've taken Audacity and exported a 3-minute wav file called "long.wav", and then split the first 30 seconds of that into a file called "short.wav". I figured somewhere along the line I could visually log (log.txt) the data through Java for each file, and should be able to see at least some visual similarities among the values. Here's some code.
Main method:
int totalFramesRead = 0;
File fileIn = new File(filePath);
BufferedWriter writer = new BufferedWriter(new FileWriter(outPath));
writer.flush();
writer.write("");
try {
    AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(fileIn);
    int bytesPerFrame = audioInputStream.getFormat().getFrameSize();
    if (bytesPerFrame == AudioSystem.NOT_SPECIFIED) {
        // some audio formats may have unspecified frame size
        // in that case we may read any amount of bytes
        bytesPerFrame = 1;
    }
    // Set an arbitrary buffer size of 1024 frames.
    int numBytes = 1024 * bytesPerFrame;
    byte[] audioBytes = new byte[numBytes];
    try {
        int numBytesRead = 0;
        int numFramesRead = 0;
        // Try to read numBytes bytes from the file.
        while ((numBytesRead = audioInputStream.read(audioBytes)) != -1) {
            // Calculate the number of frames actually read.
            numFramesRead = numBytesRead / bytesPerFrame;
            totalFramesRead += numFramesRead;
            // Here, do something useful with the audio data that's
            // now in the audioBytes array...
            if (totalFramesRead <= 4096 * 100)
            {
                Complex[][] results = PerformFFT(audioBytes);
                int[][] lines = GetKeyPoints(results);
                DumpToFile(lines, writer);
            }
        }
    } catch (Exception ex) {
        // Handle the error...
    }
    audioInputStream.close();
} catch (Exception e) {
    // Handle the error...
}
writer.close();
Then PerformFFT:
public static Complex[][] PerformFFT(byte[] data) throws IOException
{
    final int totalSize = data.length;
    int amountPossible = totalSize / Harvester.CHUNK_SIZE;

    // When turning into frequency domain we'll need complex numbers:
    Complex[][] results = new Complex[amountPossible][];

    // For all the chunks:
    for (int times = 0; times < amountPossible; times++) {
        Complex[] complex = new Complex[Harvester.CHUNK_SIZE];
        for (int i = 0; i < Harvester.CHUNK_SIZE; i++) {
            // Put the time domain data into a complex number with imaginary part as 0:
            complex[i] = new Complex(data[(times * Harvester.CHUNK_SIZE) + i], 0);
        }
        // Perform FFT analysis on the chunk:
        results[times] = FFT.fft(complex);
    }
    return results;
}
At this point I've tried logging everywhere: audioBytes before transforms, Complex values, and FFT results.
The problem: No matter what values I log, the log.txt of each wav file is completely different. I'm not understanding it. Given that I took short.wav from long.wav (and they have all the same properties), there should be a very heavy similarity in either the raw wav byte[] data, or the Complex[][] FFT data, or something thus far.
How can I possibly try to compare these files if the data isn't even close to similar at any point of these calculations?
I know I'm missing quite a bit of knowledge with regards to audio analysis, and this is why I come to the board for help! Thanks for any info, help, or fixes you can offer!!
Have you looked at MARF? It is a well-documented Java library used for audio recognition.
It is used to recognize speakers (for transcription or securing software) but the same features should be able to be used to classify audio samples. I'm not familiar with it but it looks like you'd want to use the FeatureExtraction class to extract an array of features from each audio sample and then create a unique id.
For 16-bit audio, 3e-05 isn't really that different from zero. So a file of values like 3e-05 is pretty much the same as a file of zeros (maybe missing equality by some tiny rounding errors).
ADDED:
For your comparison, read in and plot, using some Java plotting library, a portion of each of the two waveforms when they get past the portion that's mostly (close to) zero.
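For instance, a minimal sketch of dumping a matching stretch of both files to CSV so they can be plotted side by side (assumptions: 16-bit, mono, little-endian PCM; the skip and count values are arbitrary and should be chosen to land past the near-silent intro):

import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.File;
import java.io.PrintWriter;

public class DumpSamples {
    public static void main(String[] args) throws Exception {
        dump("short.wav", "short.csv", 50000, 4000);
        dump("long.wav", "long.csv", 50000, 4000);
    }

    // Skips `skip` samples, then writes `count` 16-bit little-endian samples, one value per line.
    static void dump(String wav, String csv, long skip, int count) throws Exception {
        try (AudioInputStream in = AudioSystem.getAudioInputStream(new File(wav));
             PrintWriter out = new PrintWriter(csv)) {
            in.skip(skip * 2); // 2 bytes per 16-bit mono sample
            byte[] frame = new byte[2];
            for (int i = 0; i < count && in.read(frame) == 2; i++) {
                short sample = (short) ((frame[1] << 8) | (frame[0] & 0xff));
                out.println(sample);
            }
        }
    }
}

Since short.wav is the first 30 seconds of long.wav, plotting the two CSV columns over each other should show essentially the same waveform if the decoding is consistent.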
I think for debugging you had better try plotting things in MATLAB, since MATLAB is much more powerful for dealing with this problem.
You use "wavread" on the file, and "stft" to get the short-time Fourier transform, which is a matrix of complex numbers. Then simply abs(Matrix) to get the magnitude of each complex number, and show the image with imshow(abs(Matrix), []).
I don't know how you compare the whole file and the 30 s clip (by looking at the stft image?)
I don't know how you are comparing both audio files, but looking at services that offer music recognition (like TrackId or MotoID), they take a small sample of the music you're hearing (10-20 seconds) and then process it on their servers. I theorize that they keep samples that long or shorter, and that they have a database of (or calculate on the fly) patterns for those samples (in your case, Fourier transforms). In your case, you may need to break your long audio file into chunks the same size as, or smaller than, your sample data. In the first case you may find a specific chunk that most closely resembles the pattern in your sample data; in the second case your smaller chunks may resemble a part of your sample data, and you can calculate the probability that the sample data belongs to a given audio file.
I think you are looking for acoustic fingerprinting.
It's hard, and there are libraries to do it.
If you want to implement it yourself, this is a whitepaper on the Shazam algorithm.
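To give a feel for what a GetKeyPoints-style step typically reduces each FFT chunk to, here is a hedged sketch in the spirit of that approach (the band edges are arbitrary examples, and the Complex type is assumed to expose abs() for the magnitude, as in the question's FFT code): keep only the strongest bin per frequency band, so small differences in the raw samples stop mattering.

// Sketch: reduce one FFT chunk to the index of the loudest bin in each frequency band.
// Assumes the chunk has at least as many bins as the highest band edge.
static int[] keyPointsForChunk(Complex[] chunk) {
    int[] bandEdges = {40, 80, 120, 180, 300};       // example band boundaries, in bin indices
    int[] keyPoints = new int[bandEdges.length - 1];
    for (int band = 0; band + 1 < bandEdges.length; band++) {
        double bestMagnitude = -1;
        for (int bin = bandEdges[band]; bin < bandEdges[band + 1]; bin++) {
            double magnitude = chunk[bin].abs();
            if (magnitude > bestMagnitude) {
                bestMagnitude = magnitude;
                keyPoints[band] = bin;               // strongest bin in this band
            }
        }
    }
    return keyPoints;
}

Two clips that contain the same audio should then produce matching (or nearly matching) key-point sequences even when the raw byte streams differ.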

Java audio effects - Distortion algorithm on Android

I'm working on an application for android that does some real-time processing of audio from the mic. The sampling and playback is working effectively, but I am having difficulty with implementing the first audio effect - distortion. The audio comes in buffers of shorts, so each time one of these is received I attempt to map the values to the full size of a signed short, and then essentially clip these values if they are above a certain level. The audio that comes from this is certainly distorted, but not in a desirable way. I've included my code for accomplishing this. Can anyone see an error here?
public void onMarkerReached(AudioRecord recorder) {
    // TODO Auto-generated method stub
    short max = maxValue(buffers[ix]);
    short multiplier;
    if (max != 0)
        multiplier = (short) (0x7fff / max);
    else
        multiplier = 0x7fff;

    double distLvl = .8;
    short distLvlSho = 31000;
    short max2 = 100;

    for (int i = 0; i < buffers[ix].length; i++) {
        buffers[ix][i] = (short) (buffers[ix][i] * multiplier);
        if (buffers[ix][i] > distLvlSho)
            buffers[ix][i] = distLvlSho;
        else if (buffers[ix][i] < -distLvlSho)
            buffers[ix][i] = (short) -distLvlSho;
        buffers[ix][i] = (short) (buffers[ix][i] / multiplier);
    }
}
The buffers array is a 2D array of shorts, and the processing is to be done on just one of the array-within-arrays, here buffers[ix].
As far as I can see, in the end what you get is just a clipping of the source, with a clip threshold that follows the proportion clipThr / max(input) = distLvlSho / 0x7fff. Most of the input is left basically unchanged this way.
If you actually want to distort the signal, you should apply some kind of nonlinear function to the whole signal (possibly plus clipping near the sample maximum to simulate analog saturation).
A few simple models for distortion are listed in this book: http://books.google.it/books?id=h90HIV0uwVsC&printsec=frontcover#v=onepage&q&f=false
The simplest is a symmetrical soft clipping (see page 118). Here's your method modified with that soft clip function; see if it fits your needs for distorted sound (I tested it by making up a few sinusoids as input and using Excel to plot the output).
In the same chapter you'll find a simple tube model and a fuzz filter model (there are a few exponentials in those, so if performance is an issue you might want to approximate them).
public void onMarkerReachedSoftClip(short[] buffer) {
    double th = 1.0 / 3.0;
    double multiplier = 1.0 / 0x7fff; // normalize input to double -1,1
    double out = 0.0;
    for (int i = 0; i < buffer.length; i++) {
        double in = multiplier * (double) buffer[i];
        double absIn = java.lang.Math.abs(in);
        if (absIn < th) {
            out = (buffer[i] * 2 * multiplier);
        }
        else if (absIn < 2 * th) {
            if (in > 0) out = (3 - (2 - in * 3) * (2 - in * 3)) / 3;
            else if (in < 0) out = -(3 - (2 - absIn * 3) * (2 - absIn * 3)) / 3;
        }
        else if (absIn >= 2 * th) {
            if (in > 0) out = 1;
            else if (in < 0) out = -1;
        }
        buffer[i] = (short) (out / multiplier);
    }
}
If you multiply two short integers, the result needs a wider integer type (in Java the multiplication is promoted to int anyway), or the result can overflow a short.
e.g. 1000 * 1000 = 1000000, which is too big for a 16-bit short integer.
So you need to perform a scaling operation (divide or right shift) before you convert the multiplication result to a short value for storage. Something like:
result_short = (short)( ( short_op_1 * short_op_2 ) >> 16 );
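For instance, a small stand-alone illustration of that scaling (the >> 16 shift is just one possible scale factor; in Java the product of two shorts is computed as an int, so the multiplication itself does not overflow):

public class ShortMultiply {
    public static void main(String[] args) {
        short a = 1000, b = 1000;
        int product = a * b;                    // promoted to int: 1000000
        short wrong = (short) product;          // naive cast wraps to 16960
        short scaled = (short) (product >> 16); // scale down before the cast: 15
        System.out.println(product + " " + wrong + " " + scaled);
    }
}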
