AudioRecord producing gaps of zeroes on Android 5.0.1 - java

Using AudioRecord, I have attempted to write a test app to record a couple of seconds of audio to be displayed to the screen. However, I seem to get a repeating pattern of zero value regions as shown below. I'm not sure if this is normal behaviour or an error in my code.
MainActivity.java
public class MainActivity extends Activity implements OnClickListener
{
private static final int SAMPLE_RATE = 44100;
private Button recordButton, playButton;
private String filePath;
private boolean recording;
private AudioRecord record;
private short[] data;
private TestView testView;
@Override
protected void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Button recordButton = (Button) this.findViewById(R.id.recordButton);
recordButton.setOnClickListener(this);
Button playButton = (Button)findViewById(R.id.playButton);
playButton.setOnClickListener(this);
FrameLayout frame = (FrameLayout)findViewById(R.id.myFrame);
frame.addView(testView = new TestView(this));
}
@Override
public void onClick(View v)
{
if(v.getId() == R.id.recordButton)
{
if(!recording)
{
int bufferSize = AudioRecord.getMinBufferSize( SAMPLE_RATE,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT);
record = new AudioRecord( MediaRecorder.AudioSource.MIC,
SAMPLE_RATE,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT,
bufferSize * 2);
data = new short[10 * SAMPLE_RATE]; // Records up to 10 seconds
new Thread()
{
@Override
public void run()
{
recordAudio();
}
}.start();
recording = true;
Toast.makeText(this, "recording...", Toast.LENGTH_SHORT).show();
}
else
{
recording = false;
Toast.makeText(this, "finished", Toast.LENGTH_SHORT).show();
}
}
else if(v.getId() == R.id.playButton)
{
testView.invalidate();
Toast.makeText(this, "play/pause", Toast.LENGTH_SHORT).show();
}
}
void recordAudio()
{
record.startRecording();
int index = 0;
while(recording)
{
try {
Thread.sleep(50);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
int result = record.read(data, index, SAMPLE_RATE); // read 1 second at a time
if(result == AudioRecord.ERROR_INVALID_OPERATION || result == AudioRecord.ERROR_BAD_VALUE)
{
App.d("SOME SORT OF RECORDING ERROR MATE");
return;
}
else
{
index += result; // increment by number of shorts read
App.d("read: "+result);
}
}
record.stop();
data = Arrays.copyOf(data, index);
testView.setData(data);
}
@Override
protected void onPause()
{
super.onPause();
}
}
TestView.java
public class TestView extends View
{
private short[] data;
Paint paint = new Paint();
Path path = new Path();
float min, max;
public TestView(Context context)
{
super(context);
paint.setColor(Color.BLACK);
paint.setStrokeWidth(1);
paint.setStyle(Style.FILL_AND_STROKE);
}
void setData(short[] data)
{
min = Short.MAX_VALUE;
max = Short.MIN_VALUE;
this.data = data;
for(int i = 0; i < data.length; i++)
{
if(data[i] < min)
min = data[i];
if(data[i] > max)
max = data[i];
}
}
@Override
protected void onDraw(Canvas canvas)
{
canvas.drawRGB(255, 255, 255);
if(data != null)
{
float interval = (float)this.getWidth()/data.length;
for(int i = 0; i < data.length; i+=10)
canvas.drawCircle(i*interval,(data[i]-min)/(max - min)*this.getHeight(),5 ,paint);
}
super.onDraw(canvas);
}
}

Your navigation bar icons make it look like you are probably running on Android 5, and there is a bug in the Android 5.0 release which can cause precisely the problem you are seeing.
Recording to shorts gave an erroneous return value on the L preview, and while substantially reworking that code to fix it, they mistakenly doubled the offset argument in the 5.0 release. Your code increments the index by the (correct) amount it has read in each call, but a pointer-math mistake in the audio internals doubles the offset you pass, so each period of recording ends up followed by an equal period of buffer that was never written to, which you see as those gaps of zeroes.
The issue was reported at http://code.google.com/p/android/issues/detail?id=80866
A patch submitted at that time last fall was declined, as they said they had already dealt with it internally. Looking at the git history for AOSP 5.1, that would appear to have been internal commit 283a9d9e1 of November 13, which was not yet public when I encountered the issue later that month. While I haven't tried this on 5.1 yet, it seems that should fix it, so most likely the bug is present from 5.0 through 5.0.2 (and in a different form on the L preview) but absent in 4.4 and earlier, as well as in 5.1 and later.
The simplest workaround for consistent behavior across broken and unbroken release versions is to avoid ever passing a non-zero offset when recording shorts - that's how I fixed the program where I encountered the problem. A more complicated idea would be to try to figure out if you are on a broken version, and if so halve the passed argument. One method would be to detect the device version, but it's conceivable some vendor or custom ROM 5.0 builds might have been patched, so you could go a step further and do a short recording with a test offset to a zeroed buffer, then scan it to see where the non-zero data actually starts.
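To illustrate that second, probing idea, here is a rough, untested sketch of what such a check might look like (the helper name, the probe lengths and the threshold are invented for illustration, and a silent room will defeat it since every sample stays zero):
// Probe whether this build doubles the offset passed to read(short[], int, int).
private static boolean offsetIsDoubled(int sampleRate) {
    int minSize = AudioRecord.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioRecord probe = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minSize * 2);
    short[] buffer = new short[sampleRate]; // zero-initialized by Java
    int testOffset = sampleRate / 4;
    probe.startRecording();
    probe.read(buffer, testOffset, sampleRate / 8); // record a short burst at an offset
    probe.stop();
    probe.release();
    int firstNonZero = -1;
    for (int i = 0; i < buffer.length; i++) {
        if (buffer[i] != 0) { firstNonZero = i; break; }
    }
    // On a broken build the data lands near 2 * testOffset instead of testOffset.
    return firstNonZero >= testOffset * 3 / 2;
}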

Do not pass half the offset to the read function as suggested in the accepted answer. The offset is an integer and might be an odd number. This will result in poor audio quality and would be incompatible with Android versions other than 5.0.1 and 5.0.2. I used the following workaround, which works for all Android versions. I changed:
short[] buffer = new short[frame_size*(frame_rate)];
num = record.read(buffer, offset, frame_size);
into
short[] buffer = new short[frame_size*(frame_rate)];
short[] buffer_bugfix = new short[frame_size];
num = record.read(buffer_bugfix, 0, frame_size);
System.arraycopy(buffer_bugfix, 0, buffer, offset, frame_size);
In words: instead of letting the read function copy the data to the offset position of the large buffer, I let it copy the data into the smaller buffer, and then insert that data manually at the offset position of the large buffer.
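Applied to the recordAudio() method from the question, the same idea might look roughly like this (a sketch; the logging and the sleep from the original are omitted for brevity):
void recordAudio() {
    record.startRecording();
    int index = 0;
    short[] chunk = new short[SAMPLE_RATE]; // temporary buffer, always read at offset 0
    while (recording && index < data.length) {
        int toRead = Math.min(chunk.length, data.length - index);
        int result = record.read(chunk, 0, toRead); // offset 0 sidesteps the 5.0.x bug
        if (result == AudioRecord.ERROR_INVALID_OPERATION || result == AudioRecord.ERROR_BAD_VALUE) {
            return;
        }
        System.arraycopy(chunk, 0, data, index, result); // place the samples ourselves
        index += result; // increment by the number of shorts read
    }
    record.stop();
    data = Arrays.copyOf(data, index);
    testView.setData(data);
}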

I can't check your code right now, but I can provide some sample code you can test:
private static int channel_config = AudioFormat.CHANNEL_IN_MONO;
private static int format = AudioFormat.ENCODING_PCM_16BIT;
private static int Fs = 16000;
private static int minBufferSize;
private boolean isRecording;
private boolean isProcessing;
private boolean isNewAudioFragment;
private final static int bytesPerSample = 2; // As it is 16bit PCM
private final double amplification = 1.0; // choose a number as you like
private static int frameLength = 512; // number of samples per frame => 32 ms @ Fs = 16 kHz
private static int windowLength = 16; // number of frames per window => 512 ms @ Fs = 16 kHz
private static int maxBufferedWindows = 8; // number of buffered windows => 4096 ms @ Fs = 16 kHz
private static int bufferSize = frameLength*bytesPerSample;
private static double[] hannWindow = new double[frameLength*bytesPerSample];
private Queue<byte[]> queue = new LinkedList<byte[]>();
private Semaphore semaphoreProcess = new Semaphore(0, true);
private RecordSignal recordSignalThread;
private ProcessSignal processSignalThread;
public static class RecorderSingleton {
public static RecorderSingleton instance = new RecorderSingleton();
private AudioRecord recordInstance = null;
private RecorderSingleton() {
minBufferSize = AudioRecord.getMinBufferSize(Fs, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
while(minBufferSize>bufferSize) {
bufferSize = bufferSize*2;
}
}
public boolean init() {
recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC, Fs, channel_config, format, bufferSize);
if (recordInstance.getState() != AudioRecord.STATE_INITIALIZED) {
Log.d("audiotestActivity", "Fail to initialize AudioRecord object");
Log.d("audiotestActivity", "AudioRecord.getState()=" + recordInstance.getState());
}
if (recordInstance.getState() == AudioRecord.STATE_UNINITIALIZED) {
return false;
}
return true;
}
public int getBufferSize() {return bufferSize;}
public boolean start() {
if (recordInstance != null && recordInstance.getState() != AudioRecord.STATE_UNINITIALIZED) {
if (recordInstance.getRecordingState() != AudioRecord.RECORDSTATE_STOPPED) {
recordInstance.stop();
}
recordInstance.release();
}
if (!init()) {
return false;
}
recordInstance.startRecording();
return true;
}
public int read(byte[] audioBuffer) {
if (recordInstance == null) {
return AudioRecord.ERROR_INVALID_OPERATION;
}
int ret = recordInstance.read(audioBuffer, 0, bufferSize);
return ret;
}
public void stop() {
if (recordInstance == null) {
return;
}
if(recordInstance.getState()==AudioRecord.STATE_UNINITIALIZED) {
Log.d("AudioTest", "instance uninitialized");
return;
}
if(recordInstance.getState()==AudioRecord.STATE_INITIALIZED) {
recordInstance.stop();
recordInstance.release();
}
}
}
public class RecordSignal implements Runnable {
private boolean cancelled = false;
public void run() {
Looper.prepare();
// We're important...android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
int bufferRead = 0;
byte[] inAudioBuffer;
if (!RecorderSingleton.instance.start()) {
return;
}
try {
Log.d("audiotestActivity", "Recorder Started");
while(isRecording) {
inAudioBuffer = null;
inAudioBuffer = new byte[bufferSize];
bufferRead = RecorderSingleton.instance.read(inAudioBuffer);
if (bufferRead == AudioRecord.ERROR_INVALID_OPERATION) {
throw new IllegalStateException("read() returned AudioRecord.ERROR_INVALID_OPERATION");
} else if (bufferRead == AudioRecord.ERROR_BAD_VALUE) {
throw new IllegalStateException("read() returned AudioRecord.ERROR_BAD_VALUE");
}
queue.add(inAudioBuffer);
semaphoreProcess.release();
}
}
finally {
// Close resources...
stop();
}
Looper.loop();
}
public void stop() {
RecorderSingleton.instance.stop();
}
public void cancel() {
setCancelled(true);
}
public boolean isCancelled() {
return cancelled;
}
public void setCancelled(boolean cancelled) {
this.cancelled = cancelled;
}
}
public class ProcessSignal implements Runnable {
public void run() {
Looper.prepare();
//android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_DEFAULT);
while(isProcessing) {
try {
semaphoreProcess.acquire();
byte[] outAudioBuffer = new byte[frameLength*bytesPerSample*(bufferSize/(frameLength*bytesPerSample))];
outAudioBuffer = queue.element();
if(queue.size()>0) {
// do something, process your samples
}
queue.poll();
}
catch (InterruptedException e) {
e.printStackTrace();
}
}
Looper.loop();
}
}
and to start and stop simply:
public void startAudioTest() {
if(recordSignalThread!=null) {
recordSignalThread.stop();
recordSignalThread.cancel();
recordSignalThread = null;
}
if(processSignalThread!=null) {
processSignalThread = null;
}
recordSignalThread = new RecordSignal();
processSignalThread = new ProcessSignal();
new Thread(recordSignalThread).start();
new Thread(processSignalThread).start();
isRecording = true;
isProcessing = true;
}
public void stopAudioTest() {
isRecording = false;
isProcessing = false;
if(processSignalThread!=null) {
processSignalThread = null;
}
if(recordSignalThread!=null) {
recordSignalThread.cancel();
recordSignalThread = null;
}
}

Related

Android - Using MediaMuxer with MediaExtractor and PCM stream leads to corrupted video frames

I am writing a class for an app which supports streaming and recording video. In short, when the phone is streaming and recording, audio is saved in a PCM file and video is saved in an mp4 file using a MediaRecorder. My goal is, when the recording completes, to use a MediaMuxer to combine both inputs into a new, combined .mp4 file.
I've tried using a MediaMuxer to encode the audio and extract the video using a MediaExtractor. Both the original video and audio files are intact, and the output file contains proper audio, yet the video seems corrupted, as if frames were skipped.
This is the code that I am currently using:
public class StreamRecordingMuxer {
private static final String TAG = StreamRecordingMuxer.class.getSimpleName();
private static final String COMPRESSED_AUDIO_FILE_MIME_TYPE = "audio/mp4a-latm";
private static final int CODEC_TIMEOUT = 5000;
private int bitrate;
private int sampleRate;
private int channelCount;
// Audio state
private MediaFormat audioFormat;
private MediaCodec mediaCodec;
private MediaMuxer mediaMuxer;
private ByteBuffer[] codecInputBuffers;
private ByteBuffer[] codecOutputBuffers;
private MediaCodec.BufferInfo audioBufferInfo;
private String outputPath;
private int audioTrackId;
private int totalBytesRead;
private double presentationTimeUs;
// Video state
private int videoTrackId;
private MediaExtractor videoExtractor;
private MediaFormat videoFormat;
private String videoPath;
private int videoTrackIndex;
private int frameMaxInputSize;
private int rotationDegrees;
public StreamRecordingMuxer(final int bitrate, final int sampleRate, int channelCount) {
this.bitrate = bitrate;
this.sampleRate = sampleRate;
this.channelCount = channelCount;
}
public void setOutputPath(final String outputPath) {
this.outputPath = outputPath;
}
public void setVideoPath(String videoPath) {
this.videoPath = videoPath;
}
public void prepare() {
if (outputPath == null) {
throw new IllegalStateException("The output path must be set first!");
}
try {
audioFormat = MediaFormat.createAudioFormat(COMPRESSED_AUDIO_FILE_MIME_TYPE, sampleRate, channelCount);
audioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, bitrate);
if (videoPath != null) {
videoExtractor = new MediaExtractor();
videoExtractor.setDataSource(videoPath);
videoFormat = findVideoFormat(videoExtractor);
}
mediaCodec = MediaCodec.createEncoderByType(COMPRESSED_AUDIO_FILE_MIME_TYPE);
mediaCodec.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mediaCodec.start();
codecInputBuffers = mediaCodec.getInputBuffers();
codecOutputBuffers = mediaCodec.getOutputBuffers();
audioBufferInfo = new MediaCodec.BufferInfo();
mediaMuxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
if (videoPath != null) {
videoTrackId = mediaMuxer.addTrack(videoFormat);
mediaMuxer.setOrientationHint(rotationDegrees);
}
totalBytesRead = 0;
presentationTimeUs = 0;
} catch (IOException e) {
Log.e(TAG, "Exception while initializing StreamRecordingMuxer", e);
}
}
public void stop() {
Log.d(TAG, "Stopping StreamRecordingMuxer");
handleEndOfStream();
mediaCodec.stop();
mediaCodec.release();
mediaMuxer.stop();
mediaMuxer.release();
if (videoExtractor != null) {
videoExtractor.release();
}
}
private void handleEndOfStream() {
int inputBufferIndex = mediaCodec.dequeueInputBuffer(CODEC_TIMEOUT);
mediaCodec.queueInputBuffer(inputBufferIndex, 0, 0, (long) presentationTimeUs, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
writeAudioOutputs();
}
private MediaFormat findVideoFormat(MediaExtractor extractor) {
MediaFormat videoFormat;
int videoTrackCount = extractor.getTrackCount();
for (int i = 0; i < videoTrackCount; i++) {
videoFormat = extractor.getTrackFormat(i);
Log.d(TAG, "Video Format " + videoFormat.toString());
String mimeType = videoFormat.getString(MediaFormat.KEY_MIME);
if (mimeType.startsWith("video/")) {
videoTrackIndex = i;
frameMaxInputSize = videoFormat.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE);
rotationDegrees = videoFormat.getInteger(MediaFormat.KEY_ROTATION);
// frameRate = videoFormat.getInteger(MediaFormat.KEY_FRAME_RATE);
// videoDuration = videoFormat.getLong(MediaFormat.KEY_DURATION);
return videoFormat;
}
}
return null;
}
private void writeVideoToMuxer() {
ByteBuffer buffer = ByteBuffer.allocate(frameMaxInputSize);
MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo();
videoExtractor.unselectTrack(videoTrackIndex);
videoExtractor.selectTrack(videoTrackIndex);
while (true) {
buffer.clear();
int sampleSize = videoExtractor.readSampleData(buffer, 0);
if (sampleSize < 0) {
videoExtractor.unselectTrack(videoTrackIndex);
break;
}
videoBufferInfo.size = sampleSize;
videoBufferInfo.presentationTimeUs = videoExtractor.getSampleTime();
videoBufferInfo.flags = videoExtractor.getSampleFlags();
mediaMuxer.writeSampleData(videoTrackId, buffer, videoBufferInfo);
videoExtractor.advance();
}
}
private void encodeAudioPCM(InputStream is) throws IOException {
byte[] tempBuffer = new byte[2 * sampleRate];
boolean hasMoreData = true;
boolean stop = false;
while (!stop) {
int inputBufferIndex = 0;
int currentBatchRead = 0;
while (inputBufferIndex != -1 && hasMoreData && currentBatchRead <= 50 * sampleRate) {
inputBufferIndex = mediaCodec.dequeueInputBuffer(CODEC_TIMEOUT);
if (inputBufferIndex >= 0) {
ByteBuffer buffer = codecInputBuffers[inputBufferIndex];
buffer.clear();
int bytesRead = is.read(tempBuffer, 0, buffer.limit());
if (bytesRead == -1) {
mediaCodec.queueInputBuffer(inputBufferIndex, 0, 0, (long) presentationTimeUs, 0);
hasMoreData = false;
stop = true;
} else {
totalBytesRead += bytesRead;
currentBatchRead += bytesRead;
buffer.put(tempBuffer, 0, bytesRead);
mediaCodec.queueInputBuffer(inputBufferIndex, 0, bytesRead, (long) presentationTimeUs, 0);
presentationTimeUs = 1000000L * (totalBytesRead / 2) / sampleRate;
}
}
}
writeAudioOutputs();
}
is.close();
}
public void start(InputStream inputStream) throws IOException {
Log.d(TAG, "Starting encoding of InputStream");
encodeAudioPCM(inputStream);
Log.d(TAG, "Finished encoding of InputStream");
if (videoPath != null) {
writeVideoToMuxer();
}
}
private void writeAudioOutputs() {
int outputBufferIndex = 0;
while (outputBufferIndex != MediaCodec.INFO_TRY_AGAIN_LATER) {
outputBufferIndex = mediaCodec.dequeueOutputBuffer(audioBufferInfo, CODEC_TIMEOUT);
if (outputBufferIndex >= 0) {
ByteBuffer encodedData = codecOutputBuffers[outputBufferIndex];
encodedData.position(audioBufferInfo.offset);
encodedData.limit(audioBufferInfo.offset + audioBufferInfo.size);
if ((audioBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0 && audioBufferInfo.size != 0) {
mediaCodec.releaseOutputBuffer(outputBufferIndex, false);
} else {
mediaMuxer.writeSampleData(audioTrackId, codecOutputBuffers[outputBufferIndex], audioBufferInfo);
mediaCodec.releaseOutputBuffer(outputBufferIndex, false);
}
} else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
audioFormat = mediaCodec.getOutputFormat();
audioTrackId = mediaMuxer.addTrack(audioFormat);
mediaMuxer.start();
}
}
}
}
I've finally managed to find an answer, and it is unrelated to the actual muxer code: it turns out the presentation times were being miscalculated when the audio file was created.
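For reference, the usual way to derive the presentation time for raw 16-bit PCM is from the number of samples queued so far rather than from wall-clock time; a minimal sketch using the class's existing totalBytesRead, channelCount and sampleRate fields (the exact place the original miscalculation happened is not shown in the question):
// totalBytesRead counts raw PCM bytes queued so far; 2 bytes per sample per channel.
long samplesPerChannel = totalBytesRead / (2L * channelCount);
long presentationTimeUs = samplesPerChannel * 1000000L / sampleRate;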

JPG vs WebP performance on Android

I'm trying to get performance stats about how Android loads, decodes and renders WebP images versus JPG, but my results are a little confusing.
Decoding WebP images to Bitmap is slower than JPG.
Some stats:
WebP 66% less file size than JPG, 267% more time to decode.
WebP 38% less file size than JPG, 258% more time to decode.
WebP 89% less file size than JPG, 319% more time to decode.
Does anyone know of any performance issue, or why WebP decoding is harder than JPG?
This is my test:
public class BulkLoadFromDisk implements Runnable {
private static final String TAG = "BulkLoadFromDisk";
private static final int TIMES = 10;
private final ResourceProvider resourceProvider;
private final Activity context;
private final int counter;
private long averageLoadTimeNano;
private long averageConvertTimeNano;
private final ImagesFactory.FORMAT format;
private final CompleteListener listener;
public BulkLoadFromDisk(Activity context, ResourceProvider resourceProvider,
CompleteListener listener, ImagesFactory.FORMAT format) {
this.resourceProvider = resourceProvider;
this.context = context;
this.counter = resourceProvider.length();
this.format = format;
this.listener = listener;
}
@Override
public void run() {
try {
Thread.sleep(200);
} catch (InterruptedException e) {
Log.e(TAG, e.getMessage(), e);
}
try {
String file;
long loadBegin, loadEnd;
long convertBegin, convertEnd;
Bitmap bitmap; Drawable d;
String extension = "." + format.name().toLowerCase();
InputStream inputStream;
for(int j = 0; j < TIMES; j++) {
for(int index = 0; index < counter; index++) {
file = resourceProvider.get(index).concat(extension);
inputStream = context.getAssets().open(file);
// Load bitmap from file
loadBegin = System.nanoTime();
bitmap = BitmapFactory.decodeStream(inputStream);
assert (bitmap != null);
loadEnd = System.nanoTime();
// Convert bitmap to drawable
convertBegin = System.nanoTime();
d = new BitmapDrawable(context.getResources(), bitmap);
assert (d != null);
convertEnd = System.nanoTime();
averageLoadTimeNano += (loadEnd - loadBegin);
averageConvertTimeNano += (convertEnd - convertBegin);
}
}
averageLoadTimeNano = averageLoadTimeNano / (TIMES * counter);
averageConvertTimeNano = averageConvertTimeNano / (TIMES * counter);
if(listener != null && context != null) {
context.runOnUiThread(new Runnable() {
@Override
public void run() {
listener.onComplete(BulkLoadFromDisk.this);
}
});
}
}
catch (final IOException e) {
if(listener != null && context!= null) {
context.runOnUiThread(new Runnable() {
@Override
public void run() {
listener.onError(e);
}
});
}
} finally {
System.gc();
}
}
public interface CompleteListener {
void onComplete(BulkLoadFromDisk task);
void onError(Exception e);
}
public long getAverageLoadTimeNano() {
return averageLoadTimeNano;
}
public long getAverageConvertTimeNano() {
return averageConvertTimeNano;
}
public ImagesFactory.FORMAT getFormat() {
return format;
}
public String resultToString() {
final StringBuffer sb = new StringBuffer("BulkLoadFromDisk{");
sb.append("averageLoadTimeNano=").append(Utils.nanosToBest(averageLoadTimeNano).first
+ Utils.nanosToBest(averageLoadTimeNano).second);
sb.append(", averageConvertTimeNano=").append(Utils.nanosToBest(averageConvertTimeNano).first
+ Utils.nanosToBest(averageConvertTimeNano).second);
sb.append(", format=").append(format);
sb.append('}');
return sb.toString();
}
I know this is an old question and I haven't studied the internals of WebP in depth yet, but it's probably because it's a more complex algorithm, which is also why it achieves better compression ratios than JPEG. WebP is based on the VP8 codec, which is itself a royalty-free competitor to the widely used, and heavy, H.264 format.
JPEG is widely used, but it's a really old format and is considerably simpler than WebP's VP8 codec.

Audio file plays late after writing to buffer with Android AudioTrack

When I increase the size of the buffer, the audio written to the buffer plays late. And when I decrease the size of the buffer, the file plays correctly, i.e. on time rather than late. Can anyone help? The buffer size is 64k.
public class MediaSPK
{
private static final int RECORDER_SAMPLERATE = 16000;
private static final int RECORDER_CHANNELS = AudioFormat.CHANNEL_OUT_MONO;
private static final int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
VaxSIPUserAgent m_objVaxSIPUserAgent;
boolean m_bMuteSpk = false;
boolean m_bPlay = false;
AudioTrack m_objAudioTrack = null;
public MediaSPK(VaxSIPUserAgent objVaxSIPUserAgent)
{
m_objVaxSIPUserAgent = objVaxSIPUserAgent;
}
public void OpenSpk()
{
int nMinBuffSize = AudioTrack.getMinBufferSize(RECORDER_SAMPLERATE, RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING);
//Log.i("size SPK", "" + m_nMinBuffSize);
m_objAudioTrack = new AudioTrack(AudioManager.STREAM_VOICE_CALL, RECORDER_SAMPLERATE, RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING, 64000, AudioTrack.MODE_STREAM);
m_objAudioTrack.play();
m_bPlay = false;
}
public void PlaySpk(byte[] aData, int nDataSize)
{
if(m_bMuteSpk)
{
byte[] aDataSilence = new byte[nDataSize];
m_objAudioTrack.write(aDataSilence, 0, nDataSize);
}
else
{
m_objAudioTrack.write(aData, 0, nDataSize);
}
}
public void Mute(boolean bEnable)
{
//m_bMuteSpk = bEnable;
//m_objAudioTrack
}
public void CloseSpk()
{
if(m_objAudioTrack == null)
return;
try
{
m_objAudioTrack.stop();
m_objAudioTrack.release();
m_objAudioTrack = null;
}
catch (IllegalStateException e)
{
e.printStackTrace();
}
}
}
Maybe you are putting the whole operation on a single thread. Try to split the work up and play the audio on a separate thread, for example by using a Handler:
Handler mHandler = new Handler();
mHandler.postDelayed(new Runnable() {
    @Override
    public void run() {
        // write/play the audio here
    }
}, delay);
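A minimal sketch of that suggestion for the MediaSPK class above, using a HandlerThread so the AudioTrack writes never block the calling thread (the field and thread names are illustrative):
// Assumed additions to MediaSPK (requires android.os.Handler and android.os.HandlerThread).
private HandlerThread m_playbackThread;
private Handler m_playbackHandler;

public void OpenSpk()
{
    m_playbackThread = new HandlerThread("AudioPlayback");
    m_playbackThread.start();
    m_playbackHandler = new Handler(m_playbackThread.getLooper());
    // ... create and start m_objAudioTrack as before ...
}

public void PlaySpk(final byte[] aData, final int nDataSize)
{
    m_playbackHandler.post(new Runnable() {
        @Override
        public void run()
        {
            m_objAudioTrack.write(aData, 0, nDataSize); // blocking write runs off the caller's thread
        }
    });
}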

Stuck at getting array from another thread

I have a small but very strange problem...
I need to read fragments from a file and place them into an array that lives outside the reading thread, but when I want to get them from the current thread, I get an empty array.
My brain crashed at this stuff:
private int fragmentSize = 262144, fragmentCacheElements = 32, fragmentCacheUpdates = 0;
// Cache 8 MB (256 KB * 32) (262144 * 32)
private String[] fragmentCache;
private boolean needCacheUpdate, end;
private Thread cacheThread = new Thread(new Runnable()
{
String[] fCache = new String[fragmentCacheElements];
@Override
public void run()
{
while (!end) {
for (int i = 0; i < fragmentCacheElements; ++i) {
fCache[i] = new String(loadFragment(i + fragmentCacheUpdates * fragmentCacheElements));
}
while (true) {
if (needCacheUpdate) {
++fragmentCacheUpdates;
fragmentCache = fCache;
// fragment[0] != null
needCacheUpdate = false;
break;
}
}
}
}
});
public static void main(String[] args)
{
fragmentCache = new String[fragmentCacheElements];
cacheThread.start();
updateCache();
// Notifying client
}
// Getting fragment from cache to send it to client
// AND WHY fragment[0] == null ???
private String getCache(int id)
{
if (id >= fragmentCacheUpdates * fragmentCacheElements) {
updateCache();
}
return fragmentCache[id - (fragmentCacheUpdates - 1) * fragmentCacheElements];
}
private void updateCache()
{
needCacheUpdate = true;
while (true) {
if (!needCacheUpdate) {
break;
}
}
}
Any suggestions?
Try
fragmentCache = Arrays.copyOf(fCache, fCache.length);
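In context, that amounts to publishing a snapshot of the worker's buffer instead of sharing its reference; a minimal sketch of the update block inside the cache thread (marking needCacheUpdate and fragmentCache volatile is an extra assumption, so the spinning reader reliably sees the change):
if (needCacheUpdate) {
    ++fragmentCacheUpdates;
    fragmentCache = Arrays.copyOf(fCache, fCache.length); // publish a copy, not the live array
    needCacheUpdate = false;
    break;
}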

Android - AudioRecord: Detect a pulse-width modulated signal over the audio jack (mic)

I am trying to detect a square wave signal over the audio jack (mic) in near real time. For that reason I use the AudioRecord class in streaming mode. But my problem is that my phone (mic) always behaves differently. Sometimes I can use a threshold of 20'000 and sometimes I have to adjust the threshold to 1'000 to detect the edge (of the first pulse). The voltage range of the signal is 0 to 3 V. I'm not sure whether my mic isn't working right or whether the ADC uses different reference voltages.
I haven't got any idea how to solve this problem.
I really hope you can help me.
Here my source code:
public class ReceiveCom extends AsyncTask<Void, Void, Void> implements Layers {
private AudioRecord audioRecord;
private int sampleRate = 44100;
private short[] audioData;
private int sizeInShorts;
private boolean isRunning=false;
private int receiveBuffer;
private int minBuffersize;
private boolean ready=false;
private int audioResult,dataCounter=0;
private short[] dataBits = new short[8];
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public ReceiveCom(){
try{
minBuffersize = AudioRecord.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT);
audioRecord = new AudioRecord(AudioSource.MIC, sampleRate, AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT, minBuffersize);
if (audioRecord.getState() != AudioRecord.STATE_INITIALIZED){
throw new Exception("AudioRecord init failed");
}
audioData = new short[2*minBuffersize];
Log.e("ERROR", "ReceiveCom constructor: OK ");
}
catch(IllegalArgumentException e){
Log.e("ERROR", "IllegalArgumentException: " + e);
}
catch(Exception e){
Log.e("ERROR", "ConstructorException: " + e);
}
finally{
Log.e("ERROR", "ReceiveCom constructor: END ");
}
if(Build.VERSION.SDK_INT >= 17){
if(AutomaticGainControl.isAvailable()){
Log.e("INFO", "AGC is available");
//AutomaticGainControl.create(audioRecord.getAudioSessionId());
}
}
}
protected Void doInBackground(Void... arg0) {
audioRecord.setPositionNotificationPeriod(minBuffersize);
audioRecord.setRecordPositionUpdateListener( new OnRecordPositionUpdateListener(){
@Override
public void onPeriodicNotification(AudioRecord myRecorder) {
int timeCounter=0,bitCounter=0,i=0;
boolean edgeTriggered=false, bitReady=false;
int dB,edgeThreshold=10000,idleThreshold=edgeThreshold/5;
for(i=2; i<audioResult; i=i+1){
if(edgeTriggered==false && audioData[i-2]<=idleThreshold && audioData[i-1]>=edgeThreshold && audioData[i]>=edgeThreshold ){
Log.e("DEBUG","audioData["+i+"]: "+audioData[i]);
timeCounter++;
edgeTriggered=true;
bitReady=false;
}
}
}
@Override
public void onMarkerReached(AudioRecord recorder) { }
});
audioRecord.startRecording();
while(isRunning){
audioResult = audioRecord.read(audioData, 0, minBuffersize);
}
audioRecord.stop();
audioRecord.release();
return null;
}
Sometimes there's an AGC or another pre-processing filter on the MIC source. You can try AudioSource.VOICE_RECOGNITION instead of AudioSource.MIC...
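Concretely, that is a one-line change in the constructor above (a sketch; how much pre-processing this actually disables varies by device, and on API 16+ you can still query AutomaticGainControl.isAvailable() as the code already does):
// VOICE_RECOGNITION is meant to deliver a flatter, less pre-processed signal on many devices.
audioRecord = new AudioRecord(AudioSource.VOICE_RECOGNITION, sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuffersize);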
