Mixing audio on Android (Java)

I'm developing applications for Android, and I need some kind of audio-mixing effect.
How can I combine two or more audio tracks into one audio track?
Or mix an audio track into an existing audio stream?
Can anyone give me samples so I can get a head start?

Maybe you could try to port this code to Android: "Mixing of multiple AudioInputStreams to one AudioInputStream".
This class takes a collection of AudioInputStreams and mixes them together. Being a subclass of AudioInputStream itself, reading from instances of this class behaves as if the mixdown result of the input streams were being read.
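If porting that class is too heavy, note that the core of any mixer is just summing samples. Here is a minimal sketch, assuming both tracks are 16-bit signed little-endian PCM at the same sample rate (the method name and the clipping strategy are my own, not taken from the class above):

// Mixes two 16-bit little-endian PCM buffers by summing and clipping.
public static byte[] mixPcm16(byte[] a, byte[] b) {
    int length = Math.min(a.length, b.length) & ~1; // whole samples only
    byte[] out = new byte[length];
    for (int i = 0; i < length; i += 2) {
        // Assemble each pair of bytes into a signed 16-bit sample.
        int sampleA = (short) ((a[i] & 0xFF) | (a[i + 1] << 8));
        int sampleB = (short) ((b[i] & 0xFF) | (b[i + 1] << 8));
        int mixed = sampleA + sampleB;
        // Clip to the 16-bit range to avoid wrap-around distortion.
        mixed = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, mixed));
        out[i] = (byte) mixed;
        out[i + 1] = (byte) (mixed >> 8);
    }
    return out;
}

Dividing the sum by two instead of clipping avoids distortion at the cost of overall volume; which trade-off you want depends on the material.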

Related

Java Sound API: Recording and monitoring same input source

I'm trying to record from the microphone to a wav file as per this example. At the same time, I need to be able to test the input level/volume and send an alert if it's too low. I've tried what's described in this link, and it seems to work OK.
The issue comes when trying to record and read bytes at the same time using one TargetDataLine (bytes read for monitoring are skipped for recording, and vice versa).
Another thing is that these are long processes (probably hours), so memory usage should be considered.
How should I proceed here? Is there any way to clone the TargetDataLine? Can I buffer a number of bytes while writing them with AudioSystem.write()? Is there any other way to write to a .wav file without filling system memory?
Thanks!
If you are using a TargetDataLine for capturing audio similar to the example given in the Java Tutorials, then you have access to a byte array called "data". You can loop through this array to test the volume level before outputting it.
To do the volume testing, you will have to convert the bytes to some sort of sensible PCM data. For example, if the format is 16-bit stereo little-endian, you might take two bytes and assemble them into either a signed short or a signed, normalized float, and then test that value.
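As a sketch, a level check over one captured buffer of 16-bit little-endian samples might look like this (the method name is illustrative, not from the tutorial):

// Computes the RMS level (0.0 to 1.0) of 16-bit little-endian PCM data.
static double rmsLevel(byte[] data, int bytesRead) {
    double sumOfSquares = 0;
    int sampleCount = bytesRead / 2;
    for (int i = 0; i + 1 < bytesRead; i += 2) {
        // Assemble two bytes into a signed short, then normalize to [-1, 1].
        short sample = (short) ((data[i] & 0xFF) | (data[i + 1] << 8));
        double normalized = sample / 32768.0;
        sumOfSquares += normalized * normalized;
    }
    return sampleCount == 0 ? 0 : Math.sqrt(sumOfSquares / sampleCount);
}

If the returned level stays below a threshold you choose for long enough, fire the alert.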
I apologize for not looking more closely at your examples before posting my "solution".
I'm going to suggest that you extend InputStream, making a customized version that also performs the volume test. Override the read method so that the bytes it returns also pass through your volume-testing code. You'll have to modify the volume-testing code to work on a per-byte basis and to pass the bytes through unchanged.
You should then be able to use this extended InputStream as an argument when you create the AudioInputStream for the output-to-wav stage.
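A rough outline of that wrapper, with illustrative names (the level math can reuse the rmsLevel sketch above):

import java.io.IOException;
import java.io.InputStream;

// Passes microphone bytes through unchanged while measuring their level.
class MonitoredInputStream extends InputStream {
    private final InputStream source;

    MonitoredInputStream(InputStream source) { this.source = source; }

    @Override
    public int read() throws IOException {
        return source.read();
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        int n = source.read(b, off, len);
        if (n > 0) inspectLevel(b, off, n); // e.g. feed an RMS accumulator
        return n; // bytes reach the wav writer untouched
    }

    private void inspectLevel(byte[] b, int off, int n) {
        // Assemble shorts, keep a running RMS, alert when it stays too low.
    }
}

You can then hand new AudioInputStream(new MonitoredInputStream(rawStream), format, AudioSystem.NOT_SPECIFIED) to AudioSystem.write(), so recording and monitoring share one pass over the data.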
I've used this approach to save audio successfully with two kinds of data sources: once from an array populated beforehand, and once from a streaming audio mix passing through a "mixer" I wrote to combine audio data sources. The latter is closer to what you need to do. I haven't done it from a microphone source, but as far as I can tell the same approach should work.

How to insert blank bytes while recording audio using the Java Sound API?

I am writing software in Java to record small presentations on your PC.
The software has three components as of now:
a Webcam Recorder -> uses OpenCV
a Microphone Recorder -> uses the Java Sound API
a PPT displayer -> uses the Apache POI API
I am planning to add a writing-pad component later.
The approach is to record the visual and audio components separately, then merge the video and audio files with an ffmpeg system call.
In the GUI, I have provided a button to mute the microphone while recording; video keeps being recorded (perhaps to avoid some noise while recording).
In the code, however, when the audio input is paused I plan to append zero-valued bytes to the audio byte array, so that the audio and video have the same length before merging.
Is this approach good?
If yes, how do I know how many bytes to insert into the byte array to make it the same length as the video?
If no, please suggest some alternative approaches to this problem.
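On the byte-count question: the AudioFormat you record with gives the byte rate directly, so one way to compute the padding is a sketch like this (assuming signed PCM and that you time how long the mute lasted; the format values are examples):

// Frame rate * frame size = bytes per second of audio.
AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
double muteSeconds = 2.5; // measured length of the muted interval
long silentBytes = (long) (muteSeconds * format.getFrameRate() * format.getFrameSize());
silentBytes -= silentBytes % format.getFrameSize(); // stay frame-aligned
byte[] silence = new byte[(int) silentBytes]; // zeros are silence in signed PCM

Note that zero bytes are silence only for signed PCM; for 8-bit unsigned formats the midpoint value 0x80 is silence.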

Android play and record radio stream simultaneously

I have an Android application that streams internet radio from an m3u URL. It plays the stream in a MediaPlayer via setDataSource(url). This works great, because I don't need to take care of reading the input stream, buffering it, repeatedly feeding it to the MediaPlayer while seeking to the last position, and so on.
But now I need to play and also record the stream. How should I do that? What are your recommendations?
Use NanoHTTPD? (But how? Where should I start studying my own webserver logic?)
Use a SocketHandler somehow?
Or any other solution? I don't even know where to start studying this problem.
Many thanks
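One simple approach, at the cost of a second network connection, is to let MediaPlayer keep playing the URL directly while a background thread opens its own connection and copies the raw bytes to a file. A sketch with plain java.net (the recording flag is an assumed stop switch, and remember that an .m3u URL is a playlist, so you would record from the actual stream URL listed inside it):

import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;

class StreamRecorder {
    volatile boolean recording = true; // assumed flag, toggled from the UI

    // Run on a background thread: copies the raw stream bytes to a file
    // while MediaPlayer plays the same URL over its own connection.
    void recordStream(String streamUrl, File outFile) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(streamUrl).openConnection();
        try (InputStream in = conn.getInputStream();
             OutputStream out = new FileOutputStream(outFile)) {
            byte[] buffer = new byte[8192];
            int n;
            while (recording && (n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
        } finally {
            conn.disconnect();
        }
    }
}

A local proxy (NanoHTTPD or a raw ServerSocket) that tees the bytes to both MediaPlayer and a file would avoid the duplicate download, but it is considerably more work.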

Changing audio input using Java?

I'd like to modify the audio input stream, the stream that would come from my microphone.
I have looked through the javax.sound package API, but did not entirely understand it, nor how to modify direct sound input.
Does anyone here know how to do that, or know an API that is capable of doing it?
You want a mixture of things:
The Java Sound system: http://www.oracle.com/technetwork/java/index-139508.html
A trail for it: http://docs.oracle.com/javase/tutorial/sound/index.html
Using audio controls: http://docs.oracle.com/javase/1.5.0/docs/guide/sound/programmer_guide/chapter6.html (part of a wider set of documentation)
If you are able to give more information about what you want to do to the audio stream, it's likely we'll be able to give you more specific advice.
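As a concrete starting point, here is a sketch that captures from the default microphone with javax.sound.sampled and applies a simple gain change to each 16-bit sample as it is read; the gain factor is just a placeholder for whatever modification you have in mind:

import javax.sound.sampled.*;

public class MicGainDemo {
    public static void main(String[] args) throws LineUnavailableException {
        // 44.1 kHz, 16-bit, mono, signed, little-endian.
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
        TargetDataLine line = AudioSystem.getTargetDataLine(format);
        line.open(format);
        line.start();

        byte[] buffer = new byte[4096];
        int n = line.read(buffer, 0, buffer.length);
        for (int i = 0; i + 1 < n; i += 2) {
            // Assemble a little-endian 16-bit sample, scale it, store it back.
            int sample = (short) ((buffer[i] & 0xFF) | (buffer[i + 1] << 8));
            sample = (int) (sample * 0.5); // example modification: halve the volume
            buffer[i] = (byte) sample;
            buffer[i + 1] = (byte) (sample >> 8);
        }
        // buffer now holds the modified audio, ready to play or save.
        line.stop();
        line.close();
    }
}

In a real program the read-modify loop runs continuously on its own thread; this just shows where the per-sample modification slots in.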

Audio playing too fast

If any of my fellow Xuggler users can tell me what I'm doing wrong, that would be awesome! I am doing the following:
Reading from ogv (ogg video)
Queueing the audio and video
Writing back to ogv
Sounds simple, right? The problem I am experiencing in my QueueMixer class is that the output file plays the audio twice as fast as it should, and no matter what I check or change with regard to the PTS, it doesn't improve.
The eclipse project and all files being used are at this link:
http://dl.dropbox.com/u/7316897/paul-xuggled.zip
To run a test, compile and execute the StreamManager class; a test ogv file is included. Before anyone asks: yes, I have to queue the data, as it will be mixed with other data in a future version.
Audio has a sample rate. Make sure that you copy the sample rate from the input to the output; otherwise the system might use a default (such as 44 kHz when the original data was sampled at 22 kHz), which would create exactly this effect.
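In Xuggler terms, that means copying the rate (and channel count) from the input IStreamCoder when configuring the output one, along these lines (inCoder and outCoder being the two coders in your setup):

// Reuse the input's audio parameters instead of letting defaults slip in.
outCoder.setSampleRate(inCoder.getSampleRate());
outCoder.setChannels(inCoder.getChannels());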
The fix is to multiply the sample count by two when extracting the samples from the ShortBuffer:
// getNumSamples() counts sample instants; with stereo audio each instant
// yields two short values, so the array must be twice that size.
short[] samples = new short[(int) audioSamples.getNumSamples() * 2];
audioSamples.getByteBuffer().asShortBuffer().get(samples);
Having only half the samples causes the audio to play twice as fast.
