I'm trying to play six audio tracks simultaneously on the click of a JButton, but on click it plays the first track and waits until it finishes before playing the second track, and so on. Here is my code:
button.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        if (e.getSource() == button) {
            System.out.println("Button Pressed");
            AudioPlayerExample2 player1 = new AudioPlayerExample2();
            AudioPlayerExample2 player2 = new AudioPlayerExample2();
            AudioPlayerExample2 player3 = new AudioPlayerExample2();
            AudioPlayerExample2 player4 = new AudioPlayerExample2();
            AudioPlayerExample2 player5 = new AudioPlayerExample2();
            AudioPlayerExample2 player6 = new AudioPlayerExample2();
            player1.play(track1);
            player2.play(track2);
            player3.play(track3);
            player4.play(track4);
            player5.play(track5);
            player6.play(track6);
        }
    }
});
and here is the imported audio player class:
public class AudioPlayerExample2 {
    private static final int BUFFER_SIZE = 4096;

    public void play(String audioFilePath) {
        File audioFile = new File(audioFilePath);
        try {
            AudioInputStream audioStream = AudioSystem.getAudioInputStream(audioFile);
            AudioFormat format = audioStream.getFormat();
            DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
            SourceDataLine audioLine = (SourceDataLine) AudioSystem.getLine(info);
            audioLine.open(format);
            audioLine.start();
            System.out.println("Playback started.");
            byte[] bytesBuffer = new byte[BUFFER_SIZE];
            int bytesRead = -1;
            while ((bytesRead = audioStream.read(bytesBuffer)) != -1) {
                audioLine.write(bytesBuffer, 0, bytesRead);
            }
            audioLine.drain();
            audioLine.close();
            audioStream.close();
            System.out.println("Playback completed.");
        } catch (UnsupportedAudioFileException ex) {
            System.out.println("The specified audio file is not supported.");
            ex.printStackTrace();
        } catch (LineUnavailableException ex) {
            System.out.println("Audio line for playing back is unavailable.");
            ex.printStackTrace();
        } catch (IOException ex) {
            System.out.println("Error playing the audio file.");
            ex.printStackTrace();
        }
    }

    public static void main(String[] args) {
        String audioFilePath = "";
        AudioPlayerExample2 player = new AudioPlayerExample2();
        player.play(audioFilePath);
    }
}
While a track is playing, the button also remains pressed, so I can't use my volume JSlider either. Thanks for the help!
The way you've written the play method, it will block until a stream has completely played, meaning the streams will play one after the other. One option is to fork a new thread for each stream. This avoids the blocking problem but introduces another one: the threads will all be in a race to start up, so the streams won't necessarily all start at the exact same time (although you could use a signal to get them pretty close to synchronized, as in the sketch below).
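For illustration, a rough sketch of that thread-per-stream option (not tested; it assumes track1 through track6 are the file-path strings from the question, and uses java.util.concurrent.CountDownLatch as the start signal):

// Each thread blocks on the latch, so all six start as close to
// simultaneously as the scheduler allows.
final CountDownLatch startSignal = new CountDownLatch(1);
for (final String track : new String[] {track1, track2, track3, track4, track5, track6}) {
    new Thread(new Runnable() {
        public void run() {
            try {
                startSignal.await();                   // wait for the common start signal
                new AudioPlayerExample2().play(track); // play catches its own exceptions
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }).start();
}
startSignal.countDown();                               // release all six threads at once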
A better approach, I think, is to read from all of the files and write to one SourceDataLine in a single thread. This means you have to mix the signals together yourself. Assuming all of your files have the same sample rate and bit depth, it is not too difficult. I've assumed 16-bit (big-endian) samples; if your files are different, you can figure out how to deal with that.
public void play(String[] audioFilePath) throws Exception {
    int numStreams = audioFilePath.length;

    // Open all of the file streams
    AudioInputStream[] audioStream = new AudioInputStream[numStreams];
    for (int i = 0; i < numStreams; i++)
        audioStream[i] = AudioSystem.getAudioInputStream(new File(audioFilePath[i]));

    // Open the audio line.
    AudioFormat format = audioStream[0].getFormat();
    DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
    SourceDataLine audioLine = (SourceDataLine) AudioSystem.getLine(info);
    audioLine.open(format);
    audioLine.start();

    while (true) {
        // Read a buffer from each stream and mix into an array of doubles.
        byte[] bytesBuffer = new byte[BUFFER_SIZE];
        double[] mixBuffer = new double[BUFFER_SIZE / 2];
        int maxSamplesRead = -1;
        for (int i = 0; i < numStreams; i++) {
            int bytesRead = audioStream[i].read(bytesBuffer);
            if (bytesRead != -1) {
                int samplesRead = bytesRead / 2;
                if (samplesRead > maxSamplesRead) {
                    maxSamplesRead = samplesRead;
                }
                for (int j = 0; j < bytesRead / 2; j++) {
                    // big-endian 16-bit sample; mask the low byte to avoid sign extension
                    double sample = ((bytesBuffer[j * 2] << 8) | (bytesBuffer[j * 2 + 1] & 0xFF)) / 32768.0;
                    mixBuffer[j] += sample;
                }
            }
        }
        // Convert the mixed samples back into a byte array and play.
        if (maxSamplesRead > 0) {
            for (int i = 0; i < maxSamplesRead; i++) {
                // rescale data between -1 and 1
                mixBuffer[i] /= numStreams;
                // and now back to 16-bit
                short sample16 = (short) (mixBuffer[i] * 32767);
                // and back to bytes
                bytesBuffer[i * 2] = (byte) (sample16 >> 8);
                bytesBuffer[i * 2 + 1] = (byte) sample16;
            }
            audioLine.write(bytesBuffer, 0, maxSamplesRead * 2);
        } else {
            // All of the streams are empty so clean up.
            audioLine.drain();
            audioLine.close();
            for (int i = 0; i < numStreams; i++)
                audioStream[i].close();
            break;
        }
    }
}
Call it by passing an array of filenames (which I would recommend replacing track1, track2, etc. with):
button.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        if (e.getSource() == button) {
            System.out.println("Button Pressed");
            AudioPlayerExample2 player = new AudioPlayerExample2();
            try {
                player.play(allTracks);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }
});
A third alternative that might be even better is to derive a class from InputStream that supports multiple files and does the mixing internally. With this approach you could reuse most of your existing AudioPlayerExample2 class, but there would be only one instance of it. It is a bit more involved than I care to get into right now, but a rough sketch follows.
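Here is a minimal, untested sketch of that idea (the class name and structure are hypothetical, not a finished implementation); it assumes the same 16-bit big-endian format as above:

import java.io.IOException;
import java.io.InputStream;
import javax.sound.sampled.AudioInputStream;

// Hypothetical sketch: an InputStream that mixes several equally-formatted
// 16-bit big-endian streams as it is read.
public class MixingInputStream extends InputStream {
    private final AudioInputStream[] streams;

    public MixingInputStream(AudioInputStream[] streams) {
        this.streams = streams;
    }

    @Override
    public int read() throws IOException {
        // Single-byte reads would split samples; use read(byte[], int, int) instead.
        throw new UnsupportedOperationException("use read(byte[], int, int)");
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        len -= len % 2;                          // keep whole 16-bit samples
        if (len < 2) return 0;
        byte[] tmp = new byte[len];
        double[] mix = new double[len / 2];
        int maxBytes = -1;
        for (AudioInputStream s : streams) {
            int n = s.read(tmp, 0, len);
            if (n > maxBytes) maxBytes = n;
            for (int j = 0; j + 1 < n; j += 2) {
                mix[j / 2] += ((tmp[j] << 8) | (tmp[j + 1] & 0xFF)) / 32768.0;
            }
        }
        if (maxBytes <= 0) return -1;            // every stream is exhausted
        for (int j = 0; j < maxBytes / 2; j++) {
            double scaled = mix[j] / streams.length;   // rescale to [-1, 1]
            short s16 = (short) (scaled * 32767);
            b[off + j * 2] = (byte) (s16 >> 8);
            b[off + j * 2 + 1] = (byte) s16;
        }
        return maxBytes;
    }
}

You would then wrap an instance in a new AudioInputStream(mixer, format, length) and hand that to your existing play method.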
P.S. I haven't tried compiling any of this. I'm just trying to get across the idea.
I am feeding audio from an electric cello into the mic port on my computer, and I would like my program to understand when no audio is being input and, if audio is being input, what note/frequency is being played.
I am able to get the cello to play through the TargetDataLine and out to the SourceDataLine in Java. I also implemented a live frequency-detection portion using an FFT, following Java: How to get current frequency of audio input?, but it doesn't work that well with the cello. It does perform somewhat well when I whistle, however.
I would appreciate any guidance on how to use the information gained from the TargetDataLine and the cello output to work out what is being played. Alternative approaches, such as using different applications, are welcome.
AudioFormat format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 44100, 16, 2, 4, 44100, false);
try {
    // making SourceDataLine for writing
    DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
    final SourceDataLine sourceLine = (SourceDataLine) AudioSystem.getLine(info);
    sourceLine.open(format/*, bufferSize*/); // 5800 is the smallest buffer size I got to work so far

    // making TargetDataLine for getting in
    info = new DataLine.Info(TargetDataLine.class, format);
    final TargetDataLine targetLine = (TargetDataLine) AudioSystem.getLine(info);
    targetLine.open(format);

    final byte[] buf = new byte[2048]; // <--- increase this for higher frequency resolution
    final int numberOfSamples = buf.length / format.getFrameSize();
    final JavaFFT fft = new JavaFFT(numberOfSamples);

    Thread liveThread = new Thread() {
        @Override
        public void run() {
            int readBytes;
            try {
                while (true) {
                    readBytes = targetLine.read(buf, 0, buf.length);
                    sourceLine.write(buf, 0, readBytes);
                    final float[] samples = decode(buf, format);
                    final float[][] transformed = fft.transform(samples);
                    final float[] realPart = transformed[0];
                    final float[] imaginaryPart = transformed[1];
                    final double[] magnitudes = toMagnitudes(realPart, imaginaryPart);
                    System.out.println("length" + magnitudes.length);
                    System.out.println(ecello.findMaxMagnitude(magnitudes));
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };

    targetLine.start();
    sourceLine.start();
    liveThread.start();
    System.out.println("Started recording...");
    Thread.sleep(3000000);
    targetLine.stop();
    targetLine.close();
    System.out.println("Ended recording");
    System.exit(0);
} catch (Exception e) {
    e.printStackTrace();
}
}
private int findMaxMagnitude(double[] input) {
    // Returns the index of the maximum magnitude in the array
    double max = input[0];
    double temp;
    int index = 0;
    for (int i = 1; i < input.length; i++) {
        temp = input[i];
        if (temp > max) {
            max = temp;
            index = i;
        }
    }
    return index;
}
Using this FFT on the cello input has not given good results. I think I can detect when no input is being played by checking the magnitude of the strongest frequency bin and seeing whether it passes a threshold, but that is future work (a rough sketch of the idea is below).
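A minimal sketch of that threshold idea (SILENCE_THRESHOLD is a made-up placeholder to tune against the mic's noise floor, and it assumes magnitudes[i] corresponds to FFT bin i):

private static final double SILENCE_THRESHOLD = 0.01; // placeholder; tune empirically

// Uses findMaxMagnitude from above; returns true if something is playing.
// Call as isPlaying(magnitudes, 44100f, numberOfSamples).
private boolean isPlaying(double[] magnitudes, float sampleRate, int fftSize) {
    int peak = findMaxMagnitude(magnitudes);
    double frequency = peak * sampleRate / fftSize; // convert bin index to Hz
    System.out.println("Strongest bin at " + frequency + " Hz");
    return magnitudes[peak] > SILENCE_THRESHOLD;
}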
I am getting live audio streamed over the network in the form of RTP packets, and I have to write code to capture, buffer, and play the audio stream.
Problem
To solve this I have written two threads, one to capture the audio and another to play it. Now when I start both threads, my capture thread runs slower than the playing thread :(
Buffer Requirements
RTP Audio Packets.
8 kHz, 16-bit Linear Samples (Linear PCM).
4 frames of 20 ms audio will be sent in each RTP Packet.
Do not play until AudioStart=24 (# of 20 ms frames) have arrived.
While playing, if the # of 20 ms frames in the buffer reaches 0, stop playing until AudioStart frames are buffered, then restart.
While playing, if the # of 20 ms frames in the buffer exceeds AudioBufferHigh=50, then delete 24 frames (in the easiest manner: delete from the buffer or just drop the next 6 RTP messages).
What I have done so far...
Code
BufferManager.java
public abstract class BufferManager {
    protected static final Integer ONE = new Integer(1);
    protected static final Integer TWO = new Integer(2);
    protected static final Integer THREE = new Integer(3);
    protected static final Integer BUFFER_SIZE = 5334; // 5.334KB
    protected static volatile Map<Integer, ByteArrayOutputStream> bufferPool = new ConcurrentHashMap<>(3, 0.9f, 2);
    protected static volatile Integer captureBufferKey = ONE;
    protected static volatile Integer playingBufferKey = ONE;
    protected static Boolean running;
    protected static volatile Integer noOfFrames = 0;

    public BufferManager() {
        //captureBufferKey = ONE;
        //playingBufferKey = ONE;
        //noOfFrames = new Integer(0);
    }

    protected void switchCaptureBufferKey() {
        if (ONE.intValue() == captureBufferKey.intValue())
            captureBufferKey = TWO;
        else if (TWO.intValue() == captureBufferKey.intValue())
            captureBufferKey = THREE;
        else
            captureBufferKey = ONE;
        //printBufferState("SWITCHCAPTURE");
    } // End of switchCaptureBufferKey() method.

    protected void switchPlayingBufferKey() {
        if (ONE.intValue() == playingBufferKey.intValue())
            playingBufferKey = TWO;
        else if (TWO.intValue() == playingBufferKey.intValue())
            playingBufferKey = THREE;
        else
            playingBufferKey = ONE;
    } // End of switchPlayingBufferKey() method.

    protected static AudioFormat getFormat() {
        float sampleRate = 8000;
        int sampleSizeInBits = 16;
        int channels = 1;
        boolean signed = true;
        boolean bigEndian = true;
        return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
    }

    protected int getBufferSize() {
        return bufferPool.get(ONE).size()
                + bufferPool.get(TWO).size()
                + bufferPool.get(THREE).size();
    }

    protected static void printBufferState(String flag) {
        int a = bufferPool.get(ONE).size();
        int b = bufferPool.get(TWO).size();
        int c = bufferPool.get(THREE).size();
        System.out.println(flag + " == TOTAL : [" + (a + b + c) + " bytes] ");
        // int a, b, c;
        // System.out.println(flag + "1 : [" + (a = bufferPool.get(ONE).size()) + "bytes], 2 : [" + (b = bufferPool.get(TWO).size())
        //         + "bytes] 3 : [" + (c = bufferPool.get(THREE).size()) + "bytes], TOTAL : [" + (a + b + c) + "bytes] ");
    }
} // End of BufferManager class.
AudioCapture.java
public class AudioCapture extends BufferManager implements Runnable {
    private static final Integer RTP_HEADER_SIZE = 12;
    private InetAddress ipAddress;
    private DatagramSocket serverSocket;
    long lStartTime = 0;

    public AudioCapture(Integer port) throws UnknownHostException, SocketException {
        super();
        running = Boolean.TRUE;
        bufferPool.put(ONE, new ByteArrayOutputStream(BUFFER_SIZE));
        bufferPool.put(TWO, new ByteArrayOutputStream(BUFFER_SIZE));
        bufferPool.put(THREE, new ByteArrayOutputStream(BUFFER_SIZE));
        this.ipAddress = InetAddress.getByName("0.0.0.0");
        serverSocket = new DatagramSocket(port, ipAddress);
    }

    @Override
    public void run() {
        System.out.println();
        byte[] receiveData = new byte[1300];
        DatagramPacket receivePacket = null;
        lStartTime = System.currentTimeMillis();
        receivePacket = new DatagramPacket(receiveData, receiveData.length);
        byte[] packet = new byte[receivePacket.getLength() - RTP_HEADER_SIZE];
        ByteArrayOutputStream buff = bufferPool.get(captureBufferKey);
        while (running) {
            if (noOfFrames <= 50) {
                try {
                    serverSocket.receive(receivePacket);
                    packet = Arrays.copyOfRange(receivePacket.getData(), RTP_HEADER_SIZE, receivePacket.getLength());
                    if ((buff.size() + packet.length) > BUFFER_SIZE) {
                        switchCaptureBufferKey();
                        buff = bufferPool.get(captureBufferKey);
                    }
                    buff.write(packet);
                    noOfFrames += 4;
                } catch (SocketException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                } // End of try-catch block.
            } else {
                //System.out.println("Packet Ignored, Buffer reached its maximum limit ");
            } // End of if-else block.
        } // End of while loop.
    } // End of run() method.
}
AudioPlayer.java
public class AudioPlayer extends BufferManager implements Runnable {
    long lStartTime = 0;

    public AudioPlayer() {
        super();
    }

    @Override
    public void run() {
        AudioFormat format = getFormat();
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
        SourceDataLine line = null;
        try {
            line = (SourceDataLine) AudioSystem.getLine(info);
            line.open(format);
            line.start();
        } catch (LineUnavailableException e1) {
            e1.printStackTrace();
        }
        while (running) {
            if (noOfFrames >= 24) {
                ByteArrayOutputStream out = null;
                try {
                    out = bufferPool.get(playingBufferKey);
                    InputStream input = new ByteArrayInputStream(out.toByteArray());
                    byte buffer[] = new byte[640];
                    int count;
                    while ((count = input.read(buffer, 0, buffer.length)) != -1) {
                        if (count > 0) {
                            InputStream in = new ByteArrayInputStream(buffer);
                            AudioInputStream ais = new AudioInputStream(in, format, buffer.length / format.getFrameSize());
                            byte buff[] = new byte[640];
                            int c = 0;
                            if ((c = ais.read(buff)) != -1)
                                line.write(buff, 0, buff.length);
                        }
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
                /*byte buffer[] = new byte[1280];
                try {
                    int count;
                    while ((count = ais.read(buffer, 0, buffer.length)) != -1) {
                        if (count > 0) {
                            line.write(buffer, 0, count);
                        }
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }*/
                out.reset();
                noOfFrames -= 4;
                try {
                    if (getBufferSize() >= 10240) {
                        Thread.sleep(15);
                    } else if (getBufferSize() >= 5120) {
                        Thread.sleep(25);
                    } else if (getBufferSize() >= 0) {
                        Thread.sleep(30);
                    }
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            } else {
                // System.out.println("Number of frames :- " + noOfFrames);
            }
        }
    } // End of run() method.
} // End of AudioPlayer class.
Any help or a pointer to a helpful link would be appreciated. Thanks...
This answer explains a few challenges with streaming.
In a nutshell, your client needs to deal with two issues:
1) The clocks (crystals) on the client and server are not perfectly in sync. The server may be a fraction of a Hz faster or slower than the client. The client continuously infers the clock rate of the server by examining the rate at which RTP packets are delivered, then adjusts the playback rate via sample rate conversion. So instead of playing back at 48k, it may play back at 48000.0001 Hz.
2) Packet loss, out-of-order arrivals, etc. must be dealt with. If you lose packets, you still need to keep a placeholder for them in your buffer stream; otherwise your audio will skip, sound crackly, and become unaligned. The simplest method is to replace missing packets with silence, but the volume of adjacent packets should be adjusted to avoid sharp envelope changes snapping to 0.
Your design seems a bit unorthodox. I have had success using a ring buffer instead (a sketch follows below). You will have to deal with edge cases as well.
I always state that streaming media is not a trivial task.
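As an illustration only, here is a minimal sketch of such a frame ring buffer (a hypothetical class, not a production jitter buffer): fixed-size slots of one 20 ms frame each, with the oldest frame dropped on overrun and silence substituted on underrun.

public class FrameRingBuffer {
    private final byte[][] slots;
    private final int frameBytes;
    private int writeIndex, readIndex, count;

    public FrameRingBuffer(int capacity, int frameBytes) {
        this.slots = new byte[capacity][];
        this.frameBytes = frameBytes;
    }

    public synchronized void put(byte[] frame) {
        if (count == slots.length) {          // overrun: drop the oldest frame
            readIndex = (readIndex + 1) % slots.length;
            count--;
        }
        slots[writeIndex] = frame;
        writeIndex = (writeIndex + 1) % slots.length;
        count++;
    }

    public synchronized byte[] take() {
        if (count == 0) {                     // underrun: hand back one frame of silence
            return new byte[frameBytes];
        }
        byte[] frame = slots[readIndex];
        readIndex = (readIndex + 1) % slots.length;
        count--;
        return frame;
    }

    public synchronized int size() {
        return count;
    }
}

The capture thread would put() each decoded frame and the player thread would take() one frame per 20 ms tick, applying the AudioStart/AudioBufferHigh rules from the question around size().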
Hey guys, so I'm trying to read a stream from a Bluetooth device that is continuously streaming integers like this:
-11
121
123
1234
-11
I have everything working with some code I found online, but to do some of the processing the numbers need to be ints as opposed to Strings. parseInt is taking up too much CPU, and I tried using a buffered stream to no avail.
Here is the current method:
void beginListenForData()
{
    final Handler handler = new Handler();
    final byte delimiter = 10; // This is the ASCII code for a newline character

    stopWorker = false;
    readBufferPosition = 0;
    readBuffer = new byte[1024];
    workerThread = new Thread(new Runnable()
    {
        public void run()
        {
            while (!Thread.currentThread().isInterrupted() && !stopWorker)
            {
                try
                {
                    int bytesAvailable = mmInputStream.available();
                    if (bytesAvailable > 0)
                    {
                        byte[] packetBytes = new byte[bytesAvailable];
                        mmInputStream.read(packetBytes);
                        for (int i = 0; i < bytesAvailable; i++)
                        {
                            byte b = packetBytes[i];
                            if (b == delimiter)
                            {
                                byte[] encodedBytes = new byte[readBufferPosition];
                                System.arraycopy(readBuffer, 0, encodedBytes, 0, encodedBytes.length);
                                final String data = new String(encodedBytes, "US-ASCII");
                                readBufferPosition = 0;
                                handler.post(new Runnable()
                                {
                                    public void run()
                                    {
                                        myLabel.setText(data);
                                    }
                                });
                            }
                            else
                            {
                                readBuffer[readBufferPosition++] = b;
                            }
                        }
                    }
                }
                catch (IOException ex)
                {
                    stopWorker = true;
                }
            }
        }
    });
    workerThread.start();
}
If it makes a difference, the data is coming from an Arduino, which I can modify to change the way it streams.
Thanks!
Use a DataInputStream wrapped around a BufferedInputStream, and the readInt() method. This assumes network byte order in the integers, of course.
Forget all this arraycopy() stuff.
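A minimal sketch of what that looks like (assuming mmInputStream is the Bluetooth socket's InputStream from the question, and the Arduino is changed to send each value as a raw 4-byte big-endian int instead of ASCII lines):

// imports: java.io.BufferedInputStream, java.io.DataInputStream, java.io.IOException
DataInputStream in = new DataInputStream(new BufferedInputStream(mmInputStream));
try {
    while (!Thread.currentThread().isInterrupted()) {
        int value = in.readInt(); // blocks for 4 bytes; throws EOFException when the stream ends
        // process value directly; no String allocation, no parseInt
    }
} catch (IOException ex) {
    // connection closed or lost
}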
I'm trying to record sets of 4 seconds from the AudioRecord, process them, and then record again, and so on. I'm recording at 44100 samples/second, as you can see in the code below. I should mention that I record, or at least should record, sets of pulses at 19 kHz.
int frequency = 44100;
int blockSize = 44100;
int channelConfiguration = AudioFormat.CHANNEL_IN_MONO;
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
final int bufferSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration, audioEncoding);
audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, frequency, channelConfiguration,
        audioEncoding, blockSize * 8); // if I multiply blockSize by 4 it will only give 88200 buffer size

// start recording until explicitly stopped
while ( <stopCondition> ) {
    recData = new ByteArrayOutputStream();
    dos = new DataOutputStream(recData);
    short[] buffer = new short[blockSize * 4]; // Save the raw PCM
    // timePassed = false;
    // timer.cancel();
    // timer.start();
    audioRecord.startRecording();
    // while (!timePassed) {
    int bufferReadResult = audioRecord.read(buffer, 0, blockSize * 4);
    for (int i = 0; i < blockSize && i < bufferReadResult; i++) {
        try {
            dos.writeShort(buffer[i]);
            // buffer[i] = 0;
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    // }
    audioRecord.stop();
    try {
        dos.flush();
        dos.close();
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    // ... process the recorded data
As you can see from the (commented) code, I tried using a CountDownTimer that stops the recording after 4 seconds have passed. It was only reading from audioRecord 1 second at a time (the blockSize wasn't multiplied by 4), and I didn't know exactly whether the buffer gets overwritten, since the offset is 0 - I assume it does, but I'm still not sure :) Using this approach it didn't record the number of pulses played in 4 seconds - 16 pulses played, only 2-3 recorded.
Then I tried without the timer and read from audioRecord blockSize * 4 = 44100 (=1s) * 4 = 4s, but in Audacity I see only 1 second being recorded - somewhere in the code, not shown here, I write the recorded data to a file to check the output. Again the number of recorded pulses is not right, but that's obvious since I have only 1 second of recorded data.
Is there a better way of continuously recording sets of X seconds and processing them? Or, if my approaches are OK, what am I doing wrong?
Thank you.
Give the code below a try.
Timer myTimer = new Timer();
myTimer.schedule(new TimerTask() {
    @Override
    public void run() {
        startRecording(); // note: stop any previous MediaRecorder before starting a new one
    }
}, 0, 4000); // repeats every 4 seconds
startRecording Method
private void startRecording() {
    myFilename = getFilename();
    recorder = new MediaRecorder();
    recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    recorder.setOutputFormat(output_formats[currentFormat]);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
    recorder.setOutputFile(myFilename); // reuse the stored name rather than calling getFilename() twice
    recorder.setOnErrorListener(errorListener);
    recorder.setOnInfoListener(infoListener);
    try {
        recorder.prepare();
        recorder.start();
    } catch (IllegalStateException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
getFilename
private String getFilename() {
    String filepath = Environment.getExternalStorageDirectory().getPath();
    File file = new File(filepath, AUDIO_RECORDER_FOLDER);
    SimpleDateFormat sdfDate = new SimpleDateFormat("dd-MMM-yyyy hh-mm-ss");
    String currentDateandTime = sdfDate.format(new Date());
    if (!file.exists()) {
        file.mkdirs();
    }
    return (file.getAbsolutePath() + "/" + currentDateandTime + file_exts[currentFormat]);
}
This will start a new recording every 4 seconds, and getFilename will name each recording with the current date and time.
Hope this is what you were looking for; if I misunderstood your question, please correct me.
My Android application is getting data from a Polar Heart Rate Monitor through a Bluetooth connection.
My problem is that I am getting a string like this:
��������������������������������������������������
My code for getting the data:
final Handler handler = new Handler();
final byte delimiter = 10; // This is the ASCII code for a newline character

stopWorker = false;
readBufferPosition = 0;
readBuffer = new byte[1024];
workerThread = new Thread(new Runnable()
{
    public void run()
    {
        while (!Thread.currentThread().isInterrupted() && !stopWorker)
        {
            try
            {
                int bytesAvailable = mmInputStream.available();
                if (bytesAvailable > 0)
                {
                    byte[] packetBytes = new byte[bytesAvailable];
                    mmInputStream.read(packetBytes);
                    for (int i = 0; i < bytesAvailable; i++)
                    {
                        byte b = packetBytes[i];
                        if (b == delimiter)
                        {
                            byte[] encodedBytes = new byte[readBufferPosition];
                            // System.arraycopy(readBuffer, 0, encodedBytes, 0, encodedBytes.length);
                            final String data = new String(encodedBytes, "ASCII");
                            readBufferPosition = 0;
                            handler.post(new Runnable()
                            {
                                public void run()
                                {
                                    pulsText.setText(data);
                                }
                            });
                        }
                        else
                        {
                            readBuffer[readBufferPosition++] = b;
                        }
                    }
                }
            }
            catch (IOException ex)
            {
                stopWorker = true;
            }
        }
    }
});
workerThread.start();
I tried changing this line in a few ways, but I am still getting incorrect data:
final String data = new String(encodedBytes, "ASCII");
How can I solve this issue ?
Please help !!!
The sensor doesn't give you printable strings (like e.g. NMEA does) but binary data that you need to parse. You could look at the MyTracks Polar Sensor data parser for inspiration.
You are also using available() and read() incorrectly (though the way you use them may appear to work most of the time); see the sketch below.
The Units of Measurement "Heart of Glass" project (based on the Raspberry Pi Challenge at JavaOne) shows how this can be parsed into a typesafe Heartbeat unit for display or transfer to other systems.
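As a first debugging step (a sketch, not from either linked project), honor read()'s return value and dump the raw bytes as hex instead of building a String; the actual Polar frame layout then becomes visible:

byte[] buf = new byte[1024];
int n;
while ((n = mmInputStream.read(buf)) != -1) { // blocking read; no available() needed
    StringBuilder hex = new StringBuilder();
    for (int i = 0; i < n; i++) {
        hex.append(String.format("%02X ", buf[i] & 0xFF));
    }
    Log.d("Polar", hex.toString()); // inspect the frames, then parse the fields you need
}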