I have a problem counting the frequency of voice input from the audio input of my microphone. Can anyone help me with this?
I'm supposed to capture audio input from my microphone and count its frequency.
This is my code, just to show how I did it, in case anyone can identify a faulty implementation.
package STLMA;
/*
* To change this template, choose Tools | Templates
* and open the template in the editor.
*/
/**
*
* @author CATE GABRIELLE
*/
import java.io.*;
import javax.sound.sampled.*;
public class SpeechDetection {
boolean stopCapture = false;
ByteArrayOutputStream byteArrayOutputStream;
TargetDataLine targetDataLine; // This is the object that acquires data from
// the microphone and delivers it to the program
// the declaration of three instance variables used to create a SourceDataLine
// object that feeds data to the speakers on playback
AudioFormat audioFormat;
AudioInputStream audioInputStream;
SourceDataLine sourceDataLine;
double voiceFreq = 0;
FileOutputStream fout;
AudioFileFormat.Type fileType;
public static String closestSpeaker;
public SpeechDetection(){
captureAudio();
}
private void captureAudio(){
try{
audioFormat = getAudioFormat();
DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, audioFormat);
// object that describes the data line we need to handle the acquisition
// of the audio data from the microphone; the first parameter makes the audio
// data readable
targetDataLine = (TargetDataLine) AudioSystem.getLine(dataLineInfo);
// object to handle data acquisition from the microphone that matches
// the information encapsulated in the DataLine.Info object
targetDataLine.open(audioFormat);
targetDataLine.start();
Thread captureThread = new Thread(new CaptureThread());
captureThread.start();
} catch (Exception e) {
System.out.println(e);
System.exit(0);
}
}
private AudioFormat getAudioFormat(){
float sampleRate = 8000.0F; // samples acquired per second per channel (8000, 11025, 16000, 22050, 44100)
int sampleSizeInBits = 16; // bits used to describe each audio sample (8 or 16)
int channels = 1; // one channel for mono, two for stereo
boolean signed = true; // whether each sample takes both positive and negative values, or positive only
boolean bigEndian = false; // byte order of each sample
return new AudioFormat(sampleRate,sampleSizeInBits,channels,signed,bigEndian);
}
//Inner class to capture data from microphone
class CaptureThread extends Thread {
byte tempBuffer[] = new byte[8000];
// byte buffer variable to contain the raw audio data
int countzero;
// counter variable for the number of zeros
short convert[] = new short[tempBuffer.length];
// short array used to hold the audio samples for processing
// public void start(){
// Thread voices = new Thread(this);
// voices.start();
// }
@Override
public void run(){
// a continuous thread to process the continuous audio input
byteArrayOutputStream = new ByteArrayOutputStream(); // the object to write the
// raw audio input to the byte buffer variable
stopCapture = false;
try{
while(!stopCapture){
int cnt = targetDataLine.read(tempBuffer,0,tempBuffer.length);
// reads the raw audio input
// and returns the number of bytes actually read
byteArrayOutputStream.write(tempBuffer, 0, cnt);
// writing the number of bytes read to the
// container
try{
countzero = 0;
for(int i=0; i < tempBuffer.length; i++){
// the loop that stores the whole audio data
convert[i] = tempBuffer[i];
// to the convert variable which is a short data type,
if(convert[i] == 0){countzero++;}
// then counts the number of zeros
}
voiceFreq = (countzero/2)+1;
// estimates the frequency from the zero count
// and stores it in the voiceFreq variable
if(voiceFreq >= 80 && voiceFreq <= 350)
System.out.println("Voice " + voiceFreq);
else
System.out.println("Unvoice " + voiceFreq);
}catch(StringIndexOutOfBoundsException e)
{System.out.println(e.getMessage());}
Thread.sleep(0);
}
byteArrayOutputStream.close();
}catch (Exception e) {
System.out.println(e);
System.exit(0);
}
}
}
public static void main(String [] args){
SpeechDetection voiceDetector1 = new SpeechDetection();
// voiceDetector1.setSize(300,100);
// voiceDetector1.setDefaultCloseOperation(EXIT_ON_CLOSE);
// voiceDetector1.setVisible(true);
}
}
By the way, "voiceFreq" stands for voice frequency.
My goal here is to know whether the input is voice or noise.
I hope someone can help me with my problem. Thank you, and a happy New Year.
I would think for detecting whether something is a potential voice or a noise, one would want to do an FFT on a section of data and see whether the frequency components were within some range of "normal voice".
Maybe see Reliable and fast FFT in Java for some FFT information.
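To make the limitation concrete: with the 16-bit format in the question, each sample spans two bytes, so copying single bytes into a short array (as the posted loop does) never reconstructs the real samples, and counting zero bytes in one buffer is not a frequency. Below is a minimal sketch (class and method names are mine, not from the question) that assembles little-endian byte pairs into proper samples and estimates pitch from zero crossings, which is still crude but at least operates on real samples:

```java
public class PitchSketch {
    // Assemble 16-bit little-endian PCM bytes into signed shorts.
    static short[] bytesToShorts(byte[] raw) {
        short[] out = new short[raw.length / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (short) ((raw[2 * i] & 0xFF) | (raw[2 * i + 1] << 8));
        }
        return out;
    }

    // Estimate frequency from zero crossings: a periodic signal
    // crosses zero twice per cycle.
    static double zeroCrossingFreq(short[] samples, float sampleRate) {
        int crossings = 0;
        for (int i = 1; i < samples.length; i++) {
            if ((samples[i - 1] >= 0) != (samples[i] >= 0)) crossings++;
        }
        double seconds = samples.length / (double) sampleRate;
        return (crossings / 2.0) / seconds;
    }

    public static void main(String[] args) {
        // Synthesize one second of a 220 Hz sine at 8 kHz, 16-bit little-endian.
        float rate = 8000f;
        byte[] raw = new byte[2 * (int) rate];
        for (int i = 0; i < (int) rate; i++) {
            short s = (short) (Math.sin(2 * Math.PI * 220 * i / rate) * 10000);
            raw[2 * i] = (byte) (s & 0xFF);
            raw[2 * i + 1] = (byte) (s >> 8);
        }
        double f = zeroCrossingFreq(bytesToShorts(raw), rate);
        System.out.println(Math.round(f)); // approximately 220
    }
}
```

A zero-crossing estimate only behaves for roughly periodic signals; for a real voice/noise decision, an FFT of each buffer is the better road.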
I want to ask the oft-repeated question of how to record the audio sent to the speakers, but I would like some insights beyond the previous answers.
I went to this page: Capturing speaker output in Java
I saw this code posted by a developer:
import javax.sound.sampled.*;
import java.io.*;
public class JavaSoundRecorder {
// record duration, in milliseconds
static final long RECORD_TIME = 10000; // 10 seconds
// path of the wav file
File wavFile = new File("E:/RecordAudio.wav");
// format of audio file
AudioFileFormat.Type fileType = AudioFileFormat.Type.WAVE;
// the line from which audio data is captured
TargetDataLine line;
/**
* Defines an audio format
*/
AudioFormat getAudioFormat() {
float sampleRate = 16000;
int sampleSizeInBits = 8;
int channels = 1;
boolean signed = true;
boolean bigEndian = true;
AudioFormat format = new AudioFormat(sampleRate, sampleSizeInBits,
channels, signed, bigEndian);
return format;
}
/**
* Captures the sound and record into a WAV file
*/
void start() {
try {
AudioFormat format = getAudioFormat();
DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
// checks if system supports the data line
if (!AudioSystem.isLineSupported(info)) {
System.out.println("Line not supported");
System.exit(0);
}
line = (TargetDataLine) AudioSystem.getLine(info);
line.open(format);
line.start(); // start capturing
System.out.println("Start capturing...");
AudioInputStream ais = new AudioInputStream(line);
System.out.println("Start recording...");
// start recording
AudioSystem.write(ais, fileType, wavFile);
} catch (LineUnavailableException ex) {
ex.printStackTrace();
} catch (IOException ioe) {
ioe.printStackTrace();
}
}
/**
* Closes the target data line to finish capturing and recording
*/
void finish() {
line.stop();
line.close();
System.out.println("Finished");
}
/**
* Entry to run the program
*/
public static void main(String[] args) {
final JavaSoundRecorder recorder = new JavaSoundRecorder();
// creates a new thread that waits for a specified
// amount of time before stopping
Thread stopper = new Thread(new Runnable() {
public void run() {
try {
Thread.sleep(RECORD_TIME);
} catch (InterruptedException ex) {
ex.printStackTrace();
}
recorder.finish();
}
});
stopper.start();
// start recording
recorder.start();
}
}
Now I have some questions I want to ask.
This code runs OK on my Windows OS, but it doesn't work on my Ubuntu installation on the same machine (dual boot). On Ubuntu it records silence; I tried to get all the mixers but couldn't get it working.
I want to capture the output going to the speakers, but instead I am getting the output of the speakers: the sound of the vicinity, with very little of what I actually want.
Please answer my two queries above.
What do I want? I want the clean audio that is currently being played and fed to the speakers of my laptop. I don't want audio that has already been emitted and then re-recorded, because that sounds bad. I also need a reason why Ubuntu does not support this code. (This is vague info, but I am using BlueJ on Windows to run this and NetBeans on Ubuntu, without sudo.)
I saw some YouTube videos to understand the theory:
https://www.youtube.com/watch?v=GVtl19L9GxU
https://www.youtube.com/watch?v=PTs01qr9RlY
I read one and a half pages of the Oracle documentation here: https://docs.oracle.com/javase/tutorial/sound/accessing.html
There was this thing mentioned in the docs:
An applet running with the applet security manager can play, but not record, audio.
An application running with no security manager can both play and record audio.
An application running with the default security manager can play, but not record, audio.
But I don't think I turned on any security manager.
In the end I had no success with what I want to do. Instead of going further into the documentation, I thought I would ask the question here.
I want to play the microphone input in real time using the JavaFX MediaPlayer (to analyse its frequencies). The problem is that MediaPlayer only accepts Strings as a source. I know how to write the microphone input into a byte array and into a file.
Using the byte array as a source for the MediaPlayer is (for me) not possible. I tried using a temporary file, but that causes the following error:
Exception in thread "JavaFX Application Thread" MediaException: MEDIA_UNSUPPORTED : Empty signature!
I think this is because I'm using the file as input while I'm still writing new data into it. My full code so far:
public class Music {
static AudioFormat format;
static DataLine.Info info;
public static void input(int i, int j, int pinState) {
format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 44100, 16, 2, 4, 44100, false);
try {
info = new DataLine.Info(TargetDataLine.class, format);
final TargetDataLine targetLine = (TargetDataLine) AudioSystem.getLine(info);
targetLine.open();
AudioInputStream audioStream = new AudioInputStream(targetLine);
File temp = File.createTempFile("Input", ".wav");
temp.deleteOnExit();
Thread targetThread = new Thread() {
public void run() {
targetLine.start();
try {
AudioSystem.write(audioStream, AudioFileFormat.Type.WAVE, temp);
} catch (IOException e) {
e.printStackTrace();
}
}
};
targetThread.start();
Media media = new Media(temp.toURI().toURL().toString());
MediaPlayer player = new MediaPlayer(media);
player.setAudioSpectrumThreshold(-100);
player.setMute(false);
player.setAudioSpectrumListener(new AudioSpectrumListener() {
@Override
public void spectrumDataUpdate(double timestamp, double duration, float[] magnitudes, float[] phases) {
if(Var.nodeController[i] == 3) { //testing if the targetLine should keep on capturing sound
} else {
targetLine.stop();
targetLine.close();
player.stop();
}
}
});
player.play();
} catch (LineUnavailableException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
}
I need to find a solution to use the microphone input as MediaPlayer input, either using a file or a byte array or any other possible solution.
I am going to speculate that the following library might be helpful.
https://github.com/SzymonKatra/AudioAnalyzer
Note that it makes use of FFmpeg which claims to be:
A complete, cross-platform solution to record, convert and stream
audio and video.
The key component (based on my cursory look-over) seems to be the class FFTAnalyzer.java which the documentation says is based upon an FFT code/algorithm from Princeton.
So, the basic plan (I believe) would be the following:
obtain microphone input (a stream of bytes) from the TargetDataLine
convert N frames of bytes to normalized floats or doubles and package as an array to send to FFTAnalyzer.analyze(double[] samples)
request the analysis results via FFTAnalyzer.getAmplitudes()
examine the result for each equalization band and apply smoothing as appropriate
repeat
I'm unclear as to exactly how much of the library is needed. It could be that since you are not dealing with video or cross-platform issues, only a class or two from this library would be needed.
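Step 2 of that plan (converting frames of bytes to normalized doubles) might look like the following sketch, assuming 16-bit little-endian PCM as delivered by a typical TargetDataLine format; the class and method names here are mine, not part of AudioAnalyzer:

```java
public class FrameConverter {
    // Convert 16-bit little-endian PCM bytes into doubles in [-1, 1),
    // ready to hand to an FFT routine such as FFTAnalyzer.analyze(double[]).
    static double[] toNormalizedDoubles(byte[] frames) {
        double[] out = new double[frames.length / 2];
        for (int i = 0; i < out.length; i++) {
            // Low byte is unsigned, high byte carries the sign.
            int sample = (frames[2 * i] & 0xFF) | (frames[2 * i + 1] << 8);
            out[i] = sample / 32768.0; // divide by 2^15 to normalize
        }
        return out;
    }
}
```

The resulting array can then be passed to the analysis call in step 2 of the plan above.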
I am trying to add sound to a game I am making, but every time I try to load the sound, I get a Stream Closed Exception. I don't understand why this is happening.
Loads the sound:
public class WavPlayer extends Thread {
/**
* @param s The path of the wav file.
* @return The sound data loaded into the WavSound object
*/
public static WavSound loadSound(String s){
// Get an input stream
InputStream is = WavPlayer.class.getClassLoader().getResourceAsStream(s);
AudioInputStream audioStream;
try {
// Buffer the input stream
BufferedInputStream bis = new BufferedInputStream(is);
// Create the audio input stream and audio format
audioStream = AudioSystem.getAudioInputStream(bis); //!Stream Closed Exception occurs here
AudioFormat format = audioStream.getFormat();
// The length of the audio file
int length = (int) (audioStream.getFrameLength() * format.getFrameSize());
// The array to store the samples in
byte[] samples = new byte[length];
// Read the samples into array to reduce disk access
// (fast-execution)
DataInputStream dis = new DataInputStream(audioStream);
dis.readFully(samples);
// Create a sound container
WavSound sound = new WavSound(samples, format, (int) audioStream.getFrameLength());
// Don't start the sound on load
sound.setState(SoundState.STATE_STOPPED);
// Create a new player for each sound
new WavPlayer(sound);
return sound;
} catch (Exception e) {
// An error. Mustn't happen
}
return null;
}
// Private variables
private WavSound sound = null;
/**
* Constructs a new player with a sound and with an optional looping
*
* @param s The WavSound object
*/
public WavPlayer(WavSound s) {
sound = s;
start();
}
/**
* Runs the player in a separate thread
*/
@Override
public void run(){
// Get the byte samples from the container
byte[] data = sound.getData();
InputStream is = new ByteArrayInputStream(data);
try {
// Create a line for the required audio format
SourceDataLine line = null;
AudioFormat format = sound.getAudioFormat();
// Calculate the buffer size and create the buffer
int bufferSize = sound.getLength();
// System.out.println(bufferSize);
byte[] buffer = new byte[bufferSize];
// Create a new data line to write the samples onto
DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
line = (SourceDataLine) AudioSystem.getLine(info);
// Open and start playing on the line
try {
if (!line.isOpen()) {
line.open();
}
line.start();
} catch (Exception e){}
// The total bytes read
int numBytesRead = 0;
boolean running = true;
while (running) {
// Destroy this player if the sound is destroyed
if (sound.getState() == SoundState.STATE_DESTROYED) {
running = false;
// Release the line and release any resources used
line.drain();
line.close();
}
// Write the data only if the sound is playing or looping
if ((sound.getState() == SoundState.STATE_PLAYING)
|| (sound.getState() == SoundState.STATE_LOOPING)) {
numBytesRead = is.read(buffer, 0, buffer.length);
if (numBytesRead != -1) {
line.write(buffer, 0, numBytesRead);
} else {
// The samples are ended. So reset the position of the
// stream
is.reset();
// If the sound is not looping, stop it
if (sound.getState() == SoundState.STATE_PLAYING) {
sound.setState(SoundState.STATE_STOPPED);
}
}
} else {
// Not playing. so wait for a few moments
Thread.sleep(Math.min(1000 / Global.FRAMES_PER_SECOND, 10));
}
}
} catch (Exception e) {
// Do nothing
}
}
The error message I get is: "Exception in thread "main" java.io.IOException: Stream closed
at java.io.BufferedInputStream.getInIfOpen(BufferedInputStream.java:134)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
at java.io.DataInputStream.readInt(DataInputStream.java:370)
at com.sun.media.sound.WaveFileReader.getFMT(WaveFileReader.java:224)
at com.sun.media.sound.WaveFileReader.getAudioInputStream(WaveFileReader.java:140)
at javax.sound.sampled.AudioSystem.getAudioInputStream(AudioSystem.java:1094)
at stm.sounds.WavPlayer.loadSound(WavPlayer.java:42)
at stm.STM.&lt;init&gt;(STM.java:265)
at stm.STM.main(STM.java:363)"
Most probably the file path in this line is not correct:
WavPlayer sound1 = WavPlayer.loadSound("coin.wav");
You should pass the path of the 'coin.wav' file instead of just its name.
For instance, if it's under a folder named sounds, say right under the project root, that parameter should be 'sounds/coin.wav'.
The problem is in your static method loadSound. This method returns null when an exception is thrown. You catch the exception, but you do nothing with it.
NEVER use an empty catch.
Catch specific exceptions.
I would change your loadSound method signature to:
public static WavSound loadSound(String s) throws Exception // or, better, specific exceptions
and then write the method body without the try-catch.
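To illustrate the shape of that change without the rest of the game code, here is a self-contained sketch (class and method names are made up): it declares its exceptions, checks for the null stream that getResourceAsStream returns on a bad path, and round-trips a small WAV entirely in memory so nothing on disk is needed:

```java
import javax.sound.sampled.*;
import java.io.*;

public class WavLoaderSketch {
    // Declares the checked exceptions instead of hiding them in an empty catch.
    static byte[] loadSamples(InputStream raw)
            throws IOException, UnsupportedAudioFileException {
        if (raw == null) {
            // getResourceAsStream returns null on a wrong path; passing that
            // null on is what produces confusing failures later.
            throw new FileNotFoundException("resource not found on classpath");
        }
        // BufferedInputStream supplies the mark/reset support that
        // AudioSystem.getAudioInputStream requires.
        AudioInputStream ais = AudioSystem.getAudioInputStream(new BufferedInputStream(raw));
        byte[] samples = new byte[(int) (ais.getFrameLength() * ais.getFormat().getFrameSize())];
        new DataInputStream(ais).readFully(samples);
        return samples;
    }

    public static void main(String[] args) throws Exception {
        // Build a small WAV in memory: 0.1 s of silence at 8 kHz, 16-bit mono.
        AudioFormat fmt = new AudioFormat(8000f, 16, 1, true, false);
        byte[] pcm = new byte[1600];
        AudioInputStream src = new AudioInputStream(
                new ByteArrayInputStream(pcm), fmt, pcm.length / fmt.getFrameSize());
        ByteArrayOutputStream wav = new ByteArrayOutputStream();
        AudioSystem.write(src, AudioFileFormat.Type.WAVE, wav);

        byte[] samples = loadSamples(new ByteArrayInputStream(wav.toByteArray()));
        System.out.println(samples.length); // prints 1600
    }
}
```

If a method like this throws FileNotFoundException in the real program, the classpath-relative path (e.g. 'sounds/coin.wav') is the thing to fix.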
Using Java is it possible to capture the speaker output? This output is not being generated by my program but rather by other running applications. Can this be done with Java or will I need to resort to C/C++?
I had a Java-based app that used Java Sound to tap into the sound flowing through the system in order to make a trace of it. It worked well on my own (Windows-based) machine, but failed completely on some others.
It was determined that getting it working on those machines would take nothing short of an audio loop-back in either software or hardware (e.g. connecting a lead from the speaker 'out' jack to the microphone 'in' jack).
Since all I really wanted to do was plot the trace for music, and I figured out how to play the target format (MP3) in Java, it became unnecessary to pursue the other option further.
(I also heard that Java Sound on Mac was horribly broken, but I never looked closely into it.)
Java is not the best tool for dealing with the OS. If you need or want to use it for this task, you will probably end up using the Java Native Interface (JNI), linking to libraries compiled in other languages (probably C/C++).
Take an AUX cable, connect one end to the HEADPHONE JACK and the other end to the MICROPHONE JACK, and run this code:
https://www.codejava.net/coding/capture-and-record-sound-into-wav-file-with-java-sound-api
import javax.sound.sampled.*;
import java.io.*;
public class JavaSoundRecorder {
// record duration, in milliseconds
static final long RECORD_TIME = 60000; // 1 minute
// path of the wav file
File wavFile = new File("E:/Test/RecordAudio.wav");
// format of audio file
AudioFileFormat.Type fileType = AudioFileFormat.Type.WAVE;
// the line from which audio data is captured
TargetDataLine line;
/**
* Defines an audio format
*/
AudioFormat getAudioFormat() {
float sampleRate = 16000;
int sampleSizeInBits = 8;
int channels = 2;
boolean signed = true;
boolean bigEndian = true;
AudioFormat format = new AudioFormat(sampleRate, sampleSizeInBits,
channels, signed, bigEndian);
return format;
}
/**
* Captures the sound and record into a WAV file
*/
void start() {
try {
AudioFormat format = getAudioFormat();
DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
// checks if system supports the data line
if (!AudioSystem.isLineSupported(info)) {
System.out.println("Line not supported");
System.exit(0);
}
line = (TargetDataLine) AudioSystem.getLine(info);
line.open(format);
line.start(); // start capturing
System.out.println("Start capturing...");
AudioInputStream ais = new AudioInputStream(line);
System.out.println("Start recording...");
// start recording
AudioSystem.write(ais, fileType, wavFile);
} catch (LineUnavailableException ex) {
ex.printStackTrace();
} catch (IOException ioe) {
ioe.printStackTrace();
}
}
/**
* Closes the target data line to finish capturing and recording
*/
void finish() {
line.stop();
line.close();
System.out.println("Finished");
}
/**
* Entry to run the program
*/
public static void main(String[] args) {
final JavaSoundRecorder recorder = new JavaSoundRecorder();
// creates a new thread that waits for a specified
// amount of time before stopping
Thread stopper = new Thread(new Runnable() {
public void run() {
try {
Thread.sleep(RECORD_TIME);
} catch (InterruptedException ex) {
ex.printStackTrace();
}
recorder.finish();
}
});
stopper.start();
// start recording
recorder.start();
}
}
I'm trying to play a PCM file in Android using the AudioTrack class. I can get the file to play just fine, but I cannot reliably tell when playback has finished. AudioTrack.getPlayState says playback has stopped when it hasn't finished playing. I'm having the same problem with AudioTrack.setNotificationMarkerPosition, and I'm pretty sure my marker is set to the end of the file (although I'm not completely sure I'm doing it right). Likewise, playback continues when getPlaybackHeadPosition is at the end of the file and has stopped incrementing. Can anyone help?
I found that using audioTrack.setNotificationMarkerPosition(audioLength) and audioTrack.setPlaybackPositionUpdateListener worked for me. See the following code:
// Get the length of the audio stored in the file (16 bit so 2 bytes per short)
// and create a short array to store the recorded audio.
int audioLength = (int) (pcmFile.length() / 2);
short[] audioData = new short[audioLength];
DataInputStream dis = null;
try {
// Create a DataInputStream to read the audio data back from the saved file.
InputStream is = new FileInputStream(pcmFile);
BufferedInputStream bis = new BufferedInputStream(is);
dis = new DataInputStream(bis);
// Read the file into the music array.
int i = 0;
while (dis.available() > 0) {
audioData[i] = dis.readShort();
i++;
}
// Create a new AudioTrack using the same parameters as the AudioRecord.
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, RECORDER_SAMPLE_RATE, RECORDER_CHANNEL_OUT,
RECORDER_AUDIO_ENCODING, audioLength, AudioTrack.MODE_STREAM);
audioTrack.setNotificationMarkerPosition(audioLength);
audioTrack.setPlaybackPositionUpdateListener(new OnPlaybackPositionUpdateListener() {
@Override
public void onPeriodicNotification(AudioTrack track) {
// nothing to do
}
@Override
public void onMarkerReached(AudioTrack track) {
Log.d(LOG_TAG, "Audio track end of file reached...");
messageHandler.sendMessage(messageHandler.obtainMessage(PLAYBACK_END_REACHED));
}
});
// Start playback
audioTrack.play();
// Write the music buffer to the AudioTrack object
audioTrack.write(audioData, 0, audioLength);
} catch (Exception e) {
Log.e(LOG_TAG, "Error playing audio.", e);
} finally {
if (dis != null) {
try {
dis.close();
} catch (IOException e) {
// don't care
}
}
}
This works for me:
do { // monitor playback to find out when it's done
    x = audioTrack.getPlaybackHeadPosition();
} while (x < pcmFile.length() / 2);
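The comparison in that loop works because getPlaybackHeadPosition is measured in frames, and 16-bit mono PCM holds two bytes per frame, so fileBytes / 2 is the total frame count. The same arithmetic gives the expected playback time, which can serve as a sanity check; this is a sketch with made-up names, assuming 16-bit mono:

```java
public class PlaybackMath {
    // Expected playback time in milliseconds for raw 16-bit mono PCM:
    // frames = bytes / 2, durationMs = frames * 1000 / sampleRate.
    static long expectedDurationMs(long fileBytes, int sampleRate) {
        long frames = fileBytes / 2; // two bytes per 16-bit mono frame
        return frames * 1000L / sampleRate;
    }

    public static void main(String[] args) {
        // 88200 bytes at 44100 Hz -> 44100 frames -> exactly one second.
        System.out.println(expectedDurationMs(88200, 44100)); // prints 1000
    }
}
```

Note that polling the head position in a tight do-while burns a CPU core; a short Thread.sleep inside the loop, or the marker callback from the answer above, is gentler.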