What exactly does the AudioInputStream.read() method return? (Java)

I'm having trouble figuring out what I actually read with the AudioInputStream. The program below just prints the byte array I get, but I don't even know whether the bytes are the actual samples, i.e. whether the byte array is the audio wave.
File fileIn;
AudioInputStream audio_in;
byte[] audioBytes;
int numBytesRead;
int numFramesRead;
int numBytes;
int totalFramesRead;
int bytesPerFrame;

try {
    audio_in = AudioSystem.getAudioInputStream(fileIn);
    bytesPerFrame = audio_in.getFormat().getFrameSize();
    if (bytesPerFrame == AudioSystem.NOT_SPECIFIED) {
        bytesPerFrame = 1;
    }
    numBytes = 1024 * bytesPerFrame;
    audioBytes = new byte[numBytes];
    try {
        numBytesRead = 0;
        numFramesRead = 0;
    } catch (Exception ex) {
        System.out.println("Something went completely wrong");
    }
} catch (Exception e) {
    System.out.println("Something went completely wrong");
}
And in another part of the program, I read some bytes with this:
try {
    if ((numBytesRead = audio_in.read(audioBytes)) != -1) {
        numFramesRead = numBytesRead / bytesPerFrame;
        totalFramesRead += numFramesRead;
    }
} catch (Exception e) {
    System.out.println("Had problems reading new content");
}
So first of all, this code is not mine. This is my first time reading audio files, so I got some help from the web (found the link: Java - reading, manipulating and writing WAV files on Stack Overflow, who would have known).
The question is: what do the bytes in audioBytes represent? Since the source is 44 kHz stereo, there have to be two waveforms hiding in there somewhere, am I right? So how do I filter the important information out of these bytes?
// EDIT
So what I added is this function:
public short[] Get_Sample() {
    if (samplesRead == 1024) {
        Read_Buffer();
        samplesRead = 4;
    } else {
        samplesRead = samplesRead + 4;
    }
    short sample[] = new short[2];
    sample[0] = (short) (audioBytes[samplesRead - 4] + 256 * audioBytes[samplesRead - 3]);
    sample[1] = (short) (audioBytes[samplesRead - 2] + 256 * audioBytes[samplesRead - 1]);
    return sample;
}
where Read_Buffer() reads the next 1024 (or fewer) bytes and loads them into audioBytes. sample[0] is used for the left channel, sample[1] for the right. But I'm still not sure, since the waves I get from this look quite "noisy". (Edit: the WAV file actually uses little-endian byte order, so I had to change the calculation.)

The AudioInputStream read() method returns the raw audio data. You don't know the 'construction' of that data until you read the audio format with getFormat(), which returns an AudioFormat. From the AudioFormat you can call getChannels(), getSampleSizeInBits() and more. This works because an AudioInputStream is always built around a known format.
When you calculate a sample value you have to handle the different possibilities for signedness and endianness of the data (in the case of 16-bit samples). To make your code more generic, use the AudioFormat object returned by the AudioInputStream to learn more about the data buffer:
getEncoding() : PCM_SIGNED, PCM_UNSIGNED, ...
isBigEndian() : true or false
As you already discovered, incorrect sample building can lead to distorted sound. If you work with various files it may cause problems in the future. If you don't intend to support some formats, just check what the AudioFormat says and throw an exception (e.g. javax.sound.sampled.UnsupportedAudioFileException). It will save you time.
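To make that concrete, here is a minimal sketch of sample building for 16-bit signed PCM stereo, driven by the AudioFormat (the class and method names are made up for this example). The detail that most often produces a "noisy" waveform is forgetting to mask the low byte with 0xFF before combining it with the high byte:

import javax.sound.sampled.AudioFormat;

public class SampleDecoder {
    /**
     * Converts one stereo frame (4 bytes, 16-bit signed PCM) starting at offset
     * into {left, right} short samples, honoring the stream's byte order.
     */
    public static short[] decodeFrame(byte[] buf, int offset, AudioFormat format) {
        boolean big = format.isBigEndian();
        short left  = toShort(buf[offset],     buf[offset + 1], big);
        short right = toShort(buf[offset + 2], buf[offset + 3], big);
        return new short[] { left, right };
    }

    private static short toShort(byte a, byte b, boolean bigEndian) {
        // For little-endian data 'a' is the low byte; for big-endian data 'b' is.
        // The low byte must be masked with 0xFF so its sign bit does not corrupt the result.
        return bigEndian
                ? (short) ((a << 8) | (b & 0xFF))
                : (short) ((b << 8) | (a & 0xFF));
    }
}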

Related

ByteBuffer operations crash Android app when extracting raw data from small AMR files

When I use MediaCodec to decode an AMR file it outputs a byte buffer, but when I try to convert the byte buffer into an array of doubles the app crashes.
I tried taking a single byte out of the byte buffer and the app crashes as well. Any operation on the byte buffer causes my app to crash.
decoder.start();
inputBuffers = decoder.getInputBuffers();
outputBuffers = decoder.getOutputBuffers();
end_of_input_file = false;

MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
ByteBuffer data = readData(info);
int samplesRead = info.size;
byte[] bytesArray = new byte[data.remaining()];
bytesArray = getByteArrayFromByteBuffer(data);
Here the app crashes.
This is the readData method:
private ByteBuffer readData(MediaCodec.BufferInfo info) {
    if (decoder == null)
        return null;
    for (;;) {
        // Read data from the file into the codec.
        if (!end_of_input_file) {
            int inputBufferIndex = decoder.dequeueInputBuffer(10000);
            if (inputBufferIndex >= 0) {
                int size = mExtractor.readSampleData(inputBuffers[inputBufferIndex], 0);
                if (size < 0) {
                    // End Of File
                    decoder.queueInputBuffer(inputBufferIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    end_of_input_file = true;
                } else {
                    decoder.queueInputBuffer(inputBufferIndex, 0, size, mExtractor.getSampleTime(), 0);
                    mExtractor.advance();
                }
            }
        }
        // Read the output from the codec.
        if (outputBufferIndex >= 0)
            // Ensure that the data is placed at the start of the buffer
            outputBuffers[outputBufferIndex].position(0);
        outputBufferIndex = decoder.dequeueOutputBuffer(info, 10000);
        if (outputBufferIndex >= 0) {
            // Handle EOF
            if (info.flags != 0) {
                decoder.stop();
                decoder.release();
                decoder = null;
                return null;
            }
            // Release the buffer so MediaCodec can use it again.
            // The data should stay there until the next time we are called.
            decoder.releaseOutputBuffer(outputBufferIndex, false);
            return outputBuffers[outputBufferIndex];
        } else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
            // This usually happens once at the start of the file.
            outputBuffers = decoder.getOutputBuffers();
        }
    }
}
This is the getByteArrayFromByteBuffer method:
private static byte[] getByteArrayFromByteBuffer(ByteBuffer byteBuffer) {
    byte[] bytesArray = new byte[byteBuffer.remaining()];
    byteBuffer.get(bytesArray, 0, bytesArray.length);
    return bytesArray;
}
I would like to get the output of the decoder into a double array.
You have two bugs/misconceptions in your code.
First issue:
// Read the output from the codec.
if (outputBufferIndex >= 0)
    // Ensure that the data is placed at the start of the buffer
    outputBuffers[outputBufferIndex].position(0);
outputBufferIndex = decoder.dequeueOutputBuffer(info, 10000);
Before calling dequeueOutputBuffer, you can't know which output buffer it will return, and even if you did, you can't control where in the buffer the codec will place its output. The decoder chooses one of the available buffers and places the output data wherever it wants within that buffer. (In practice it is almost always at the start anyway.)
If you want to set up the position/limit in any specific way within the output buffer, you do that after dequeueOutputBuffer has returned it to you.
Just remove the whole if statement and position() call here.
Second bug, which is the real major issue:
// Release the buffer so MediaCodec can use it again.
// The data should stay there until the next time we are called.
decoder.releaseOutputBuffer(outputBufferIndex, false);
return outputBuffers[outputBufferIndex];
You are only allowed to touch and use a buffer from outputBuffers from the moment dequeueOutputBuffer returns it until you hand it back to the decoder with releaseOutputBuffer. As your own comment says, after releaseOutputBuffer MediaCodec can use it again. Your ByteBuffer variable is only a reference to the same memory the codec can write new output into; it is not a copy yet. So you must not call releaseOutputBuffer until you have finished using the ByteBuffer object.
Bonus point:
Before using the ByteBuffer, it might be good to set the position/limit for it, like this:
outputBuffers[outputBufferIndex].position(info.offset);
outputBuffers[outputBufferIndex].limit(info.offset + info.size);
Some decoders do this implicitly, but not necessarily all of them.
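Putting both points together, the output handling could look roughly like this (a sketch only, reusing the variable names from the question and assuming the decoder produces 16-bit signed little-endian PCM):

// After dequeueOutputBuffer has returned a valid index:
ByteBuffer out = outputBuffers[outputBufferIndex];
out.position(info.offset);
out.limit(info.offset + info.size);

// Copy the data out BEFORE handing the buffer back to the codec.
byte[] pcm = new byte[out.remaining()];
out.get(pcm);
decoder.releaseOutputBuffer(outputBufferIndex, false);

// Now 'pcm' is safe to use, e.g. converted to doubles in the range [-1.0, 1.0].
double[] samples = new double[pcm.length / 2];
for (int i = 0; i < samples.length; i++) {
    short s = (short) ((pcm[2 * i] & 0xFF) | (pcm[2 * i + 1] << 8));
    samples[i] = s / 32768.0;
}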

Java Audio SourceDataLine does not support PCM_FLOAT

I am trying to play a buffer of audio using Java on Linux.
I am getting the following exception when attempting to open the line (not when I write the audio to it)...
Exception in thread "main" java.lang.IllegalArgumentException: No line matching interface SourceDataLine supporting format PCM_FLOAT 44100.0 Hz, 16 bit, mono, 2 bytes/frame, is supported.
public boolean open()
{
    try {
        int smpSizeInBits = bytesPerSmp * 8;
        int frameSize = bytesPerSmp * channels; // just an fyi, frameSize does not always == bytesPerSmp * channels for non PCM encodings
        int frameRate = (int) smpRate;          // again this might not be the case for non PCM encodings.
        boolean isBigEndian = false;

        AudioFormat af = new AudioFormat(AudioFormat.Encoding.PCM_FLOAT, smpRate, smpSizeInBits, channels, frameSize, frameRate, isBigEndian);
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, af);
        int bufferSizeInBytes = bufferSizeInFrames * channels * bytesPerSmp;

        line = (SourceDataLine) AudioSystem.getLine(info);
        line.open(af, bufferSizeInBytes);
        open = true;
    }
    catch (LineUnavailableException e) {
        System.out.println("PcmFloatPlayer: Unable to open, line unavailable.");
    }
    return open;
}
I am wondering whether my assumptions about what PCM_FLOAT encoding is are actually incorrect.
I have some code that reads in a WAV file. The WAV file is mono, 16-bit, uncompressed. I then convert the audio to floats in the range -1.0 to 1.0 for processing.
I assumed the PCM_FLOAT encoding is just raw PCM data that has been converted to float values between -1.0 and 1.0. Is this correct?
I then assumed that the SourceDataLine would convert the float audio to the appropriate format based on my passed format info (mono, 16-bit, 2 bytes/frame). Again, is this assumption incorrect?
Must I convert my float -1.0 to 1.0 audio back to my desired output format, and set the SourceDataLine to PCM_SIGNED (assuming that is my desired format)?
EDIT:
In addition, when I call AudioSystem.getTargetEncodings() with PCM_FLOAT, it returns three encodings. Does that mean that it will accept PCM_FLOAT and be capable of converting to the returned encodings, based on what the underlying audio system supports?
AudioFormat.Encoding[] encodings = AudioSystem.getTargetEncodings(AudioFormat.Encoding.PCM_FLOAT);
for (AudioFormat.Encoding e : encodings)
    System.out.println(e);
results in...
PCM_SIGNED
PCM_UNSIGNED
PCM_FLOAT
I don't know that I'll be able to answer your direct questions, but maybe the code I can show you, which I know works (including on Linux), will help you arrive at a workable solution. I have programs that generate audio signals from incoming cues, as well as custom-made synths, and I do all the mixing and effects with PCM floats in the range -1 to 1. For output, I convert the floats to a standard "CD quality" format that Java supports.
Here is the format I use for the outputting SourceDataLine:
new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 44100, 16, 2, 4, 44100, false);
You'll probably want to make this mono instead of stereo. But I should say, it seems to me that if you are able to read an incoming wav file with a different format, you should be able to play back that same format, assuming you reverse all the steps taken to convert the incoming data to PCM.
For the standard "CD quality" format, to go from signed PCM floats to bytes, there is an intermediate step of scaling up to the range of a signed short (-32768 to 32767).
public static byte[] fromBufferToAudioBytes(byte[] audioBytes, float[] buffer)
{
    for (int i = 0, n = buffer.length; i < n; i++)
    {
        buffer[i] *= 32767;
        audioBytes[i * 2] = (byte) buffer[i];
        audioBytes[i * 2 + 1] = (byte) ((int) buffer[i] >> 8);
    }
    return audioBytes;
}
This is taken from the AudioCue library that I wrote and posted on github.
I find it reduces headaches to just deal with the one AudioFormat, to convert incoming material to that one format with Audacity, and not to try to make provisions for multiple formats. But that is just a personal preference, and I don't know whether that strategy would work for your situation or not.
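To illustrate how that conversion could be wired into a SourceDataLine (a sketch only, not taken from the library; floatBuffer is a placeholder for your interleaved stereo float data in the range -1 to 1):

AudioFormat format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 44100, 16, 2, 4, 44100, false);
SourceDataLine line = (SourceDataLine) AudioSystem.getLine(new DataLine.Info(SourceDataLine.class, format));
line.open(format);
line.start();

// Two bytes per float sample; fromBufferToAudioBytes writes the low byte first (little-endian).
byte[] audioBytes = new byte[floatBuffer.length * 2];
fromBufferToAudioBytes(audioBytes, floatBuffer);
line.write(audioBytes, 0, audioBytes.length);

line.drain();
line.close();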
Hope there is something here that helps!
public class Main {
    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread2();
        t1.start();
        Thread t2 = new thread3();
        t2.start();
        Thread.sleep(5000);
    }
}

import javax.sound.sampled.*;
import java.io.File;
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class Thread2 extends Thread implements Runnable {
    @Override
    public void run() {
        playWav("C:/Windows/Media/feel_good_x.wav");
    }

    private static void playWav(String soundFilePath) {
        File sFile = new File(soundFilePath);
        if (!sFile.exists()) {
            String ls = System.lineSeparator();
            System.err.println("File is not in the directory" +
                    ls + "(" + soundFilePath + ")" + ls);
            return;
        }
        try {
            Clip clip;
            try (AudioInputStream audioInputStream = AudioSystem.
                    getAudioInputStream(sFile.getAbsoluteFile())) {
                clip = AudioSystem.getClip();
                clip.setFramePosition(0);
                clip.open(audioInputStream);
            }
            clip.start();
        }
        catch (UnsupportedAudioFileException | IOException | LineUnavailableException ex) {
            Logger.getLogger("playWav()").log(Level.SEVERE, null, ex);
        }
    }
}

How to properly detect, decode and play a radio stream?

I am currently trying to write a jukebox-like application in Java that is able to play any audio source possible, but I have encountered some difficulties when trying to play radio streams.
For playback I use JLayer from JavaZoom, which works fine as long as the target is a direct media file or a direct media stream (I can play PCM, MP3 and OGG just fine). However, I run into difficulties with radio streams that either deliver pre-media data such as an m3u/pls file (which I could fix by adding a detection step beforehand), or that are streamed on port 80 where a web page exists at the same location and the media transmitted depends on the type of request. In the latter case, whenever I try to stream the media, I get the HTML data instead.
Example link of a stream that is hidden behind a web-page: http://stream.t-n-media.de:8030
This is playable in VLC, but if you put it into a browser or my application you'll receive an HTML file.
Is there:
A ready-made, free solution that I could use in place of JLayer? Preferably open source so I can study it?
A tutorial that can help me to write a solution on my own?
Or can someone give me an example on how to properly detect/request a media stream?
Thanks in advance!
import java.io.*;
import java.net.*;
import javax.sound.sampled.*;
import javax.sound.midi.*;

/**
 * This class plays sounds streaming from a URL: it does not have to preload
 * the entire sound into memory before playing it. It is a command-line
 * application with no gui. It includes code to convert ULAW and ALAW
 * audio formats to PCM so they can be played. Use the -m command-line option
 * before MIDI files.
 */
public class PlaySoundStream {
    // Create a URL from the command-line argument and pass it to the
    // right static method depending on the presence of the -m (MIDI) option.
    public static void main(String[] args) throws Exception {
        if (args[0].equals("-m")) streamMidiSequence(new URL(args[1]));
        else streamSampledAudio(new URL(args[0]));

        // Exit explicitly.
        // This is needed because the audio system starts background threads.
        System.exit(0);
    }

    /** Read sampled audio data from the specified URL and play it */
    public static void streamSampledAudio(URL url)
        throws IOException, UnsupportedAudioFileException,
               LineUnavailableException
    {
        AudioInputStream ain = null;  // We read audio data from here
        SourceDataLine line = null;   // And write it here.
        try {
            // Get an audio input stream from the URL
            ain = AudioSystem.getAudioInputStream(url);

            // Get information about the format of the stream
            AudioFormat format = ain.getFormat();
            DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);

            // If the format is not supported directly (i.e. if it is not PCM
            // encoded), then try to transcode it to PCM.
            if (!AudioSystem.isLineSupported(info)) {
                // This is the PCM format we want to transcode to.
                // The parameters here are audio format details that you
                // shouldn't need to understand for casual use.
                AudioFormat pcm =
                    new AudioFormat(format.getSampleRate(), 16,
                                    format.getChannels(), true, false);

                // Get a wrapper stream around the input stream that does the
                // transcoding for us.
                ain = AudioSystem.getAudioInputStream(pcm, ain);

                // Update the format and info variables for the transcoded data
                format = ain.getFormat();
                info = new DataLine.Info(SourceDataLine.class, format);
            }

            // Open the line through which we'll play the streaming audio.
            line = (SourceDataLine) AudioSystem.getLine(info);
            line.open(format);

            // Allocate a buffer for reading from the input stream and writing
            // to the line. Make it large enough to hold 4k audio frames.
            // Note that the SourceDataLine also has its own internal buffer.
            int framesize = format.getFrameSize();
            byte[] buffer = new byte[4 * 1024 * framesize]; // the buffer
            int numbytes = 0;                               // how many bytes

            // We haven't started the line yet.
            boolean started = false;

            for (;;) { // We'll exit the loop when we reach the end of stream
                // First, read some bytes from the input stream.
                int bytesread = ain.read(buffer, numbytes, buffer.length - numbytes);
                // If there were no more bytes to read, we're done.
                if (bytesread == -1) break;
                numbytes += bytesread;

                // Now that we've got some audio data to write to the line,
                // start the line, so it will play that data as we write it.
                if (!started) {
                    line.start();
                    started = true;
                }

                // We must write bytes to the line in an integer multiple of
                // the framesize. So figure out how many bytes we'll write.
                int bytestowrite = (numbytes / framesize) * framesize;

                // Now write the bytes. The line will buffer them and play
                // them. This call will block until all bytes are written.
                line.write(buffer, 0, bytestowrite);

                // If we didn't have an integer multiple of the frame size,
                // then copy the remaining bytes to the start of the buffer.
                int remaining = numbytes - bytestowrite;
                if (remaining > 0)
                    System.arraycopy(buffer, bytestowrite, buffer, 0, remaining);
                numbytes = remaining;
            }

            // Now block until all buffered sound finishes playing.
            line.drain();
        }
        finally { // Always relinquish the resources we use
            if (line != null) line.close();
            if (ain != null) ain.close();
        }
    }

    // A MIDI protocol constant that isn't defined by javax.sound.midi
    public static final int END_OF_TRACK = 47;

    /* MIDI or RMF data from the specified URL and play it */
    public static void streamMidiSequence(URL url)
        throws IOException, InvalidMidiDataException, MidiUnavailableException
    {
        Sequencer sequencer = null;     // Converts a Sequence to MIDI events
        Synthesizer synthesizer = null; // Plays notes in response to MIDI events
        try {
            // Create, open, and connect a Sequencer and Synthesizer
            // They are closed in the finally block at the end of this method.
            sequencer = MidiSystem.getSequencer();
            sequencer.open();
            synthesizer = MidiSystem.getSynthesizer();
            synthesizer.open();
            sequencer.getTransmitter().setReceiver(synthesizer.getReceiver());

            // Specify the InputStream to stream the sequence from
            sequencer.setSequence(url.openStream());

            // This is an arbitrary object used with wait and notify to
            // prevent the method from returning before the music finishes
            final Object lock = new Object();

            // Register a listener to make the method exit when the stream is
            // done. See Object.wait() and Object.notify()
            sequencer.addMetaEventListener(new MetaEventListener() {
                public void meta(MetaMessage e) {
                    if (e.getType() == END_OF_TRACK) {
                        synchronized (lock) {
                            lock.notify();
                        }
                    }
                }
            });

            // Start playing the music
            sequencer.start();

            // Now block until the listener above notifies us that we're done.
            synchronized (lock) {
                while (sequencer.isRunning()) {
                    try { lock.wait(); } catch (InterruptedException e) { }
                }
            }
        }
        finally {
            // Always relinquish the sequencer, so others can use it.
            if (sequencer != null) sequencer.close();
            if (synthesizer != null) synthesizer.close();
        }
    }
}
I have used this piece of code in one of my projects that deals with audio streaming, and it was working just fine.
Furthermore, you can see similar examples here:
Java Audio Example
Just reading the javadoc of AudioSystem gives me an idea.
There is another signature for getAudioInputStream: you can give it an InputStream instead of a URL.
So, try to obtain the input stream yourself and add the needed headers so that you get the stream instead of the HTML content:
URLConnection uc = url.openConnection();
uc.setRequestProperty("<header name here>", "<header value here>");
InputStream in = uc.getInputStream();
ain = AudioSystem.getAudioInputStream(in);
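For illustration only (the header values here are assumptions, not part of the original answer): Shoutcast/Icecast-style servers often decide between the stream and the HTML status page based on the request headers, so sending a non-browser User-Agent may help, and wrapping the stream in a BufferedInputStream is useful because getAudioInputStream needs mark/reset support:

URLConnection uc = url.openConnection();
// Many Shoutcast-style servers return the HTML status page only to browser user agents.
uc.setRequestProperty("User-Agent", "MyJukebox/1.0");  // placeholder value
uc.setRequestProperty("Accept", "*/*");
InputStream in = new BufferedInputStream(uc.getInputStream()); // mark/reset for getAudioInputStream
ain = AudioSystem.getAudioInputStream(in);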
Hope this helps.
I know this answer comes late, but I had the same issue: I wanted to play MP3 and AAC audio and also wanted the user to insert PLS/M3U links. Here is what I did:
First I tried to parse the type by using the simple file name:
import de.webradio.enumerations.FileExtension;

import java.net.URL;

public class FileExtensionParser {
    /**
     * Parses a file extension
     * @param filenameUrl the url
     * @return the filename. if filename cannot be determined by file extension, Apache Tika parses by live detection
     */
    public FileExtension parseFileExtension(URL filenameUrl) {
        String filename = filenameUrl.toString();
        if (filename.endsWith(".mp3")) {
            return FileExtension.MP3;
        } else if (filename.endsWith(".m3u") || filename.endsWith(".m3u8")) {
            return FileExtension.M3U;
        } else if (filename.endsWith(".aac")) {
            return FileExtension.AAC;
        } else if (filename.endsWith((".pls"))) {
            return FileExtension.PLS;
        }
        URLTypeParser parser = new URLTypeParser();
        return parser.parseByContentDetection(filenameUrl);
    }
}
If that fails, I use Apache Tika to do a kind of live detection:
public class URLTypeParser {
    /**
     * This class uses Apache Tika to parse a URL using its content
     *
     * @param url the webstream url
     * @return the detected file encoding: MP3, AAC or unsupported
     */
    public FileExtension parseByContentDetection(URL url) {
        try {
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            InputStream in = connection.getInputStream();
            BodyContentHandler handler = new BodyContentHandler();
            AudioParser parser = new AudioParser();
            Metadata metadata = new Metadata();

            parser.parse(in, handler, metadata);
            return parseMediaType(metadata);
        } catch (IOException e) {
            e.printStackTrace();
        } catch (TikaException e) {
            e.printStackTrace();
        } catch (SAXException e) {
            e.printStackTrace();
        }
        return FileExtension.UNSUPPORTED_TYPE;
    }

    private FileExtension parseMediaType(Metadata metadata) {
        String parsedMediaType = metadata.get("encoding");
        if (parsedMediaType.equalsIgnoreCase("aac")) {
            return FileExtension.AAC;
        } else if (parsedMediaType.equalsIgnoreCase("mpeg1l3")) {
            return FileExtension.MP3;
        }
        return FileExtension.UNSUPPORTED_TYPE;
    }
}
This also solves the HTML problem, since the method returns FileExtension.UNSUPPORTED_TYPE for HTML content.
I combined these classes with a factory pattern and it works fine. The live detection takes only about two seconds.
I don't know whether this still helps you, but since I struggled with it for almost three weeks I wanted to provide a working answer. You can see the whole project on GitHub: https://github.com/Seppl2202/webradio
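A hypothetical usage sketch of the two classes above (the URL is the example from the question; nothing here is taken from the linked project):

URL streamUrl = new URL("http://stream.t-n-media.de:8030");
FileExtension type = new FileExtensionParser().parseFileExtension(streamUrl);
if (type == FileExtension.UNSUPPORTED_TYPE) {
    throw new UnsupportedAudioFileException("Not a playable stream: " + streamUrl);
}
// Otherwise hand the URL to the matching decoder (MP3, AAC, or playlist parser).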

Reading and playing audio file

I'm having some trouble reading and playing certain audio clips on Android 2.0.1 (Motorola Droid A855). Below is the code segment that I use. It works fine for some files, but for other files it just doesn't exit the while loop. I have tried checking the InputStream.available() method, but with no luck. I even printed out the number of bytes it reads properly before getting stuck. It seems to get stuck in the loop at the last round of reading (with fewer than 512 bytes left) and never exits the loop.
int sampleFreq = 44100;
int minBufferSize = AudioTrack.getMinBufferSize(sampleFreq, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
int bufferSize = 512;
AudioTrack at = new AudioTrack(AudioManager.STREAM_MUSIC, sampleFreq, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, minBufferSize, AudioTrack.MODE_STREAM);

InputStream input;
try {
    File fileID = new File(Environment.getExternalStorageDirectory(), resourceID);
    input = new FileInputStream(fileID);
    int filesize = (int) fileID.length();

    int i = 0, byteread = 0;
    byte[] s = new byte[bufferSize];
    at.play();
    while ((i = input.read(s, 0, bufferSize)) > -1) {
        at.write(s, 0, i);
        //at.flush();
        byteread += i;
        Log.i(TAG, "playing audio " + byteread + "\t" + filesize);
    }
    at.stop();
    at.release();
    input.close();
} catch (FileNotFoundException e) {
    // TODO
    e.printStackTrace();
} catch (IOException e) {
    // TODO
    e.printStackTrace();
}
The audio files are around 1-2 MB in size and are in WAV format. The following is an example of the logging:
> : playing audio 1057280 1058474
> : playing audio 1057792 1058474
> : playing audio 1058304 1058474
Any idea why this is happening? It runs perfectly for some of the audio files.
Make sure your call to write() always delivers a byte size which is an integral number of samples.
For your 16 bit stereo mode, that should be an integral multiple of 4 bytes.
Additionally, at least before the final write, for stutter-free operation you should really respect the minimum buffer size of the audio subsystem and deliver at least that much data in each call to the audio write method.
If your source data is a .wav file, make sure you actually skip the header and read samples only starting from a valid payload chunk.
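For illustration, a rough sketch of both points (this assumes a canonical 44-byte WAV header; real files can carry extra chunks, so robust code should locate the "data" chunk instead):

final int FRAME_BYTES = 4;                                   // 16-bit stereo: 4 bytes per frame
int chunkSize = (minBufferSize / FRAME_BYTES) * FRAME_BYTES; // frame-aligned, at least the AudioTrack minimum
byte[] buf = new byte[chunkSize];

long toSkip = 44;                                            // canonical PCM WAV header size (assumption)
while (toSkip > 0) {
    toSkip -= input.skip(toSkip);
}

int filled = 0;
int n;
while ((n = input.read(buf, filled, buf.length - filled)) > 0) {
    filled += n;
    int writable = (filled / FRAME_BYTES) * FRAME_BYTES;     // write whole frames only
    at.write(buf, 0, writable);
    int leftover = filled - writable;                        // keep any partial frame for the next round
    System.arraycopy(buf, writable, buf, 0, leftover);
    filled = leftover;
}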

Sound wave from TargetDataLine

Currently I am trying to record a sound wave from a mic and display the amplitude values in real time in Java. I came across TargetDataLine, but I am having a bit of trouble understanding how I get data from it.
Sample code from Oracle states:
line = (TargetDataLine) AudioSystem.getLine(info);
line.open(format, line.getBufferSize());

ByteArrayOutputStream out = new ByteArrayOutputStream();
int numBytesRead;
byte[] data = new byte[line.getBufferSize() / 5];

// Begin audio capture.
line.start();

// Here, stopped is a global boolean set by another thread.
while (!stopped) {
    // Read the next chunk of data from the TargetDataLine.
    numBytesRead = line.read(data, 0, data.length);

    // ****ADDED CODE HERE*****

    // Save this chunk of data.
    out.write(data, 0, numBytesRead);
}
So I am currently trying to add code to get an input stream of amplitude values; however, I just get a ton of bytes when I print the data variable at the added-code line.
for (int j = 0; j < data.length; j++) {
    System.out.format("%02X ", data[j]);
}
Does anyone who has used TargetDataLine before know how I can make use of it?
For anyone who has trouble using TargetDataLine for sound extraction in the future: the WaveData class by Ganesh Tiwari contains a very helpful method that turns bytes into a float array (http://code.google.com/p/speech-recognition-java-hidden-markov-model-vq-mfcc/source/browse/trunk/SpeechRecognitionHMM/src/org/ioe/tprsa/audio/WaveData.java):
public float[] extractFloatDataFromAudioInputStream(AudioInputStream audioInputStream) {
    format = audioInputStream.getFormat();
    audioBytes = new byte[(int) (audioInputStream.getFrameLength() * format.getFrameSize())];

    // calculate durationSec
    float milliseconds = (long) ((audioInputStream.getFrameLength() * 1000) / audioInputStream.getFormat().getFrameRate());
    durationSec = milliseconds / 1000.0;
    // System.out.println("The current signal has duration " + durationSec + " Sec");
    try {
        audioInputStream.read(audioBytes);
    } catch (IOException e) {
        System.out.println("IOException during reading audioBytes");
        e.printStackTrace();
    }
    return extractFloatDataFromAmplitudeByteArray(format, audioBytes);
}
Using this I can get sound amplitude data.
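For live capture from a TargetDataLine, where there is no AudioInputStream with a known frame length, the same idea can be applied directly to each chunk returned by read(). A minimal sketch, assuming 16-bit signed little-endian mono PCM (adjust for your actual AudioFormat):

// Convert one chunk of 16-bit signed little-endian PCM bytes
// into amplitude values in the range [-1.0, 1.0].
float[] amplitudes = new float[numBytesRead / 2];
for (int i = 0; i < amplitudes.length; i++) {
    int lo = data[2 * i] & 0xFF;   // low byte, masked
    int hi = data[2 * i + 1];      // high byte keeps its sign
    amplitudes[i] = ((hi << 8) | lo) / 32768f;
}
// 'amplitudes' can now be plotted or processed in real time.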
