I'm creating .wav files using MaryTTS.
The bit rate (kbps) of the .wav files changes depending on the voice I'm using with the code below, but I would like every audio file to be written at 128 kbps.
The program I'm planning to feed the generated .wav files into only supports 128 kbps, so is there a way to always write the .wav files at 128 kbps?
This is my code:
AudioInputStream audio = marytts.generateAudio(text); //generate audio from text
AudioSystem.write(audio, AudioFileFormat.Type.WAVE, new File("F:\\temp\\" + filename + ".wav"));//save audio as .wav to the static location with filename
return true;//function completed so return true
I managed to find an answer to my question.
Maybe someone will ask the same thing later, so I'm sharing my solution.
Under the class I declared these global variables describing the WAV format I wanted:
static AudioFormat.Encoding defaultEncoding = AudioFormat.Encoding.PCM_SIGNED;
static float fDefaultSampleRate = 8000;
static int nDefaultSampleSizeInBits = 16;
static int nDefaultChannels = 1;
static int frameSize = 2;
static float frameRate = 8000;
static boolean bDefaultBigEndian = false;
And I changed my code like this: I create the format I want, generate the audio from text, convert the audio to my format, and write it out.
AudioFormat defaultFormat = new AudioFormat(defaultEncoding, fDefaultSampleRate, nDefaultSampleSizeInBits, nDefaultChannels, frameSize, frameRate, bDefaultBigEndian);
AudioInputStream generatedAudio = marytts.generateAudio(text); //generate audio from text
AudioInputStream audio = AudioSystem.getAudioInputStream(defaultFormat, generatedAudio); //convert the generated audio to the desired format
AudioSystem.write(audio, AudioFileFormat.Type.WAVE, new File("F:\\temp\\" + filename + ".wav"));//save audio as .wav to the static location with filename
return true;//function completed so return true
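For reference, uncompressed PCM bit rate is simply sample rate × sample size × channels: 8000 Hz × 16 bits × 1 channel = 128,000 bit/s = 128 kbps, which is why converting every generated stream to this format always yields 128 kbps files.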
The question of reading .wav files in and putting them into an array in Java has been asked on here numerous times, and I've looked at dozens of ways of doing so. What I am trying to do is obtain the same values that are produced when opening a WAV file with MATLAB's native wavread function. The particular file I am trying to analyze (in an effort to perform signal processing) is stereo, with a sample rate of 10000 Hz, and is 32-bit float (which I suspect is part of the problem).
In MATLAB, all of the values for this particular file are doubles; the maximum value is 3.0991 and the minimum value is -3.0530. I get the same values in Python using scipy.io.wavfile.read.
When I use the following Java code to read the file in and then convert it into a double array:
public static double[] toDoubleArray(byte[] byteArray) {
    int times = Double.SIZE / Byte.SIZE;
    double[] doubles = new double[byteArray.length / times];
    for (int i = 0; i < doubles.length; i++) {
        doubles[i] = ByteBuffer.wrap(byteArray, i * times, times).getDouble();
    }
    return doubles;
}
public static void main(String[] args) throws UnsupportedAudioFileException, IOException {
    Path path = FileSystems.getDefault().getPath("").toAbsolutePath();
    File file = new File(path + "/antonio_deep_fast_1.wav");
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    AudioInputStream in = AudioSystem.getAudioInputStream(file);
    int read;
    byte[] buff = new byte[1024];
    try {
        while ((read = in.read(buff)) > 0) {
            out.write(buff, 0, read);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    out.flush();
    byte[] audioBytes = out.toByteArray();
    for (byte byteMe : audioBytes)
        System.out.println(byteMe);
    double[] doubles = toDoubleArray(audioBytes);
I get values as high as 8 and as low as -9. I've had similar issues with other adaptations of Java's AudioInputStream class, and if I try the Java Wav File IO package instead, it does not support 32-bit float WAV files.
I have probably spent over a dozen hours trying to get this to read in properly. Part of the issue, no doubt, is my incomplete understanding of the nature of WAV files. Could anyone point me in the right direction to read the file in the same way MATLAB (or scipy) does?
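One likely culprit is the 8-byte interpretation: a 32-bit float WAV stores each sample as a 4-byte little-endian IEEE float, while getDouble() consumes 8 big-endian bytes per value. A minimal sketch of reading the samples as floats instead, assuming AudioSystem hands back the data chunk unchanged (the helper name is just an illustration):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public static double[] toDoubleArrayFromFloat32(byte[] byteArray) {
    // Interpret the raw bytes as little-endian 32-bit floats, one sample per 4 bytes,
    // channels interleaved (left, right, left, right, ...) for a stereo file.
    ByteBuffer buffer = ByteBuffer.wrap(byteArray).order(ByteOrder.LITTLE_ENDIAN);
    double[] doubles = new double[byteArray.length / 4];
    for (int i = 0; i < doubles.length; i++) {
        doubles[i] = buffer.getFloat();
    }
    return doubles;
}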
This code is producing WAV files that don't work in many apps.
When I check one in a RIFFVIEWER app it complains about an invalid RIFF length, and BWFMetaEdit claims the file is truncated. Some tolerant apps like Audacity will play them.
Am I doing something wrong here, or is Java audio buggy?
// The essence data is PCM formatted, so convert it to a WAVE file
File extractPCM(WAVEPCMDescriptor descriptor, EssenceData data, String name) {
    try {
        Stream stream = data.getEssenceStream();
        URI uri = stream.getStreamURI();
        int hashCode = uri.hashCode();
        File file = new File(mediaDir,
                name + "_" + String.format("%08X", hashCode) + ".wav");
        if (file.exists()) {
            return file;
        }
        mediaDir.mkdir(); // Ensure exists
        log("Copying essence data stream");
        stream.setPosition(0);
        ByteBuffer buff = stream.read((int) stream.getLength());
        stream.close();
        buff.flip();
        AudioFormat format = new AudioFormat(
                Encoding.PCM_SIGNED,
                (float) descriptor.getSampleRate().doubleValue(),
                (int) descriptor.getQuantizationBits(),
                (int) descriptor.getChannelCount(),
                (int) descriptor.getBlockAlign(),
                (float) descriptor.getAverageBytesPerSecond(),
                false
        );
        AudioInputStream input = new AudioInputStream(
                new ByteArrayInputStream(buff.array()), format, buff.capacity());
        AudioSystem.write(input, AudioFileFormat.Type.WAVE, file);
        log("Extracted file " + file);
        return file;
    } catch (EndOfDataException | IllegalArgumentException | IOException e1) {
        log(e1);
        return null;
    }
}
It looks like I was supplying the length in bytes instead of sample frames.
When I divide the buffer size by the frame size, the non-tolerant apps stop complaining.
AudioInputStream input = new AudioInputStream(
new ByteArrayInputStream(buff.array()), format, buff.capacity() / format.getFrameSize());
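That matches the constructor's contract: the length argument of AudioInputStream(InputStream, AudioFormat, long) is a count of sample frames, and AudioSystem.write uses it to fill in the RIFF and data chunk sizes. For 16-bit stereo PCM, for example, a frame is 4 bytes, so a 1,000,000-byte buffer is 250,000 frames; passing the byte count makes the header claim four times more audio than the file actually contains, which is exactly what the "truncated" warnings are pointing at.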
For a project I'm working on, I want to be able to concatenate multiple .wav files.
Through my research I was able to come up with this code:
File sample1 = new File("F:\\Programming\\Resources\\Java_Sound\\trumpet1.wav");
File sample2 = new File("F:\\Programming\\Resources\\Java_Sound\\trumpet2.wav");
File fileOut = new File("F:\\Programming\\Resources\\Java_Sound\\Test.wav");
AudioInputStream audio1 = AudioSystem.getAudioInputStream(sample1);
AudioInputStream audio2 = AudioSystem.getAudioInputStream(sample2);
AudioInputStream audioBuild = new AudioInputStream(new SequenceInputStream(audio1, audio2), audio1.getFormat(), audio1.getFrameLength() + audio2.getFrameLength());
//for(int i = 0; i < 5; i++){
// audioBuild = new AudioInputStream(new SequenceInputStream(audioBuild, audio2), audioBuild.getFormat(), audioBuild.getFrameLength() + audio2.getFrameLength());
//}
AudioSystem.write(audioBuild, AudioFileFormat.Type.WAVE, fileOut);
It works fine for combining two .wav files; however, when I uncomment the for loop, the produced .wav file only plays the audio from the first concatenation. The track appears to end early: Windows Media Player's duration bar only goes about 1/5 of the way across the screen.
I've assumed that the problem is with the header in the created .wav file. I've researched many different web pages discussing how the header is constructed, but they all had slightly different definitions, and all said the header values should be in hex, while the bytes I saw when dumping the stream (not the audio stream, a standard FileInputStream) were in decimal. Additionally, after the RIFF part and before the WAVE part is supposed to be the size of the whole file, not including the first 8 bytes, yet some of my values included hyphens, and to be honest I have no clue what those mean. Ignoring them, though, the size in the test file after uncommenting the code above is still a larger number.
So after researching both how to concatenate multiple audio files and how to create/manage .wav headers, I still have no clue why the rest of my audio isn't playing, if it even exists. Any help is greatly appreciated. Thanks in advance.
It might be because the input streams cannot be read more than once. Once you have read an input stream it is at its end, and further attempts to read it return no more bytes.
This should work with a slight modification: keep creating new audio input streams in your loop.
File sample1 = new File("f1.wav");
File sample2 = new File("f2.wav");
File fileOut = new File("combined.wav");
AudioInputStream audio1 = AudioSystem.getAudioInputStream(sample1);
AudioInputStream audio2 = AudioSystem.getAudioInputStream(sample2);
AudioInputStream audioBuild = new AudioInputStream(new SequenceInputStream(audio1, audio2), audio1.getFormat(), audio1.getFrameLength() + audio2.getFrameLength());
for (int i = 0; i < 5; i++)
{
    audioBuild = new AudioInputStream(
            new SequenceInputStream(audioBuild,
                    /* keep creating new input streams */
                    AudioSystem.getAudioInputStream(sample2)),
            audioBuild.getFormat(),
            audioBuild.getFrameLength() + audio2.getFrameLength());
}
AudioSystem.write(audioBuild, AudioFileFormat.Type.WAVE, fileOut);
Also, ensure your audio formats for the files are exactly the same. That is, same sample rate, same channel count, same bits per sample. Otherwise you'll need additional code to do sample conversion.
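If they do differ, one option is to ask Java Sound to convert the second clip into the first clip's format before appending. A rough sketch (the JDK's built-in converters typically handle bit depth, channel count and endianness but not sample-rate changes, so this is not guaranteed for every combination):
// Sketch: convert the second file to the first file's format before concatenating.
AudioInputStream first = AudioSystem.getAudioInputStream(sample1);
AudioInputStream second = AudioSystem.getAudioInputStream(
        first.getFormat(), AudioSystem.getAudioInputStream(sample2));
AudioInputStream joined = new AudioInputStream(
        new SequenceInputStream(first, second),
        first.getFormat(),
        first.getFrameLength() + second.getFrameLength());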
This is what I used to join any number of WAV files. I loop through a list of the string paths of the WAV files, and each time I join the previous resulting AudioInputStream with the next clip.
List<String> paths; // the paths of the WAV files to join
AudioInputStream clip1 = null;
for (String path : paths)
{
    if (clip1 == null)
    {
        clip1 = AudioSystem.getAudioInputStream(new File(path));
        continue;
    }
    AudioInputStream clip2 = AudioSystem.getAudioInputStream(new File(path));
    AudioInputStream appendedFiles = new AudioInputStream(
            new SequenceInputStream(clip1, clip2),
            clip1.getFormat(),
            clip1.getFrameLength() + clip2.getFrameLength());
    clip1 = appendedFiles;
}
AudioSystem.write(clip1, AudioFileFormat.Type.WAVE, new File("exported.wav"));
After extensive research into the subject I have reached a brick wall.
All I want to do is append a collection of .wav files into a byte array, one after another, and output them all as one newly created .wav file. I extract all of the .wav data into the byte array, skipping each .wav header and going straight for the data, but when it comes to writing it to the newly created .wav file I get errors like:
Error1: javax.sound.sampled.UnsupportedAudioFileException: could not get audio input stream from input stream
Error2: could not get audio input stream from input stream
The code is:
try
{
    String path = "*********";
    String path2 = path + "newFile.wav";
    File filePath = new File(path);
    File NewfilePath = new File(path2);
    String[] folderContent = filePath.list();
    int FileSize = 0;
    for (int i = 0; i < folderContent.length; i++)
    {
        RandomAccessFile raf = new RandomAccessFile(path + folderContent[i], "r");
        FileSize = FileSize + (int) raf.length();
    }
    byte[] FileBytes = new byte[FileSize];
    for (int i = 0; i < folderContent.length; i++)
    {
        RandomAccessFile raf = new RandomAccessFile(path + folderContent[i], "r");
        raf.skipBytes(44);
        raf.read(FileBytes);
        raf.close();
    }
    boolean success = NewfilePath.createNewFile();
    InputStream byteArray = new ByteArrayInputStream(FileBytes);
    AudioInputStream ais = AudioSystem.getAudioInputStream(byteArray);
    AudioSystem.write(ais, Type.WAVE, NewfilePath);
}
Your byte array doesn't contain any header information, which probably means AudioSystem doesn't think it is really WAV data.
Can you create a suitable header for your combined data?
Update: This question might hold the answer for you.
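One way to avoid building the 44-byte header by hand is to wrap the raw PCM bytes in an AudioInputStream together with an AudioFormat that describes them, and let AudioSystem.write produce the header. A minimal sketch, assuming every source file is 44100 Hz, 16-bit, stereo, little-endian PCM (adjust this to whatever the files really contain; they must all share one format):
// Assumed format: 44100 Hz, 16-bit signed, stereo, little-endian PCM.
AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
AudioInputStream ais = new AudioInputStream(
        new ByteArrayInputStream(FileBytes),       // raw sample data, headers already skipped
        format,
        FileBytes.length / format.getFrameSize()); // length in sample frames, not bytes
AudioSystem.write(ais, Type.WAVE, NewfilePath);    // write() generates the WAV header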
My goal is to play an mp3 file from Java. Every approach I have taken fails with a LineUnavailableException.
AudioInputStream inputStream = AudioSystem.getAudioInputStream(new URL("http://localhost:8080/agriserver/facebook/sound/test6.mp3"));
Clip clip = AudioSystem.getClip(info);
clip.open(inputStream);
clip.start();
Failed attempts to fix it:
Use Sun's mp3 plugin.
Use the JLayer 3rd-party library
Use the Tritonus 3rd-party library
Re-encode the mp3 with Sony Sound Forge and Adobe Sound Booth; no luck either way
Re-encode the mp3 with different encoding rates and sampling rates
Try to use JMF
Use a random mp3 from the Internet that plays fine in other applications
Read postings with the same error. None of the postings have an answer that helped resolve the issue.
Here is the exception:
Exception in thread "main" javax.sound.sampled.LineUnavailableException: line with format MPEG1L3 48000.0 Hz, unknown bits per sample, stereo, unknown frame size, 41.666668 frames/second, not supported.
at com.sun.media.sound.DirectAudioDevice$DirectDL.implOpen(DirectAudioDevice.java:494)
at com.sun.media.sound.DirectAudioDevice$DirectClip.implOpen(DirectAudioDevice.java:1280)
at com.sun.media.sound.AbstractDataLine.open(AbstractDataLine.java:107)
at com.sun.media.sound.DirectAudioDevice$DirectClip.open(DirectAudioDevice.java:1061)
at com.sun.media.sound.DirectAudioDevice$DirectClip.open(DirectAudioDevice.java:1151)
at Demo.playMp3(Demo.java:83)
Apparently, the mp3 has to be read into one stream, and that stream has to be wrapped in a second stream that decodes it. The code below worked:
// read the file
AudioInputStream rawInput = AudioSystem.getAudioInputStream(new ByteArrayInputStream(data));
// decode mp3
AudioFormat baseFormat = rawInput.getFormat();
AudioFormat decodedFormat = new AudioFormat(
AudioFormat.Encoding.PCM_SIGNED, // Encoding to use
baseFormat.getSampleRate(), // sample rate (same as base format)
16, // sample size in bits (thx to Javazoom)
baseFormat.getChannels(), // # of Channels
baseFormat.getChannels()*2, // Frame Size
baseFormat.getSampleRate(), // Frame Rate
false // Big Endian
);
AudioInputStream decodedInput = AudioSystem.getAudioInputStream(decodedFormat, rawInput);
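From there the decoded stream can be fed to an ordinary PCM line. A rough sketch of the playback step, assuming an MP3 decoder service provider (e.g. JLayer/MP3SPI) is on the classpath so that the conversion above actually succeeds:
// Play the decoded PCM stream on a SourceDataLine.
SourceDataLine line = AudioSystem.getSourceDataLine(decodedFormat);
line.open(decodedFormat);
line.start();
byte[] buffer = new byte[4096];
int bytesRead;
while ((bytesRead = decodedInput.read(buffer, 0, buffer.length)) != -1) {
    line.write(buffer, 0, bytesRead); // blocks until the line has space
}
line.drain(); // let the remaining audio play out
line.close();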
OK - Let's start by ruling out your MP3 files and your code.
1. Pick an MP3 file that you have and play it with any MP3 player.
2. Download http://www.javazoom.net/javalayer/sources/jlayer1.0.1.zip
3. Extract jl1.0.1.jar from the zip file and put it on your classpath.
4. Cut and paste the code at the end of this answer into your dev environment.
5. Compile and run, making sure the mp3 file from step 1 is passed as the program argument. (In my case that was "C:\\Users\\romain\\Music\\Al DiMeola\\Elegant Gypsy\\01 Flight over Rio Al DiMeola.mp3")
I tested this and it works fine.
import java.io.BufferedInputStream;
import java.io.FileInputStream;

import javazoom.jl.player.Player;

public class MP3 {
    private String filename;
    private Player player;

    // constructor that takes the name of an MP3 file
    public MP3(String filename) {
        this.filename = filename;
    }

    public void close() {
        if (player != null) player.close();
    }

    // play the MP3 file to the sound card
    public void play() {
        try {
            FileInputStream fis = new FileInputStream(filename);
            BufferedInputStream bis = new BufferedInputStream(fis);
            player = new Player(bis);
        } catch (Exception e) {
            System.out.println("Problem playing file " + filename);
            System.out.println(e);
        }

        // run in new thread to play in background
        new Thread() {
            public void run() {
                try {
                    player.play();
                } catch (Exception e) {
                    System.out.println(e);
                }
            }
        }.start();
    }

    // test client
    public static void main(String[] args) {
        String filename = args[0];
        MP3 mp3 = new MP3(filename);
        mp3.play();
    }
}