I need to split a FLAC file into many pieces. I am using the jFLAC library to read FLAC files:
FLACDecoder decoder = new FLACDecoder(inputStream);
Then I try to decode the parent file between two SeekPoints:
decoder.decode(seekPointFrom, seekPointTo);
I also don't quite understand how to properly obtain these seek points for a given time in seconds. For example, I need the first seek point at 0 seconds and the second at 150 seconds. How do I construct the right SeekPoint objects? The SeekPoint constructor is:
/**
 * The constructor.
 * @param sampleNumber The sample number of the target frame
 * @param streamOffset The offset, in bytes, of the target frame with respect to beginning of the first frame
 * @param frameSamples The number of samples in the target frame
 */
public SeekPoint(long sampleNumber, long streamOffset, int frameSamples) {
    this.sampleNumber = sampleNumber;
    this.streamOffset = streamOffset;
    this.frameSamples = frameSamples;
}
The decoder also has a listener that is called for every decoded chunk:
@Override
public void processPCM(ByteData pcm) {
    try {
        outputStream.write(pcm.getData(), 0, pcm.getLen());
    } catch (IOException e) {
        e.printStackTrace();
    }
}
When writing is done I try to play the new FLAC file, but my player reports that the file is invalid. What do I need to do so that my FLAC files open correctly? Do I need to write some header to the file, or something else?
In regards to the FLAC SeekPoints, there is no guarantee that there will be one that corresponds to a given second - there might only be a few SeekPoints in the entire audio file.
As such, I recently updated jFLAC with a seek function to get you to at least the closest audio frame:
https://github.com/nguillaumin/jflac/commit/585940af97157eb11b60c15cc8cb13ef3fc27ce3
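To target a position given in seconds, the usual approach is to convert seconds to a sample number using the stream's sample rate from the STREAMINFO block, then seek to the nearest frame. A minimal sketch, assuming jFLAC's StreamInfo exposes the rate via getSampleRate():

// Convert a position in seconds to a FLAC sample number.
long sampleRate = streamInfo.getSampleRate();
long fromSample = 0L * sampleRate;   // 0 seconds
long toSample   = 150L * sampleRate; // 150 seconds
// Note: the streamOffset and frameSamples of a hand-built SeekPoint are
// generally unknown, which is why seeking to the nearest frame (as in the
// commit above) is more reliable than constructing SeekPoints yourself.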
In regards to writing out a new file, the decoded data will be in raw PCM samples. So you will need to pipe it into a FLAC encoder if you want a valid FLAC file as the output. Alternately you could write out a Wave header and dump the raw PCM samples, then convert that resultant Wave file into a FLAC file.
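For the second approach, here is a minimal sketch of wrapping the collected raw PCM in a Wave container using javax.sound.sampled. The format values below are assumptions and must match the source FLAC's actual sample rate, bit depth, and channel count:

// pcmBytes holds the raw samples collected by processPCM(), e.g. via a ByteArrayOutputStream.
byte[] pcmBytes = outputStream.toByteArray();
AudioFormat format = new AudioFormat(44100f, 16, 2, true, false); // assumed: 44.1 kHz, 16-bit, stereo, signed, little-endian
AudioInputStream ais = new AudioInputStream(
        new ByteArrayInputStream(pcmBytes), format,
        pcmBytes.length / format.getFrameSize());
AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File("piece.wav"));
// The resulting Wave file can then be fed to any FLAC encoder.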
Related
Hey, I wrote a program which cuts a specific section out of a wav file.
But I realized that the cut is very abrupt, so I want to fade it in and out. My problem is that I have no idea how to achieve that in Java, because I'm very new to Java's sound library.
Could someone give me a hint or a tip on how to achieve that, or point me to a resource where I can find the answer?
Here is some code I wrote before:
AudioInputStream in = null;
AudioInputStream out = null;
File originalFile = new File(filePath);
if (originalFile.exists() && originalFile.isFile())
{
    File editedFile = new File(newPath);
    try
    {
        in = AudioSystem.getAudioInputStream(originalFile);
        AudioFileFormat fileFormat = AudioSystem.getAudioFileFormat(originalFile);
        AudioFormat format = fileFormat.getFormat();
        int bytesPerSecond = format.getFrameSize() * (int) format.getFrameRate();
        in.skip(start * bytesPerSecond);
        long framesOfAudioToCopy = trackDuration * (int) format.getFrameRate();
        // out is the audio stream which contains the output wav file
        out = new AudioInputStream(in, format, framesOfAudioToCopy);
        // so I guess here would be the right place to fade the audio file
        // just before writing it to the disk
        AudioSystem.write(out, fileFormat.getType(), editedFile);
        System.out.println("Trimming done!");
        System.out.println();
    }
    catch (UnsupportedAudioFileException e)
    {
        errorMessage = e.getMessage();
    }
    catch (IOException e)
    {
        errorMessage = e.getMessage();
    }
}
You can try using the controls provided in javax.sound.sampled (there is a tutorial here), but I've never had much luck with them. My experience is that even when they exist for a given system (they are PC/OS dependent), there are still issues, since the volume changes only take effect at buffer boundaries.
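For reference, a minimal sketch of the control-based approach, assuming the line supports a master-gain control:

// clip is an opened Clip or SourceDataLine.
if (clip.isControlSupported(FloatControl.Type.MASTER_GAIN)) {
    FloatControl gain = (FloatControl) clip.getControl(FloatControl.Type.MASTER_GAIN);
    gain.setValue(-10.0f); // attenuate by 10 dB; the valid range is line-dependent
}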
Note that the very last part of the tutorial suggests manipulating the audio directly. Doing this requires multiple steps:
1) get a hold of the individual bytes of the sound file
There is a code example of this in the very next tutorial, Using Files and Format Converters, in the section "Reading Sound Files". In that example, note the comment marking the point where access to the individual bytes has been provided:
// Here, do something useful with the audio data that's
// now in the audioBytes array...
2) Convert bytes to PCM (depends on the audio format)
3) multiply by a volume factor
4) increment or decrement the factor (if fading)
5) convert the PCM back to bytes
6) pack and ship via a SourceDataLine (again depends on audio format)
All of these steps have been described in greater detail on StackOverflow before and should be searchable, though I don't know how easy they will be to find at this point. A minimal sketch of steps 2 through 5 follows below.
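Here is a sketch of steps 2 through 5 for a linear fade-out, assuming signed 16-bit little-endian PCM (the layout of audioBytes is an assumption based on the tutorial's example):

// Fade out a buffer of signed 16-bit little-endian PCM in place.
public static void fadeOut(byte[] audioBytes) {
    for (int i = 0; i + 1 < audioBytes.length; i += 2) {
        // 2) convert two bytes into one PCM sample
        int sample = (audioBytes[i + 1] << 8) | (audioBytes[i] & 0xFF);
        // 3) and 4) apply a volume factor that decreases across the buffer
        double factor = 1.0 - (double) i / audioBytes.length;
        sample = (int) (sample * factor);
        // 5) convert the PCM sample back into bytes
        audioBytes[i] = (byte) (sample & 0xFF);
        audioBytes[i + 1] = (byte) ((sample >> 8) & 0xFF);
    }
}

For stereo data, step through the buffer one frame (all channels) at a time so that every channel in a frame receives the same factor.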
There are a couple of free libraries that allow real-time volume fading. I wrote AudioCue for this (plus real-time frequency and panning), and there is also TinySound.
PS I am happy to answer questions and take suggestions for improvements in presentation for the library I wrote.
I'm working on a system to play, pause and stop music. I'm testing this with 2 different wav files: one 45 seconds long, the other 3:35 minutes long. The problem is that the 45-second wav file plays without any problem, while the 3:35-minute wav file doesn't load. Is there a maximum length for wav files in Java, or is it possible the wav file is broken? It plays without any problem in the Windows app Groove Music.
I've searched around on Stack Overflow, but no one seemed to experience the same problem I am having: one wav file playing, the other not.
The error I'm getting:
javax.sound.sampled.LineUnavailableException: line with format PCM_FLOAT 44100.0 Hz, 32 bit, stereo, 8 bytes/frame, not supported.
The method I use for playing the wav file:
// 'clip' is a Clip field declared elsewhere in the class.
public static void playAudio(String name) {
    try {
        System.out.println("NOTE: Playing audio");
        clip = AudioSystem.getClip();
        AudioInputStream inputStream = AudioSystem.getAudioInputStream(
                Engine.class.getResourceAsStream("/Audio/" + name));
        clip.open(inputStream);
        clip.start();
    } catch (Exception e) {
        System.out.println("ERROR: Failed to load audio");
    }
}
Calling the method
Engine.playAudio("easy2.wav");
Picture of the wav files in the "src/Audio/" folder
Given the error message, one thing you could try, after opening the AudioInputStream from the resource, is the following:
AudioFormat fmt = inputStream.getFormat();
fmt = new AudioFormat(fmt.getSampleRate(),
                      16,
                      fmt.getChannels(),
                      true,
                      true);
inputStream = AudioSystem.getAudioInputStream(fmt, inputStream);
This attempts to convert the stream away from a floating-point format, hopefully to something that the system is more likely to support. (Also see JavaDoc for AudioSystem.getAudioInputStream(AudioFormat, AudioInputStream).)
You can run the following code to find out what formats are likely available for playback:
Arrays.stream(AudioSystem.getSourceLineInfo(new Line.Info(SourceDataLine.class)))
.filter(info -> info instanceof DataLine.Info)
.map(info -> (DataLine.Info) info)
.flatMap(info -> Arrays.stream(info.getFormats()))
.forEach(System.out::println);
If the above method of converting the audio stream doesn't work, then probably your best bet is to just convert the file with some editor, such as Audacity. Java sound is unfortunately still pretty limited in what formats it supports by default. (The most reliable format is probably CDDA, which is 44100Hz, 16-bit signed LPCM.)
There may also be 3rd-party SPIs which support conversions from floating-point PCM.
The problem I was facing originated in the wav file itself. Since a wav file can have one of several different formats, the one mine was encoded with was not compatible with Java. The solution was to change the bit depth and sample rate of the wav file to match an encoding that Java supports.
More about wav formats: https://en.wikipedia.org/wiki/WAV
I'm trying simply to convert a .mov file into .webm using Xuggler, which should work as FFMPEG supports .webm files.
This is my code:
IMediaReader reader = ToolFactory.makeReader("/home/user/vids/2.mov");
reader.addListener(ToolFactory.makeWriter("/home/user/vids/2.webm", reader));
while (reader.readPacket() == null);
System.out.println( "Finished" );
On running this, I get this error:
[main] ERROR org.ffmpeg - [libvorbis @ 0x8d7fafe0] Specified sample_fmt is not supported.
[main] WARN com.xuggle.xuggler - Error: could not open codec (../../../../../../../csrc/com/xuggle/xuggler/StreamCoder.cpp:831)
Exception in thread "main" java.lang.RuntimeException: could not open stream com.xuggle.xuggler.IStream@-1921013728[index:1;id:0;streamcoder:com.xuggle.xuggler.IStreamCoder@-1921010088[codec=com.xuggle.xuggler.ICodec@-1921010232[type=CODEC_TYPE_AUDIO;id=CODEC_ID_VORBIS;name=libvorbis;];time base=1/44100;frame rate=0/0;sample rate=44100;channels=1;];framerate:0/0;timebase:1/90000;direction:OUTBOUND;]: Operation not permitted
at com.xuggle.mediatool.MediaWriter.openStream(MediaWriter.java:1192)
at com.xuggle.mediatool.MediaWriter.getStream(MediaWriter.java:1052)
at com.xuggle.mediatool.MediaWriter.encodeAudio(MediaWriter.java:830)
at com.xuggle.mediatool.MediaWriter.onAudioSamples(MediaWriter.java:1441)
at com.xuggle.mediatool.AMediaToolMixin.onAudioSamples(AMediaToolMixin.java:89)
at com.xuggle.mediatool.MediaReader.dispatchAudioSamples(MediaReader.java:628)
at com.xuggle.mediatool.MediaReader.decodeAudio(MediaReader.java:555)
at com.xuggle.mediatool.MediaReader.readPacket(MediaReader.java:469)
at com.mycompany.xugglertest.App.main(App.java:13)
Java Result: 1
Any ideas?
There's a funky thing going on with Xuggler where it doesn't always allow you to set the sample rate of IAudioSamples. You'll need to use an IAudioResampler.
Took me a while to figure this out. This post by Marty helped a lot, though his code is outdated now.
Here's how you fix it.
Before encoding
I'm assuming here that audio input has been properly set up, resulting in an IStreamCoder called audioCoder.
After that's done, you are probably creating an IMediaWriter and adding an audio stream like so:
final IMediaWriter oggWriter = ToolFactory.makeWriter(oggOutputFile);
// Using stream 1 'cause there is also a video stream.
// For an audio only file you should use stream 0.
oggWriter.addAudioStream(1, 1, ICodec.ID.CODEC_ID_VORBIS,
audioCoder.getChannels(), audioCoder.getSampleRate());
Now create an IAudioResampler:
IAudioResampler oggResampler = IAudioResampler.make(audioCoder.getChannels(),
audioCoder.getChannels(),
audioCoder.getSampleRate(),
audioCoder.getSampleRate(),
IAudioSamples.Format.FMT_FLT,
audioCoder.getSampleFormat());
And tell your IMediaWriter to update to its sample format:
// The stream 1 here is consistent with the stream we added earlier.
oggWriter.getContainer().getStream(1).getStreamCoder().
setSampleFormat(IAudioSamples.Format.FMT_FLT);
During encoding
You are probably currently creating an IAudioSamples and filling it with audio data, like so:
IAudioSamples audioSample = IAudioSamples.make(512, audioCoder.getChannels(),
audioCoder.getSampleFormat());
int bytesDecoded = audioCoder.decodeAudio(audioSample, packet, offset);
Now create an IAudioSamples for the resampled data:
IAudioSamples vorbisSample = IAudioSamples.make(512, audioCoder.getChannels(),
IAudioSamples.Format.FMT_FLT);
Finally, resample the audio data and write the result:
oggResampler.resample(vorbisSample, audioSample, 0);
oggWriter.encodeAudio(1, vorbisSample);
Final thought
Just a hint to get your output files to play well:
If you use audio and video within the same container, then audio and video data packets should be written in an order such that the timestamp of each packet is higher than that of the previous one. So you are almost certainly going to need some kind of buffering mechanism that alternates between writing audio and video; a rough sketch of such a buffer follows.
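A minimal, Xuggler-agnostic sketch of that buffering idea (the Packet type and writer callback here are hypothetical placeholders, not Xuggler API):

import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical stand-in for an encoded audio or video chunk.
class Packet {
    final long timestamp;
    final boolean isAudio;
    Packet(long timestamp, boolean isAudio) {
        this.timestamp = timestamp;
        this.isAudio = isAudio;
    }
}

class InterleavingBuffer {
    private final Queue<Packet> audio = new ArrayDeque<Packet>();
    private final Queue<Packet> video = new ArrayDeque<Packet>();

    void add(Packet p) {
        (p.isAudio ? audio : video).add(p);
    }

    // Emit packets in timestamp order, but only while both queues hold data,
    // so no packet is written before an earlier-stamped one still to come.
    void drainTo(PacketWriter writer) {
        while (!audio.isEmpty() && !video.isEmpty()) {
            Queue<Packet> next =
                audio.peek().timestamp <= video.peek().timestamp ? audio : video;
            writer.write(next.poll());
        }
    }

    interface PacketWriter {
        void write(Packet p);
    }
}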
In my Android application I am recording the user's voice which I save as a .3gp encoded audio file.
What I want to do is open it up, i.e. obtain the sequence x[n] of audio samples, in order to perform some audio signal analysis.
Does anyone know how I could go about doing this?
You can use the Android MediaCodec class to decode 3gp or other media files. The decoder output is a standard PCM byte array. You can send this output directly to the Android AudioTrack class for playback, or continue with the byte array for further processing such as DSP. To apply a DSP algorithm, the byte array must be transformed into a float/double array. There are several steps to get the byte-array output. In summary it looks as follows:
Instantiate MediaCodec
String mMime = "audio/3gpp";
MediaCodec mMediaCodec = MediaCodec.createDecoderByType(mMime);
Create the media format and configure the media codec
// The decoder must be configured with the format of the actual file;
// MediaExtractor reads that format from the 3gp container.
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(filePath); // path to the recorded .3gp file
MediaFormat mMediaFormat = extractor.getTrackFormat(0);
extractor.selectTrack(0);
mMediaCodec.configure(mMediaFormat, null, null, 0);
mMediaCodec.start();
Capture output from MediaCodec (this should be processed inside a thread)
ByteBuffer[] mOutputBuffers = mMediaCodec.getOutputBuffers();
MediaCodec.BufferInfo buf_info = new MediaCodec.BufferInfo();
int outputBufferIndex = mMediaCodec.dequeueOutputBuffer(buf_info, 0);
if (outputBufferIndex >= 0) { // a negative index means "no output yet" or a state change
    byte[] pcm = new byte[buf_info.size];
    mOutputBuffers[outputBufferIndex].get(pcm, 0, buf_info.size);
    mMediaCodec.releaseOutputBuffer(outputBufferIndex, false); // hand the buffer back
}
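Since the byte array must become floats for DSP, here is a minimal conversion sketch, assuming the decoder emits signed 16-bit little-endian PCM (the usual MediaCodec output format):

// Convert signed 16-bit little-endian PCM bytes to floats in [-1.0, 1.0].
float[] toFloats(byte[] pcm) {
    float[] out = new float[pcm.length / 2];
    for (int i = 0; i < out.length; i++) {
        short s = (short) ((pcm[2 * i + 1] << 8) | (pcm[2 * i] & 0xFF));
        out[i] = s / 32768f;
    }
    return out;
}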
This Google IO talk might be relevant here.
I am trying to capture audio from the line-in on my PC, and to do this I am using the AudioSystem class. The static AudioSystem.write method offers one of two choices: write to a file, or write to a stream. I can get it to write to a file just fine, but whenever I try to write to a stream I get a java.io.IOException ("stream length not specified"). For my buffer I am using a ByteArrayOutputStream. Is there another kind of stream I am supposed to be using, or am I messing up somewhere else?
Also, on a related subject: one can sample the line-in (TargetDataLine) directly by calling read. Is this the preferred way of doing audio capture, or should one use AudioSystem?
Update
Source code that was requested:
final private TargetDataLine line;
final private AudioFormat format;
final private AudioFileFormat.Type fileType;
final private AudioInputStream audioInputStream;
final private ByteArrayOutputStream bos;

// Constructor, etc.

public void run()
{
    System.out.println("AudioWorker Started");
    try
    {
        line.open(format);
        line.start();
        // This commented part is regarding the second part
        // of my question
        // byte[] buff = new byte[512];
        // int bytes = line.read(buff, 0, buff.length);
        AudioSystem.write(audioInputStream, fileType, bos);
    }
    catch ( Exception e )
    {
        e.printStackTrace();
    }
    System.out.println("AudioWorker Finished");
}
// Stack trace in console
AudioWorker Started
java.io.IOException: stream length not specified
at com.sun.media.sound.WaveFileWriter.write(Unknown Source)
at javax.sound.sampled.AudioSystem.write(Unknown Source)
at AudioWorker.run(AudioWorker.java:41)
AudioWorker Finished
From AudioSystem.write JavaDoc:
Writes a stream of bytes representing an audio file of the specified file type to the output stream provided. Some file types require that the length be written into the file header; such files cannot be written from start to finish unless the length is known in advance. An attempt to write a file of such a type will fail with an IOException if the length in the audio file type is AudioSystem.NOT_SPECIFIED.
Since the Wave format requires the length to be written at the beginning of the file, the writer queries the getFrameLength method of your AudioInputStream. When this returns NOT_SPECIFIED (because you're recording "live" data of as-yet-unknown length), the writer throws the exception.
The File-oriented version of the API works around this by writing dummy data to the length field, then re-opening the file when the write is complete and overwriting that area of the file.
Use an output format that doesn't need the length in advance (such as AU), use an AudioInputStream that returns a valid frame length, or use the File version of the API. A sketch of the first two options is below.
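A minimal sketch of the first two options, reusing the field names from the question (the fresh ByteArrayOutputStream for the Wave output is an assumption):

// Option 1: AU does not require the length up front, so streaming works as-is.
AudioSystem.write(audioInputStream, AudioFileFormat.Type.AU, bos);

// Option 2: wrap already-captured bytes in a stream with a known frame count,
// which lets the Wave writer fill in the header length.
byte[] recorded = bos.toByteArray();
AudioInputStream sized = new AudioInputStream(
        new ByteArrayInputStream(recorded), format,
        recorded.length / format.getFrameSize());
AudioSystem.write(sized, AudioFileFormat.Type.WAVE, new ByteArrayOutputStream());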
You should check out Richard Baldwin's tutorial on Java sound. There's a complete source listing at the bottom of the article where he uses TargetDataLine's read to capture audio.
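For reference, reading the line directly looks roughly like this (a sketch; running is an assumed stop flag, and line is an opened, started TargetDataLine):

// Pull raw bytes straight from the TargetDataLine instead of using AudioSystem.write.
byte[] buff = new byte[512];
ByteArrayOutputStream captured = new ByteArrayOutputStream();
while (running) {
    int n = line.read(buff, 0, buff.length);
    if (n > 0) {
        captured.write(buff, 0, n);
    }
}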
You could also try looking into JMF, which is a bit hairy but works a bit better than the javax.sound.sampled stuff. There are quite a few tutorials on the JMF page which describe how to record from the line-in or mic channels.