I'm working on a system to play, pause and stop music. I'm testing it with two different WAV files, one 45 seconds long and the other 3:35 minutes long. The problem I'm having is that the 45-second WAV file plays without any problem, while the 3:35-minute file doesn't load at all. Is there a maximum length for WAV files in Java, or is it possible the file is broken? It plays without any problem in the Windows app "Groove Music".
I've searched around on Stack Overflow, but no one seemed to have the same problem: one WAV file playing and the other not.
The error I'm getting:
javax.sound.sampled.LineUnavailableException: line with format PCM_FLOAT 44100.0 Hz, 32 bit, stereo, 8 bytes/frame, not supported.
The method I use for playing the WAV file:
public static void playAudio(String name) {
    try {
        System.out.println("NOTE: Playing audio");
        clip = AudioSystem.getClip();
        AudioInputStream inputStream = AudioSystem.getAudioInputStream(
                Engine.class.getResourceAsStream("/Audio/" + name));
        clip.open(inputStream);
        clip.start();
    } catch (Exception e) {
        System.out.println("ERROR: Failed to load audio");
        e.printStackTrace(); // without this, the actual exception is swallowed
    }
}
Calling the method
Engine.playAudio("easy2.wav");
Picture of the wav files in the "src/Audio/" folder
Given the error message, one thing you could try after opening the AudioInputStream from the resource is the following:
AudioFormat fmt = inputStream.getFormat();
fmt = new AudioFormat(fmt.getSampleRate(),
                      16,
                      fmt.getChannels(),
                      true,
                      true);
inputStream = AudioSystem.getAudioInputStream(fmt, inputStream);
This attempts to convert the stream away from a floating-point format, hopefully to something that the system is more likely to support. (Also see JavaDoc for AudioSystem.getAudioInputStream(AudioFormat, AudioInputStream).)
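Put together with the playAudio method from the question, the conversion might look like the sketch below (the Engine class, clip field, and "/Audio/" resource path are taken from the question; the BufferedInputStream wrapper is added because getAudioInputStream needs a stream with mark/reset support):

```java
import javax.sound.sampled.*;
import java.io.BufferedInputStream;
import java.io.InputStream;

public class Engine {
    static Clip clip;

    public static void playAudio(String name) throws Exception {
        InputStream raw = Engine.class.getResourceAsStream("/Audio/" + name);
        // getAudioInputStream(InputStream) requires mark/reset support
        AudioInputStream in =
                AudioSystem.getAudioInputStream(new BufferedInputStream(raw));
        AudioFormat src = in.getFormat();
        // Request 16-bit signed PCM at the same sample rate and channel count,
        // converting away from the unsupported PCM_FLOAT encoding
        AudioFormat target = new AudioFormat(src.getSampleRate(), 16,
                src.getChannels(), true, true);
        AudioInputStream converted = AudioSystem.getAudioInputStream(target, in);
        clip = AudioSystem.getClip();
        clip.open(converted);
        clip.start();
    }
}
```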
You can run the following code to find out what formats are likely available for playback:
Arrays.stream(AudioSystem.getSourceLineInfo(new Line.Info(SourceDataLine.class)))
      .filter(info -> info instanceof DataLine.Info)
      .map(info -> (DataLine.Info) info)
      .flatMap(info -> Arrays.stream(info.getFormats()))
      .forEach(System.out::println);
If the above method of converting the audio stream doesn't work, then your best bet is probably to convert the file with an editor such as Audacity. Java Sound is unfortunately still pretty limited in the formats it supports by default. (The most reliable format is probably CDDA: 44100 Hz, 16-bit signed LPCM.)
There may also be third-party SPIs that support conversions from floating-point PCM.
The problem I was facing originated in the WAV file itself. A WAV file can be encoded in one of several different formats, and the one this file used was not supported by Java. The solution was to change the bit depth and sample rate of the WAV file to match an encoding that Java supports.
More about wav formats: https://en.wikipedia.org/wiki/WAV
Related
Hey, I wrote a program which cuts a specific area out of a WAV file.
But I realized that the cut is very abrupt, so I want to fade it in and out. My problem is that I have no idea how to achieve that in Java, because I'm very new to Java's sound library.
Could someone give me a hint or a tip on how to achieve that, or point me to a resource where I can find the answer?
Here is some code I wrote before:
AudioInputStream in = null;
AudioInputStream out = null;
File originalFile = new File(filePath);
if (originalFile.exists() && originalFile.isFile()) {
    File editedFile = new File(newPath);
    try {
        in = AudioSystem.getAudioInputStream(originalFile);
        AudioFileFormat fileFormat = AudioSystem.getAudioFileFormat(originalFile);
        AudioFormat format = fileFormat.getFormat();
        int bytesPerSecond = format.getFrameSize() * (int) format.getFrameRate();
        // skip() may skip fewer bytes than requested, so loop until done
        long toSkip = (long) start * bytesPerSecond;
        while (toSkip > 0) {
            long skipped = in.skip(toSkip);
            if (skipped <= 0) break;
            toSkip -= skipped;
        }
        long framesOfAudioToCopy = trackDuration * (int) format.getFrameRate();
        // out is the audio stream which contains the output wav file
        out = new AudioInputStream(in, format, framesOfAudioToCopy);
        // so I guess here would be the right place to fade the audio file,
        // just before writing it to the disk
        AudioSystem.write(out, fileFormat.getType(), editedFile);
        System.out.println("Trimming done!");
    } catch (UnsupportedAudioFileException e) {
        errorMessage = e.getMessage();
    } catch (IOException e) {
        errorMessage = e.getMessage();
    }
}
You can try the controls provided in javax.sound.sampled (a tutorial is here), but I've never had much luck with them. In my experience, even when they exist on a given system (it's PC/OS dependent), there are still issues, as volume changes only take effect at buffer boundaries.
Note that the very last part of the tutorial suggests manipulating the audio directly. To do this requires multiple steps.
1) get a hold of the individual bytes of the sound file
There is a code example of this in the very next tutorial, Using Files and Format Converters, in the section "Reading Sound Files". In that example, note the comment marking the point where access to the individual bytes has been provided:
// Here, do something useful with the audio data that's
// now in the audioBytes array...
2) Convert bytes to PCM (depends on the audio format)
3) multiply by a volume factor
4) increment or decrement the factor (if fading)
5) convert the PCM back to bytes
6) pack and ship via a SourceDataLine (again depends on audio format)
All of these steps have been described in greater detail on StackOverflow before and should be searchable, though I don't know how easy they will be to find at this point.
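As a concrete illustration of steps 2 through 5, here is a minimal sketch that applies a linear fade-out to a buffer of 16-bit PCM bytes. The assumptions: the byte[] holds raw, headerless samples, and every 16-bit value is treated as one sample, so interleaved stereo channels share (almost) the same ramp:

```java
public class FadeExample {

    // Apply a linear fade-out across an entire buffer of 16-bit PCM bytes.
    static void fadeOut(byte[] pcm, boolean bigEndian) {
        int numSamples = pcm.length / 2;
        for (int i = 0; i < numSamples; i++) {
            int lo = bigEndian ? 2 * i + 1 : 2 * i;
            int hi = bigEndian ? 2 * i : 2 * i + 1;
            // step 2: two bytes -> one signed 16-bit PCM sample
            int sample = (pcm[hi] << 8) | (pcm[lo] & 0xFF);
            // steps 3 and 4: volume factor ramps linearly from 1 down to 0
            double factor = 1.0 - (double) i / numSamples;
            sample = (int) (sample * factor);
            // step 5: sample -> bytes
            pcm[hi] = (byte) (sample >> 8);
            pcm[lo] = (byte) sample;
        }
    }
}
```

A fade-in is the same loop with the factor reversed; for step 6, the modified buffer is then written to a SourceDataLine (or wrapped back into an AudioInputStream) in the matching format.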
There are a couple of free libraries that allow real-time volume fading. I wrote AudioCue for this (and for real-time frequency change and panning), and there is also TinySound.
PS: I'm happy to answer questions and take suggestions for improving the presentation of the library I wrote.
What I'm trying now (don't mind the syntax; it's Kotlin):
audioFormat = AudioFormat(8000f, 8, 2, true, false)
mixerInfo = AudioSystem.getMixerInfo()
mixer = AudioSystem.getMixer(mixerInfo[0]) // have tried all
dataLineInfo = DataLine.Info(TargetDataLine::class.java, audioFormat)
dataLine = mixer.getLine(dataLineInfo) as TargetDataLine // IllegalArgumentException
dataLine.open()
dataLine.start()
AudioSystem.write(AudioInputStream(dataLine), AudioFileFormat.Type.WAVE, File("1.wav"))
Unfortunately, an IllegalArgumentException is thrown at mixer.getLine:
java.lang.IllegalArgumentException: Line unsupported: interface TargetDataLine supporting format PCM_SIGNED 8000.0 Hz, 8 bit, stereo, 2 bytes/frame, little-endian
I've tried all the available mixers and audio formats, still with no luck. The only mixer that works is the microphone mixer; all the output mixers cause the exception.
I've also tried to detect supported audio formats using AudioSystem.isLineSupported, but I couldn't find a single format supported by the output mixers: it returns false for every format I've checked.
So, is it possible to record the sound I hear from speakers?
PS: There is capture software that can perform such a recording. For example, SnagIt 12 can record audio from my sound card, and there are lots of similar apps.
Does anyone have an example of how I can play a 24-bit/192 kHz HD FLAC file with JustFLAC?
JustFLAC is a fork of jFLAC and claims it can play these types of files.
package org.kc7bfi.jflac.apps;

class Player {
    public static void main(String[] args) {
        try {
            Player decoder = new Player();
            // FLAC HDTracks 24-192
            String f = "hdflacfile.flac";
            decoder.decode(f);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Throws this exception:
Exception in thread "main" java.lang.IllegalArgumentException: No line matching interface SourceDataLine supporting format PCM_SIGNED 192000.0 Hz, 24 bit, stereo, 6 bytes/frame, little-endian is supported.
I have tried a lot of files.
I'm on Windows 8 with Java 6.
JustFLAC or similar "small" packages are what I need information about.
What is happening is that the JustFLAC code is reporting the audio format of the FLAC file as 'PCM_SIGNED 192000.0 Hz, 24 bit, stereo, 6 bytes/frame, little-endian' (which looks correct).
The player code will then ask the output device for a SourceDataLine matching this format so that it can write the decoded data to the line. However, the output device is throwing an exception saying that it does not support this format.
This may be because the actual device does not support the format, or because the Java Sound API does not. Certainly, on the Mac version of Java 6 the Java Sound API did not support 24-bit output; this was changed in Java 7 (and 8). Testing on my Mac with Java 8, a 24-bit 192 kHz file plays OK.
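Whether a given machine can open such a line can be checked up front with AudioSystem.isLineSupported; the format values in this sketch are copied from the exception message:

```java
import javax.sound.sampled.*;

public class CheckFormat {
    public static void main(String[] args) {
        // PCM_SIGNED 192000.0 Hz, 24 bit, stereo, little-endian
        AudioFormat fmt = new AudioFormat(192000f, 24, 2, true, false);
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, fmt);
        System.out.println("24-bit/192kHz output supported: "
                + AudioSystem.isLineSupported(info));
    }
}
```

If this prints false, the decoded stream would have to be converted down (e.g. to 16-bit) before playback.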
I have a problem converting bytes to an .mp3 sound file. I write the bytes with FileOutputStream's write(bytes) method, but that just creates a data file with an .mp3 extension that no player on my PC can play.
Note: I'm recording from a Flex Microphone and sending the ByteArray to Java.
Which libraries should I use to add the MP3 headers etc. in Java?
UPDATE: I couldn't even convert my raw data to the WAVE format that the Java Sound API supports. It creates a file containing the recorded sound, but with noise. Where's the problem?
Here's my code for wave:
AudioFormat format = new AudioFormat(Encoding.PCM_SIGNED, 44100, 16, 2, 2, 44100, true);
ByteArrayInputStream bais = new ByteArrayInputStream(bytes);
AudioInputStream stream = new AudioInputStream(bais, format, bytes.length/format.getFrameSize());
AudioSystem.write(stream, AudioFileFormat.Type.WAVE, new File(path+"zzz.wav"));
What's wrong with my AudioFormat? And which one do I have to use in the MP3 case?
Any help will be highly appreciated!
Just writing the raw bytes to a file with the extension .mp3 doesn't magically convert the data to the MP3 format. You need an encoder to compress it.
A quick Google search found LAMEOnJ, a Java API for the popular LAME MP3 encoder.
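Separately, regarding the noisy WAVE output from the update: a 16-bit stereo frame is 4 bytes (2 channels × 2 bytes per sample), but the AudioFormat in the question declares a 2-byte frame size, which throws off the stream's frame count and alignment. A corrected sketch (big-endian here matches the original code, but the actual byte order of the Flex ByteArray has to be confirmed):

```java
import javax.sound.sampled.*;
import javax.sound.sampled.AudioFormat.Encoding;
import java.io.*;

public class WaveWriter {
    public static void writePcmAsWave(byte[] bytes, File out) throws IOException {
        // 44.1 kHz, 16-bit, stereo: frame size = 2 channels * 2 bytes = 4 bytes
        AudioFormat format = new AudioFormat(
                Encoding.PCM_SIGNED, 44100f, 16, 2, 4, 44100f, true);
        ByteArrayInputStream bais = new ByteArrayInputStream(bytes);
        AudioInputStream stream = new AudioInputStream(
                bais, format, bytes.length / format.getFrameSize());
        AudioSystem.write(stream, AudioFileFormat.Type.WAVE, out);
    }
}
```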
I need to split a FLAC file into many pieces. I'm using the jFLAC library to read FLAC files:
FLACDecoder decoder = new FLACDecoder(inputStream);
Then I try to decode the parent file between two SeekPoints:
decoder.decode(seekPointFrom, seekPointTo);
I also don't quite understand how to properly get these seek points for a given value in seconds. For example, I need the first seek point at 0 seconds and the second at 150 seconds. How do I get the right SeekPoint objects? The SeekPoint constructor is:
/**
 * The constructor.
 * @param sampleNumber The sample number of the target frame
 * @param streamOffset The offset, in bytes, of the target frame with respect to the beginning of the first frame
 * @param frameSamples The number of samples in the target frame
 */
public SeekPoint(long sampleNumber, long streamOffset, int frameSamples) {
    this.sampleNumber = sampleNumber;
    this.streamOffset = streamOffset;
    this.frameSamples = frameSamples;
}
The decoder also has a listener that is called for every chunk that is read:
@Override
public void processPCM(ByteData pcm) {
    try {
        outputStream.write(pcm.getData(), 0, pcm.getLen());
    } catch (IOException e) {
        e.printStackTrace();
    }
}
When writing is done, I try to play the new FLAC file, but my player reports that the file is invalid. What do I need to do so that my FLAC files open correctly? Do I need to write some header to the file, or something else?
In regards to FLAC SeekPoints, there is no guarantee that there will be one corresponding to a given second; there might be only a few SeekPoints in the entire audio file.
As such, I recently updated jFLAC with a seek function to get you to at least the closest audio frame:
https://github.com/nguillaumin/jflac/commit/585940af97157eb11b60c15cc8cb13ef3fc27ce3
In regards to writing out a new file: the decoded data will be raw PCM samples, so you will need to pipe it into a FLAC encoder if you want a valid FLAC file as output. Alternatively, you could write out a Wave header, dump the raw PCM samples, and then convert the resulting Wave file to FLAC.
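A minimal sketch of the Wave-header route: collect the PCM chunks from processPCM into a buffer instead of writing them straight to disk, then let AudioSystem.write add the header. The format values here are placeholders; the real sample rate, bit depth and channel count come from the FLAC stream's STREAMINFO metadata:

```java
import javax.sound.sampled.*;
import java.io.*;

public class FlacPieceWriter {
    public static void main(String[] args) throws IOException {
        // Collect the raw chunks from processPCM() here instead of
        // writing them straight to a FileOutputStream
        ByteArrayOutputStream pcmBuffer = new ByteArrayOutputStream();
        pcmBuffer.write(new byte[8]); // stands in for decoded FLAC samples

        // Placeholder format: substitute the values from STREAMINFO
        AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
        byte[] pcm = pcmBuffer.toByteArray();
        AudioInputStream stream = new AudioInputStream(
                new ByteArrayInputStream(pcm), format,
                pcm.length / format.getFrameSize());

        // AudioSystem.write produces a proper Wave header plus the samples
        File out = File.createTempFile("piece", ".wav");
        AudioSystem.write(stream, AudioFileFormat.Type.WAVE, out);
        System.out.println("Wrote " + out.length() + " bytes to " + out);
    }
}
```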