I've been trying to figure out, for the life of me, how to cut a certain number of milliseconds out of a sample. My first problem is that when I paste this code into Eclipse, it tells me AudioInputStream, AudioFileFormat and FileFormat are not available; I can't even import them. If I can't use FileFormat, then I can't pass its results between methods to compute how much audio to save. But more importantly, how can I change the parameter "secondsToCopy" into "millisecondsToCopy" without ruining the integrity of the algorithm? Still new to Java, thank you for your help! (The code below was taken from: cutting a wave file.)
//edit// FileFormat fileFormat = AudioSystem.getAudioFileFormat(file); It seems I can't use this bit of code on Android: getAudioFileFormat doesn't exist there. Nor can I calculate the frame rate, because of int bytesPerSecond = format.getFrameSize() * (int) format.getFrameRate();. How do I properly get the format and frame rate with MediaPlayer, so that I can calculate bytesPerSecond and secondsToCopy?
AudioSystem is part of JavaSound, and JavaSound is part of the desktop JVM/SDK. JavaSound is NOT present in the Android SDK, so your old code will not compile on any current Android SDK. Is that why I get an exception (java.lang.NoClassDefFoundError) when using this code on Android?
How can I determine the file format, frame rate and so on without the JavaSound SDK? I can tell you now that the sounds are 44100 Hz and 16-bit. In the Get Info pane of the recording file it says total bit rate: 128,000, with two audio channels. I'm not good with encoding, but I'm recording the audio with a Mac program called WireTap Pro, and it reports those parameters.
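For the Android edit above: one way to read the sample rate and channel count without JavaSound is android.media.MediaExtractor (API 16+). This is only a hedged sketch, untested against these recordings; for compressed input (e.g. MP3) the computed value is the decoded PCM byte rate, not the file's byte rate.

import android.media.MediaExtractor;
import android.media.MediaFormat;
import java.io.IOException;

public class AudioInfo {
    // Sketch: query the first track's sample rate and channel count, then
    // derive bytesPerSecond for 16-bit PCM (sampleRate * channels * 2).
    public static int pcmBytesPerSecond(String path) throws IOException {
        MediaExtractor extractor = new MediaExtractor();
        try {
            extractor.setDataSource(path);
            MediaFormat format = extractor.getTrackFormat(0); // first track
            int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
            int channels = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
            return sampleRate * channels * 2; // assumes 16-bit samples
        } finally {
            extractor.release();
        }
    }
}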
import java.io.*;
import javax.sound.sampled.*;

class AudioFileProcessor {

    public static void main(String[] args) {
        copyAudio("/tmp/uke.wav", "/tmp/uke-shortened.wav", 0, 1);
    }

    public static void copyAudio(String sourceFileName, String destinationFileName, int startSecond, int secondsToCopy) {
        AudioInputStream inputStream = null;
        AudioInputStream shortenedStream = null;
        try {
            File file = new File(sourceFileName);
            AudioFileFormat fileFormat = AudioSystem.getAudioFileFormat(file);
            AudioFormat format = fileFormat.getFormat();
            inputStream = AudioSystem.getAudioInputStream(file);
            int bytesPerSecond = format.getFrameSize() * (int) format.getFrameRate();
            inputStream.skip(startSecond * bytesPerSecond);
            long framesOfAudioToCopy = secondsToCopy * (int) format.getFrameRate();
            shortenedStream = new AudioInputStream(inputStream, format, framesOfAudioToCopy);
            File destinationFile = new File(destinationFileName);
            AudioSystem.write(shortenedStream, fileFormat.getType(), destinationFile);
        } catch (Exception e) {
            System.err.println(e);
        } finally {
            if (inputStream != null) try { inputStream.close(); } catch (Exception e) { System.err.println(e); }
            if (shortenedStream != null) try { shortenedStream.close(); } catch (Exception e) { System.err.println(e); }
        }
    }
}
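For the milliseconds question: a minimal sketch of the same algorithm with the parameters converted to milliseconds (untested; it uses the same JavaSound calls as above and converts milliseconds to whole frames so the copy stays frame-aligned):

public static void copyAudioMillis(String sourceFileName, String destinationFileName,
                                   int startMillisecond, int millisecondsToCopy) {
    AudioInputStream inputStream = null;
    AudioInputStream shortenedStream = null;
    try {
        File file = new File(sourceFileName);
        AudioFileFormat fileFormat = AudioSystem.getAudioFileFormat(file);
        AudioFormat format = fileFormat.getFormat();
        inputStream = AudioSystem.getAudioInputStream(file);
        // getFrameRate() is frames per second, so divide by 1000 for frames per ms.
        float framesPerMs = format.getFrameRate() / 1000f;
        long framesToSkip = (long) (startMillisecond * framesPerMs);
        inputStream.skip(framesToSkip * format.getFrameSize());
        long framesOfAudioToCopy = (long) (millisecondsToCopy * framesPerMs);
        shortenedStream = new AudioInputStream(inputStream, format, framesOfAudioToCopy);
        AudioSystem.write(shortenedStream, fileFormat.getType(), new File(destinationFileName));
    } catch (Exception e) {
        System.err.println(e);
    } finally {
        if (inputStream != null) try { inputStream.close(); } catch (Exception e) { System.err.println(e); }
        if (shortenedStream != null) try { shortenedStream.close(); } catch (Exception e) { System.err.println(e); }
    }
}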
I want to find out whether two audio files are the same, or whether one contains the other. For this I use the fingerprinting of musicg:
byte[] firstAudio = readAudioFileData("first.mp3");
byte[] secondAudio = readAudioFileData("second.mp3");
FingerprintSimilarityComputer fingerprint =
new FingerprintSimilarityComputer(firstAudio, secondAudio);
FingerprintSimilarity fingerprintSimilarity = fingerprint.getFingerprintsSimilarity();
System.out.println("clip is found at " + fingerprintSimilarity.getScore());
To convert the audio to a byte array I use the JavaSound API:
public static byte[] readAudioFileData(final String filePath) {
byte[] data = null;
try {
final ByteArrayOutputStream baout = new ByteArrayOutputStream();
final File file = new File(filePath);
final AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(file);
byte[] buffer = new byte[4096];
int c;
while ((c = audioInputStream.read(buffer, 0, buffer.length)) != -1) {
baout.write(buffer, 0, c);
}
audioInputStream.close();
baout.close();
data = baout.toByteArray();
} catch (Exception e) {
e.printStackTrace();
}
return data;
}
But when I execute it, I get an exception at fingerprint.getFingerprintsSimilarity():
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 15999
at com.musicg.fingerprint.PairManager.getPairPositionList(PairManager.java:133)
at com.musicg.fingerprint.PairManager.getPair_PositionList_Table(PairManager.java:80)
at com.musicg.fingerprint.FingerprintSimilarityComputer.getFingerprintsSimilarity(FingerprintSimilarityComputer.java:71)
at Main.main(Main.java:42)
How can I compare two MP3 files by fingerprint in Java?
I never did any audio stuff in Java before, but I looked into your code briefly. I think that musicg only works for WAV files, not for MP3, so you need to convert the files first. A web search reveals that you can use e.g. JLayer for that purpose. The corresponding code looks like this:
package de.scrum_master.so;
import com.musicg.fingerprint.FingerprintManager;
import com.musicg.fingerprint.FingerprintSimilarity;
import com.musicg.fingerprint.FingerprintSimilarityComputer;
import com.musicg.wave.Wave;
import javazoom.jl.converter.Converter;
import javazoom.jl.decoder.JavaLayerException;
public class Application {
public static void main(String[] args) throws JavaLayerException {
// MP3 to WAV
new Converter().convert("White Wedding.mp3", "White Wedding.wav");
new Converter().convert("Poison.mp3", "Poison.wav");
// Fingerprint from WAV
byte[] firstFingerPrint = new FingerprintManager().extractFingerprint(new Wave("White Wedding.wav"));
byte[] secondFingerPrint = new FingerprintManager().extractFingerprint(new Wave("Poison.wav"));
// Compare fingerprints
FingerprintSimilarity fingerprintSimilarity = new FingerprintSimilarityComputer(firstFingerPrint, secondFingerPrint).getFingerprintsSimilarity();
System.out.println("Similarity score = " + fingerprintSimilarity.getScore());
}
}
Of course you should make sure that you do not convert each file again every time the program starts, i.e. you should check whether the WAV files already exist. I skipped this step and reduced the sample code to a minimal working version.
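For that existence check, a minimal sketch (using the same JLayer Converter as above and java.io.File):

// Only convert if the WAV file does not exist yet.
File wav = new File("White Wedding.wav");
if (!wav.exists()) {
    new Converter().convert("White Wedding.mp3", wav.getPath());
}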
FingerprintSimilarityComputer(input1, input2) is supposed to take the fingerprints of the loaded audio data, not the loaded audio data itself.
In your case, it should be:
// Convert your audio to wav using FFMpeg
Wave w1 = new Wave("first.wav");
Wave w2 = new Wave("second.wav");
FingerprintSimilarityComputer fingerprint =
    new FingerprintSimilarityComputer(w1.getFingerprint(), w2.getFingerprint());
System.out.println(fingerprint.getFingerprintsSimilarity().getScore());
Maybe I am missing a point, but if I understood you right, this should do:
byte[] firstAudio = readAudioFileData("first.mp3");
byte[] secondAudio = readAudioFileData("second.mp3");
byte[] smaller = firstAudio.length <= secondAudio.length ? firstAudio : secondAudio;
byte[] bigger = firstAudio.length > secondAudio.length ? firstAudio : secondAudio;
int ixS = 0;
int ixB = 0;
boolean contains = false;
for (; ixB < bigger.length; ixB++) {
    if (smaller[ixS] == bigger[ixB]) {
        ixS++;
        if (ixS == smaller.length) {
            contains = true;
            break;
        }
    } else {
        // On a mismatch, back up and restart the match one byte later.
        ixB -= ixS;
        ixS = 0;
    }
}
if (contains) {
    if (smaller.length == bigger.length) {
        System.out.println("Both tracks are equal");
    } else {
        System.out.println("The bigger track fully contains the smaller track starting at byte: "
                + (ixB - smaller.length + 1));
    }
} else {
    System.out.println("No track completely contains the other track");
}
I am currently trying to write a jukebox-like application in Java that is able to play any audio source possible, but encountered some difficulties when trying to play radio streams.
For playback I use JLayer from JavaZoom, which works fine as long as the target is a direct media file or a direct media stream (I can play PCM, MP3 and OGG just fine). However, I encounter difficulties with radio streams that either contain pre-media data like an m3u/pls file (which I could fix by adding a detection beforehand), or that are streamed on port 80 where a web page exists at the same location and the media transmitted depends on the type of request. In the latter case, whenever I try to stream the media, I get the HTML data instead.
Example link of a stream that is hidden behind a web-page: http://stream.t-n-media.de:8030
This is playable in VLC, but if you put it into a browser or my application you'll receive an HTML file.
Is there:
A ready-made, free solution that I could use in place of JLayer? Preferably open source so I can study it?
A tutorial that can help me to write a solution on my own?
Or can someone give me an example on how to properly detect/request a media stream?
Thanks in advance!
import java.io.*;
import java.net.*;
import javax.sound.sampled.*;
import javax.sound.midi.*;
/**
* This class plays sounds streaming from a URL: it does not have to preload
* the entire sound into memory before playing it. It is a command-line
* application with no gui. It includes code to convert ULAW and ALAW
* audio formats to PCM so they can be played. Use the -m command-line option
* before MIDI files.
*/
public class PlaySoundStream {
// Create a URL from the command-line argument and pass it to the
// right static method depending on the presence of the -m (MIDI) option.
public static void main(String[ ] args) throws Exception {
if (args[0].equals("-m")) streamMidiSequence(new URL(args[1]));
else streamSampledAudio(new URL(args[0]));
// Exit explicitly.
// This is needed because the audio system starts background threads.
System.exit(0);
}
/** Read sampled audio data from the specified URL and play it */
public static void streamSampledAudio(URL url)
throws IOException, UnsupportedAudioFileException,
LineUnavailableException
{
AudioInputStream ain = null; // We read audio data from here
SourceDataLine line = null; // And write it here.
try {
// Get an audio input stream from the URL
ain=AudioSystem.getAudioInputStream(url);
// Get information about the format of the stream
AudioFormat format = ain.getFormat( );
DataLine.Info info=new DataLine.Info(SourceDataLine.class,format);
// If the format is not supported directly (i.e. if it is not PCM
// encoded), then try to transcode it to PCM.
if (!AudioSystem.isLineSupported(info)) {
// This is the PCM format we want to transcode to.
// The parameters here are audio format details that you
// shouldn't need to understand for casual use.
AudioFormat pcm =
new AudioFormat(format.getSampleRate( ), 16,
format.getChannels( ), true, false);
// Get a wrapper stream around the input stream that does the
// transcoding for us.
ain = AudioSystem.getAudioInputStream(pcm, ain);
// Update the format and info variables for the transcoded data
format = ain.getFormat( );
info = new DataLine.Info(SourceDataLine.class, format);
}
// Open the line through which we'll play the streaming audio.
line = (SourceDataLine) AudioSystem.getLine(info);
line.open(format);
// Allocate a buffer for reading from the input stream and writing
// to the line. Make it large enough to hold 4k audio frames.
// Note that the SourceDataLine also has its own internal buffer.
int framesize = format.getFrameSize( );
byte[ ] buffer = new byte[4 * 1024 * framesize]; // the buffer
int numbytes = 0; // how many bytes
// We haven't started the line yet.
boolean started = false;
for(;;) { // We'll exit the loop when we reach the end of stream
// First, read some bytes from the input stream.
int bytesread=ain.read(buffer,numbytes,buffer.length-numbytes);
// If there were no more bytes to read, we're done.
if (bytesread == -1) break;
numbytes += bytesread;
// Now that we've got some audio data to write to the line,
// start the line, so it will play that data as we write it.
if (!started) {
line.start( );
started = true;
}
// We must write bytes to the line in an integer multiple of
// the framesize. So figure out how many bytes we'll write.
int bytestowrite = (numbytes/framesize)*framesize;
// Now write the bytes. The line will buffer them and play
// them. This call will block until all bytes are written.
line.write(buffer, 0, bytestowrite);
// If we didn't have an integer multiple of the frame size,
// then copy the remaining bytes to the start of the buffer.
int remaining = numbytes - bytestowrite;
if (remaining > 0)
System.arraycopy(buffer,bytestowrite,buffer,0,remaining);
numbytes = remaining;
}
// Now block until all buffered sound finishes playing.
line.drain( );
}
finally { // Always relinquish the resources we use
if (line != null) line.close( );
if (ain != null) ain.close( );
}
}
// A MIDI protocol constant that isn't defined by javax.sound.midi
public static final int END_OF_TRACK = 47;
/** Read MIDI or RMF data from the specified URL and play it */
public static void streamMidiSequence(URL url)
throws IOException, InvalidMidiDataException, MidiUnavailableException
{
Sequencer sequencer=null; // Converts a Sequence to MIDI events
Synthesizer synthesizer=null; // Plays notes in response to MIDI events
try {
// Create, open, and connect a Sequencer and Synthesizer
// They are closed in the finally block at the end of this method.
sequencer = MidiSystem.getSequencer( );
sequencer.open( );
synthesizer = MidiSystem.getSynthesizer( );
synthesizer.open( );
sequencer.getTransmitter( ).setReceiver(synthesizer.getReceiver( ));
// Specify the InputStream to stream the sequence from
sequencer.setSequence(url.openStream( ));
// This is an arbitrary object used with wait and notify to
// prevent the method from returning before the music finishes
final Object lock = new Object( );
// Register a listener to make the method exit when the stream is
// done. See Object.wait( ) and Object.notify( )
sequencer.addMetaEventListener(new MetaEventListener( ) {
public void meta(MetaMessage e) {
if (e.getType( ) == END_OF_TRACK) {
synchronized(lock) {
lock.notify( );
}
}
}
});
// Start playing the music
sequencer.start( );
// Now block until the listener above notifies us that we're done.
synchronized(lock) {
while(sequencer.isRunning( )) {
try { lock.wait( ); } catch(InterruptedException e) { }
}
}
}
finally {
// Always relinquish the sequencer, so others can use it.
if (sequencer != null) sequencer.close( );
if (synthesizer != null) synthesizer.close( );
}
}
}
I have used this piece of code in one of my projects that deals with audio streaming, and it was working just fine.
Furthermore, you can see similar examples here:
Java Audio Example
Just reading the javadoc of AudioSystem gave me an idea: there is another signature for getAudioInputStream, which takes an InputStream instead of a URL. So try to open the connection yourself and add the needed headers, so that you get the stream instead of the HTML content:
URLConnection uc = url.openConnection();
uc.setRequestProperty("<header name here>", "<header value here>");
InputStream in = uc.getInputStream();
AudioInputStream ain = AudioSystem.getAudioInputStream(in);
Hope this helps.
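Which headers matter depends on the server. Many SHOUTcast/Icecast-style hosts decide what to serve based on the User-Agent, so a first thing to try (an untested assumption; the header value is only an example) is:

// Assumption: the server serves HTML to browsers and the raw stream to
// media players, so present a player-like User-Agent.
URLConnection uc = url.openConnection();
uc.setRequestProperty("User-Agent", "VLC/2.0.1 LibVLC/2.0.1");
uc.setRequestProperty("Accept", "audio/mpeg");
InputStream in = uc.getInputStream();
AudioInputStream ain = AudioSystem.getAudioInputStream(in);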
I know this answer comes late, but I had the same issue: I wanted to play MP3 and AAC audio and also wanted the user to insert PLS/M3U links. Here is what I did:
First I tried to parse the type by using the simple file name:
import de.webradio.enumerations.FileExtension;
import java.net.URL;
public class FileExtensionParser {
/**
 * Parses a file extension.
 * @param filenameUrl the url
 * @return the detected extension; if it cannot be determined from the file name, Apache Tika detects it from the live content
 */
public FileExtension parseFileExtension(URL filenameUrl) {
String filename = filenameUrl.toString();
if (filename.endsWith(".mp3")) {
return FileExtension.MP3;
} else if (filename.endsWith(".m3u") || filename.endsWith(".m3u8")) {
return FileExtension.M3U;
} else if (filename.endsWith(".aac")) {
return FileExtension.AAC;
} else if(filename.endsWith((".pls"))) {
return FileExtension.PLS;
}
URLTypeParser parser = new URLTypeParser();
return parser.parseByContentDetection(filenameUrl);
}
}
If that fails, I use Apache Tika to do a kind of live detection:
public class URLTypeParser {
/**
 * This class uses Apache Tika to parse a URL using its content.
 *
 * @param url the webstream url
 * @return the detected file encoding: MP3, AAC or unsupported
 */
public FileExtension parseByContentDetection(URL url) {
try {
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
InputStream in = connection.getInputStream();
BodyContentHandler handler = new BodyContentHandler();
AudioParser parser = new AudioParser();
Metadata metadata = new Metadata();
parser.parse(in, handler, metadata);
return parseMediaType(metadata);
} catch (IOException e) {
e.printStackTrace();
} catch (TikaException e) {
e.printStackTrace();
} catch (SAXException e) {
e.printStackTrace();
}
return FileExtension.UNSUPPORTED_TYPE;
}
private FileExtension parseMediaType(Metadata metadata) {
String parsedMediaType = metadata.get("encoding");
if (parsedMediaType.equalsIgnoreCase("aac")) {
return FileExtension.AAC;
} else if (parsedMediaType.equalsIgnoreCase("mpeg1l3")) {
return FileExtension.MP3;
}
return FileExtension.UNSUPPORTED_TYPE;
}
}
This will also solve the HTML problem, since the method will return FileExtension.UNSUPPORTED_TYPE for HTML content.
I combined these classes with a factory pattern and it works fine; the live detection takes only about two seconds. I don't know whether this still helps you, but since I struggled with it for almost three weeks I wanted to provide a working answer. You can see the whole project on GitHub: https://github.com/Seppl2202/webradio
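The factory itself is not shown above; purely as an illustration (the dispatch method and the comments are hypothetical, not taken from the linked project), it could look like:

// Hypothetical sketch: dispatch on the detected FileExtension.
public static void playStream(URL url) throws Exception {
    FileExtension ext = new FileExtensionParser().parseFileExtension(url);
    switch (ext) {
        case MP3:
            // hand the stream to an MP3 decoder, e.g. JLayer's Player
            break;
        case AAC:
            // hand the stream to an AAC-capable decoder
            break;
        case M3U:
        case PLS:
            // download the playlist, extract the first entry, start over with it
            break;
        default:
            throw new IllegalArgumentException("Unsupported stream: " + url);
    }
}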
I have some problems finding out what I actually read with the AudioInputStream. The program below just prints the byte array I get, but I don't actually know whether the bytes are the samples, i.e. whether the byte array is the audio wave.
File fileIn;
AudioInputStream audio_in;
byte[] audioBytes;
int numBytesRead;
int numFramesRead;
int numBytes;
int totalFramesRead;
int bytesPerFrame;
try {
audio_in = AudioSystem.getAudioInputStream(fileIn);
bytesPerFrame = audio_in.getFormat().getFrameSize();
if (bytesPerFrame == AudioSystem.NOT_SPECIFIED) {
bytesPerFrame = 1;
}
numBytes = 1024 * bytesPerFrame;
audioBytes = new byte[numBytes];
try {
numBytesRead = 0;
numFramesRead = 0;
} catch (Exception ex) {
System.out.println("Something went completely wrong");
}
} catch (Exception e) {
System.out.println("Something went completely wrong");
}
and in some other part, I read some bytes with this:
try {
if ((numBytesRead = audio_in.read(audioBytes)) != -1) {
numFramesRead = numBytesRead / bytesPerFrame;
totalFramesRead += numFramesRead;
}
} catch (Exception e) {
System.out.println("Had problems reading new content");
}
So first of all, this code is not from me. This is my first time reading audio files, so I got some help from the interwebs (found the link: Java - reading, manipulating and writing WAV files; Stack Overflow, who would have known).
The question is: what do the bytes in audioBytes represent? Since the source is 44 kHz stereo, there have to be two waves hiding in there somewhere, am I right? So how do I filter the important information out of these bytes?
// EDIT
So what I added is this function:
public short[] Get_Sample() {
    if (samplesRead == 1024) {
        Read_Buffer();
        samplesRead = 4;
    } else {
        samplesRead = samplesRead + 4;
    }
    short sample[] = new short[2];
    // Little-endian: mask the low byte so it is not sign-extended.
    sample[0] = (short) ((audioBytes[samplesRead - 4] & 0xFF) | (audioBytes[samplesRead - 3] << 8));
    sample[1] = (short) ((audioBytes[samplesRead - 2] & 0xFF) | (audioBytes[samplesRead - 1] << 8));
    return sample;
}
where Read_Buffer() reads the next 1024 (or fewer) bytes and loads them into audioBytes. sample[0] is used for the left channel, sample[1] for the right. But I'm still not sure, since the waves I get from this look quite "noisy". (Edit: the WAV actually used little-endian byte order, so I had to change the calculation; the low byte also has to be masked with 0xFF so it is not sign-extended.)
The AudioInputStream read() method returns the raw audio data. You don't know the 'construction' of the data until you have read the audio format with getFormat(), which returns an AudioFormat. From the AudioFormat you can call getChannels(), getSampleSizeInBits() and more. This is because AudioInputStream is made for a known format.
When you calculate a sample value there are different possibilities regarding signedness and endianness of the data (in the case of 16-bit samples). To make your code more generic, use the AudioFormat object returned by the AudioInputStream to get more info about the data buffer:
getEncoding(): PCM_SIGNED, PCM_UNSIGNED ...
isBigEndian(): true or false
As you already discovered, incorrect sample building may lead to disturbed sound. If you work with various files it may cause problems in the future. If you won't provide support for some formats, just check what the AudioFormat says and throw an exception (e.g. javax.sound.sampled.UnsupportedAudioFileException). It will save you time.
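As a concrete illustration of that advice, a minimal sketch of 16-bit sample decoding that consults the stream's AudioFormat (signed PCM assumed):

import javax.sound.sampled.AudioFormat;

public class SampleDecoder {
    // Rebuild one signed 16-bit sample from two bytes, honoring the
    // endianness reported by AudioFormat. Assumes PCM_SIGNED data.
    static short decodeSample16(byte[] buf, int offset, AudioFormat format) {
        int lo, hi;
        if (format.isBigEndian()) {
            hi = buf[offset];              // most significant byte first
            lo = buf[offset + 1] & 0xFF;   // mask to avoid sign extension
        } else {
            hi = buf[offset + 1];          // least significant byte last
            lo = buf[offset] & 0xFF;
        }
        return (short) ((hi << 8) | lo);
    }
}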
I want to record video only (no audio) on Android, in MPEG-4 format. I want both the container and the codec to be MPEG-4. So here is what I have done for that:
Thread video = new Thread(new Runnable() {
public void run() {
videoRecorder = new MediaRecorder();
videoRecorder.setPreviewDisplay(surfaceView.getHolder().getSurface());
videoRecorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT);
videoRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
videoRecorder.setVideoEncodingBitRate(56 * 8 * 1024);
videoRecorder.setVideoSize(176, 144);
videoRecorder.setVideoFrameRate(12);
videoRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.MPEG_4_SP);
videoRecorder.setOutputFile("/sdcard/video.m4e");
try {
videoRecorder.prepare();
} catch (IllegalStateException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
videoRecorder.start();
}
});
video.start();
Now, after recording, I got the video recorded into the video.m4e file. But when I check its information, the container format is reported as 3GP.
At the same time I used the following to record audio:
Thread audio = new Thread(new Runnable() {
public void run() {
audioRecorder = new MediaRecorder();
audioRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
audioRecorder.setOutputFormat(MediaRecorder.OutputFormat.RAW_AMR);
audioRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
audioRecorder.setOutputFile("/sdcard/audio.amr");
try {
audioRecorder.prepare();
} catch (IllegalStateException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
audioRecorder.start();
}
});
audio.start();
and I got the container format and codec as AMR, as I intended.
So, what causes MediaRecorder to record video in 3GP format? I haven't specified 3GP anywhere in my program. I am testing this code on my Samsung Galaxy Tab running Android 2.2.
MPEG-4 is a method of defining compression of audio and visual (AV) digital data, and also a file format for storing time-based media content. It is a general format forming the basis for a number of other, more specific file formats (e.g. 3GP, Motion JPEG 2000, MPEG-4 Part 14). So there is no conspiracy in the result you got: the compression method (or codec) you used was MPEG-4, and the 3GP video format generated by your phone is in fact one of the file formats in the MPEG-4 suite.
This is defined by the framework. Specifically, the MediaRecorder class has a set of parameters that define what is supported.
I do not understand akkilis's explanation. In the official Android documentation for MediaRecorder.OutputFormat, MPEG_4 is one of the media file format options, parallel to THREE_GPP:
int AAC_ADTS AAC ADTS file format
int AMR_NB AMR NB file format
int AMR_WB AMR WB file format
int DEFAULT
int MPEG_4 MPEG4 media file format
int RAW_AMR This constant was deprecated in API level 16. Deprecated in favor of MediaRecorder.OutputFormat.AMR_NB
int THREE_GPP 3GPP media file format
There is another parameter to specify the encoding format, MediaRecorder.VideoEncoder:
int DEFAULT
int H263
int H264
int MPEG_4_SP
In the OP's code, both the file format and the encoding format are specified:
videoRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
videoRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.MPEG_4_SP);
So it seems akkilis's explanation does not hold for this example.
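Putting the two parameters together, a hedged sketch that requests the MPEG-4 container explicitly (the path is a placeholder, and H264 is used here only as an example codec; whether the device honors the container depends on its firmware, so the OP's Android 2.2 device may still fall back to 3GP):

MediaRecorder recorder = new MediaRecorder();
recorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4); // container
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);   // track codec
recorder.setOutputFile("/sdcard/video.mp4"); // extension matching the container
try {
    recorder.prepare();
    recorder.start();
} catch (IOException e) {
    e.printStackTrace();
}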