In my Android app, I'm trying to create a video file by adding an audio track at a given time position in the video.
I used a MediaMuxer and changed the value of presentationTimeUs to shift the audio.
But apparently this is not the way to go, because the starting time of the video is also shifted.
Another problem is that mp3 audio does not work.
Here is my attempt so far:
final long audioPositionUs = 10000000;
File fileOut = new File (Environment.getExternalStoragePublicDirectory (
Environment.DIRECTORY_MOVIES) + "/output.mp4");
fileOut.createNewFile ();
MediaExtractor videoExtractor = new MediaExtractor ();
MediaExtractor audioExtractor = new MediaExtractor ();
AssetFileDescriptor videoDescriptor = getAssets ().openFd ("video.mp4");
// AssetFileDescriptor audioDescriptor = getAssets ().openFd ("audio.mp3"); // ?!
AssetFileDescriptor audioDescriptor = getAssets ().openFd ("audio.aac");
videoExtractor.setDataSource (videoDescriptor.getFileDescriptor (),
videoDescriptor.getStartOffset (), videoDescriptor.getLength ());
audioExtractor.setDataSource (audioDescriptor.getFileDescriptor (),
audioDescriptor.getStartOffset (), audioDescriptor.getLength ());
MediaFormat videoFormat = null;
for (int i = 0; i < videoExtractor.getTrackCount (); i++) {
    if (videoExtractor.getTrackFormat (i).getString (
            MediaFormat.KEY_MIME).startsWith ("video/")) {
        videoExtractor.selectTrack (i);
        videoFormat = videoExtractor.getTrackFormat (i);
        break;
    }
}
audioExtractor.selectTrack (0);
MediaFormat audioFormat = audioExtractor.getTrackFormat (0);
MediaMuxer muxer = new MediaMuxer (fileOut.getAbsolutePath (),
MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
int videoTrack = muxer.addTrack (videoFormat);
int audioTrack = muxer.addTrack (audioFormat);
boolean end = false;
int sampleSize = 256 * 1024;
ByteBuffer videoBuffer = ByteBuffer.allocate (sampleSize);
ByteBuffer audioBuffer = ByteBuffer.allocate (sampleSize);
MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo ();
MediaCodec.BufferInfo audioBufferInfo = new MediaCodec.BufferInfo ();
videoExtractor.seekTo (0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
audioExtractor.seekTo (0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
muxer.start ();
while (!end) {
    videoBufferInfo.size = videoExtractor.readSampleData (videoBuffer, 0);
    if (videoBufferInfo.size < 0) {
        end = true;
        videoBufferInfo.size = 0;
    } else {
        videoBufferInfo.presentationTimeUs = videoExtractor.getSampleTime ();
        videoBufferInfo.flags = videoExtractor.getSampleFlags ();
        muxer.writeSampleData (videoTrack, videoBuffer, videoBufferInfo);
        videoExtractor.advance ();
    }
}
end = false;
while (!end) {
    audioBufferInfo.size = audioExtractor.readSampleData (audioBuffer, 0);
    if (audioBufferInfo.size < 0) {
        end = true;
        audioBufferInfo.size = 0;
    } else {
        audioBufferInfo.presentationTimeUs = audioExtractor.getSampleTime () +
                audioPositionUs;
        audioBufferInfo.flags = audioExtractor.getSampleFlags ();
        muxer.writeSampleData (audioTrack, audioBuffer, audioBufferInfo);
        audioExtractor.advance ();
    }
}
muxer.stop ();
muxer.release ();
Can you please give details (and code if possible) to help me solve this?
Send AudioRecord's samples to a MediaCodec + MediaMuxer wrapper. Using the system time at audioRecord.read(...) works sufficiently well as an audio timestamp, provided you poll often enough to avoid filling up AudioRecord's internal buffer (to avoid drift between the time you call read and the time AudioRecord recorded the samples). Too bad AudioRecord doesn't directly communicate timestamps...
// Setup AudioRecord
while (isRecording) {
    audioPresentationTimeNs = System.nanoTime();
    audioRecord.read(dataBuffer, 0, samplesPerFrame);
    hwEncoder.offerAudioEncoder(dataBuffer.clone(), audioPresentationTimeNs);
}
Note that AudioRecord only guarantees support for 16-bit PCM samples, while MediaCodec.queueInputBuffer takes its input as a byte[]. Passing a byte[] to audioRecord.read(dataBuffer, ...) will split the 16-bit samples into 8-bit bytes for you.
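If you prefer to handle the 16-bit samples explicitly, you can read into a short[] and pack the bytes yourself; a minimal sketch (names taken from the loop above, error handling omitted):

short[] samples = new short[samplesPerFrame];
int read = audioRecord.read(samples, 0, samplesPerFrame); // may also return a negative error code
// Pack the 16-bit samples into a little-endian byte[] for the codec's input buffer
byte[] bytes = new byte[read * 2];
ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(samples, 0, read);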
I found that polling in this way still occasionally generated a timestampUs XXX < lastTimestampUs XXX for Audio track error, so I included some logic to keep track of the bufferInfo.presentationTimeUs reported by mediaCodec.dequeueOutputBuffer(bufferInfo, timeoutMs) and adjust if necessary before calling mediaMuxer.writeSampleData(trackIndex, encodedData, bufferInfo).
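That adjustment is roughly this (a sketch; lastAudioTimeUs is a field you keep between writes, the other names come from the muxer call above):

// Muxer requires strictly increasing timestamps per track
if (bufferInfo.presentationTimeUs <= lastAudioTimeUs) {
    bufferInfo.presentationTimeUs = lastAudioTimeUs + 1;
}
lastAudioTimeUs = bufferInfo.presentationTimeUs;
mediaMuxer.writeSampleData(trackIndex, encodedData, bufferInfo);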
As a workaround you may create a temporary audio track by padding your audio track at the head with a silent track, and then use it with addTrack.
PS: I would have thought presentationTimeUs should work as well.
PS2: perhaps the set method of MediaCodec.BufferInfo may help.
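For example, your audio loop could fill the whole BufferInfo in one call (a minimal sketch reusing the variables from your code):

int size = audioExtractor.readSampleData(audioBuffer, 0);
// offset, size, shifted timestamp and flags in one go
audioBufferInfo.set(0, size,
        audioExtractor.getSampleTime() + audioPositionUs,
        audioExtractor.getSampleFlags());
muxer.writeSampleData(audioTrack, audioBuffer, audioBufferInfo);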
I'm trying to create a video of my relative layout by capturing it into an ArrayList of bitmaps, adding a new bitmap for every frame (30 FPS).
How can I produce a video from those bitmaps, or can I record the view directly?
Could anyone suggest the best way to get a video of the view/layout (RelativeLayout)?
You can use the Android MediaCodec API to create a video from the array of bitmaps. The steps to achieve this would be:
Create a MediaFormat object that defines the format of the video, such as the resolution, frame rate, bit rate, and encoding type.
Create a MediaCodec instance and configure it with the MediaFormat object.
Dequeue input and output buffers from the MediaCodec instance.
Encode the bitmaps in a loop, filling each input buffer, submitting it to the MediaCodec instance and retrieving the encoded output buffers.
Write the output buffers to a file using a MediaMuxer instance, adding the video track once the encoder reports its output format.
private fun createVideoFromBitmaps(bitmaps: ArrayList<Bitmap>, outputFilePath: String) {
    try {
        // Define the format of the video
        val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, bitmaps[0].width, bitmaps[0].height).apply {
            // Buffer input needs a YUV color format (COLOR_FormatSurface is only valid with an input Surface);
            // ideally query the codec for a supported format instead of hard-coding one
            setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar)
            setInteger(MediaFormat.KEY_BIT_RATE, 6000000)
            setInteger(MediaFormat.KEY_FRAME_RATE, 30)
            setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
        }
        // Create a MediaCodec instance and configure it with the MediaFormat
        val codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
        codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
        // Start the codec
        codec.start()
        // Create a MediaMuxer instance to write the video to a file
        val muxer = MediaMuxer(outputFilePath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
        var videoTrackIndex = -1
        var muxerStarted = false
        // Keep track of the presentation time for each frame
        var presentationTimeUs = 0L
        // Encode the bitmaps and write the encoded frames to the output file
        var inputDone = false
        var outputDone = false
        val bufferInfo = MediaCodec.BufferInfo()
        while (!outputDone) {
            if (!inputDone) {
                val inputBufferIndex = codec.dequeueInputBuffer(10000)
                if (inputBufferIndex >= 0) {
                    val inputBuffer = codec.getInputBuffer(inputBufferIndex)!!
                    inputBuffer.clear()
                    if (bitmaps.isEmpty()) {
                        // End of stream
                        codec.queueInputBuffer(inputBufferIndex, 0, 0, presentationTimeUs, MediaCodec.BUFFER_FLAG_END_OF_STREAM)
                        inputDone = true
                    } else {
                        // getNV21() is a helper (not shown) converting the ARGB bitmap to the codec's YUV layout
                        val frame = getNV21(bitmaps.removeAt(0))
                        inputBuffer.put(frame)
                        codec.queueInputBuffer(inputBufferIndex, 0, frame.size, presentationTimeUs, 0)
                        presentationTimeUs += 1000000L / 30
                    }
                }
            }
            var outputBufferIndex = codec.dequeueOutputBuffer(bufferInfo, 10000)
            while (outputBufferIndex >= 0 || outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                    // The track must be added with the encoder's actual output format before writing samples
                    videoTrackIndex = muxer.addTrack(codec.outputFormat)
                    muxer.start()
                    muxerStarted = true
                } else {
                    val outputBuffer = codec.getOutputBuffer(outputBufferIndex)!!
                    if (muxerStarted && bufferInfo.size > 0 &&
                        (bufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0) {
                        muxer.writeSampleData(videoTrackIndex, outputBuffer, bufferInfo)
                    }
                    codec.releaseOutputBuffer(outputBufferIndex, false)
                    if ((bufferInfo.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                        outputDone = true
                        break
                    }
                }
                outputBufferIndex = codec.dequeueOutputBuffer(bufferInfo, 10000)
            }
        }
        // Release the codec and muxer instances
        codec.stop()
        codec.release()
        if (muxerStarted) {
            muxer.stop()
        }
        muxer.release()
    } catch (e: Exception) {
        e.printStackTrace()
    }
}
Hope this will help you.
When I use MediaCodec to decode an AMR file, it outputs a byte buffer, but when I try to convert the byte buffer into an array of doubles, the app crashes.
I tried taking out a single byte from the byte buffer and the app crashes as well. Any operation on the byte buffer causes my app to crash.
decoder.start();
inputBuffers = decoder.getInputBuffers();
outputBuffers = decoder.getOutputBuffers();
end_of_input_file = false;
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
ByteBuffer data = readData(info);
int samplesRead = info.size;
byte[] bytesArray = new byte[data.remaining()];
bytesArray = getByteArrayFromByteBuffer(data);
Here the app crashes.
This is the readData method:
private ByteBuffer readData(MediaCodec.BufferInfo info) {
    if (decoder == null)
        return null;
    for (;;) {
        // Read data from the file into the codec.
        if (!end_of_input_file) {
            int inputBufferIndex = decoder.dequeueInputBuffer(10000);
            if (inputBufferIndex >= 0) {
                int size = mExtractor.readSampleData(inputBuffers[inputBufferIndex], 0);
                if (size < 0) {
                    // End Of File
                    decoder.queueInputBuffer(inputBufferIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    end_of_input_file = true;
                } else {
                    decoder.queueInputBuffer(inputBufferIndex, 0, size, mExtractor.getSampleTime(), 0);
                    mExtractor.advance();
                }
            }
        }
        // Read the output from the codec.
        if (outputBufferIndex >= 0)
            // Ensure that the data is placed at the start of the buffer
            outputBuffers[outputBufferIndex].position(0);
        outputBufferIndex = decoder.dequeueOutputBuffer(info, 10000);
        if (outputBufferIndex >= 0) {
            // Handle EOF
            if (info.flags != 0) {
                decoder.stop();
                decoder.release();
                decoder = null;
                return null;
            }
            // Release the buffer so MediaCodec can use it again.
            // The data should stay there until the next time we are called.
            decoder.releaseOutputBuffer(outputBufferIndex, false);
            return outputBuffers[outputBufferIndex];
        } else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
            // This usually happens once at the start of the file.
            outputBuffers = decoder.getOutputBuffers();
        }
    }
}
This is the getByteArrayFromByteBuffer method:
private static byte[] getByteArrayFromByteBuffer(ByteBuffer byteBuffer) {
    byte[] bytesArray = new byte[byteBuffer.remaining()];
    byteBuffer.get(bytesArray, 0, bytesArray.length);
    return bytesArray;
}
I would like to get the output of the decoder into a double array.
You have two bugs/misconceptions in your code.
First issue:
// Read the output from the codec.
if (outputBufferIndex >= 0)
    // Ensure that the data is placed at the start of the buffer
    outputBuffers[outputBufferIndex].position(0);
outputBufferIndex = decoder.dequeueOutputBuffer(info, 10000);
Before calling dequeueOutputBuffer, you can't know which output buffer it will return. And even then, you can't control where in the buffer the codec will place its output. The decoder will choose one buffer out of the available ones and place the output data wherever it wants within that buffer. (In practice it is almost always at the start anyway.)
If you want to set up the position/limit in any specific way within the output buffer, you do that after dequeueOutputBuffer has returned it to you.
Just remove the whole if statement and position() call here.
Second bug, which is the real major issue:
// Release the buffer so MediaCodec can use it again.
// The data should stay there until the next time we are called.
decoder.releaseOutputBuffer(outputBufferIndex, false);
return outputBuffers[outputBufferIndex];
You are only allowed to touch and use a buffer from outputBuffers after it has been returned by dequeueOutputBuffer, and only until you hand it back to the decoder with releaseOutputBuffer. As your comment says, after releaseOutputBuffer, MediaCodec can use it again. Your ByteBuffer variable is only a reference to the same memory that the codec will write new output into - it is not a copy. So you cannot call releaseOutputBuffer until you have finished using the ByteBuffer object.
Bonus point:
Before using the ByteBuffer, it might be good to set the position/limit for it, like this:
outputBuffers[outputBufferIndex].position(info.offset);
outputBuffers[outputBufferIndex].limit(info.offset + info.size);
Some decoders do this implicitly, but not necessarily all of them.
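Putting both fixes together, the output side of readData could look roughly like this (a sketch based on your variables; the key point is to copy the data out before releasing the buffer):

outputBufferIndex = decoder.dequeueOutputBuffer(info, 10000);
if (outputBufferIndex >= 0) {
    ByteBuffer outputBuffer = outputBuffers[outputBufferIndex];
    outputBuffer.position(info.offset);
    outputBuffer.limit(info.offset + info.size);
    // Copy the decoded PCM out while we still own the buffer
    byte[] pcm = new byte[info.size];
    outputBuffer.get(pcm);
    // Only now is it safe to hand the buffer back to the codec
    decoder.releaseOutputBuffer(outputBufferIndex, false);
    return ByteBuffer.wrap(pcm);
}

From the copied bytes you can then build your double array (for 16-bit PCM output, every two bytes form one sample).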
I'm trying to figure out how to make a JProgressBar fill up as a file is being read. More specifically I need to read in 2 files and fill 2 JProgressBars, and then stop after one of the files has been read.
I am having trouble understanding how to make that work with a file. With two threads I would just use a for (int i = 0; i < 100; i++) loop and setValue(i) to report the current progress. But with files I don't know how to set the progress. Maybe get the size of the file and try something with that? I am really not sure and was hoping someone could throw an idea or two my way.
Thank you!
Update for future readers:
I managed to solve it by using file.length(), which returns the size of the file in bytes, then setting the bar's range from 0 to that size instead of the default 100, and then using
for(int i = 0; i < fileSize; i++)
To get the bar loading like it should.
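In code, the idea is roughly this (a sketch rather than my exact code; the SwingWorker and buffer size are just one way to do it):

final File file = new File("path/to/file"); // placeholder path
final JProgressBar bar = new JProgressBar(0, (int) file.length());
SwingWorker<Void, Integer> worker = new SwingWorker<Void, Integer>() {
    @Override
    protected Void doInBackground() throws Exception {
        try (FileInputStream in = new FileInputStream(file)) {
            byte[] buffer = new byte[8192];
            long total = 0;
            int read;
            while ((read = in.read(buffer)) != -1) {
                total += read;
                publish((int) total); // report bytes read so far
            }
        }
        return null;
    }
    @Override
    protected void process(List<Integer> chunks) {
        // Runs on the EDT; show the latest progress value
        bar.setValue(chunks.get(chunks.size() - 1));
    }
};
worker.execute();

The second file/bar works the same way with another worker, and you stop whichever one finishes after the first file is done.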
Example usage of ProgressMonitorInputStream. It automatically displays a simple dialog with a progress bar if reading from the InputStream takes long enough; you can adjust that threshold with setMillisToPopup and setMillisToDecideToPopup.
public static void main(String[] args) {
    JFrame mainFrame = new JFrame();
    mainFrame.setSize(640, 480);
    mainFrame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
    mainFrame.setVisible(true);
    String filename = "Path to your filename"; // replace with real filename
    File file = new File(filename);
    try (FileInputStream inputStream = new FileInputStream(file);
         ProgressMonitorInputStream progressInputStream = new ProgressMonitorInputStream(mainFrame, "Reading file: " + filename, inputStream)) {
        byte[] buffer = new byte[10]; // Make this number bigger - 10240 bytes for example. 10 is there to show how that dialog looks like
        long totalReaded = 0;
        long totalSize = file.length();
        int readed = 0;
        while ((readed = progressInputStream.read(buffer)) != -1) {
            totalReaded += readed;
            progressInputStream.getProgressMonitor().setNote(String.format("%d / %d kB", totalReaded / 1024, totalSize / 1024));
            // Do something with data in buffer
        }
    } catch (IOException ex) {
        System.err.println(ex);
    }
}
I want to find out if two audio files are the same or if one contains the other.
For this I use the fingerprint feature of musicg:
byte[] firstAudio = readAudioFileData("first.mp3");
byte[] secondAudio = readAudioFileData("second.mp3");
FingerprintSimilarityComputer fingerprint =
new FingerprintSimilarityComputer(firstAudio, secondAudio);
FingerprintSimilarity fingerprintSimilarity = fingerprint.getFingerprintsSimilarity();
System.out.println("clip is found at " + fingerprintSimilarity.getScore());
To convert the audio to a byte array I use the Java Sound API:
public static byte[] readAudioFileData(final String filePath) {
    byte[] data = null;
    try {
        final ByteArrayOutputStream baout = new ByteArrayOutputStream();
        final File file = new File(filePath);
        final AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(file);
        byte[] buffer = new byte[4096];
        int c;
        while ((c = audioInputStream.read(buffer, 0, buffer.length)) != -1) {
            baout.write(buffer, 0, c);
        }
        audioInputStream.close();
        baout.close();
        data = baout.toByteArray();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return data;
}
But when I execute it, I get an exception at fingerprint.getFingerprintsSimilarity():
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 15999
    at com.musicg.fingerprint.PairManager.getPairPositionList(PairManager.java:133)
    at com.musicg.fingerprint.PairManager.getPair_PositionList_Table(PairManager.java:80)
    at com.musicg.fingerprint.FingerprintSimilarityComputer.getFingerprintsSimilarity(FingerprintSimilarityComputer.java:71)
    at Main.main(Main.java:42)
How can I compare 2 mp3 files with fingerprint in Java?
I never did any audio stuff in Java before, but I looked into your code briefly. I think that musicg only works for WAV files, not for MP3. Thus, you need to convert the files first. A web search reveals that you can e.g. use JLayer for that purpose. The corresponding code looks like this:
package de.scrum_master.so;

import com.musicg.fingerprint.FingerprintManager;
import com.musicg.fingerprint.FingerprintSimilarity;
import com.musicg.fingerprint.FingerprintSimilarityComputer;
import com.musicg.wave.Wave;
import javazoom.jl.converter.Converter;
import javazoom.jl.decoder.JavaLayerException;

public class Application {
    public static void main(String[] args) throws JavaLayerException {
        // MP3 to WAV
        new Converter().convert("White Wedding.mp3", "White Wedding.wav");
        new Converter().convert("Poison.mp3", "Poison.wav");
        // Fingerprint from WAV
        byte[] firstFingerPrint = new FingerprintManager().extractFingerprint(new Wave("White Wedding.wav"));
        byte[] secondFingerPrint = new FingerprintManager().extractFingerprint(new Wave("Poison.wav"));
        // Compare fingerprints
        FingerprintSimilarity fingerprintSimilarity = new FingerprintSimilarityComputer(firstFingerPrint, secondFingerPrint).getFingerprintsSimilarity();
        System.out.println("Similarity score = " + fingerprintSimilarity.getScore());
    }
}
Of course you should make sure that you do not convert each file again whenever the program starts, i.e. you should check if the WAV files already exist. I skipped this step and reduced the sample code to a minimal working version.
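The existence check could be as simple as this (a sketch):

// Only convert if the WAV is not there yet
File wav = new File("White Wedding.wav");
if (!wav.exists()) {
    new Converter().convert("White Wedding.mp3", wav.getPath());
}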
FingerprintSimilarityComputer(input1, input2) is supposed to take the fingerprints of the loaded audio data, not the loaded audio data itself.
In your case, it should be:
// Convert your audio to wav using FFMpeg
Wave w1 = new Wave("first.wav");
Wave w2 = new Wave("second.wav");
FingerprintSimilarityComputer fingerprint =
new FingerprintSimilarityComputer(w1.getFingerprint(), w2.getFingerprint());
// print fingerprint.getFingerprintsSimilarity()
Maybe I am missing a point, but if I understood you right, this should do:
byte[] firstAudio = readAudioFileData("first.mp3");
byte[] secondAudio = readAudioFileData("second.mp3");
byte[] smaller = firstAudio.length <= secondAudio.length ? firstAudio : secondAudio;
byte[] bigger = firstAudio.length > secondAudio.length ? firstAudio : secondAudio;
int ixS = 0;
int ixB = 0;
boolean contains = false;
for (; ixB < bigger.length; ixB++) {
    if (smaller[ixS] == bigger[ixB]) {
        ixS++;
        if (ixS == smaller.length) {
            contains = true;
            break;
        }
    } else {
        // Restart the search one byte after the previous starting position
        ixB -= ixS;
        ixS = 0;
    }
}
if (contains) {
    if (smaller.length == bigger.length) {
        System.out.println("Both tracks are equal");
    } else {
        System.out.println("The bigger track fully contains the smaller track starting at byte: " + (ixB - smaller.length + 1));
    }
} else {
    System.out.println("No track completely contains the other track");
}
Requirement: I want to reverse a video file and save it as a new video file in Android, i.e. the final output file should play the video in reverse.
What I tried: I've used the code below (which I got from AOSP https://android.googlesource.com/platform/cts/+/kitkat-release/tests/tests/media/src/android/media/cts/MediaMuxerTest.java) with a little modification.
File file = new File(srcMedia.getPath());
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(file.getPath());
int trackCount = extractor.getTrackCount();
// Set up MediaMuxer for the destination.
MediaMuxer muxer;
muxer = new MediaMuxer(dstMediaPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
// Set up the tracks.
HashMap<Integer, Integer> indexMap = new HashMap<Integer, Integer>(trackCount);
for (int i = 0; i < trackCount; i++) {
    extractor.selectTrack(i);
    MediaFormat format = extractor.getTrackFormat(i);
    int dstIndex = muxer.addTrack(format);
    indexMap.put(i, dstIndex);
}
// Copy the samples from MediaExtractor to MediaMuxer.
boolean sawEOS = false;
int bufferSize = MAX_SAMPLE_SIZE;
int frameCount = 0;
int offset = 100;
long totalTime = mTotalVideoDurationInMicroSeconds;
ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
if (degrees >= 0) {
    muxer.setOrientationHint(degrees);
}
muxer.start();
while (!sawEOS) {
    bufferInfo.offset = offset;
    bufferInfo.size = extractor.readSampleData(dstBuf, offset);
    if (bufferInfo.size < 0) {
        if (VERBOSE) {
            Log.d(TAG, "saw input EOS.");
        }
        sawEOS = true;
        bufferInfo.size = 0;
    } else {
        bufferInfo.presentationTimeUs = totalTime - extractor.getSampleTime();
        //noinspection WrongConstant
        bufferInfo.flags = extractor.getSampleFlags();
        int trackIndex = extractor.getSampleTrackIndex();
        muxer.writeSampleData(indexMap.get(trackIndex), dstBuf,
                bufferInfo);
        extractor.advance();
        frameCount++;
        if (VERBOSE) {
            Log.d(TAG, "Frame (" + frameCount + ") " +
                    "PresentationTimeUs:" + bufferInfo.presentationTimeUs +
                    " Flags:" + bufferInfo.flags +
                    " TrackIndex:" + trackIndex +
                    " Size(KB) " + bufferInfo.size / 1024);
        }
    }
}
muxer.stop();
muxer.release();
The main change I made is in this line:
bufferInfo.presentationTimeUs = totalTime - extractor.getSampleTime();
This was done in the expectation that the video frames would be written to the output file in reverse order. But the result was the same as the original video (not reversed).
I feel what I tried here is not making any sense. Basically I don't have much understanding of video formats, codecs, byte buffers etc.
I've also tried using JavaCV, which is a good Java wrapper over OpenCV, FFmpeg etc., and I got it working with that library. But the encoding process takes a long time, and the APK size becomes large because of the library.
With Android's built-in MediaCodec APIs I expect things to be faster and more lightweight. But I can accept other solutions as well if they offer the same.
It would be greatly appreciated if someone could offer any help on how this can be done in Android. Also, if you have good articles that can help me learn the specifics/basics about video, codecs, video processing etc., that would also help.