Reverse a video in android using MediaCodec, MediaExtractor, MediaMuxer etc. - java

Requirement: I want to reverse a video file and save it as a new video file in Android, i.e. the final output file should play the video in reverse.
What I tried: I used the code below (which I got from the AOSP CTS test https://android.googlesource.com/platform/cts/+/kitkat-release/tests/tests/media/src/android/media/cts/MediaMuxerTest.java) with a small modification.
File file = new File(srcMedia.getPath());
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(file.getPath());
int trackCount = extractor.getTrackCount();

// Set up MediaMuxer for the destination.
MediaMuxer muxer;
muxer = new MediaMuxer(dstMediaPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

// Set up the tracks.
HashMap<Integer, Integer> indexMap = new HashMap<Integer, Integer>(trackCount);
for (int i = 0; i < trackCount; i++) {
    extractor.selectTrack(i);
    MediaFormat format = extractor.getTrackFormat(i);
    int dstIndex = muxer.addTrack(format);
    indexMap.put(i, dstIndex);
}

// Copy the samples from MediaExtractor to MediaMuxer.
boolean sawEOS = false;
int bufferSize = MAX_SAMPLE_SIZE;
int frameCount = 0;
int offset = 100;
long totalTime = mTotalVideoDurationInMicroSeconds;
ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
if (degrees >= 0) {
    muxer.setOrientationHint(degrees);
}
muxer.start();
while (!sawEOS) {
    bufferInfo.offset = offset;
    bufferInfo.size = extractor.readSampleData(dstBuf, offset);
    if (bufferInfo.size < 0) {
        if (VERBOSE) {
            Log.d(TAG, "saw input EOS.");
        }
        sawEOS = true;
        bufferInfo.size = 0;
    } else {
        bufferInfo.presentationTimeUs = totalTime - extractor.getSampleTime();
        //noinspection WrongConstant
        bufferInfo.flags = extractor.getSampleFlags();
        int trackIndex = extractor.getSampleTrackIndex();
        muxer.writeSampleData(indexMap.get(trackIndex), dstBuf, bufferInfo);
        extractor.advance();
        frameCount++;
        if (VERBOSE) {
            Log.d(TAG, "Frame (" + frameCount + ") " +
                    "PresentationTimeUs:" + bufferInfo.presentationTimeUs +
                    " Flags:" + bufferInfo.flags +
                    " TrackIndex:" + trackIndex +
                    " Size(KB) " + bufferInfo.size / 1024);
        }
    }
}
muxer.stop();
muxer.release();
The main change I made is on this line:
bufferInfo.presentationTimeUs = totalTime - extractor.getSampleTime();
I did this expecting the video frames to be written to the output file in reverse order, but the result was the same as the original video (not reversed).
I suspect that what I tried doesn't really make sense. Basically, I don't have much understanding of video formats, codecs, byte buffers, etc.
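From what I've read since, remuxing alone can't reverse playback: compressed frames (P- and B-frames) are predicted from the frames before them, so samples can't simply be relabeled with descending timestamps; the video has to be decoded and re-encoded. My rough, untested understanding of the seeking pattern is sketched below (the MediaCodec decode/re-encode part is omitted, and syncTimesUs is just an illustrative name):
// Collect the timestamps of all sync (key) frames first.
List<Long> syncTimesUs = new ArrayList<>();
extractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
long t;
while ((t = extractor.getSampleTime()) >= 0) {
    if ((extractor.getSampleFlags() & MediaExtractor.SAMPLE_FLAG_SYNC) != 0) {
        syncTimesUs.add(t);
    }
    extractor.advance();
}
// Walk the GOPs from last to first; each GOP would be decoded,
// buffered, and re-encoded in reverse with ascending timestamps.
for (int i = syncTimesUs.size() - 1; i >= 0; i--) {
    extractor.seekTo(syncTimesUs.get(i), MediaExtractor.SEEK_TO_CLOSEST_SYNC);
    // ... decode up to the next sync time, emit the frames reversed ...
}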
I've also tried JavaCV, which is a good Java wrapper over OpenCV, FFmpeg, etc., and I got it working with that library. But the encoding process takes a long time, and the APK size became large because of the library.
With Android's built-in MediaCodec APIs I expect things to be faster and more lightweight, but I can accept other solutions too if they offer the same.
Any help on how this can be done in Android is greatly appreciated. Pointers to good articles for learning the basics of video, codecs, and video processing would also help.

Related

Create a video file on Android

In my Android app, I'm trying to create a video file, adding an audio track at a given time position on the video.
I used a MediaMuxer and changed the value of presentationTimeUs to shift the audio.
But apparently this is not the way to go, because the starting time of the video is also shifted.
Another problem is that mp3 audio does not work.
Here is my attempt so far:
final long audioPositionUs = 10000000;

File fileOut = new File (Environment.getExternalStoragePublicDirectory (
        Environment.DIRECTORY_MOVIES) + "/output.mp4");
fileOut.createNewFile ();

MediaExtractor videoExtractor = new MediaExtractor ();
MediaExtractor audioExtractor = new MediaExtractor ();
AssetFileDescriptor videoDescriptor = getAssets ().openFd ("video.mp4");
// AssetFileDescriptor audioDescriptor = getAssets ().openFd ("audio.mp3"); // ?!
AssetFileDescriptor audioDescriptor = getAssets ().openFd ("audio.aac");
videoExtractor.setDataSource (videoDescriptor.getFileDescriptor (),
        videoDescriptor.getStartOffset (), videoDescriptor.getLength ());
audioExtractor.setDataSource (audioDescriptor.getFileDescriptor (),
        audioDescriptor.getStartOffset (), audioDescriptor.getLength ());

MediaFormat videoFormat = null;
for (int i = 0; i < videoExtractor.getTrackCount (); i++) {
    if (videoExtractor.getTrackFormat (i).getString (
            MediaFormat.KEY_MIME).startsWith ("video/")) {
        videoExtractor.selectTrack (i);
        videoFormat = videoExtractor.getTrackFormat (i);
        break;
    }
}
audioExtractor.selectTrack (0);
MediaFormat audioFormat = audioExtractor.getTrackFormat (0);

MediaMuxer muxer = new MediaMuxer (fileOut.getAbsolutePath (),
        MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
int videoTrack = muxer.addTrack (videoFormat);
int audioTrack = muxer.addTrack (audioFormat);

boolean end = false;
int sampleSize = 256 * 1024;
ByteBuffer videoBuffer = ByteBuffer.allocate (sampleSize);
ByteBuffer audioBuffer = ByteBuffer.allocate (sampleSize);
MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo ();
MediaCodec.BufferInfo audioBufferInfo = new MediaCodec.BufferInfo ();
videoExtractor.seekTo (0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
audioExtractor.seekTo (0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
muxer.start ();

while (!end) {
    videoBufferInfo.size = videoExtractor.readSampleData (videoBuffer, 0);
    if (videoBufferInfo.size < 0) {
        end = true;
        videoBufferInfo.size = 0;
    } else {
        videoBufferInfo.presentationTimeUs = videoExtractor.getSampleTime ();
        videoBufferInfo.flags = videoExtractor.getSampleFlags ();
        muxer.writeSampleData (videoTrack, videoBuffer, videoBufferInfo);
        videoExtractor.advance ();
    }
}

end = false;
while (!end) {
    audioBufferInfo.size = audioExtractor.readSampleData (audioBuffer, 0);
    if (audioBufferInfo.size < 0) {
        end = true;
        audioBufferInfo.size = 0;
    } else {
        audioBufferInfo.presentationTimeUs = audioExtractor.getSampleTime () +
                audioPositionUs;
        audioBufferInfo.flags = audioExtractor.getSampleFlags ();
        muxer.writeSampleData (audioTrack, audioBuffer, audioBufferInfo);
        audioExtractor.advance ();
    }
}

muxer.stop ();
muxer.release ();
Can you please give details (and code if possible) to help me solve this?
Send AudioRecord's samples to a MediaCodec + MediaMuxer wrapper. Using the system time at audioRecord.read(...) works sufficiently well as an audio timestamp, provided you poll often enough to avoid filling up AudioRecord's internal buffer (to avoid drift between the time you call read and the time AudioRecord recorded the samples). Too bad AudioRecord doesn't directly communicate timestamps...
// Setup AudioRecord
while (isRecording) {
    audioPresentationTimeNs = System.nanoTime();
    audioRecord.read(dataBuffer, 0, samplesPerFrame);
    hwEncoder.offerAudioEncoder(dataBuffer.clone(), audioPresentationTimeNs);
}
Note that AudioRecord only guarantees support for 16 bit PCM samples, though MediaCodec.queueInputBuffer takes input as byte[]. Passing a byte[] to audioRecord.read(dataBuffer,...) will split the 16 bit samples into 8 bit halves for you.
I found that polling in this way still occasionally generated a timestampUs XXX < lastTimestampUs XXX for Audio track error, so I included some logic to keep track of the bufferInfo.presentationTimeUs reported by mediaCodec.dequeueOutputBuffer(bufferInfo, timeoutMs) and adjust if necessary before calling mediaMuxer.writeSampleData(trackIndex, encodedData, bufferInfo).
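A minimal sketch of that adjustment (lastAudioPtsUs here is an illustrative field you'd keep per track, not part of any API):
// MediaMuxer requires monotonically increasing timestamps per track,
// so nudge any late or equal timestamp forward before writing.
if (bufferInfo.presentationTimeUs <= lastAudioPtsUs) {
    bufferInfo.presentationTimeUs = lastAudioPtsUs + 1;
}
lastAudioPtsUs = bufferInfo.presentationTimeUs;
mediaMuxer.writeSampleData(trackIndex, encodedData, bufferInfo);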
As a workaround, you could create a temporary audio track by padding the head of your audio track with silence, and then use that with addTrack.
PS: I would have thought presentationTimeUs should work as well.
PS2: perhaps the set method of MediaCodec.BufferInfo may help.
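For what PS2 suggests, BufferInfo.set fills all four fields in one call; a small sketch using the names from the question:
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
// set(offset, size, presentationTimeUs, flags)
info.set(0, audioBufferInfo.size,
        audioExtractor.getSampleTime() + audioPositionUs,
        audioExtractor.getSampleFlags());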

SeekableByteChannel.read() always returns 0, InputStream is fine

We have a data file for which we need to generate a CRC. (As a placeholder, I'm using CRC32 while the others figure out what CRC polynomial they actually want.) This code seems like it ought to work:
broken:
Path in = ......;
try (SeekableByteChannel reading =
        Files.newByteChannel (in, StandardOpenOption.READ))
{
    System.err.println("byte channel is a " + reading.getClass().getName() +
            " from " + in + " of size " + reading.size() + " and isopen=" + reading.isOpen());
    java.util.zip.CRC32 placeholder = new java.util.zip.CRC32();
    ByteBuffer buffer = ByteBuffer.allocate (reasonable_buffer_size);
    int bytesread = 0;
    int loops = 0;
    while ((bytesread = reading.read(buffer)) > 0) {
        byte[] raw = buffer.array();
        System.err.println("Claims to have read " + bytesread + " bytes, have buffer of size " + raw.length + ", updating CRC");
        placeholder.update(raw);
        loops++;
        buffer.clear();
    }
    // do stuff with placeholder.getValue()
}
catch (all the things that go wrong with opening files) {
    and handle them;
}
The System.err and loops stuff is just for debugging; we don't actually care how many times it takes. The output is:
byte channel is a sun.nio.ch.FileChannelImpl from C:\working\tmp\ls2kst83543216xuxxy8136.tmp of size 7196 and isopen=true
finished after 0 time(s) through the loop
There's no way to run the real code inside a debugger to step through it, but from looking at the source of sun.nio.ch.FileChannelImpl.read() it looks like 0 is returned if the file magically becomes closed while internal data structures are prepared; the code below is copied from the Java 7 reference implementation, with comments added by me:
// sun.nio.ch.FileChannelImpl.java
public int read(ByteBuffer dst) throws IOException {
    ensureOpen();    // this throws if file is closed...
    if (!readable)
        throw new NonReadableChannelException();
    synchronized (positionLock) {
        int n = 0;
        int ti = -1;
        Object traceContext = IoTrace.fileReadBegin(path);
        try {
            begin();
            ti = threads.add();
            if (!isOpen())
                return 0;    // ...argh
            do {
                n = IOUtil.read(fd, dst, -1, nd);
            } while (......)
            .......
But the debugging code tests isOpen() and gets true. So I don't know what's going wrong.
As the current test data files are tiny, I dropped this in place just to have something working:
works for now:
try {
    byte[] scratch = Files.readAllBytes(in);
    java.util.zip.CRC32 placeholder = new java.util.zip.CRC32();
    placeholder.update(scratch);
    // do stuff with placeholder.getValue()
}
I don't want to slurp the entire file into memory for the Real Code, because some of those files can be large. I do note that readAllBytes uses an InputStream in its reference implementation, which has no trouble reading the same file that SeekableByteChannel failed to. So I'll probably rewrite the code to just use input streams instead of byte channels. I'd still like to figure out what's gone wrong in case a future scenario comes up where we need to use byte channels. What am I missing with SeekableByteChannel?
Check that 'reasonable_buffer_size' isn't zero. A channel read returns 0 (not -1) when the supplied buffer has no space remaining, which would end your while loop after zero iterations, exactly as the debug output shows.
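A sketch of the loop with a non-zero buffer; note it also feeds the CRC only the bytes actually read, since the original update(raw) call hashes the whole backing array, including stale bytes from earlier passes:
java.util.zip.CRC32 placeholder = new java.util.zip.CRC32();
ByteBuffer buffer = ByteBuffer.allocate(64 * 1024); // any size > 0
int bytesread;
while ((bytesread = reading.read(buffer)) > 0) {
    // hash only the bytes this pass actually produced
    placeholder.update(buffer.array(), 0, bytesread);
    buffer.clear();
}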

How can I write metadata to png Image

I have been trying to find a way to write metadata to a PNG, and I have tried quite a lot.
I can read the data with the pngj library:
PngReader pngr = new PngReader(file);
pngr.readSkippingAllRows(); // reads only metadata
for (PngChunk c : pngr.getChunksList().getChunks()) {
    if (!ChunkHelper.isText(c)) continue;
    PngChunkTextVar ct = (PngChunkTextVar) c;
    String key = ct.getKey();
    String val = ct.getVal();
    System.out.print(key + " " + val + "\n");
}
pngr.close();
And it works great. But I need to write to it.
I have tried:
public boolean writeCustomData(String key, String value) throws Exception {
    PngReader pngr = new PngReader(currentImage);
    PngWriter png = new PngWriter(new FileOutputStream(currentImage), pngr.imgInfo);
    png.getMetadata().setText(key, value);
    return true;
}
But this does nothing.
I have also tried the answer from "Writing image metadata in Java, preferably PNG".
That works (kinda), but my read function can't see the result.
If you want to add a chunk to the image, you must read and write the full image. Example
PngReader pngr = new PngReader(origFile);
PngWriter pngw = new PngWriter(destFile, pngr.imgInfo, true);
// instruct the writer to copy all ancillary chunks from source
pngw.copyChunksFrom(pngr.getChunksList(), ChunkCopyBehaviour.COPY_ALL);
// add a new textual chunk (can also be done after writing the rows)
pngw.getMetadata().setText("my key", "my val");
// copy all rows
for (int row = 0; row < pngr.imgInfo.rows; row++) {
    IImageLine l1 = pngr.readRow();
    pngw.writeRow(l1);
}
pngr.end();
pngw.end();
If you need more performance, you can read/write the chunks at a lower level; see this example.
Try this (C#, using WPF's InPlaceBitmapMetadataWriter):
Stream pngStream = new System.IO.FileStream("smiley.png", FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite);
PngBitmapDecoder pngDecoder = new PngBitmapDecoder(pngStream, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default);
BitmapFrame pngFrame = pngDecoder.Frames[0];

InPlaceBitmapMetadataWriter pngInplace = pngFrame.CreateInPlaceBitmapMetadataWriter();
// set the query first; TrySave then attempts to write it in place
pngInplace.SetQuery("/Text/Description", "Have a nice day.");
if (pngInplace.TrySave() == true)
{
    // metadata was written without rewriting the whole file
}
pngStream.Close();

Recommended Java library for creating a video programmatically [closed]

Can anyone recommend a Java library that would allow me to create a video programmatically? Specifically, it would do the following:
take a series of BufferedImages as the frames
allow a background WAV/MP3 to be added
allow 'incidental' WAV/MP3s to be added at arbitrary, programmatically specified points
output the video in a common format (MPEG etc)
Can anybody recommend anything? For the picture/sound mixing, I'd even live with something that took a series of frames, and for each frame I had to supply the raw bytes of uncompressed sound data associated with that frame.
P.S. It doesn't even have to be a "third party library" as such if the Java Media Framework has the calls to achieve the above, but from my sketchy memory I have a feeling it doesn't.
I've used the code mentioned below to successfully perform items 1, 2, and 4 on your requirements list in pure Java. It's worth a look and you could probably figure out how to include #3.
http://www.randelshofer.ch/blog/2010/10/writing-quicktime-movies-in-pure-java/
I found a tool called ffmpeg which can convert multimedia files from one format to another. ffmpeg has a filter library called libavfilter (the substitute for vhook) which allows the video/audio to be modified or examined between the decoder and the encoder. I think it should be possible to input raw frames and generate video.
I researched Java implementations of ffmpeg and found the page titled "Getting Started with FFMPEG-JAVA", a Java wrapper around FFMPEG using JNA.
You can try a pure Java codec library called JCodec.
It has a very basic H.264 (AVC) encoder and MP4 muxer. Here's a full code sample taken from their samples -- TranscodeMain.
private static void png2avc(String pattern, String out) throws IOException {
    FileChannel sink = null;
    try {
        sink = new FileOutputStream(new File(out)).getChannel();
        H264Encoder encoder = new H264Encoder();
        RgbToYuv420 transform = new RgbToYuv420(0, 0);
        int i;
        for (i = 0; i < 10000; i++) {
            File nextImg = new File(String.format(pattern, i));
            if (!nextImg.exists())
                continue;
            BufferedImage rgb = ImageIO.read(nextImg);
            Picture yuv = Picture.create(rgb.getWidth(), rgb.getHeight(), ColorSpace.YUV420);
            transform.transform(AWTUtil.fromBufferedImage(rgb), yuv);
            ByteBuffer buf = ByteBuffer.allocate(rgb.getWidth() * rgb.getHeight() * 3);
            ByteBuffer ff = encoder.encodeFrame(buf, yuv);
            sink.write(ff);
        }
        if (i == 1) {
            System.out.println("Image sequence not found");
            return;
        }
    } finally {
        if (sink != null)
            sink.close();
    }
}
This sample is more sophisticated and actually shows muxing of encoded frames into an MP4 file:
private static void prores2avc(String in, String out, ProresDecoder decoder, RateControl rc) throws IOException {
    SeekableByteChannel sink = null;
    SeekableByteChannel source = null;
    try {
        sink = writableFileChannel(out);
        source = readableFileChannel(in);
        MP4Demuxer demux = new MP4Demuxer(source);
        MP4Muxer muxer = new MP4Muxer(sink, Brand.MOV);
        Transform transform = new Yuv422pToYuv420p(0, 2);
        H264Encoder encoder = new H264Encoder(rc);
        MP4DemuxerTrack inTrack = demux.getVideoTrack();
        CompressedTrack outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, (int) inTrack.getTimescale());
        VideoSampleEntry ine = (VideoSampleEntry) inTrack.getSampleEntries()[0];
        Picture target1 = Picture.create(ine.getWidth(), ine.getHeight(), ColorSpace.YUV422_10);
        Picture target2 = null;
        ByteBuffer _out = ByteBuffer.allocate(ine.getWidth() * ine.getHeight() * 6);
        ArrayList<ByteBuffer> spsList = new ArrayList<ByteBuffer>();
        ArrayList<ByteBuffer> ppsList = new ArrayList<ByteBuffer>();
        Packet inFrame;
        int totalFrames = (int) inTrack.getFrameCount();
        long start = System.currentTimeMillis();
        for (int i = 0; (inFrame = inTrack.getFrames(1)) != null && i < 100; i++) {
            Picture dec = decoder.decodeFrame(inFrame.getData(), target1.getData());
            if (target2 == null) {
                target2 = Picture.create(dec.getWidth(), dec.getHeight(), ColorSpace.YUV420);
            }
            transform.transform(dec, target2);
            _out.clear();
            ByteBuffer result = encoder.encodeFrame(_out, target2);
            if (rc instanceof ConstantRateControl) {
                int mbWidth = (dec.getWidth() + 15) >> 4;
                int mbHeight = (dec.getHeight() + 15) >> 4;
                result.limit(((ConstantRateControl) rc).calcFrameSize(mbWidth * mbHeight));
            }
            spsList.clear();
            ppsList.clear();
            H264Utils.encodeMOVPacket(result, spsList, ppsList);
            outTrack.addFrame(new MP4Packet((MP4Packet) inFrame, result));
            if (i % 100 == 0) {
                long elapse = System.currentTimeMillis() - start;
                System.out.println((i * 100 / totalFrames) + "%, " + (i * 1000 / elapse) + "fps");
            }
        }
        outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));
        muxer.writeHeader();
    } finally {
        if (sink != null)
            sink.close();
        if (source != null)
            source.close();
    }
}
Try JavaFX.
JavaFX includes support for rendering of images in multiple formats and support for playback of audio and video on all platforms where JavaFX is supported.
Here is a tutorial on manipulating images
Here is a tutorial on creating slideshows, timelines and scenes.
Here is FAQ on adding sounds.
Most of these are on JavaFX 1.3. Now JavaFX 2.0 is out.
Why not use FFMPEG?
There seems to be a Java wrapper for it:
http://fmj-sf.net/ffmpeg-java/getting_started.php
Here is an example of how to compile various media sources into one video with FFMPEG:
http://howto-pages.org/ffmpeg/#multiple
And, finally, the docs:
http://ffmpeg.org/ffmpeg.html

How does everyone do mp3 streaming?

I am coding in Java for Android. The issue is how to get an mp3 file's size and its audio length beforehand, so that I can set up my progress bar / seek bar to respond to seeking events.
The answer could be a simple piece of info in a header, or an algorithm.
I am using Android's MediaPlayer to stream, and the only issue is seeking, which requires both of the things mentioned above.
Any help is appreciated.
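For a constant-bitrate file, a rough length estimate only needs the file size and the bitrate from the first frame header (a sketch; VBR files would need the Xing/VBRI header instead, and bitrateBps here is an illustrative placeholder):
// Rough CBR estimate: duration (seconds) = bytes * 8 / bitrate (bits per second).
long contentLength = ucon.getContentLength(); // total file size in bytes
int bitrateBps = 128000;                      // would be parsed from the first frame header
long durationSeconds = contentLength * 8 / bitrateBps;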
I also tried manually looking into the mp3 file and reading the header to decode the mp3 length, with this noob code:
total = ucon.getContentLength(); // ucon is an HttpURLConnection
is = ucon.getInputStream();
byte[] buffer = new byte[1024];
is.read(buffer, 0, 1024);
int offset = 1024;
int loc = 2000;
// FrameLengthInBytes = 144 * BitRate / SampleRate + Padding
while (loc == 2000 && offset < total) {
    for (int i = 0; i < 1024; i++) {
        if ((int) buffer[i] == 255) {
            if ((int) buffer[i + 1] >= 224) {
                loc = i + 1;
                break;
            }
        }
    }
    is.read(buffer, 0, 1024);
    offset = 1024 + offset;
}
Couldn't find the pattern which marks an mp3 frame header (11111111 111xxxxx). Tried different files. Can't find anything else to do.
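One likely reason the scan never matches: Java's byte type is signed, so (int) buffer[i] ranges from -128 to 127 and can never equal 255. Masking to an unsigned value makes the comparison work (a sketch keeping the original variable names):
if ((buffer[i] & 0xFF) == 0xFF) {           // first 8 sync bits set
    if ((buffer[i + 1] & 0xFF) >= 0xE0) {   // next 3 sync bits set (111xxxxx)
        loc = i + 1;
        break;
    }
}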
Update 2:
Now I know I don't have to search for mp3 frame headers but for the ID3v2 header. But it still has to be done while streaming the file, and on Android. I really hope someone helps; there are a lot of programs doing these things, and I wonder why I would have to do it the hard way.
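An ID3v2 tag starts with a fixed 10-byte header whose size field is a sync-safe integer (7 useful bits per byte), so skipping it takes only a few lines. A sketch over the first bytes of the stream (buffer is the same read buffer as above):
// "ID3" marker, 2 version bytes, 1 flags byte, then 4 sync-safe size bytes.
if (buffer[0] == 'I' && buffer[1] == 'D' && buffer[2] == '3') {
    int tagSize = ((buffer[6] & 0x7F) << 21) | ((buffer[7] & 0x7F) << 14)
                | ((buffer[8] & 0x7F) << 7)  |  (buffer[9] & 0x7F);
    int audioStart = 10 + tagSize; // first mp3 frame begins here
}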
Well, I never knew about the ID3 tag system.
