Can anyone recommend a Java library that would allow me to create a video programmatically? Specifically, it would do the following:
take a series of BufferedImages as the frames
allow a background WAV/MP3 to be added
allow 'incidental' WAV/MP3s to be added at arbitrary, programmatically specified points
output the video in a common format (MPEG, etc.)
Can anybody recommend anything? For the picture/sound mixing, I'd even live with something that took a series of frames, and for each frame I had to supply the raw bytes of uncompressed sound data associated with that frame.
P.S. It doesn't even have to be a "third party library" as such if the Java Media Framework has the calls to achieve the above, but from my sketchy memory I have a feeling it doesn't.
I've used the code mentioned below to successfully perform items 1, 2, and 4 on your requirements list in pure Java. It's worth a look and you could probably figure out how to include #3.
http://www.randelshofer.ch/blog/2010/10/writing-quicktime-movies-in-pure-java/
I found a tool called ffmpeg which can convert multimedia files from one format to another. It includes a filter library, libavfilter (the successor to vhook), which allows the video/audio to be modified or examined between the decoder and the encoder. I think it should be possible to feed it raw frames and generate a video.
I researched Java implementations of ffmpeg and found a page titled "Getting Started with FFMPEG-JAVA", a Java wrapper around FFMPEG that uses JNA.
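Alternatively, if bundling a native ffmpeg binary is acceptable, you can write the frames to disk and shell out to it from Java. A minimal sketch (the file names, frame rate, and codec below are placeholder choices, not something prescribed by FFMPEG-JAVA):
import java.io.IOException;

public class FfmpegMux {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Mux an image sequence with a background audio track;
        // all paths and options here are examples
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg", "-y",
                "-framerate", "25", "-i", "frame%04d.png",
                "-i", "background.wav",
                "-c:v", "mpeg4", "-shortest", "out.mp4");
        pb.inheritIO(); // show ffmpeg's console output
        int exit = pb.start().waitFor();
        System.out.println("ffmpeg exited with code " + exit);
    }
}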
You can try a pure Java codec library called JCodec.
It has a very basic H.264 (AVC) encoder and an MP4 muxer. Here's a full code sample taken from their samples -- TranscodeMain.
private static void png2avc(String pattern, String out) throws IOException {
    FileChannel sink = null;
    try {
        sink = new FileOutputStream(new File(out)).getChannel();
        H264Encoder encoder = new H264Encoder();
        // Converts the RGB input to the YUV420 color space the encoder expects
        RgbToYuv420 transform = new RgbToYuv420(0, 0);
        int framesEncoded = 0;
        for (int i = 0; i < 10000; i++) {
            File nextImg = new File(String.format(pattern, i));
            if (!nextImg.exists())
                continue;
            BufferedImage rgb = ImageIO.read(nextImg);
            Picture yuv = Picture.create(rgb.getWidth(), rgb.getHeight(), ColorSpace.YUV420);
            transform.transform(AWTUtil.fromBufferedImage(rgb), yuv);
            ByteBuffer buf = ByteBuffer.allocate(rgb.getWidth() * rgb.getHeight() * 3);
            ByteBuffer ff = encoder.encodeFrame(buf, yuv);
            sink.write(ff);
            framesEncoded++;
        }
        // The original sample checked 'i == 1' here, which can never be true
        // after the loop above; counting encoded frames is what was intended.
        if (framesEncoded == 0) {
            System.out.println("Image sequence not found");
        }
    } finally {
        if (sink != null)
            sink.close();
    }
}
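Calling it is then a one-liner; the file name pattern and output path below are just examples (note the output is a raw H.264 elementary stream, not an MP4):
png2avc("frame%04d.png", "out.264");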
This sample is more sophisticated and actually shows muxing of encoded frames into an MP4 file:
private static void prores2avc(String in, String out, ProresDecoder decoder, RateControl rc) throws IOException {
    SeekableByteChannel sink = null;
    SeekableByteChannel source = null;
    try {
        sink = writableFileChannel(out);
        source = readableFileChannel(in);
        MP4Demuxer demux = new MP4Demuxer(source);
        MP4Muxer muxer = new MP4Muxer(sink, Brand.MOV);
        Transform transform = new Yuv422pToYuv420p(0, 2);
        H264Encoder encoder = new H264Encoder(rc);
        MP4DemuxerTrack inTrack = demux.getVideoTrack();
        CompressedTrack outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, (int) inTrack.getTimescale());
        VideoSampleEntry ine = (VideoSampleEntry) inTrack.getSampleEntries()[0];
        Picture target1 = Picture.create(ine.getWidth(), ine.getHeight(), ColorSpace.YUV422_10);
        Picture target2 = null;
        ByteBuffer _out = ByteBuffer.allocate(ine.getWidth() * ine.getHeight() * 6);
        ArrayList<ByteBuffer> spsList = new ArrayList<ByteBuffer>();
        ArrayList<ByteBuffer> ppsList = new ArrayList<ByteBuffer>();
        Packet inFrame;
        int totalFrames = (int) inTrack.getFrameCount();
        long start = System.currentTimeMillis();
        for (int i = 0; (inFrame = inTrack.getFrames(1)) != null && i < 100; i++) {
            Picture dec = decoder.decodeFrame(inFrame.getData(), target1.getData());
            if (target2 == null) {
                target2 = Picture.create(dec.getWidth(), dec.getHeight(), ColorSpace.YUV420);
            }
            transform.transform(dec, target2);
            _out.clear();
            ByteBuffer result = encoder.encodeFrame(_out, target2);
            if (rc instanceof ConstantRateControl) {
                int mbWidth = (dec.getWidth() + 15) >> 4;
                int mbHeight = (dec.getHeight() + 15) >> 4;
                result.limit(((ConstantRateControl) rc).calcFrameSize(mbWidth * mbHeight));
            }
            spsList.clear();
            ppsList.clear();
            H264Utils.encodeMOVPacket(result, spsList, ppsList);
            outTrack.addFrame(new MP4Packet((MP4Packet) inFrame, result));
            if (i % 100 == 0 && i > 0) { // guard against division by zero on the first frame
                long elapse = System.currentTimeMillis() - start;
                System.out.println((i * 100 / totalFrames) + "%, " + (i * 1000 / elapse) + "fps");
            }
        }
        outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));
        muxer.writeHeader();
    } finally {
        if (sink != null)
            sink.close();
        if (source != null)
            source.close();
    }
}
Try JavaFX.
JavaFX includes support for rendering of images in multiple formats and support for playback of audio and video on all platforms where JavaFX is supported.
Here is a tutorial on manipulating images
Here is a tutorial on creating slideshows, timelines and scenes.
Here is an FAQ on adding sounds.
Most of these are on JavaFX 1.3. Now JavaFX 2.0 is out.
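For example, playing a background sound with the JavaFX media API takes only a few lines (a sketch; the file path is a placeholder):
import javafx.scene.media.Media;
import javafx.scene.media.MediaPlayer;

// Sketch: play an MP3 via JavaFX (the URI is a placeholder)
Media media = new Media("file:///path/to/background.mp3");
MediaPlayer player = new MediaPlayer(media);
player.play();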
Why not use FFMPEG?
There seems to be a Java wrapper for it:
http://fmj-sf.net/ffmpeg-java/getting_started.php
Here is an example of how to compile various media sources into one video with FFMPEG:
http://howto-pages.org/ffmpeg/#multiple
And, finally, the docs:
http://ffmpeg.org/ffmpeg.html
I'm parsing large PCAP files in Java using Kaitai Struct. Whenever the file size exceeds Integer.MAX_VALUE bytes, I face an IllegalArgumentException caused by the size limit of the underlying ByteBuffer.
I haven't found references to this issue elsewhere, which leads me to believe that this is not a library limitation but a mistake in the way I'm using it.
Since the problem is caused by trying to map the whole file into the ByteBuffer, I'd think that the solution would be mapping only the first region of the file, and as the data is consumed, mapping again, skipping the data already parsed.
As this is done within the Kaitai Struct runtime library, it would mean writing my own class extending from KaitaiStream and overriding the auto-generated fromFile(...) method, and this doesn't really seem like the right approach.
The auto-generated method to parse from a file for the Pcap class is:
public static Pcap fromFile(String fileName) throws IOException {
    return new Pcap(new ByteBufferKaitaiStream(fileName));
}
And the ByteBufferKaitaiStream provided by the Kaitai Struct Runtime library is backed by a ByteBuffer.
private final FileChannel fc;
private final ByteBuffer bb;

public ByteBufferKaitaiStream(String fileName) throws IOException {
    fc = FileChannel.open(Paths.get(fileName), StandardOpenOption.READ);
    bb = fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());
}
Which in turn is limited by the ByteBuffer maximum size.
Am I missing some obvious workaround? Is it really a limitation of the Java implementation of Kaitai Struct?
There are two separate issues here:
Running Pcap.fromFile() for large files is generally not a very efficient method, as you'll eventually get the whole file parsed into memory at once. An example of how to avoid that is given in kaitai_struct/issues/255. The basic idea is that you want control over how you read every packet, and then dispose of each packet after you've parsed / accounted for it somehow.
The 2GB limit on Java's mmapped files. To mitigate that, you can use the alternative RandomAccessFile-based KaitaiStream implementation: RandomAccessFileKaitaiStream — it might be slower, but it should avoid that 2GB problem; see the sketch below.
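A minimal sketch of the second option, reusing the generated Pcap class from the question (RandomAccessFileKaitaiStream ships with the Kaitai Struct Java runtime):
import io.kaitai.struct.RandomAccessFileKaitaiStream;

public static Pcap fromLargeFile(String fileName) throws IOException {
    // Reads through a RandomAccessFile instead of mmapping the whole file,
    // so files larger than Integer.MAX_VALUE bytes work
    return new Pcap(new RandomAccessFileKaitaiStream(fileName));
}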
This library provides a ByteBuffer implementation which uses long offsets. I haven't tried this approach, but it looks promising. See the section Mapping Files Bigger than 2 GB:
http://www.kdgregory.com/index.php?page=java.byteBuffer
public int getInt(long index)
{
    return buffer(index).getInt();
}

private ByteBuffer buffer(long index)
{
    ByteBuffer buf = _buffers[(int)(index / _segmentSize)];
    buf.position((int)(index % _segmentSize));
    return buf;
}

public MappedFileBuffer(File file, int segmentSize, boolean readWrite)
throws IOException
{
    if (segmentSize > MAX_SEGMENT_SIZE)
        throw new IllegalArgumentException(
                "segment size too large (max " + MAX_SEGMENT_SIZE + "): " + segmentSize);

    _segmentSize = segmentSize;
    _fileSize = file.length();

    RandomAccessFile mappedFile = null;
    try
    {
        String mode = readWrite ? "rw" : "r";
        MapMode mapMode = readWrite ? MapMode.READ_WRITE : MapMode.READ_ONLY;

        mappedFile = new RandomAccessFile(file, mode);
        FileChannel channel = mappedFile.getChannel();

        _buffers = new MappedByteBuffer[(int)(_fileSize / segmentSize) + 1];
        int bufIdx = 0;
        for (long offset = 0 ; offset < _fileSize ; offset += segmentSize)
        {
            long remainingFileSize = _fileSize - offset;
            // map up to 2x segmentSize so reads near a segment boundary don't run off the end
            long thisSegmentSize = Math.min(2L * segmentSize, remainingFileSize);
            _buffers[bufIdx++] = channel.map(mapMode, offset, thisSegmentSize);
        }
    }
    finally
    {
        // close quietly
        if (mappedFile != null)
        {
            try
            {
                mappedFile.close();
            }
            catch (IOException ignored) { /* */ }
        }
    }
}
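Usage of the class above would then look something like this (the file name and the 1 GB segment size are placeholder choices):
MappedFileBuffer buf = new MappedFileBuffer(new File("huge.pcap"), 1 << 30, false);
int magic = buf.getInt(0L); // offsets are 64-bit, so positions past 2 GB are fine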
The situation
I need to show a 200-350 frame animation in my application. The images have a 500x300-ish resolution. If the user wants to share the animation, I have to convert it to video. For the conversion I am using this ffmpeg command:
ffmpeg -y -r 1 -i /sdcard/videokit/pic00%d.jpg -i /sdcard/videokit/in.mp3 -strict experimental -ar 44100 -ac 2 -ab 256k -b 2097152 -ar 22050 -vcodec mpeg4 -b 2097152 -s 320x240 /sdcard/videokit/out.mp4
To convert images to video, ffmpeg wants actual files, not Bitmaps or byte[].
Problem
Compressing bitmaps to image files takes too much time. Converting 210 images takes about 1 minute on an average device (HTC One M7). Converting the image files to MP4 takes about 15 seconds on the same device. Altogether the user has to wait about 1.5 minutes.
What I have tried
I changed the compression format from PNG to JPEG (the 1.5 minute result is achieved with JPEG compression (quality=80); with PNG it takes about 2-2.5 minutes) - success.
I tried to find out how to pass a byte[] or Bitmap to ffmpeg - no success.
QUESTION
Is there any way (a library, even a native one) to make the saving process faster?
Is there any way to pass byte[] or Bitmap objects (I mean a PNG file decompressed to an Android Bitmap class object) to the ffmpeg library's video creation method?
Is there any other working library which will create an MP4 (or any format supported by the main social networks) from byte[] or Bitmap objects in about 30 seconds (for 200 frames)?
You can convert a Bitmap (or byte[]) to YUV format quickly using RenderScript (see https://stackoverflow.com/a/39877029/192373). You can pass these YUV frames to the ffmpeg library (as halfelf suggests), or use the built-in native MediaCodec, which uses dedicated hardware on most devices (but compression options are less flexible than all-software ffmpeg); see the sketch below.
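For the MediaCodec route, setting up the hardware encoder looks roughly like this (a sketch; the resolution, bitrate, and frame rate are placeholder values, and feeding the YUV input buffers / draining encoded output into a MediaMuxer is omitted):
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

// Sketch: configure a hardware H.264 encoder; all values are placeholders
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 512, 304);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 25);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 10);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();
// ... queue the RenderScript-converted YUV frames as input buffers,
// dequeue the encoded H.264 output, and hand it to a MediaMuxer ...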
There are two steps that slow us down: compressing the images to PNG/JPG and writing them to disk. Both can be skipped if we code directly against the ffmpeg libs instead of calling the ffmpeg command. (There are other improvements too, such as GPU encoding and multithreading, but they are much more complicated.)
Some approaches to coding:
Only use the C/C++ NDK for Android programming. FFmpeg will happily work. But I guess it's not an option here.
Build it from scratch with Java JNI. Not much experience here; I only know it can link Java to C/C++ libs.
Use a Java wrapper. Luckily I found javacpp-presets. (There are others too, but this one is good enough and up to date.)
This library includes a good example ported from dranger's famous ffmpeg tutorial, though it is a demuxing one.
We can try to write a muxing one, following ffmpeg's muxing.c example.
import java.io.*;

import org.bytedeco.javacpp.*;

import static org.bytedeco.javacpp.avcodec.*;
import static org.bytedeco.javacpp.avformat.*;
import static org.bytedeco.javacpp.avutil.*;
import static org.bytedeco.javacpp.swscale.*;

public class Muxer {

    public class OutputStream {
        public AVStream Stream;
        public AVCodecContext Ctx;
        public AVFrame Frame;
        public SwsContext SwsCtx;

        public void setStream(AVStream s) {
            this.Stream = s;
        }

        public AVStream getStream() {
            return this.Stream;
        }

        public void setCodecCtx(AVCodecContext c) {
            this.Ctx = c;
        }

        public AVCodecContext getCodecCtx() {
            return this.Ctx;
        }

        public void setFrame(AVFrame f) {
            this.Frame = f;
        }

        public AVFrame getFrame() {
            return this.Frame;
        }

        public OutputStream() {
            Stream = null;
            Ctx = null;
            Frame = null;
            SwsCtx = null;
        }
    }

    public static void main(String[] args) throws IOException {
        Muxer t = new Muxer();
        OutputStream VideoSt = t.new OutputStream();

        AVOutputFormat Fmt = null;
        AVFormatContext FmtCtx = new AVFormatContext(null);
        AVCodec VideoCodec = null;
        AVDictionary Opt = null;
        SwsContext SwsCtx = null;
        AVPacket Pkt = new AVPacket();
        int GotOutput;
        int InLineSize[] = new int[1];

        String FilePath = "/path/xxx.mp4";

        avformat_alloc_output_context2(FmtCtx, null, null, FilePath);
        Fmt = FmtCtx.oformat();

        AVCodec codec = avcodec_find_encoder_by_name("libx264");
        av_format_set_video_codec(FmtCtx, codec);
        VideoCodec = avcodec_find_encoder(Fmt.video_codec());

        VideoSt.setStream(avformat_new_stream(FmtCtx, null));
        AVStream stream = VideoSt.getStream();
        VideoSt.getStream().id(FmtCtx.nb_streams() - 1);
        VideoSt.setCodecCtx(avcodec_alloc_context3(VideoCodec));

        VideoSt.getCodecCtx().codec_id(Fmt.video_codec());
        VideoSt.getCodecCtx().bit_rate(5120000);
        VideoSt.getCodecCtx().width(1920);
        VideoSt.getCodecCtx().height(1080);
        AVRational fps = new AVRational();
        fps.den(25);
        fps.num(1);
        VideoSt.getStream().time_base(fps);
        VideoSt.getCodecCtx().time_base(fps);
        VideoSt.getCodecCtx().gop_size(10);
        // The original post called the getter max_b_frames() and discarded the
        // result; setting a value (1, as in ffmpeg's encoding example) is
        // presumably what was intended.
        VideoSt.getCodecCtx().max_b_frames(1);
        VideoSt.getCodecCtx().pix_fmt(AV_PIX_FMT_YUV420P);

        if ((FmtCtx.oformat().flags() & AVFMT_GLOBALHEADER) != 0)
            VideoSt.getCodecCtx().flags(VideoSt.getCodecCtx().flags() | AV_CODEC_FLAG_GLOBAL_HEADER);

        avcodec_open2(VideoSt.getCodecCtx(), VideoCodec, Opt);

        VideoSt.setFrame(av_frame_alloc());
        VideoSt.getFrame().format(VideoSt.getCodecCtx().pix_fmt());
        VideoSt.getFrame().width(1920);
        VideoSt.getFrame().height(1080);
        av_frame_get_buffer(VideoSt.getFrame(), 32);

        // pts is an unsigned long in C, so use at least a Java long
        // (or even BigInteger for very long recordings)
        long nextpts = 0;

        av_dump_format(FmtCtx, 0, FilePath, 1);
        avio_open(FmtCtx.pb(), FilePath, AVIO_FLAG_WRITE);
        avformat_write_header(FmtCtx, Opt);

        int[] got_output = { 0 };

        while (still_has_input) { // pseudo-condition: loop while you have more input frames
            // convert or directly copy your byte[] into VideoSt.Frame here.
            // The AVFrame structure has two important data fields:
            // AVFrame.data (uint8_t*[]) and AVFrame.linesize (int[]).
            // data holds the pixel values in some format and linesize is the size of
            // each picture line. For example, if the format is RGB, linesize should
            // have 3 valid values, each equal to `image_width * 3`, and data will
            // point to three arrays containing RGB values.
            // But I guess we'll need swscale() to convert the pixel format here,
            // from RGB to yuv420p (or another yuv family format).
            Pkt = new AVPacket();
            av_init_packet(Pkt);

            VideoSt.getFrame().pts(nextpts++);
            avcodec_encode_video2(VideoSt.getCodecCtx(), Pkt, VideoSt.getFrame(), got_output);

            av_packet_rescale_ts(Pkt, VideoSt.getCodecCtx().time_base(), VideoSt.getStream().time_base());
            Pkt.stream_index(VideoSt.getStream().index());
            av_interleaved_write_frame(FmtCtx, Pkt);
            av_packet_unref(Pkt);
        }

        // get delayed frames
        for (got_output[0] = 1; got_output[0] != 0;) {
            Pkt = new AVPacket();
            av_init_packet(Pkt);

            avcodec_encode_video2(VideoSt.getCodecCtx(), Pkt, null, got_output);
            if (got_output[0] > 0) {
                av_packet_rescale_ts(Pkt, VideoSt.getCodecCtx().time_base(), VideoSt.getStream().time_base());
                Pkt.stream_index(VideoSt.getStream().index());
                av_interleaved_write_frame(FmtCtx, Pkt);
            }
            av_packet_unref(Pkt);
        }

        // free C structs
        avcodec_free_context(VideoSt.getCodecCtx());
        av_frame_free(VideoSt.getFrame());
        avio_closep(FmtCtx.pb());
        avformat_free_context(FmtCtx);
    }
}
For porting C code, several changes normally need to be made:
Most of the work is replacing every C struct member access (. and ->) with a Java getter/setter.
There are many C address-of operators &; just delete them.
Change the C NULL macro and C++ nullptr pointer to the Java null object.
C code often checks the boolean result of an int type in if, for, while; in Java you have to compare it with 0.
There may be other API changes; as long as you refer to the javacpp-presets docs, it'll be fine. A side-by-side example of these rules is shown below.
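To illustrate, here are a few C lines from muxing.c paired with their javacpp equivalents, using the names from the code above:
// C:    c->gop_size = 12;                          /* struct member assignment */
VideoSt.getCodecCtx().gop_size(12);                 // becomes a setter call

// C:    avcodec_open2(c, codec, &opt);             /* address-of operator */
avcodec_open2(VideoSt.getCodecCtx(), VideoCodec, Opt);  // just drop the '&'

// C:    if (got_output) { ... }                    /* int used as a boolean */
if (got_output[0] != 0) { /* ... */ }               // compare with 0 explicitly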
Note that I omitted all error handling code here. It will be needed in real development/production.
I really don't want to advertise, but using PKZIP and its SDK may be a good solution. PKZIP compresses files by up to 95%, as they say.
The Smartcrypt SDK is available in all major programming languages, including C++, Java, and C#, and can be used to encrypt both structured and unstructured data. Changes to existing applications typically consist of two or three lines of code.
Requirement: I want to reverse a video file and save it as a new video file in Android, i.e. the final output file should play the video in reverse.
What I tried: I've used the below code (which I got from AOSP https://android.googlesource.com/platform/cts/+/kitkat-release/tests/tests/media/src/android/media/cts/MediaMuxerTest.java) with a little modification:
File file = new File(srcMedia.getPath());
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(file.getPath());
int trackCount = extractor.getTrackCount();

// Set up MediaMuxer for the destination.
MediaMuxer muxer;
muxer = new MediaMuxer(dstMediaPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

// Set up the tracks.
HashMap<Integer, Integer> indexMap = new HashMap<Integer, Integer>(trackCount);
for (int i = 0; i < trackCount; i++) {
    extractor.selectTrack(i);
    MediaFormat format = extractor.getTrackFormat(i);
    int dstIndex = muxer.addTrack(format);
    indexMap.put(i, dstIndex);
}

// Copy the samples from MediaExtractor to MediaMuxer.
boolean sawEOS = false;
int bufferSize = MAX_SAMPLE_SIZE;
int frameCount = 0;
int offset = 100;
long totalTime = mTotalVideoDurationInMicroSeconds;
ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();

if (degrees >= 0) {
    muxer.setOrientationHint(degrees);
}

muxer.start();
while (!sawEOS) {
    bufferInfo.offset = offset;
    bufferInfo.size = extractor.readSampleData(dstBuf, offset);
    if (bufferInfo.size < 0) {
        if (VERBOSE) {
            Log.d(TAG, "saw input EOS.");
        }
        sawEOS = true;
        bufferInfo.size = 0;
    } else {
        bufferInfo.presentationTimeUs = totalTime - extractor.getSampleTime();
        //noinspection WrongConstant
        bufferInfo.flags = extractor.getSampleFlags();
        int trackIndex = extractor.getSampleTrackIndex();
        muxer.writeSampleData(indexMap.get(trackIndex), dstBuf, bufferInfo);
        extractor.advance();
        frameCount++;
        if (VERBOSE) {
            Log.d(TAG, "Frame (" + frameCount + ") " +
                    "PresentationTimeUs:" + bufferInfo.presentationTimeUs +
                    " Flags:" + bufferInfo.flags +
                    " TrackIndex:" + trackIndex +
                    " Size(KB) " + bufferInfo.size / 1024);
        }
    }
}
muxer.stop();
muxer.release();
The main change I made is in this line:
bufferInfo.presentationTimeUs = totalTime - extractor.getSampleTime();
This was done in the expectation that the video frames would be written to the output file in reverse order. But the result was the same as the original video (not reversed).
I feel what I tried here does not make any sense. Basically I don't have much understanding of video formats, codecs, byte buffers, etc.
I've also tried using JavaCV, which is a good Java wrapper over OpenCV, FFmpeg, etc., and I got it working with that library. But the encoding process takes a long time, and the APK size became large due to the library.
With Android's built-in MediaCodec APIs I expect things to be faster and more lightweight. But I can accept other solutions too if they offer the same.
It would be greatly appreciated if someone could offer any help on how this can be done in Android. Also, if you have good articles that can help me learn the specifics/basics of video, codecs, video processing, etc., that would help as well.
I'm attempting to translate a large set of BufferedImages (pre-saved images created on the fly by my application) into a video using Java, and hopefully a library that can help with the process.
I've explored a number of different options, such as jcodec (there was no documentation on how to use it), Xuggler (couldn't get it to run due to compatibility issues with JDK 5 and its related libraries), and a number of other libraries that had very poor documentation.
I'm trying to find a library I can use from Java that (1) creates H.264 videos by writing BufferedImages frame by frame and (2) has documentation, so that I can actually figure out how to use the damn thing.
Any ideas on what I should be looking into?
If pure Java source code exists somewhere that can achieve this, I would be VERY interested in seeing it, because I would love to see how the person achieved the functionality and how I could use it!
Thanks in advance...
Here's how you can do it with JCodec:
public class SequenceEncoder {
    private SeekableByteChannel ch;
    private Picture toEncode;
    private RgbToYuv420 transform;
    private H264Encoder encoder;
    private ArrayList<ByteBuffer> spsList;
    private ArrayList<ByteBuffer> ppsList;
    private CompressedTrack outTrack;
    private ByteBuffer _out;
    private int frameNo;
    private MP4Muxer muxer;

    public SequenceEncoder(File out) throws IOException {
        this.ch = NIOUtils.writableFileChannel(out);

        // Transform to convert between RGB and YUV
        transform = new RgbToYuv420(0, 0);

        // Muxer that will store the encoded frames
        muxer = new MP4Muxer(ch, Brand.MP4);

        // Add video track to muxer
        outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, 25);

        // Allocate a buffer big enough to hold output frames
        _out = ByteBuffer.allocate(1920 * 1080 * 6);

        // Create an instance of encoder
        encoder = new H264Encoder();

        // Encoder extra data ( SPS, PPS ) to be stored in a special place of MP4
        spsList = new ArrayList<ByteBuffer>();
        ppsList = new ArrayList<ByteBuffer>();
    }

    public void encodeImage(BufferedImage bi) throws IOException {
        if (toEncode == null) {
            toEncode = Picture.create(bi.getWidth(), bi.getHeight(), ColorSpace.YUV420);
        }

        // Perform conversion
        transform.transform(AWTUtil.fromBufferedImage(bi), toEncode);

        // Encode image into H.264 frame, the result is stored in '_out' buffer
        _out.clear();
        ByteBuffer result = encoder.encodeFrame(_out, toEncode);

        // Based on the frame above form correct MP4 packet
        spsList.clear();
        ppsList.clear();
        H264Utils.encodeMOVPacket(result, spsList, ppsList);

        // Add packet to video track
        outTrack.addFrame(new MP4Packet(result, frameNo, 25, 1, frameNo, true, null, frameNo, 0));
        frameNo++;
    }

    public void finish() throws IOException {
        // Push saved SPS/PPS to a special storage in MP4
        outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));

        // Write MP4 header and finalize recording
        muxer.writeHeader();
        NIOUtils.closeQuietly(ch);
    }
}
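Using the class is then straightforward (a sketch; `frames` stands in for your own image source):
SequenceEncoder enc = new SequenceEncoder(new File("video.mp4"));
for (BufferedImage frame : frames) {
    enc.encodeImage(frame); // encode each frame and add it to the video track
}
enc.finish(); // write SPS/PPS and the MP4 header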
jcodec now (jcodec-0.1.9.jar) includes a SequenceEncoder that directly permits writing BufferedImages to a video stream.
I spent a while fixing the default import of this new class in Eclipse. After removing the first import, attempting (as I said above, I could not locate some of the classes) to create my own using Stanislav's code, and reimporting, I spotted the mistake:
import org.jcodec.api.awt.SequenceEncoder;
//import org.jcodec.api.SequenceEncoder;
The second is completely deprecated, with no documentation pointing you to the former.
The corresponding method is then:
private void saveClip(Trajectory traj) {
    // See www.tutorialspoint.com/androi/android_audio_capture.htm
    // for audio cap ideas.
    SequenceEncoder enc;
    try {
        enc = new SequenceEncoder(new File("C:/Users/WHOAMI/today.mp4"));
        for (int i = 0; i < BUFF_COUNT; ++i) {
            BufferedImage image = buffdFramToBuffdImage(frameBuff.get(i));
            enc.encodeImage(image);
        }
        enc.finish();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
I am coding in Java for Android. The issue is how to get an MP3 file's size and its audio length beforehand, so that I can set my progress bar / seek bar to respond to seeking events.
The answer could be simple info in a header, or an algorithm.
I am using Android's MediaPlayer to stream, and the only issue is seeking, which requires both of the above-mentioned things.
Any help is appreciated.
I also tried manually looking into the MP3 file and grabbing the header to decode the MP3 length, with this noob code:
total = ucon.getContentLength(); // ucon is an HttpURLConnection
is = ucon.getInputStream();
byte[] buffer = new byte[1024];
is.read(buffer, 0, 1024);
int offset = 1024;
int loc = 2000;
// FrameLengthInBytes = 144 * BitRate / SampleRate + Padding
while (loc == 2000 && offset < total) {
    for (int i = 0; i < 1023; i++) { // stop at 1023 so buffer[i + 1] stays in bounds
        // Java bytes are signed: mask with 0xFF before comparing against 255
        if ((buffer[i] & 0xFF) == 255) {
            if ((buffer[i + 1] & 0xFF) >= 224) {
                loc = i + 1;
                break;
            }
        }
    }
    is.read(buffer, 0, 1024);
    offset = 1024 + offset;
}
I couldn't find the pattern which marks an MP3 frame header (11111111 111xxxxx). I tried different files and can't find anything else to do. (One likely culprit: Java bytes are signed, so a raw byte never equals 255; the masked comparison above accounts for that.)
Update 2:
Now I know I don't have to search for MP3 frame headers but for ID3v2 headers. But it still has to be done while streaming the file, and on Android. I really hope someone helps; there are lots of programs doing these things, and I wonder why I would have to do it the hard way.
Well, I never knew about the ID3 tag system.
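For reference, an ID3v2 tag starts with a 10-byte header: "ID3", two version bytes, one flags byte, and a four-byte "syncsafe" size (7 bits per byte, high bit always zero). A sketch of computing how many bytes to skip, assuming the first 10 bytes of the stream are already in `header`:
static int id3v2TagSize(byte[] header) {
    if (header[0] != 'I' || header[1] != 'D' || header[2] != '3')
        return 0; // no ID3v2 tag present
    // the size field excludes the 10-byte header itself, hence the "10 +"
    return 10 + ((header[6] & 0x7F) << 21
               | (header[7] & 0x7F) << 14
               | (header[8] & 0x7F) << 7
               | (header[9] & 0x7F));
}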