I am building an export application using Xuggler that exports an H.264-encoded recording so that it can be played in an external player (writing the video recording to an .avi or .mp4 container).
I would like to know how one could create an IPacket from a byte array representing a video frame. Which parameters of the IPacket need to be set, and what values should they contain?
Likewise, which parameters should be set, and to what values, for the container that gathers the packets?
// Wrap the raw frame bytes in an IBuffer and build the packet from it.
packet = IPacket.make(IBuffer.make(null, data, 0, data.length));
// The timestamp is interpreted in units of the time base set below
// (here 1/1000, i.e. milliseconds).
packet.setTimeStamp(time);
packet.setTimeBase(IRational.make(1, 1000));
// Mark the packet as complete so it can be written to the container.
int pksz = packet.getSize();
packet.setComplete(true, pksz);
So I have a link to a video online (e.g. somewebsite.com/myVideo.mkv) and I want to download that video on the server through a servlet. The video file is CDN-enabled, so basically any public user can just put the link into a browser and it will start playing. This is the code I have so far.
void downloadFile(URL myURL) throws IOException {
    InputStream input = myURL.openStream();
    File video = new File("/path-to-file/" + myURL.getFile());
    // Bug fix: the output stream must wrap the target file, not itself.
    FileOutputStream output = new FileOutputStream(video);
    byte[] buffer = new byte[1024];
    int read;
    // Write full range.
    while ((read = input.read(buffer)) > 0) {
        output.write(buffer, 0, read);
    }
    output.close();
    input.close();
}
If I do that, it downloads the entire video file from the URL and the video plays back fine. However, if I specify a byte range, downloadFile(URL myURL, long startByte, long endByte), the video doesn't play back. I used input.skip() to skip forward to startByte, but I suspect that skips over some important header of the MKV format, which is why the player can't recognize the file. Does anyone know how to do this in Java?
There are 3 dominant HTTP streaming technologies: Apple HTTP Live Streaming, Microsoft Smooth Streaming, and Adobe HTTP Dynamic Streaming. Each of these technologies provides tools to convert video to the corresponding format. If you start with one large video file, the Apple and Adobe tools would create a number of small files containing, say, 10 sec of video each, plus a playlist file that gives the client a clue how to read them. I believe the Microsoft tools can actually generate a single file, but it would contain small video fragments internally.
With HTTP streaming, the "intelligence" lives in the client, which knows how to read the master playlist file and how to work through either the numerous media files or the numerous media file fragments. The HTTP server only has to serve a file or a file fragment specified by the Range header.
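That said, on the original question: the usual way to fetch just a byte range in Java is to ask the server for it with an HTTP Range header, instead of downloading and skipping bytes client-side. A minimal sketch, reusing myURL, startByte, and endByte from the question (the helper name openRange is mine, and note the resulting fragment still won't play on its own unless it includes the container headers):
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical helper: ask the server for just the requested bytes.
InputStream openRange(URL myURL, long startByte, long endByte) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) myURL.openConnection();
    conn.setRequestProperty("Range", "bytes=" + startByte + "-" + endByte);
    // 206 Partial Content means the server honored the range request.
    if (conn.getResponseCode() != HttpURLConnection.HTTP_PARTIAL) {
        throw new IOException("Server did not honor the Range request");
    }
    return conn.getInputStream();
}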
I'm simply trying to convert a .mov file into .webm using Xuggler, which should work, as FFmpeg supports .webm files.
This is my code:
IMediaReader reader = ToolFactory.makeReader("/home/user/vids/2.mov");
reader.addListener(ToolFactory.makeWriter("/home/user/vids/2.webm", reader));
while (reader.readPacket() == null);
System.out.println( "Finished" );
On running this, I get this error:
[main] ERROR org.ffmpeg - [libvorbis @ 0x8d7fafe0] Specified sample_fmt is not supported.
[main] WARN com.xuggle.xuggler - Error: could not open codec (../../../../../../../csrc/com/xuggle/xuggler/StreamCoder.cpp:831)
Exception in thread "main" java.lang.RuntimeException: could not open stream com.xuggle.xuggler.IStream@-1921013728[index:1;id:0;streamcoder:com.xuggle.xuggler.IStreamCoder@-1921010088[codec=com.xuggle.xuggler.ICodec@-1921010232[type=CODEC_TYPE_AUDIO;id=CODEC_ID_VORBIS;name=libvorbis;];time base=1/44100;frame rate=0/0;sample rate=44100;channels=1;];framerate:0/0;timebase:1/90000;direction:OUTBOUND;]: Operation not permitted
at com.xuggle.mediatool.MediaWriter.openStream(MediaWriter.java:1192)
at com.xuggle.mediatool.MediaWriter.getStream(MediaWriter.java:1052)
at com.xuggle.mediatool.MediaWriter.encodeAudio(MediaWriter.java:830)
at com.xuggle.mediatool.MediaWriter.onAudioSamples(MediaWriter.java:1441)
at com.xuggle.mediatool.AMediaToolMixin.onAudioSamples(AMediaToolMixin.java:89)
at com.xuggle.mediatool.MediaReader.dispatchAudioSamples(MediaReader.java:628)
at com.xuggle.mediatool.MediaReader.decodeAudio(MediaReader.java:555)
at com.xuggle.mediatool.MediaReader.readPacket(MediaReader.java:469)
at com.mycompany.xugglertest.App.main(App.java:13)
Java Result: 1
Any ideas?
There's a funky thing going on with Xuggler where it doesn't always let you set the sample format of IAudioSamples. You'll need to use an IAudioResampler.
Took me a while to figure this out. This post by Marty helped a lot, though his code is outdated now.
Here's how you fix it.
Before encoding
I'm assuming here that the audio input has been properly set up, resulting in an IStreamCoder called audioCoder.
Once that's done, you are probably initializing an IMediaWriter and adding an audio stream like so:
final IMediaWriter oggWriter = ToolFactory.makeWriter(oggOutputFile);
// Using stream 1 'cause there is also a video stream.
// For an audio only file you should use stream 0.
oggWriter.addAudioStream(1, 1, ICodec.ID.CODEC_ID_VORBIS,
audioCoder.getChannels(), audioCoder.getSampleRate());
Now create an IAudioResampler:
IAudioResampler oggResampler = IAudioResampler.make(audioCoder.getChannels(),
        audioCoder.getChannels(),
        audioCoder.getSampleRate(),
        audioCoder.getSampleRate(),
        IAudioSamples.Format.FMT_FLT,
        audioCoder.getSampleFormat());
And tell your IMediaWriter to update to its sample format:
// The stream 1 here is consistent with the stream we added earlier.
oggWriter.getContainer().getStream(1).getStreamCoder().
setSampleFormat(IAudioSamples.Format.FMT_FLT);
During encoding
You are probably creating an IAudioSamples and filling it with decoded audio data, like so:
IAudioSamples audioSample = IAudioSamples.make(512, audioCoder.getChannels(),
audioCoder.getSampleFormat());
int bytesDecoded = audioCoder.decodeAudio(audioSample, packet, offset);
Now create an IAudioSamples for the resampled data:
IAudioSamples vorbisSample = IAudioSamples.make(512, audioCoder.getChannels(),
IAudioSamples.Format.FMT_FLT);
Finally, resample the audio data and write the result:
oggResampler.resample(vorbisSample, audioSample, 0);
oggWriter.encodeAudio(1, vorbisSample);
Final thought
Just a hint to get your output files to play well:
If you use audio and video within the same container, then audio and video data packets should be written in such an order that the timestamp of each data packet is higher than that of the previous one. So you are almost certainly going to need some kind of buffering mechanism that alternates writing audio and video; a rough sketch follows.
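Here is a minimal sketch of such a buffer. The InterleavingWriter class is hypothetical, not part of Xuggler; it assumes both streams carry timestamps in the same unit (microseconds, Xuggler's default) and that stream 0 is video and stream 1 is audio, matching the writer set up above:
import java.util.ArrayDeque;
import java.util.Queue;

import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.xuggler.IAudioSamples;
import com.xuggle.xuggler.IVideoPicture;

// Hypothetical helper that buffers decoded media and always hands the
// chunk with the lowest timestamp to the writer first.
class InterleavingWriter {
    private final IMediaWriter writer;
    private final Queue<IVideoPicture> video = new ArrayDeque<IVideoPicture>();
    private final Queue<IAudioSamples> audio = new ArrayDeque<IAudioSamples>();

    InterleavingWriter(IMediaWriter writer) {
        this.writer = writer;
    }

    void queueVideo(IVideoPicture picture) {
        video.add(picture);
        drain();
    }

    void queueAudio(IAudioSamples samples) {
        audio.add(samples);
        drain();
    }

    // While both queues have data, write whichever head is older.
    private void drain() {
        while (!video.isEmpty() && !audio.isEmpty()) {
            if (video.peek().getTimeStamp() <= audio.peek().getTimeStamp()) {
                writer.encodeVideo(0, video.poll());
            } else {
                writer.encodeAudio(1, audio.poll());
            }
        }
    }
}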
In my Android application I am recording the user's voice which I save as a .3gp encoded audio file.
What I want to do is open it up, i.e. obtain the sequence x[n] of audio samples, in order to perform some audio signal analysis.
Does anyone know how I could go about doing this?
You can use the Android MediaCodec class to decode 3gp or other media files. The decoder output is a standard PCM byte array. You can send this output directly to the Android AudioTrack class for playback, or keep the byte array for further processing such as DSP. To apply a DSP algorithm, the byte array must be transformed into a float/double array. There are several steps to get the byte array output. In summary, it looks as follows:
Instantiate MediaCodec
String mMime = "audio/3gpp";
MediaCodec mMediaCodec = MediaCodec.createDecoderByType(mMime);
Create the media format and configure the media codec
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(filePath); // path to your .3gp file
// Read the real format from the file; creating an empty MediaFormat and
// reading the sample rate back from it (as in the original snippet) won't work.
MediaFormat mMediaFormat = extractor.getTrackFormat(0);
extractor.selectTrack(0);
mMediaCodec.configure(mMediaFormat, null, null, 0);
mMediaCodec.start();
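The summary above skips the step of feeding compressed samples into the decoder. A minimal sketch of that input loop, reusing the extractor and mMediaCodec from the configuration step (buffer-array style API of that Android era):
// Feed one chunk of compressed data from the extractor into the decoder.
ByteBuffer[] inputBuffers = mMediaCodec.getInputBuffers();
int inputBufferIndex = mMediaCodec.dequeueInputBuffer(10000); // timeout in microseconds
if (inputBufferIndex >= 0) {
    int sampleSize = extractor.readSampleData(inputBuffers[inputBufferIndex], 0);
    if (sampleSize < 0) {
        // No more samples: signal end of stream to the codec.
        mMediaCodec.queueInputBuffer(inputBufferIndex, 0, 0, 0,
                MediaCodec.BUFFER_FLAG_END_OF_STREAM);
    } else {
        mMediaCodec.queueInputBuffer(inputBufferIndex, 0, sampleSize,
                extractor.getSampleTime(), 0);
        extractor.advance();
    }
}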
Capture output from MediaCodec (this should run inside a thread)
MediaCodec.BufferInfo buf_info = new MediaCodec.BufferInfo();
int outputBufferIndex = mMediaCodec.dequeueOutputBuffer(buf_info, 0);
if (outputBufferIndex >= 0) {
    byte[] pcm = new byte[buf_info.size];
    // getOutputBuffers() returns the codec's output buffer array.
    mMediaCodec.getOutputBuffers()[outputBufferIndex].get(pcm, 0, buf_info.size);
    mMediaCodec.releaseOutputBuffer(outputBufferIndex, false);
}
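As noted above, DSP work needs the bytes converted to floats. A minimal sketch, assuming the common case of 16-bit little-endian PCM output (check the output MediaFormat of your stream to confirm):
// Convert 16-bit little-endian PCM bytes to floats in [-1.0, 1.0].
float[] x = new float[pcm.length / 2];
for (int i = 0; i < x.length; i++) {
    short s = (short) ((pcm[2 * i] & 0xff) | (pcm[2 * i + 1] << 8));
    x[i] = s / 32768f;
}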
This Google IO talk might be relevant here.
I am struggling with the transfer of a simple JPEG file inside an ID3v2 tag from C++ over a TCP socket to Java (Android). The library "taglib" can extract this file, and I am able to save the JPEG as a new file.
The send function looks like this:
char *parameter_full = new char[f3->picture().size()+2];
sprintf(parameter_full,"%s\n\0",f3->picture().data());
// send
result = send(c,parameter_full,strlen(parameter_full),0);
delete[] parameter_full;
where
f3->picture().data() returns a pointer to the internal data structure (it returns char*) and
f3->picture().size() returns the size of the array.
Then Android receives it with
String imageString = inFromServer.readLine();
byte[] imageBytes = imageString.getBytes();
Bitmap cover = BitmapFactory.decodeByteArray(imageBytes,0,imageBytes.length);
But somehow decodeByteArray always returns null. My idea is that Java doesn't receive the image correctly, because imageString only consists of 4 characters, while the extracted JPEG file has a size of 12.7 KB.
What has gone wrong?
Martin
You shouldn't use string functions on byte data, because 0 values are taken as string terminators. Look into memcpy on the C++ side if you need to copy the char*, and into the byte[] read functions of InputStream on the Java side; a sketch of the Java side follows.
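A minimal sketch of the Java receiving side, assuming the C++ sender is changed to first write the image size as a 4-byte big-endian integer and then the raw bytes (this length-prefix protocol is an assumption, not part of the original code; socket is your connected java.net.Socket):
// Read a length-prefixed binary image instead of using readLine().
DataInputStream in = new DataInputStream(socket.getInputStream());
int imageSize = in.readInt();          // 4-byte big-endian length sent first
byte[] imageBytes = new byte[imageSize];
in.readFully(imageBytes);              // binary-safe: reads exactly imageSize bytes
Bitmap cover = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);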
I would like to record the user's interaction in my Java Applet as a video to send (potentially stream) to my server with the intention of uploading to Youtube (or similar). A high frame-rate is not required (a couple frames per second is sufficient).
Minimizing the bandwidth used is preferred, so sending jpeg snapshots to the server and encoding server-side is my last resort.
Are there any lightweight Java video encoding libraries available that don't require native code?
I'm new to Java, so don't take this too seriously :)
I guess a good start with video encoding in Java is the Java Media Framework.
I haven't tried it, so I don't know what their support for FLV encoding is like.
Since Flash Media Server is commercial, couldn't you use Red5?
You would have a SWF, not an applet, but you would reach a broader percentage of viewers, since Flash Player is pretty widespread.
And Alex has a good point: since you need to upload the video to YouTube, why not use their API?
hth
Xuggler can be used to encode pretty much any format from Java, but it requires a native component to be installed with it. There isn't an applet version available in the easy-to-use download, but some users have built custom versions of FFmpeg and Xuggler that they have used in downloadable applications. Try asking on the xuggler-users user group to see if others will help.
You can encode your images into H.264/MP4; that way the result is immediately suitable for web streaming. To upload in parallel with recording, you can break your sequence into small chunks, say 25-100 images each, and upload each chunk as a separate movie.
You can do it in pure Java without any native code, just use JCodec ( http://jcodec.org ). Here's a handy class that you can use:
// Imports for the JCodec version current at the time of this answer
// (package names may differ in newer JCodec releases).
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;

import org.jcodec.codecs.h264.H264Encoder;
import org.jcodec.codecs.h264.H264Utils;
import org.jcodec.common.NIOUtils;
import org.jcodec.common.SeekableByteChannel;
import org.jcodec.common.model.ColorSpace;
import org.jcodec.common.model.Picture;
import org.jcodec.containers.mp4.Brand;
import org.jcodec.containers.mp4.MP4Packet;
import org.jcodec.containers.mp4.TrackType;
import org.jcodec.containers.mp4.muxer.CompressedTrack;
import org.jcodec.containers.mp4.muxer.MP4Muxer;
import org.jcodec.scale.AWTUtil;
import org.jcodec.scale.RgbToYuv420;

public class SequenceEncoder {
    private SeekableByteChannel ch;
    private Picture toEncode;
    private RgbToYuv420 transform;
    private H264Encoder encoder;
    private ArrayList<ByteBuffer> spsList;
    private ArrayList<ByteBuffer> ppsList;
    private CompressedTrack outTrack;
    private ByteBuffer _out;
    private int frameNo;
    private MP4Muxer muxer;

    public SequenceEncoder(File out) throws IOException {
        this.ch = NIOUtils.writableFileChannel(out);
        // Transform to convert between RGB and YUV
        transform = new RgbToYuv420(0, 0);
        // Muxer that will store the encoded frames
        muxer = new MP4Muxer(ch, Brand.MP4);
        // Add video track to muxer
        outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, 25);
        // Allocate a buffer big enough to hold output frames
        _out = ByteBuffer.allocate(1920 * 1080 * 6);
        // Create an instance of encoder
        encoder = new H264Encoder();
        // Encoder extra data ( SPS, PPS ) to be stored in a special place of
        // MP4
        spsList = new ArrayList<ByteBuffer>();
        ppsList = new ArrayList<ByteBuffer>();
    }

    public void encodeImage(BufferedImage bi) throws IOException {
        if (toEncode == null) {
            toEncode = Picture.create(bi.getWidth(), bi.getHeight(), ColorSpace.YUV420);
        }
        // Perform conversion
        transform.transform(AWTUtil.fromBufferedImage(bi), toEncode);
        // Encode image into H.264 frame, the result is stored in '_out' buffer
        _out.clear();
        ByteBuffer result = encoder.encodeFrame(_out, toEncode);
        // Based on the frame above form correct MP4 packet
        spsList.clear();
        ppsList.clear();
        H264Utils.encodeMOVPacket(result, spsList, ppsList);
        // Add packet to video track
        outTrack.addFrame(new MP4Packet(result, frameNo, 25, 1, frameNo, true, null, frameNo, 0));
        frameNo++;
    }

    public void finish() throws IOException {
        // Push saved SPS/PPS to a special storage in MP4
        outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));
        // Write MP4 header and finalize recording
        muxer.writeHeader();
        NIOUtils.closeQuietly(ch);
    }
}
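A minimal usage sketch (the frame source capturedFrames is hypothetical; in the applet case you would grab periodic snapshots of the UI, e.g. by painting components into a BufferedImage):
// Encode a sequence of snapshots into capture.mp4, then finalize the file.
SequenceEncoder enc = new SequenceEncoder(new File("capture.mp4"));
for (BufferedImage frame : capturedFrames) {
    enc.encodeImage(frame);
}
enc.finish();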
Why do you need to send the images or video directly? Sounds like a big bandwidth expense. Just serialize and send the stream of UI events with timestamps, and reconstruct what the user should be seeing on your server later (some visual details may depend on the user's machine/setup, but your applet ain't gonna be able to get to them decently anyway).
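A minimal sketch of what such a timestamped event record could look like (the field set is hypothetical; tailor it to the interactions your applet actually has):
import java.io.Serializable;

// Hypothetical timestamped UI event, to be serialized, sent to the server,
// and replayed there to reconstruct the session.
public class UiEvent implements Serializable {
    public final long timestampMillis; // when the event happened
    public final String type;          // e.g. "click", "drag", "keypress"
    public final int x, y;             // pointer position, if applicable

    public UiEvent(long timestampMillis, String type, int x, int y) {
        this.timestampMillis = timestampMillis;
        this.type = type;
        this.x = x;
        this.y = y;
    }
}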