I'm attempting to turn a large set of BufferedImages (pre-saved images my application creates on the fly) into a video using Java, ideally with the help of a library.
I've explored a number of options, such as JCodec (there was no documentation on how to use it), Xuggler (I couldn't get it to run due to compatibility issues with JDK 5 and its related libraries), and a number of other libraries with very poor documentation.
I'm trying to find a Java library that (1) creates H.264 video by writing BufferedImages frame by frame and (2) has documentation, so that I can actually figure out how to use the thing.
Any ideas on what I should be looking into?
If pure Java source code exists somewhere that can achieve this, I would be VERY interested in seeing it, because I would love to see how the author achieved the functionality and how I could use it!
Thanks in advance...
Here's how you can do it with JCodec:
public class SequenceEncoder {
    private SeekableByteChannel ch;
    private Picture toEncode;
    private RgbToYuv420 transform;
    private H264Encoder encoder;
    private ArrayList<ByteBuffer> spsList;
    private ArrayList<ByteBuffer> ppsList;
    private CompressedTrack outTrack;
    private ByteBuffer _out;
    private int frameNo;
    private MP4Muxer muxer;

    public SequenceEncoder(File out) throws IOException {
        this.ch = NIOUtils.writableFileChannel(out);

        // Transform to convert between RGB and YUV
        transform = new RgbToYuv420(0, 0);

        // Muxer that will store the encoded frames
        muxer = new MP4Muxer(ch, Brand.MP4);

        // Add video track to muxer
        outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, 25);

        // Allocate a buffer big enough to hold output frames
        _out = ByteBuffer.allocate(1920 * 1080 * 6);

        // Create an instance of encoder
        encoder = new H264Encoder();

        // Encoder extra data (SPS, PPS) to be stored in a special place of MP4
        spsList = new ArrayList<ByteBuffer>();
        ppsList = new ArrayList<ByteBuffer>();
    }

    public void encodeImage(BufferedImage bi) throws IOException {
        if (toEncode == null) {
            toEncode = Picture.create(bi.getWidth(), bi.getHeight(), ColorSpace.YUV420);
        }

        // Perform conversion
        transform.transform(AWTUtil.fromBufferedImage(bi), toEncode);

        // Encode image into H.264 frame, the result is stored in '_out' buffer
        _out.clear();
        ByteBuffer result = encoder.encodeFrame(_out, toEncode);

        // Based on the frame above form correct MP4 packet
        spsList.clear();
        ppsList.clear();
        H264Utils.encodeMOVPacket(result, spsList, ppsList);

        // Add packet to video track
        outTrack.addFrame(new MP4Packet(result, frameNo, 25, 1, frameNo, true, null, frameNo, 0));
        frameNo++;
    }

    public void finish() throws IOException {
        // Push saved SPS/PPS to a special storage in MP4
        outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));

        // Write MP4 header and finalize recording
        muxer.writeHeader();
        NIOUtils.closeQuietly(ch);
    }
}
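For illustration, a minimal sketch of how this class could be driven (the frame-drawing loop below is made up; any source of BufferedImages works, as long as the frames fit the 1920x1080 output buffer allocated above):

SequenceEncoder enc = new SequenceEncoder(new File("video.mp4"));
for (int i = 0; i < 100; i++) {
    // Draw a dummy frame; in practice this is wherever your images come from
    BufferedImage frame = new BufferedImage(640, 480, BufferedImage.TYPE_INT_RGB);
    Graphics2D g = frame.createGraphics();
    g.setColor(Color.WHITE);
    g.fillRect(0, 0, 640, 480);
    g.setColor(Color.BLUE);
    g.fillRect(i * 5, 200, 50, 50); // a square sliding across the frame
    g.dispose();
    enc.encodeImage(frame);
}
enc.finish();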
JCodec now (jcodec-0.1.9.jar) includes a SequenceEncoder that directly permits writing BufferedImages to a video stream.
I spent a while sorting out the default import of this new class in Eclipse. After removing the first import, attempting to create my own version from Stanislav's code (as I said above, I could not locate some of the classes), and re-importing, I spotted the mistake:
import org.jcodec.api.awt.SequenceEncoder;
//import org.jcodec.api.SequenceEncoder;
The second one is deprecated, with no documentation pointing you to the former.
The corresponding method is then:
private void saveClip(Trajectory traj) {
    // See www.tutorialspoint.com/androi/android_audio_capture.htm
    // for audio cap ideas.
    SequenceEncoder enc;
    try {
        enc = new SequenceEncoder(new File("C:/Users/WHOAMI/today.mp4"));
        for (int i = 0; i < BUFF_COUNT; ++i) {
            BufferedImage image = buffdFramToBuffdImage(frameBuff.get(i));
            enc.encodeImage(image);
        }
        enc.finish();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
Related
I am trying to make an AirPlay server in Java with this library. I am able to start the server and connect to it, and I am getting video input; however, the input is in H.264 format, and when I try decoding it with JCodec it always says it needs an SPS/PPS, which I don't know how to create or find with just a byte[]. This is the onVideo method, which is pretty much just copy-pasted from some websites:
@Override
public void onVideo(byte[] video) {
    try {
        videoFileChannel.write(ByteBuffer.wrap(video));

        ByteBuffer bb = ByteBuffer.wrap(video);
        H264Decoder decoder = new H264Decoder();
        decoder.addSps(List.of(ByteBuffer.wrap(video)));
        Picture out = Picture.create(1920, 1088, ColorSpace.YUV420);
        var real = decoder.decodeFrame(bb, out.getData());
        // decoder.decodeFrame prints "[WARN] . (:0): Skipping frame as no SPS/PPS have been seen so far..."
        // in the console and returns null => NullPointer in the next line
        var img = AWTUtil.toBufferedImage(real.createCompatible());
        // ...
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Edit: I've uploaded a ("working") version to GitHub, but the decoded image is discolored and not all pixels get updated, so when something is on the screen and the frame changes, that something can still be visible in the image.
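For reference, the SPS and PPS the decoder is asking for are just particular NAL units inside the H.264 byte stream (types 7 and 8). A minimal sketch of how they could be picked out of the byte[], assuming the data arrives as an Annex B stream with 0x000001 start codes (that framing is an assumption, not something this snippet verifies):

// Scan an Annex B buffer for SPS (NAL type 7) and PPS (NAL type 8) units
List<ByteBuffer> spsList = new ArrayList<>();
List<ByteBuffer> ppsList = new ArrayList<>();
for (int i = 0; i + 3 < video.length; i++) {
    // A NAL unit begins after a 00 00 01 start code (possibly preceded by an extra 00)
    if (video[i] == 0 && video[i + 1] == 0 && video[i + 2] == 1) {
        int nalStart = i + 3;
        int nalType = video[nalStart] & 0x1F;

        // The NAL unit ends at the next start code, or at the end of the buffer
        int nalEnd = video.length;
        for (int j = nalStart + 1; j + 2 < video.length; j++) {
            if (video[j] == 0 && video[j + 1] == 0 && video[j + 2] == 1) {
                nalEnd = (video[j - 1] == 0) ? j - 1 : j;
                break;
            }
        }

        if (nalType == 7) {
            spsList.add(ByteBuffer.wrap(video, nalStart, nalEnd - nalStart));
        } else if (nalType == 8) {
            ppsList.add(ByteBuffer.wrap(video, nalStart, nalEnd - nalStart));
        }
    }
}
// Buffers like these, rather than the whole frame wrapped as in the code above,
// are what the decoder's addSps(...) (and its PPS counterpart) expect to receive.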
As the title suggests, I'm trying to save to file an object that contains (among other variables, Strings, etc) a few BufferedImages.
I found this:
How to serialize an object that includes BufferedImages
And it works like a charm, but with a small setback: it works well if your object contains only ONE image.
I've been struggling to get his solution to work with more than one image (which in theory it should), but each time I read the file back in I get my object and the correct number of images, yet only the first image is actually read in; the others are just null images with no data in them.
This is what my object looks like:
class Obj implements Serializable {
    transient List<BufferedImage> imageSelection = new ArrayList<BufferedImage>();

    // ... other vars and functions

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
        out.writeInt(imageSelection.size()); // how many images are serialized?
        for (BufferedImage eachImage : imageSelection) {
            ImageIO.write(eachImage, "jpg", out); // png is lossless
        }
    }

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        final int imageCount = in.readInt();
        imageSelection = new ArrayList<BufferedImage>(imageCount);
        for (int i = 0; i < imageCount; i++) {
            imageSelection.add(ImageIO.read(in));
        }
    }
}
This is how I'm writing and reading the object to and from a file:
// writing
try (FileOutputStream file = new FileOutputStream(objName + ".ser");
     ObjectOutputStream output = new ObjectOutputStream(file)) {
    output.writeObject(myObjs);
} catch (IOException ex) {
    ex.printStackTrace();
}

// reading
try (FileInputStream inputStr = new FileInputStream(file.getAbsolutePath());
     ObjectInputStream input = new ObjectInputStream(inputStr)) {
    myObjs = (List<Obj>) input.readObject();
} catch (Exception ex) {
    ex.printStackTrace();
}
Even though I have a list of objects, they get read in correctly and each element of the list is populated accordingly, except for the BufferedImages.
Does anyone have any means of fixing this?
The problem is most likely that ImageIO.read(...) does not leave the stream positioned correctly after the first image has been read, so subsequent reads start from the wrong place.
I see two options to fix this:
Rewrite the serialization of the BufferedImages to write the backing array(s) of the image, plus the height, width, color model/color space identifier and whatever other data is required to recreate the BufferedImage. This requires a bit of code to correctly handle all kinds of images, so I'll skip the details for now. Might be faster and more accurate (but might send more data).
Continue to serialize using ImageIO, but buffer each write using a ByteArrayOutputStream, and prepend each image with its byte count. When reading back, start by reading the byte count, and make sure you fully read each image. This is trivial to implement, but some images might get converted or lose details (i.e. JPEG compression) due to file format constraints. Something like:
private void writeObject(ObjectOutputStream out) throws IOException {
    out.defaultWriteObject();
    out.writeInt(imageSelection.size()); // how many images are serialized?
    for (BufferedImage eachImage : imageSelection) {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        ImageIO.write(eachImage, "jpg", buffer);
        out.writeInt(buffer.size()); // Prepend image with byte count
        buffer.writeTo(out);         // Write image
    }
}

private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
    in.defaultReadObject();
    int imageCount = in.readInt();
    imageSelection = new ArrayList<BufferedImage>(imageCount);
    for (int i = 0; i < imageCount; i++) {
        int size = in.readInt(); // Read byte count
        byte[] buffer = new byte[size];
        in.readFully(buffer); // Make sure you read all bytes of the image
        imageSelection.add(ImageIO.read(new ByteArrayInputStream(buffer)));
    }
}
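For the first option, a minimal sketch might look like the following; it assumes every image can be round-tripped through TYPE_INT_ARGB, which throws away the original color model and image type, so treat it as an illustration rather than a drop-in replacement:

private void writeObject(ObjectOutputStream out) throws IOException {
    out.defaultWriteObject();
    out.writeInt(imageSelection.size());
    for (BufferedImage image : imageSelection) {
        int width = image.getWidth();
        int height = image.getHeight();
        out.writeInt(width);
        out.writeInt(height);
        // Grab all pixels as packed ARGB ints; loses the original ColorModel/type
        int[] pixels = image.getRGB(0, 0, width, height, null, 0, width);
        out.writeObject(pixels);
    }
}

private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
    in.defaultReadObject();
    int imageCount = in.readInt();
    imageSelection = new ArrayList<BufferedImage>(imageCount);
    for (int i = 0; i < imageCount; i++) {
        int width = in.readInt();
        int height = in.readInt();
        int[] pixels = (int[]) in.readObject();
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        image.setRGB(0, 0, width, height, pixels, 0, width);
        imageSelection.add(image);
    }
}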
I am currently trying to write a jukebox-like application in Java that is able to play any audio source possible, but encountered some difficulties when trying to play radio streams.
For playback I use JLayer from JavaZoom, which works fine as long as the target is a direct media file or a direct media stream (I can play PCM, MP3 and OGG just fine). However, I run into difficulties with radio streams that either point to pre-media data like an m3u/pls file (which I could fix by adding a detection step beforehand), or that stream data on port 80 while a web page exists at the same location, so that what gets transmitted depends on the type of request. In the latter case, whenever I try to stream the media, I get the HTML data instead.
Example link of a stream that is hidden behind a web-page: http://stream.t-n-media.de:8030
This is playable in VLC, but if you put it into a browser or my application you'll receive an HTML file.
Is there:
A ready-made, free solution that I could use in place of JLayer? Preferably open source so I can study it?
A tutorial that can help me to write a solution on my own?
Or can someone give me an example of how to properly detect/request a media stream?
Thanks in advance!
import java.io.*;
import java.net.*;
import javax.sound.sampled.*;
import javax.sound.midi.*;
/**
* This class plays sounds streaming from a URL: it does not have to preload
* the entire sound into memory before playing it. It is a command-line
* application with no gui. It includes code to convert ULAW and ALAW
* audio formats to PCM so they can be played. Use the -m command-line option
* before MIDI files.
*/
public class PlaySoundStream {

    // Create a URL from the command-line argument and pass it to the
    // right static method depending on the presence of the -m (MIDI) option.
    public static void main(String[] args) throws Exception {
        if (args[0].equals("-m")) streamMidiSequence(new URL(args[1]));
        else streamSampledAudio(new URL(args[0]));

        // Exit explicitly.
        // This is needed because the audio system starts background threads.
        System.exit(0);
    }

    /** Read sampled audio data from the specified URL and play it */
    public static void streamSampledAudio(URL url)
            throws IOException, UnsupportedAudioFileException, LineUnavailableException {
        AudioInputStream ain = null;  // We read audio data from here
        SourceDataLine line = null;   // And write it here.

        try {
            // Get an audio input stream from the URL
            ain = AudioSystem.getAudioInputStream(url);

            // Get information about the format of the stream
            AudioFormat format = ain.getFormat();
            DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);

            // If the format is not supported directly (i.e. if it is not PCM
            // encoded), then try to transcode it to PCM.
            if (!AudioSystem.isLineSupported(info)) {
                // This is the PCM format we want to transcode to.
                // The parameters here are audio format details that you
                // shouldn't need to understand for casual use.
                AudioFormat pcm =
                    new AudioFormat(format.getSampleRate(), 16,
                                    format.getChannels(), true, false);

                // Get a wrapper stream around the input stream that does the
                // transcoding for us.
                ain = AudioSystem.getAudioInputStream(pcm, ain);

                // Update the format and info variables for the transcoded data
                format = ain.getFormat();
                info = new DataLine.Info(SourceDataLine.class, format);
            }

            // Open the line through which we'll play the streaming audio.
            line = (SourceDataLine) AudioSystem.getLine(info);
            line.open(format);

            // Allocate a buffer for reading from the input stream and writing
            // to the line. Make it large enough to hold 4k audio frames.
            // Note that the SourceDataLine also has its own internal buffer.
            int framesize = format.getFrameSize();
            byte[] buffer = new byte[4 * 1024 * framesize]; // the buffer
            int numbytes = 0;                               // how many bytes

            // We haven't started the line yet.
            boolean started = false;

            for (;;) { // We'll exit the loop when we reach the end of stream
                // First, read some bytes from the input stream.
                int bytesread = ain.read(buffer, numbytes, buffer.length - numbytes);
                // If there were no more bytes to read, we're done.
                if (bytesread == -1) break;
                numbytes += bytesread;

                // Now that we've got some audio data to write to the line,
                // start the line, so it will play that data as we write it.
                if (!started) {
                    line.start();
                    started = true;
                }

                // We must write bytes to the line in an integer multiple of
                // the framesize. So figure out how many bytes we'll write.
                int bytestowrite = (numbytes / framesize) * framesize;

                // Now write the bytes. The line will buffer them and play
                // them. This call will block until all bytes are written.
                line.write(buffer, 0, bytestowrite);

                // If we didn't have an integer multiple of the frame size,
                // then copy the remaining bytes to the start of the buffer.
                int remaining = numbytes - bytestowrite;
                if (remaining > 0)
                    System.arraycopy(buffer, bytestowrite, buffer, 0, remaining);
                numbytes = remaining;
            }

            // Now block until all buffered sound finishes playing.
            line.drain();
        } finally { // Always relinquish the resources we use
            if (line != null) line.close();
            if (ain != null) ain.close();
        }
    }

    // A MIDI protocol constant that isn't defined by javax.sound.midi
    public static final int END_OF_TRACK = 47;

    /* MIDI or RMF data from the specified URL and play it */
    public static void streamMidiSequence(URL url)
            throws IOException, InvalidMidiDataException, MidiUnavailableException {
        Sequencer sequencer = null;     // Converts a Sequence to MIDI events
        Synthesizer synthesizer = null; // Plays notes in response to MIDI events

        try {
            // Create, open, and connect a Sequencer and Synthesizer
            // They are closed in the finally block at the end of this method.
            sequencer = MidiSystem.getSequencer();
            sequencer.open();
            synthesizer = MidiSystem.getSynthesizer();
            synthesizer.open();
            sequencer.getTransmitter().setReceiver(synthesizer.getReceiver());

            // Specify the InputStream to stream the sequence from
            sequencer.setSequence(url.openStream());

            // This is an arbitrary object used with wait and notify to
            // prevent the method from returning before the music finishes
            final Object lock = new Object();

            // Register a listener to make the method exit when the stream is
            // done. See Object.wait() and Object.notify()
            sequencer.addMetaEventListener(new MetaEventListener() {
                public void meta(MetaMessage e) {
                    if (e.getType() == END_OF_TRACK) {
                        synchronized (lock) {
                            lock.notify();
                        }
                    }
                }
            });

            // Start playing the music
            sequencer.start();

            // Now block until the listener above notifies us that we're done.
            synchronized (lock) {
                while (sequencer.isRunning()) {
                    try { lock.wait(); } catch (InterruptedException e) { }
                }
            }
        } finally {
            // Always relinquish the sequencer, so others can use it.
            if (sequencer != null) sequencer.close();
            if (synthesizer != null) synthesizer.close();
        }
    }
}
I have used this piece of code in one of my projects that deals with audio streaming, and it was working just fine.
Furthermore, you can see similar examples here:
Java Audio Example
Just reading the javadoc of AudioSystem gave me an idea.
There is another signature for getAudioInputStream: you can give it an InputStream instead of a URL.
So, try to obtain the input stream yourself and add the needed headers so that you get the stream instead of the HTML content:
URLConnection uc = url.openConnection();
uc.setRequestProperty("<header name here>", "<header value here>");
InputStream in = uc.getInputStream();
ain = AudioSystem.getAudioInputStream(in);
Hope this helps.
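To make that concrete, here is a sketch of what it could look like. The specific header (a non-browser User-Agent, which some SHOUTcast-style servers use to decide between sending the status page and the stream) is an assumption about this particular server, not a documented requirement, and the BufferedInputStream wrapper is there because AudioSystem.getAudioInputStream(InputStream) requires mark/reset support:

URL url = new URL("http://stream.t-n-media.de:8030"); // the stream from the question
URLConnection uc = url.openConnection();
// Identify as a media player rather than a browser (assumed to trigger the stream response)
uc.setRequestProperty("User-Agent", "MyJukebox/1.0");
uc.setRequestProperty("Accept", "audio/mpeg, audio/*");

// getAudioInputStream(InputStream) needs mark/reset support, so buffer the raw stream
InputStream in = new BufferedInputStream(uc.getInputStream());
AudioInputStream ain = AudioSystem.getAudioInputStream(in);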
I know this answer comes late, but I had the same issue: I wanted to play MP3 and AAC audio and also wanted the user to insert PLS/M3U links. Here is what I did:
First I tried to parse the type by using the simple file name:
import de.webradio.enumerations.FileExtension;

import java.net.URL;

public class FileExtensionParser {
    /**
     * Parses a file extension.
     *
     * @param filenameUrl the url
     * @return the file extension; if it cannot be determined from the file name,
     *         Apache Tika detects it from the live content
     */
    public FileExtension parseFileExtension(URL filenameUrl) {
        String filename = filenameUrl.toString();
        if (filename.endsWith(".mp3")) {
            return FileExtension.MP3;
        } else if (filename.endsWith(".m3u") || filename.endsWith(".m3u8")) {
            return FileExtension.M3U;
        } else if (filename.endsWith(".aac")) {
            return FileExtension.AAC;
        } else if (filename.endsWith(".pls")) {
            return FileExtension.PLS;
        }
        URLTypeParser parser = new URLTypeParser();
        return parser.parseByContentDetection(filenameUrl);
    }
}
If that fails, I use Apache Tika to do a kind of live detection:
public class URLTypeParser {
    /**
     * This class uses Apache Tika to parse a URL using its content.
     *
     * @param url the webstream url
     * @return the detected file encoding: MP3, AAC or unsupported
     */
    public FileExtension parseByContentDetection(URL url) {
        try {
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            InputStream in = connection.getInputStream();
            BodyContentHandler handler = new BodyContentHandler();
            AudioParser parser = new AudioParser();
            Metadata metadata = new Metadata();
            parser.parse(in, handler, metadata);
            return parseMediaType(metadata);
        } catch (IOException e) {
            e.printStackTrace();
        } catch (TikaException e) {
            e.printStackTrace();
        } catch (SAXException e) {
            e.printStackTrace();
        }
        return FileExtension.UNSUPPORTED_TYPE;
    }

    private FileExtension parseMediaType(Metadata metadata) {
        String parsedMediaType = metadata.get("encoding");
        if (parsedMediaType.equalsIgnoreCase("aac")) {
            return FileExtension.AAC;
        } else if (parsedMediaType.equalsIgnoreCase("mpeg1l3")) {
            return FileExtension.MP3;
        }
        return FileExtension.UNSUPPORTED_TYPE;
    }
}
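The FileExtension enum referenced by both classes isn't shown in the answer; judging from the constants used above, it is presumably something along these lines:

package de.webradio.enumerations;

public enum FileExtension {
    MP3, M3U, AAC, PLS, UNSUPPORTED_TYPE
}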
This will also solve the HTML problem, since the method will return FileExtension.UNSUPPORTED_TYPE for HTML content.
I combined these classes with a factory pattern and it works fine. The live detection takes only about two seconds.
I don't think this will help you anymore, but since I struggled with it for almost three weeks I wanted to provide a working answer. You can see the whole project on GitHub: https://github.com/Seppl2202/webradio
TargetDataLine is, for me, so far the easiest way to capture microphone input in Java. I want to encode the audio that I capture together with a video of the screen [in a screen recorder application] so that the user can create a tutorial, slide cast, etc.
I use Xuggler to encode the video.
They do have a tutorial on encoding audio with video but they take their audio from a file. In my case, the audio is live.
To encode the video I use com.xuggle.mediaTool.IMediaWriter. The IMediaWriter object allows me to add a video stream and has the method
encodeAudio(int streamIndex, short[] samples, long timeStamp, TimeUnit timeUnit)
I can use that if I can get the samples from the TargetDataLine as short[], but it returns byte[].
So two questions are:
How can I encode the live audio with video?
How do I maintain the proper timing of the audio packets so that they are encoded at the proper time?
References:
1. JavaDoc for TargetDataLine: http://docs.oracle.com/javase/1.4.2/docs/api/javax/sound/sampled/TargetDataLine.html
2. Xuggler Documentation: http://build.xuggle.com/view/Stable/job/xuggler_jdk5_stable/javadoc/java/api/index.html
Update
My code for capturing video
public void run() {
    final IRational FRAME_RATE = IRational.make(frameRate, 1);
    final IMediaWriter writer = ToolFactory.makeWriter(completeFileName);
    writer.addVideoStream(0, 0, FRAME_RATE, recordingArea.width, recordingArea.height);
    long startTime = System.nanoTime();

    while (keepCapturing == true) {
        image = bot.createScreenCapture(recordingArea);
        PointerInfo pointerInfo = MouseInfo.getPointerInfo();
        Point globalPosition = pointerInfo.getLocation();

        int relativeX = globalPosition.x - recordingArea.x;
        int relativeY = globalPosition.y - recordingArea.y;

        BufferedImage bgr = convertToType(image, BufferedImage.TYPE_3BYTE_BGR);
        if (cursor != null) {
            bgr.getGraphics().drawImage(((ImageIcon) cursor).getImage(), relativeX, relativeY, null);
        }

        try {
            writer.encodeVideo(0, bgr, System.nanoTime() - startTime, TimeUnit.NANOSECONDS);
        } catch (Exception e) {
            writer.close();
            JOptionPane.showMessageDialog(null,
                    "Recording will stop abruptly because " +
                    "an error has occurred", "Error", JOptionPane.ERROR_MESSAGE, null);
        }

        try {
            sleep(sleepTime);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    writer.close();
}
I answered most of that recently under this question: Xuggler encoding and muxing
Code sample:
writer.addVideoStream(videoStreamIndex, 0, videoCodec, width, height);
writer.addAudioStream(audioStreamIndex, 0, audioCodec, channelCount, sampleRate);

while (... have more data ...)
{
    BufferedImage videoFrame = ...;
    long videoFrameTime = ...; // this is the time to display this frame
    writer.encodeVideo(videoStreamIndex, videoFrame, videoFrameTime, DEFAULT_TIME_UNIT);

    short[] audioSamples = ...; // the size of this array should be number of samples * channelCount
    long audioSamplesTime = ...; // this is the time to play back this bit of audio
    writer.encodeAudio(audioStreamIndex, audioSamples, audioSamplesTime, DEFAULT_TIME_UNIT);
}
In the case of TargetDataLine, getMicrosecondPosition() will tell you the time you need for audioSamplesTime. This appears to start from the time the TargetDataLine was opened. You need to figure out how to get a video timestamp referenced to the same clock, which depends on the video device and/or how you capture video. The absolute values do not matter as long as they are both using the same clock. You could subtract the initial value (at start of stream) from both your video and your audio times so that the timestamps match, but that is only a somewhat approximate match (probably close enough in practice).
You need to call encodeVideo and encodeAudio in strictly increasing order of time; you may have to buffer some audio and some video to make sure you can do that. More details here.
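As a sketch of the byte[]-to-short[] conversion the question asks about: assuming targetDataLine is the opened TargetDataLine and it was opened with 16-bit signed little-endian PCM (an assumption; match the byte order to whatever AudioFormat you actually used), the conversion and timestamping could look like this:

// Read a block of raw bytes from the microphone line
byte[] raw = new byte[4096];
int bytesRead = targetDataLine.read(raw, 0, raw.length);

// Reinterpret pairs of bytes as 16-bit samples
short[] samples = new short[bytesRead / 2];
ByteBuffer.wrap(raw, 0, bytesRead)
          .order(ByteOrder.LITTLE_ENDIAN)
          .asShortBuffer()
          .get(samples);

// Timestamp relative to when the line started, as suggested above
long audioSamplesTime = targetDataLine.getMicrosecondPosition();
writer.encodeAudio(audioStreamIndex, samples, audioSamplesTime, TimeUnit.MICROSECONDS);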
Can anyone recommend a Java library that would allow me to create a video programmatically? Specifically, it would do the following:
take a series of BufferedImages as the frames
allow a background WAV/MP3 to be added
allow 'incidental' WAV/MP3s to be added at arbitrary, programmatically specified points
output the video in a common format (MPEG etc)
Can anybody recommend anything? For the picture/sound mixing, I'd even live with something that took a series of frames, and for each frame I had to supply the raw bytes of uncompressed sound data associated with that frame.
P.S. It doesn't even have to be a "third party library" as such if the Java Media Framework has the calls to achieve the above, but from my sketchy memory I have a feeling it doesn't.
I've used the code mentioned below to successfully perform items 1, 2, and 4 on your requirements list in pure Java. It's worth a look and you could probably figure out how to include #3.
http://www.randelshofer.ch/blog/2010/10/writing-quicktime-movies-in-pure-java/
I found a tool called FFmpeg that can convert multimedia files from one format to another. It includes a filtering library, libavfilter (the substitute for vhook), which allows video/audio to be modified or examined between the decoder and the encoder. I think it should be possible to feed in raw frames and generate a video.
I looked into Java bindings for FFmpeg and found the page titled "Getting Started with FFMPEG-JAVA", which is a Java wrapper around FFmpeg using JNA.
You can try a pure Java codec library called JCodec.
It has a very basic H.264 (AVC) encoder and an MP4 muxer. Here's a full code sample taken from their samples -- TranscodeMain.
private static void png2avc(String pattern, String out) throws IOException {
    FileChannel sink = null;
    try {
        sink = new FileOutputStream(new File(out)).getChannel();
        H264Encoder encoder = new H264Encoder();
        RgbToYuv420 transform = new RgbToYuv420(0, 0);

        int i;
        for (i = 0; i < 10000; i++) {
            File nextImg = new File(String.format(pattern, i));
            if (!nextImg.exists())
                continue;
            BufferedImage rgb = ImageIO.read(nextImg);
            Picture yuv = Picture.create(rgb.getWidth(), rgb.getHeight(), ColorSpace.YUV420);
            transform.transform(AWTUtil.fromBufferedImage(rgb), yuv);
            ByteBuffer buf = ByteBuffer.allocate(rgb.getWidth() * rgb.getHeight() * 3);

            ByteBuffer ff = encoder.encodeFrame(buf, yuv);
            sink.write(ff);
        }
        if (i == 1) {
            System.out.println("Image sequence not found");
            return;
        }
    } finally {
        if (sink != null)
            sink.close();
    }
}
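For instance, it could be invoked as follows (the paths are just placeholders); note that this writes a raw H.264 elementary stream rather than an MP4 file:

png2avc("frames/img%05d.png", "out/video.264");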
This sample is more sophisticated and actually shows muxing of the encoded frames into an MP4 file:
private static void prores2avc(String in, String out, ProresDecoder decoder, RateControl rc) throws IOException {
    SeekableByteChannel sink = null;
    SeekableByteChannel source = null;
    try {
        sink = writableFileChannel(out);
        source = readableFileChannel(in);

        MP4Demuxer demux = new MP4Demuxer(source);
        MP4Muxer muxer = new MP4Muxer(sink, Brand.MOV);

        Transform transform = new Yuv422pToYuv420p(0, 2);

        H264Encoder encoder = new H264Encoder(rc);

        MP4DemuxerTrack inTrack = demux.getVideoTrack();
        CompressedTrack outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, (int) inTrack.getTimescale());

        VideoSampleEntry ine = (VideoSampleEntry) inTrack.getSampleEntries()[0];
        Picture target1 = Picture.create(ine.getWidth(), ine.getHeight(), ColorSpace.YUV422_10);
        Picture target2 = null;
        ByteBuffer _out = ByteBuffer.allocate(ine.getWidth() * ine.getHeight() * 6);

        ArrayList<ByteBuffer> spsList = new ArrayList<ByteBuffer>();
        ArrayList<ByteBuffer> ppsList = new ArrayList<ByteBuffer>();

        Packet inFrame;
        int totalFrames = (int) inTrack.getFrameCount();
        long start = System.currentTimeMillis();
        for (int i = 0; (inFrame = inTrack.getFrames(1)) != null && i < 100; i++) {
            Picture dec = decoder.decodeFrame(inFrame.getData(), target1.getData());
            if (target2 == null) {
                target2 = Picture.create(dec.getWidth(), dec.getHeight(), ColorSpace.YUV420);
            }
            transform.transform(dec, target2);
            _out.clear();
            ByteBuffer result = encoder.encodeFrame(_out, target2);
            if (rc instanceof ConstantRateControl) {
                int mbWidth = (dec.getWidth() + 15) >> 4;
                int mbHeight = (dec.getHeight() + 15) >> 4;
                result.limit(((ConstantRateControl) rc).calcFrameSize(mbWidth * mbHeight));
            }
            spsList.clear();
            ppsList.clear();
            H264Utils.encodeMOVPacket(result, spsList, ppsList);
            outTrack.addFrame(new MP4Packet((MP4Packet) inFrame, result));

            if (i % 100 == 0) {
                long elapse = System.currentTimeMillis() - start;
                System.out.println((i * 100 / totalFrames) + "%, " + (i * 1000 / elapse) + "fps");
            }
        }
        outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));

        muxer.writeHeader();
    } finally {
        if (sink != null)
            sink.close();
        if (source != null)
            source.close();
    }
}
Try JavaFX.
JavaFX includes support for rendering of images in multiple formats and support for playback of audio and video on all platforms where JavaFX is supported.
Here is a tutorial on manipulating images
Here is a tutorial on creating slideshows, timelines and scenes.
Here is FAQ on adding sounds.
Most of these are on JavaFX 1.3. Now JavaFX 2.0 is out.
Why not use FFMPEG?
There seems to be a Java wrapper for it:
http://fmj-sf.net/ffmpeg-java/getting_started.php
Here is an example of how to compile various media sources into one video with FFMPEG:
http://howto-pages.org/ffmpeg/#multiple
And, finally, the docs:
http://ffmpeg.org/ffmpeg.html