Xuggler stream[0] is not video - java

This is my first time asking a question on this forum. My question has two parts.
First, please see the code below, which extracts audio from a video file using Xuggler.
IMediaReader reader = ToolFactory.makeReader("E:\\NetBeanWorkspace\\Repo\\VideoSamples\\one.mp4");
File f = new File("E:\\NetBean Workspace\\Repo\\VideoSamples\\" + "one" + ".wav");
// writer built from the reader, with one audio stream added
IMediaWriter mediaWriter = ToolFactory.makeWriter(f.getAbsolutePath(), reader);
int sampleRate = 44100;
int channels = 2;
mediaWriter.addAudioStream(0, 0, ICodec.ID.CODEC_ID_ADPCM_IMA_WAV, channels, sampleRate);
reader.addListener(mediaWriter);
mediaWriter.setMaskLateStreamExceptions(true);
// pump packets until end of file
while (reader.readPacket() == null) ;
I get the following error on some files, while other files work fine.
java.lang.IllegalArgumentException: stream[0] is not video
at com.xuggle.mediatool.MediaWriter.encodeVideo(MediaWriter.java:754)
at com.xuggle.mediatool.MediaWriter.encodeVideo(MediaWriter.java:783)
at com.xuggle.mediatool.MediaWriter.onVideoPicture(MediaWriter.java:1434)
at com.xuggle.mediatool.AMediaToolMixin.onVideoPicture(AMediaToolMixin.java:166)
at com.xuggle.mediatool.MediaReader.dispatchVideoPicture(MediaReader.java:610)
at com.xuggle.mediatool.MediaReader.decodeVideo(MediaReader.java:519)
at com.xuggle.mediatool.MediaReader.readPacket(MediaReader.java:475)
at audioextractor.AudioExtractor.main(AudioExtractor.java:108)
Second, what is the best codec for extracting a 16-bit WAV file?
Please help me find answers to these two questions.
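For context on the first part: the stack trace shows the reader dispatching a decoded video picture to the writer, whose only stream (index 0) is audio. Below is an untested sketch of one common way around that symptom: a MediaToolAdapter sits between reader and writer and simply swallows video events, so only audio reaches the writer. CODEC_ID_PCM_S16LE is the plain signed 16-bit PCM codec usually used for WAV output; the paths and stream indices are taken from the question and may need adjusting.
import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.MediaToolAdapter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.mediatool.event.IVideoPictureEvent;
import com.xuggle.xuggler.ICodec;

public class AudioOnlyExtractSketch {
    public static void main(String[] args) {
        IMediaReader reader =
                ToolFactory.makeReader("E:\\NetBeanWorkspace\\Repo\\VideoSamples\\one.mp4");
        IMediaWriter writer =
                ToolFactory.makeWriter("E:\\NetBean Workspace\\Repo\\VideoSamples\\one.wav", reader);
        // plain signed 16-bit PCM is the usual codec for a 16-bit WAV
        writer.addAudioStream(0, 0, ICodec.ID.CODEC_ID_PCM_S16LE, 2, 44100);
        writer.setMaskLateStreamExceptions(true);

        // adapter that drops decoded video so the audio-only writer never sees it
        MediaToolAdapter dropVideo = new MediaToolAdapter() {
            @Override
            public void onVideoPicture(IVideoPictureEvent event) {
                // intentionally do nothing: video pictures are not forwarded
            }
        };
        reader.addListener(dropVideo);
        dropVideo.addListener(writer);

        while (reader.readPacket() == null) {
            // keep pumping packets until end of stream
        }
    }
}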

Related

Processing Phase Cancellation on WAV file generates WAV file with no sound

I am generating a WAV file with phase cancellation, but the generated WAV file plays with no sound. I have tried multiple players and devices, still no sound. First I copied the header to the target file. Then:
Reading the data part of the WAV file and getting the audio data array:
long arrLength = source.length() - Wav_header_size;
byte[] arr = new byte[(int) arrLength];
RandomAccessFile filein = new RandomAccessFile(source, "rw");
filein.seek(Wav_header_size);
filein.read();                     // reads and discards a single byte
filein.write(arr, 0, arr.length);  // writes the contents of arr (still all zeros at this point) into the file
filein.close();
Getting the channel arrays from the audio data:
short[] shortAudioArray = new short[arr.length / 2];
short[] channelLeft = new short[arr.length / 4];
short[] channelRight = new short[arr.length / 4];
ByteBuffer.wrap(arr).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shortAudioArray);
// de-interleave: even samples go to the left channel, odd samples to the right
for (int i = 0, j = 0; i < shortAudioArray.length; i += 2, ++j) {
    if (channelLeft.length > j && channelLeft[j] != 0)
        channelLeft[j] = shortAudioArray[i];
    else
        break;
    if (channelRight.length > j && channelRight[j] != 0)
        channelRight[j] = shortAudioArray[i + 1];
    else
        break;
}
Processing phase cancellation by negating one channel and then interleaving the two back together:
for (int i = 0; i < data2.length; i++) {
    data2[i] = (short) -data2[i];
}
// interleave: data1 into the even slots, the negated data2 into the odd slots
for (int i = 0, j = 0; j < dstAudio.length; i++, j = j + 2) {
    if (data1.length > i && data1[i] != 0)
        dstAudio[j] = data1[i];
    else
        break;
    if (data2.length > i && data2[i] != 0)
        dstAudio[j + 1] = data2[i];
    else
        break;
}
byte[] bytesLast = new byte[dstAudio.length * 2];
ByteBuffer.wrap(bytesLast).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(dstAudio);
This way a WAV file of the same size gets generated, but with no sound.
Can anyone please point out where I am going wrong in this process?
Java provides classes that handle the formatting and other structural aspects of the .wav file. I strongly suggest making use of those tools rather than attempting to write your own .wav headers and such.
You can read more about these tools at the Oracle tutorial on sound. The sixth in the series (Using Files and Format Converters) has a subsection on writing audio files.
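For instance, here is a minimal sketch of that approach, assuming 16-bit little-endian stereo PCM at 44.1 kHz held in a `samples` byte array (the names are placeholders); AudioSystem.write produces the RIFF/WAV header for you:
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;

public class WavWriteSketch {
    public static void writePcmAsWav(byte[] samples, File target) throws IOException {
        // 44.1 kHz, 16-bit, stereo, signed, little-endian PCM (adjust to match your data)
        AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
        long frames = samples.length / format.getFrameSize();
        try (AudioInputStream ais =
                 new AudioInputStream(new ByteArrayInputStream(samples), format, frames)) {
            // AudioSystem writes the WAV header and data chunk for us
            AudioSystem.write(ais, AudioFileFormat.Type.WAVE, target);
        }
    }
}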
I have solved the issue by manipulating the byte array directly, inverting the bits one by one, without producing or converting to short arrays any more.
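A reconstruction of what that byte-level approach might look like (not the poster's actual code), assuming 16-bit little-endian PCM: each sample is negated through its two bytes, so no short[] copies are needed.
public class PhaseInvertSketch {
    // Invert the phase of 16-bit little-endian PCM in place, working only on the byte array.
    static void invertPhase(byte[] pcm) {
        for (int i = 0; i + 1 < pcm.length; i += 2) {
            int lo = pcm[i] & 0xFF;
            int hi = pcm[i + 1];               // keep the sign of the high byte
            short sample = (short) ((hi << 8) | lo);
            short inverted = (short) -sample;  // phase inversion = negation
            pcm[i] = (byte) (inverted & 0xFF);
            pcm[i + 1] = (byte) ((inverted >> 8) & 0xFF);
        }
    }
}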

Export multiple images in one byte array (BLOB IBM DB2) to disk

I have a column "Content" (BLOB data) in an IBM DB2 database, and the data of one record looks like this: (https://drive.google.com/file/d/12d1g5jtomJS-ingCn_n0GKMsM4RkdYzB/view?usp=sharing)
I opened it in an editor and I think it contains more than one image (https://i.stack.imgur.com/2biLN.png, https://i.stack.imgur.com/ZwBOs.png).
I can export a single image from a byte array (using C#) to disk, but with multiple images I don't know how to do it.
Please help me! Thanks!
Edit 1:
I have tried exporting it as a single image with this code:
private void readBLOB(DB2Connection conn, DB2Transaction trans)
{
    try
    {
        string SavePath = @"D:\MyBLOB";
        long CurrentIndex = 0;
        // the number of bytes to store in the array
        int BufferSize = 413454;
        // the number of bytes returned from GetBytes()
        long BytesReturned;
        // a byte array to hold the buffer
        byte[] Blob = new byte[BufferSize];
        DB2Command cmd = conn.CreateCommand();
        cmd.CommandText = "SELECT ATTR0102500126 " +
                          " FROM JCR.ICMUT01278001 " +
                          " WHERE COMPKEY = 'N21E26B04900FC6B1F00000'";
        cmd.Transaction = trans;
        DB2DataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
        if (reader.Read())
        {
            FileStream fs = new FileStream(SavePath + "\\" + "quang canh.jpg", FileMode.OpenOrCreate, FileAccess.Write);
            BinaryWriter writer = new BinaryWriter(fs);
            // reset the index to the beginning of the field
            CurrentIndex = 0;
            BytesReturned = reader.GetBytes(
                0,            // the BLOB column index
                CurrentIndex, // the index in the field at which to begin the read
                Blob,         // the buffer to write into
                0,            // the start index of the buffer
                BufferSize    // the maximum number of bytes to copy into the buffer
            );
            while (BytesReturned == BufferSize)
            {
                writer.Write(Blob);
                writer.Flush();
                CurrentIndex += BufferSize;
                BytesReturned = reader.GetBytes(0, CurrentIndex, Blob, 0, BufferSize);
            }
            writer.Write(Blob, 0, (int)BytesReturned);
            writer.Flush();
            writer.Close();
            fs.Close();
        }
        reader.Close();
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
}
But I cannot view the image; it shows a format error => https://i.stack.imgur.com/PNS9Q.png
You are currently assuming all BLOBs in that DB are JPEG images. But that is clearly not the case.
Option 1: This is faulty data
Programs that save to databases can fail.
Databases themselves might fail, especially if transactions are turned off. Transactions are most likely turned off for BLOBs.
The physical disk the data was stored on might have degraded. And again, you will not get a lot of redundancy and error correction with BLOBs (plus making use of the error correction requires going through the proper DBMS in the first place).
Option 2: This is not a JPG
I know an article about Unicode that says "[...]problem comes down to one naive programmer who didn't understand the simple fact that if you don't tell me whether a particular string is encoded using UTF-8 or ASCII or ISO 8859-1 (Latin 1) or Windows 1252 (Western European), you simply cannot display it correctly or even figure out where it ends."
This applies doubly, triply and quadruply to images:
it could be any number of formats that use interlacing.
it could be a professional graphics program's image/project file, like TIFF, which can absolutely contain multiple images - up to one per layer you are working with.
it could even be a .SVG file (XML text that contains drawing orders) that was run through .ZIP compression, or a Word document.
it could even be a PDF, where the images are usually appended at the back (allowing you to read the text with a partial file, similar to interleaving).
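One low-tech way to narrow this down is to check the BLOB's leading bytes against well-known file signatures before assuming it is a .jpg. A small sketch follows (in Java, to match the rest of this page; the guessFormat helper is made up for illustration, and the same check is trivial in C#):
public class BlobSniffer {
    /** Best-effort guess of a BLOB's container from its leading bytes. */
    public static String guessFormat(byte[] blob) {
        if (startsWith(blob, 0xFF, 0xD8, 0xFF)) return "JPEG";
        if (startsWith(blob, 0x89, 'P', 'N', 'G')) return "PNG";
        if (startsWith(blob, 'G', 'I', 'F', '8')) return "GIF";
        if (startsWith(blob, 'B', 'M')) return "BMP";
        if (startsWith(blob, 'I', 'I', 0x2A, 0x00) || startsWith(blob, 'M', 'M', 0x00, 0x2A)) return "TIFF";
        if (startsWith(blob, '%', 'P', 'D', 'F')) return "PDF";
        if (startsWith(blob, 'P', 'K', 0x03, 0x04)) return "ZIP container (could be DOCX, ODF, ...)";
        return "unknown";
    }

    private static boolean startsWith(byte[] data, int... signature) {
        if (data.length < signature.length) return false;
        for (int i = 0; i < signature.length; i++) {
            if ((data[i] & 0xFF) != (signature[i] & 0xFF)) return false;
        }
        return true;
    }
}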

Recommended Java library for creating a video programmatically [closed]

Can anyone recommend a Java library that would allow me to create a video programmatically? Specifically, it would do the following:
take a series of BufferedImages as the frames
allow a background WAV/MP3 to be added
allow 'incidental' WAV/MP3s to be added at arbitrary, programmatically specified points
output the video in a common format (MPEG etc)
Can anybody recommend anything? For the picture/sound mixing, I'd even live with something that took a series of frames, and for each frame I had to supply the raw bytes of uncompressed sound data associated with that frame.
P.S. It doesn't even have to be a "third party library" as such if the Java Media Framework has the calls to achieve the above, but from my sketchy memory I have a feeling it doesn't.
I've used the code mentioned below to successfully perform items 1, 2, and 4 on your requirements list in pure Java. It's worth a look and you could probably figure out how to include #3.
http://www.randelshofer.ch/blog/2010/10/writing-quicktime-movies-in-pure-java/
I found a tool called ffmpeg which can convert multimedia files from one format to another. ffmpeg has a filter library called libavfilter, the replacement for vhook, which allows video/audio to be modified or examined between the decoder and the encoder. I think it should be possible to feed in raw frames and generate a video.
I looked for a Java implementation of ffmpeg and found the page titled "Getting Started with FFMPEG-JAVA", which is a Java wrapper around FFMPEG using JNA.
You can try a pure Java codec library called JCodec.
It has a very basic H.264 (AVC) encoder and MP4 muxer. Here's a full sample taken from their samples -- TranscodeMain.
private static void png2avc(String pattern, String out) throws IOException {
    FileChannel sink = null;
    try {
        sink = new FileOutputStream(new File(out)).getChannel();
        H264Encoder encoder = new H264Encoder();
        RgbToYuv420 transform = new RgbToYuv420(0, 0);
        int i;
        for (i = 0; i < 10000; i++) {
            File nextImg = new File(String.format(pattern, i));
            if (!nextImg.exists())
                continue;
            BufferedImage rgb = ImageIO.read(nextImg);
            Picture yuv = Picture.create(rgb.getWidth(), rgb.getHeight(), ColorSpace.YUV420);
            transform.transform(AWTUtil.fromBufferedImage(rgb), yuv);
            ByteBuffer buf = ByteBuffer.allocate(rgb.getWidth() * rgb.getHeight() * 3);
            ByteBuffer ff = encoder.encodeFrame(buf, yuv);
            sink.write(ff);
        }
        if (i == 1) {
            System.out.println("Image sequence not found");
            return;
        }
    } finally {
        if (sink != null)
            sink.close();
    }
}
This sample is more sophisticated and actually shows muxing of the encoded frames into an MP4 file:
private static void prores2avc(String in, String out, ProresDecoder decoder, RateControl rc) throws IOException {
    SeekableByteChannel sink = null;
    SeekableByteChannel source = null;
    try {
        sink = writableFileChannel(out);
        source = readableFileChannel(in);
        MP4Demuxer demux = new MP4Demuxer(source);
        MP4Muxer muxer = new MP4Muxer(sink, Brand.MOV);
        Transform transform = new Yuv422pToYuv420p(0, 2);
        H264Encoder encoder = new H264Encoder(rc);
        MP4DemuxerTrack inTrack = demux.getVideoTrack();
        CompressedTrack outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, (int) inTrack.getTimescale());
        VideoSampleEntry ine = (VideoSampleEntry) inTrack.getSampleEntries()[0];
        Picture target1 = Picture.create(ine.getWidth(), ine.getHeight(), ColorSpace.YUV422_10);
        Picture target2 = null;
        ByteBuffer _out = ByteBuffer.allocate(ine.getWidth() * ine.getHeight() * 6);
        ArrayList<ByteBuffer> spsList = new ArrayList<ByteBuffer>();
        ArrayList<ByteBuffer> ppsList = new ArrayList<ByteBuffer>();
        Packet inFrame;
        int totalFrames = (int) inTrack.getFrameCount();
        long start = System.currentTimeMillis();
        for (int i = 0; (inFrame = inTrack.getFrames(1)) != null && i < 100; i++) {
            Picture dec = decoder.decodeFrame(inFrame.getData(), target1.getData());
            if (target2 == null) {
                target2 = Picture.create(dec.getWidth(), dec.getHeight(), ColorSpace.YUV420);
            }
            transform.transform(dec, target2);
            _out.clear();
            ByteBuffer result = encoder.encodeFrame(_out, target2);
            if (rc instanceof ConstantRateControl) {
                int mbWidth = (dec.getWidth() + 15) >> 4;
                int mbHeight = (dec.getHeight() + 15) >> 4;
                result.limit(((ConstantRateControl) rc).calcFrameSize(mbWidth * mbHeight));
            }
            spsList.clear();
            ppsList.clear();
            H264Utils.encodeMOVPacket(result, spsList, ppsList);
            outTrack.addFrame(new MP4Packet((MP4Packet) inFrame, result));
            if (i % 100 == 0) {
                long elapse = System.currentTimeMillis() - start;
                System.out.println((i * 100 / totalFrames) + "%, " + (i * 1000 / elapse) + "fps");
            }
        }
        outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));
        muxer.writeHeader();
    } finally {
        if (sink != null)
            sink.close();
        if (source != null)
            source.close();
    }
}
Try JavaFX.
JavaFX includes support for rendering of images in multiple formats and support for playback of audio and video on all platforms where JavaFX is supported.
Here is a tutorial on manipulating images
Here is a tutorial on creating slideshows, timelines and scenes.
Here is an FAQ on adding sounds.
Most of these are for JavaFX 1.3; JavaFX 2.0 is now out.
Why not use FFMPEG?
There seems to be a Java wrapper for it:
http://fmj-sf.net/ffmpeg-java/getting_started.php
Here is an example of how to compile various media sources into one video with FFMPEG:
http://howto-pages.org/ffmpeg/#multiple
And, finally, the docs:
http://ffmpeg.org/ffmpeg.html
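For a concrete picture of that route (a sketch only; the frame pattern, file names, and frame rate are made-up placeholders), you can drive the ffmpeg command line from Java to mux an image sequence with an audio track:
import java.io.IOException;

public class FfmpegMuxSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Assumes frames named frame0001.png, frame0002.png, ... and a background track audio.wav
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg",
                "-framerate", "25",            // input frame rate of the image sequence
                "-i", "frame%04d.png",         // image sequence pattern
                "-i", "audio.wav",             // background audio
                "-c:v", "libx264",             // common H.264 encoder
                "-pix_fmt", "yuv420p",         // widely compatible pixel format
                "-shortest",                   // stop when the shorter input ends
                "out.mp4");
        pb.inheritIO();                        // show ffmpeg's console output
        int exit = pb.start().waitFor();
        System.out.println("ffmpeg exited with code " + exit);
    }
}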

Do I have to guess the sample rate of the sample before starting recording?

This is code that attempts to record an audio sample, but I have an unconstructed AudioFormat object (which is passed to DataLine.Info) because I don't know the sample rate.
EDIT
I have seen that just arbitrarily using a sample rate of 8000 works. But is that fine? Can I use any sample rate value?
boolean lineIsStopped = false;
TargetDataLine line = null;
AudioFormat af;                                                   // object never constructed
DataLine.Info info = new DataLine.Info(TargetDataLine.class, af); // af not initialized
try {
    line = (TargetDataLine) AudioSystem.getLine(info);
    line.open(af);
} catch (LineUnavailableException ex) {
    // handle the error
}
// now we are ready for input
// call start() to start accepting data from the mic
byte data[] = new byte[line.getBufferSize() / 5];
line.start(); // this statement starts delivering data into the line buffer
// start retrieving data from the line buffer
int numBytesRead;
int offset = 0;
ByteArrayOutputStream out = new ByteArrayOutputStream();
while (!lineIsStopped) { // while the line is not stopped, i.e. is active
    numBytesRead = line.read(data, offset, data.length);
    // now save the data
    try {
        out.write(data); // writes data to this output stream
    } catch (Exception exc) {
        System.out.println(exc);
    }
}
Given this, how can I construct the AudioFormat object without having any audio sample first?
After reading your comments: you are recording from the mic, in which case you want to set the audio format according to the quality you want from it. If you want telephone quality, 8 kHz would be fine; for tape quality, 22 kHz; and for CD-quality audio, 44.1 kHz. Of course, if you are transmitting that over the network, then 8 kHz is probably going to be good enough.
It's always a good idea to make this a setting in your application so the user can control what quality they want.
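To make that concrete, here is a small sketch of constructing the missing AudioFormat for microphone capture; the numbers are just the "CD quality" choice from above and can be swapped for 22050 or 8000:
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

public class MicFormatSketch {
    public static TargetDataLine openMic() throws LineUnavailableException {
        // 44.1 kHz, 16-bit, stereo, signed, little-endian PCM; use 22050f or 8000f for lower quality
        AudioFormat af = new AudioFormat(44100f, 16, 2, true, false);
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, af);
        TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
        line.open(af);
        return line;
    }
}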

How does everyone do MP3 streaming?

I am coding in Java for Android. The issue is how to get the MP3 file size and its audio length beforehand, so that I can set my progress bar / seek bar to respond to seeking events.
The answer can be a simple piece of header information or an algorithm.
I am using Android's MediaPlayer to stream, and the only issue is seeking, which requires both of the above-mentioned things.
Any help is appreciated.
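As a rough illustration, for a constant-bitrate MP3 (an assumption; VBR files need a Xing/VBRI header or a full frame scan) the duration can be estimated from nothing more than the content length and the bitrate:
public class Mp3DurationEstimate {
    /**
     * Rough duration for a constant-bitrate MP3.
     * contentLengthBytes: e.g. from HttpURLConnection.getContentLength()
     * id3v2SizeBytes:     size of a leading ID3v2 tag, 0 if none
     * bitrateBitsPerSec:  e.g. 128000 for a 128 kbps file
     */
    public static int durationSeconds(long contentLengthBytes, long id3v2SizeBytes, int bitrateBitsPerSec) {
        long audioBytes = contentLengthBytes - id3v2SizeBytes;
        return (int) (audioBytes * 8 / bitrateBitsPerSec);
    }
}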
I also tried manually looking into the MP3 file to find a frame header and decode the MP3 length, with this noob code:
total = ucon.getContentLength(); // ucon is an HttpURLConnection
is = ucon.getInputStream();
byte[] buffer = new byte[1024];
is.read(buffer, 0, 1024);
int offset = 1024;
int loc = 2000;
// FrameLengthInBytes = 144 * BitRate / SampleRate + Padding
while (loc == 2000 && offset < total) {
    for (int i = 0; i < 1024; i++) {
        // looking for the 0xFF 0xE0+ frame sync pattern
        // (note: buffer[i] is a signed Java byte, so the cast to int never equals 255)
        if ((int) buffer[i] == 255) {
            if ((int) buffer[i + 1] >= 224) {
                loc = i + 1;
                break;
            }
        }
    }
    is.read(buffer, 0, 1024);
    offset = 1024 + offset;
}
Couldn't find the pattern which marks an MP3 frame header (11111111 111xxxxx). Tried different files. Can't find anything else to do.
Update 2:
Now I know I don't have to search for MP3 frame headers but for ID3v2 headers. But it still has to be done while streaming the file, and on Android. I really hope someone helps; there are a lot of programs doing these things, so I wonder why I would have to do it the hard way.
Well, I never knew about the ID3 tag system.
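For reference, the ID3v2 tag mentioned in the update starts with a fixed 10-byte header: the text "ID3", two version bytes, one flags byte, and a four-byte synchsafe size (7 significant bits per byte). Here is a sketch of reading that size so the tag can be skipped before scanning for frame sync bytes (the helper name is made up):
import java.io.IOException;
import java.io.InputStream;

public class Id3v2Sketch {
    /**
     * Returns the total size of a leading ID3v2 tag (header + body) in bytes,
     * or 0 if the stream does not start with one. Reads exactly 10 bytes.
     */
    public static int id3v2TagSize(InputStream in) throws IOException {
        byte[] header = new byte[10];
        int read = 0;
        while (read < header.length) {
            int n = in.read(header, read, header.length - read);
            if (n < 0) return 0;  // stream too short to contain a tag
            read += n;
        }
        if (header[0] != 'I' || header[1] != 'D' || header[2] != '3') {
            return 0;             // no ID3v2 tag at the start
        }
        // The size field is "synchsafe": 4 bytes, 7 significant bits each.
        int size = ((header[6] & 0x7F) << 21)
                 | ((header[7] & 0x7F) << 14)
                 | ((header[8] & 0x7F) << 7)
                 |  (header[9] & 0x7F);
        return 10 + size;         // the header itself plus the tag body
    }
}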
