I am coding in Java for Android. The issue is how to get an MP3 file's size and its audio length beforehand, so that I can set my progress bar / seek bar to respond to seeking events.
The answer can be simple info from a header or an algorithm.
I am using Android's MediaPlayer to stream, and the only issue is seeking, which requires both of the things mentioned above.
Any help is appreciated.
I also tried manually looking into the MP3 file to find a frame header and decode the length, with this noob code:
total = ucon.getContentLength(); // ucon is an HttpURLConnection
is = ucon.getInputStream();
byte[] buffer = new byte[1024];
is.read(buffer, 0, 1024);
int offset = 1024;
int loc = 2000;
// FrameLengthInBytes = 144 * BitRate / SampleRate + Padding
while (loc == 2000 && offset < total) {
    for (int i = 0; i < 1023; i++) {
        // Java bytes are signed, so mask with 0xFF before comparing;
        // otherwise 0xFF reads as -1 and the sync pattern is never found
        if ((buffer[i] & 0xFF) == 0xFF) {
            if ((buffer[i + 1] & 0xFF) >= 0xE0) {
                loc = i + 1;
                break;
            }
        }
    }
    is.read(buffer, 0, 1024);
    offset = 1024 + offset;
}
Couldn't find the pattern which marks an MP3 frame header (11111111 111xxxxx). Tried different files. Can't find anything else to do.
Update 2:
Now I know I don't have to search for MP3 frame headers but for ID3v2 headers. But it still has to be done while streaming the file, and on Android. I really hope someone helps; there are a lot of programs doing these things, and I wonder why I have to do it the hard way.
Well, I never knew about the ID3 tag system.
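For what it's worth, a minimal sketch of skipping an ID3v2 tag, reusing the `is` stream from the snippet above and assuming it is positioned at the start of the file (the 10-byte header layout is from the ID3v2 spec):

byte[] header = new byte[10];
if (is.read(header) == 10
        && header[0] == 'I' && header[1] == 'D' && header[2] == '3') {
    // Bytes 6..9 hold the tag size as four 7-bit ("syncsafe") values
    int tagSize = ((header[6] & 0x7F) << 21)
                | ((header[7] & 0x7F) << 14)
                | ((header[8] & 0x7F) << 7)
                |  (header[9] & 0x7F);
    is.skip(tagSize); // the first MPEG frame header follows the tag
}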
So I am generating a WAV file with phase cancellation. The generated WAV file plays, but with no sound. I have used multiple players and devices, but no sound. At first I copied the header to the target file. Then:
Reading the data part of the WAV file and getting the audio data array:
long arrLength = source.length() - Wav_header_size;
byte[] arr = new byte[(int) arrLength];
RandomAccessFile filein = new RandomAccessFile(source, "r");
filein.seek(Wav_header_size);     // skip past the WAV header
filein.read(arr, 0, arr.length);  // read the PCM data into the array
filein.close();
Getting channel arrays from the audio data:
short[] shortAudioArray = new short[arr.length / 2];
short[] channelLeft = new short[arr.length / 4];
short[] channelRight = new short[arr.length / 4];
ByteBuffer.wrap(arr).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shortAudioArray);
// Interleaved stereo: even indices hold the left channel, odd the right
for (int i = 0, j = 0; i + 1 < shortAudioArray.length && j < channelLeft.length; i += 2, j++) {
    channelLeft[j] = shortAudioArray[i];
    channelRight[j] = shortAudioArray[i + 1];
}
Processing phase cancellation by negating one channel and then merging:
// Invert the phase of one channel so the two cancel when mixed
for (int i = 0; i < data2.length; i++) {
    data2[i] = (short) -data2[i];
}
// Re-interleave: left samples at even indices, inverted right at odd
for (int i = 0, j = 0; j + 1 < dstAudio.length && i < data1.length && i < data2.length; i++, j += 2) {
    dstAudio[j] = data1[i];
    dstAudio[j + 1] = data2[i];
}
byte[] bytesLast = new byte[dstAudio.length * 2];
ByteBuffer.wrap(bytesLast).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(dstAudio);
This way a WAV file of the same size is generated, but with no sound.
Can anyone please correct me if I am wrong anywhere in the whole process?
Java provides classes that handle the formatting and other structural aspects of the .wav file. I strongly suggest making use of those tools rather than attempting to write your own .wav headers and such.
You can read more about these tools at the Oracle tutorial on sound. The sixth in the series (Using Files and Format Converters) has a subsection on writing audio files.
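For instance, a minimal sketch (assuming 16-bit signed little-endian stereo PCM at 44.1 kHz; adjust the AudioFormat to match your data, and the class name is illustrative) that lets AudioSystem write the header for you:

import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class WavWriter {
    public static void writePcm(byte[] pcm, File out) throws IOException {
        // 44.1 kHz, 16-bit, stereo, signed, little-endian
        AudioFormat fmt = new AudioFormat(44100f, 16, 2, true, false);
        try (AudioInputStream ais = new AudioInputStream(
                new ByteArrayInputStream(pcm), fmt, pcm.length / fmt.getFrameSize())) {
            AudioSystem.write(ais, AudioFileFormat.Type.WAVE, out);
        }
    }
}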
I have since solved the issue by manipulating the byte array directly, inverting the samples one by one, without producing or converting to short arrays anymore.
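A minimal sketch of that byte-level approach (assuming 16-bit little-endian PCM; the method name is illustrative):

static void invertPcm16le(byte[] pcm) {
    for (int i = 0; i + 1 < pcm.length; i += 2) {
        short s = (short) ((pcm[i] & 0xFF) | (pcm[i + 1] << 8)); // assemble the little-endian sample
        s = (short) -s;                                          // invert the phase
        pcm[i] = (byte) s;                                       // store the low byte back
        pcm[i + 1] = (byte) (s >> 8);                            // store the high byte back
    }
}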
I have a column "Content" (BLOB data) in a database (IBM DB2), and the data of one record looks like this (https://drive.google.com/file/d/12d1g5jtomJS-ingCn_n0GKMsM4RkdYzB/view?usp=sharing).
I have opened it in an editor, and I think it contains more than one image (https://i.stack.imgur.com/2biLN.png, https://i.stack.imgur.com/ZwBOs.png).
I can export an image from a byte array (using C#) to my disk, but with multiple images I don't know how to do it.
Please help me! Thanks!
Edit 1:
I have tried to export it as a single image with this code:
private void readBLOB(DB2Connection conn, DB2Transaction trans)
{
    try
    {
        string SavePath = @"D:\MyBLOB";
        long CurrentIndex = 0;
        // the number of bytes to store in the array
        int BufferSize = 413454;
        // the number of bytes returned from GetBytes()
        long BytesReturned;
        // a byte array to hold the buffer
        byte[] Blob = new byte[BufferSize];
        DB2Command cmd = conn.CreateCommand();
        cmd.CommandText = "SELECT ATTR0102500126 " +
                          " FROM JCR.ICMUT01278001 " +
                          " WHERE COMPKEY = 'N21E26B04900FC6B1F00000'";
        cmd.Transaction = trans;
        DB2DataReader reader;
        reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
        if (reader.Read())
        {
            FileStream fs = new FileStream(SavePath + "\\" + "quang canh.jpg", FileMode.OpenOrCreate, FileAccess.Write);
            BinaryWriter writer = new BinaryWriter(fs);
            // reset the index to the beginning of the field
            CurrentIndex = 0;
            BytesReturned = reader.GetBytes(
                0,            // the BLOB column index
                CurrentIndex, // the index of the field from which to begin the read
                Blob,         // the array to write the buffer to
                0,            // the start index of the array
                BufferSize    // the maximum length to copy into the buffer
            );
            while (BytesReturned == BufferSize)
            {
                writer.Write(Blob);
                writer.Flush();
                CurrentIndex += BufferSize;
                BytesReturned = reader.GetBytes(0, CurrentIndex, Blob, 0, BufferSize);
            }
            writer.Write(Blob, 0, (int)BytesReturned);
            writer.Flush();
            writer.Close();
            fs.Close();
        }
        reader.Close();
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
}
But I cannot view the image; it shows a format error => https://i.stack.imgur.com/PNS9Q.png
You are currently assuming all BLOBs in that DB are JPEG images. But that is clearly not the case.
Option 1: This is faulty data
Programs that save to databases can fail.
Databases themselves might fail, especially if transactions are turned off. Transactions are most likely turned off for BLOBs.
The physical disk the data was stored on might have degraded. And again, you will not get a lot of redundancy and error correction with BLOBs (plus taking advantage of the error correction requires going through the proper DBMS in the first place).
Option 2: This is not a JPG
There is a well-known article about Unicode that says "[...] the problem comes down to one naive programmer who didn't understand the simple fact that if you don't tell me whether a particular string is encoded using UTF-8 or ASCII or ISO 8859-1 (Latin 1) or Windows 1252 (Western European), you simply cannot display it correctly or even figure out where it ends."
This applies doubly, triply and quadruply to images:
this could be any number of formats that use interlacing.
this could be a professional graphics program's image/project file, like TIFF, which can contain multiple images: up to one per layer you are working with.
this could even be an .SVG file (XML text that contains drawing orders) that was run through ZIP compression, or a Word document.
this could even be a PDF, where the images are usually appended at the back (allowing you to read the text with a partial file, similar to interleaving).
A quick magic-byte check, as in the sketch below, is the fastest way to tell these apart.
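As a starting point, a minimal sketch (in Java; class and method names are illustrative) that sniffs a BLOB's leading bytes against a few standard magic numbers before assuming JPEG:

public class BlobSniffer {
    // Returns a best-guess format based on standard file signatures.
    public static String sniffFormat(byte[] blob) {
        if (blob.length >= 2 && (blob[0] & 0xFF) == 0xFF && (blob[1] & 0xFF) == 0xD8)
            return "jpeg";  // FF D8 = JPEG start-of-image marker
        if (blob.length >= 4 && (blob[0] & 0xFF) == 0x89 && blob[1] == 'P'
                && blob[2] == 'N' && blob[3] == 'G')
            return "png";   // 89 'P' 'N' 'G'
        if (blob.length >= 4 && blob[0] == 'P' && blob[1] == 'K'
                && blob[2] == 3 && blob[3] == 4)
            return "zip";   // ZIP container (also DOCX/ODT/compressed SVG bundles)
        if (blob.length >= 4 && blob[0] == '%' && blob[1] == 'P'
                && blob[2] == 'D' && blob[3] == 'F')
            return "pdf";   // %PDF
        if (blob.length >= 4 && ((blob[0] == 'I' && blob[1] == 'I' && blob[2] == 42 && blob[3] == 0)
                || (blob[0] == 'M' && blob[1] == 'M' && blob[2] == 0 && blob[3] == 42)))
            return "tiff";  // II*\0 (little-endian) or MM\0* (big-endian)
        return "unknown";
    }
}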
This is my first time asking a question on this forum. My question has two parts.
First, please see the code below to extract audio from a video file using Xuggle:
IMediaReader reader;
File f;
reader = ToolFactory.makeReader("E:\\NetBeanWorkspace\\Repo\\VideoSamples\\one.mp4");
f = new File("E:\\NetBean Workspace\\Repo\\VideoSamples\\" + "one" + ".wav");
IMediaWriter mediaWriter = ToolFactory.makeWriter(f.getAbsolutePath(), reader);
int sampleRate = 44100;
int channels = 2;
mediaWriter.addAudioStream(0, 0, ICodec.ID.CODEC_ID_ADPCM_IMA_WAV, channels, sampleRate);
reader.addListener(mediaWriter);
mediaWriter.setMaskLateStreamExceptions(true);
while (reader.readPacket() == null) ; // pump packets until the end of the file
I get the following error on some files, while other files work fine:
java.lang.IllegalArgumentException: stream[0] is not video
at com.xuggle.mediatool.MediaWriter.encodeVideo(MediaWriter.java:754)
at com.xuggle.mediatool.MediaWriter.encodeVideo(MediaWriter.java:783)
at com.xuggle.mediatool.MediaWriter.onVideoPicture(MediaWriter.java:1434)
at com.xuggle.mediatool.AMediaToolMixin.onVideoPicture(AMediaToolMixin.java:166)
at com.xuggle.mediatool.MediaReader.dispatchVideoPicture(MediaReader.java:610)
at com.xuggle.mediatool.MediaReader.decodeVideo(MediaReader.java:519)
at com.xuggle.mediatool.MediaReader.readPacket(MediaReader.java:475)
at audioextractor.AudioExtractor.main(AudioExtractor.java:108)
Second, what is the best codec for extracting a 16-bit WAV file?
Please help me find answers to these two questions.
Can anyone recommend a Java library that would allow me to create a video programmatically? Specifically, it would do the following:
take a series of BufferedImages as the frames
allow a background WAV/MP3 to be added
allow 'incidental' WAV/MP3s to be added at arbitrary, programmatically specified points
output the video in a common format (MPEG etc)
Can anybody recommend anything? For the picture/sound mixing, I'd even live with something that took a series of frames, and for each frame I had to supply the raw bytes of uncompressed sound data associated with that frame.
P.S. It doesn't even have to be a "third party library" as such if the Java Media Framework has the calls to achieve the above, but from my sketchy memory I have a feeling it doesn't.
I've used the code mentioned below to successfully perform items 1, 2, and 4 on your requirements list in pure Java. It's worth a look and you could probably figure out how to include #3.
http://www.randelshofer.ch/blog/2010/10/writing-quicktime-movies-in-pure-java/
I found a tool called FFmpeg which can convert multimedia files from one format to another. It has a filter called libavfilter, the substitute for vhook, which allows the video/audio to be modified or examined between the decoder and the encoder. I think it should be possible to input raw frames and generate video.
I looked for a Java implementation of FFmpeg and found a page titled "Getting Started with FFMPEG-JAVA", which is a Java wrapper around FFmpeg using JNA.
You can try a pure Java codec library called JCodec.
It has a very basic H.264 (AVC) encoder and an MP4 muxer. Here's full sample code taken from their samples -- TranscodeMain.
private static void png2avc(String pattern, String out) throws IOException {
    FileChannel sink = null;
    try {
        sink = new FileOutputStream(new File(out)).getChannel();
        H264Encoder encoder = new H264Encoder();
        RgbToYuv420 transform = new RgbToYuv420(0, 0);
        int i;
        for (i = 0; i < 10000; i++) {
            File nextImg = new File(String.format(pattern, i));
            if (!nextImg.exists())
                continue;
            BufferedImage rgb = ImageIO.read(nextImg);
            Picture yuv = Picture.create(rgb.getWidth(), rgb.getHeight(), ColorSpace.YUV420);
            transform.transform(AWTUtil.fromBufferedImage(rgb), yuv);
            ByteBuffer buf = ByteBuffer.allocate(rgb.getWidth() * rgb.getHeight() * 3);
            ByteBuffer ff = encoder.encodeFrame(buf, yuv);
            sink.write(ff);
        }
        if (i == 1) {
            System.out.println("Image sequence not found");
            return;
        }
    } finally {
        if (sink != null)
            sink.close();
    }
}
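A call like the following (the paths are illustrative) would then encode a numbered PNG sequence into a raw H.264 stream:

png2avc("frames/frame%08d.png", "out.h264");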
This sample is more sophisticated and actually shows muxing of the encoded frames into an MP4 file:
private static void prores2avc(String in, String out, ProresDecoder decoder, RateControl rc) throws IOException {
    SeekableByteChannel sink = null;
    SeekableByteChannel source = null;
    try {
        sink = writableFileChannel(out);
        source = readableFileChannel(in);
        MP4Demuxer demux = new MP4Demuxer(source);
        MP4Muxer muxer = new MP4Muxer(sink, Brand.MOV);
        Transform transform = new Yuv422pToYuv420p(0, 2);
        H264Encoder encoder = new H264Encoder(rc);
        MP4DemuxerTrack inTrack = demux.getVideoTrack();
        CompressedTrack outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, (int) inTrack.getTimescale());
        VideoSampleEntry ine = (VideoSampleEntry) inTrack.getSampleEntries()[0];
        Picture target1 = Picture.create(ine.getWidth(), ine.getHeight(), ColorSpace.YUV422_10);
        Picture target2 = null;
        ByteBuffer _out = ByteBuffer.allocate(ine.getWidth() * ine.getHeight() * 6);
        ArrayList<ByteBuffer> spsList = new ArrayList<ByteBuffer>();
        ArrayList<ByteBuffer> ppsList = new ArrayList<ByteBuffer>();
        Packet inFrame;
        int totalFrames = (int) inTrack.getFrameCount();
        long start = System.currentTimeMillis();
        for (int i = 0; (inFrame = inTrack.getFrames(1)) != null && i < 100; i++) {
            Picture dec = decoder.decodeFrame(inFrame.getData(), target1.getData());
            if (target2 == null) {
                target2 = Picture.create(dec.getWidth(), dec.getHeight(), ColorSpace.YUV420);
            }
            transform.transform(dec, target2);
            _out.clear();
            ByteBuffer result = encoder.encodeFrame(_out, target2);
            if (rc instanceof ConstantRateControl) {
                int mbWidth = (dec.getWidth() + 15) >> 4;
                int mbHeight = (dec.getHeight() + 15) >> 4;
                result.limit(((ConstantRateControl) rc).calcFrameSize(mbWidth * mbHeight));
            }
            spsList.clear();
            ppsList.clear();
            H264Utils.encodeMOVPacket(result, spsList, ppsList);
            outTrack.addFrame(new MP4Packet((MP4Packet) inFrame, result));
            if (i % 100 == 0) {
                long elapse = System.currentTimeMillis() - start;
                System.out.println((i * 100 / totalFrames) + "%, " + (i * 1000 / elapse) + "fps");
            }
        }
        outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));
        muxer.writeHeader();
    } finally {
        if (sink != null)
            sink.close();
        if (source != null)
            source.close();
    }
}
Try JavaFX.
JavaFX includes support for rendering of images in multiple formats and support for playback of audio and video on all platforms where JavaFX is supported.
Here is a tutorial on manipulating images
Here is a tutorial on creating slideshows, timelines and scenes.
Here is an FAQ on adding sounds.
Most of these are on JavaFX 1.3. Now JavaFX 2.0 is out.
Why not use FFMPEG?
There seems to be a Java wrapper for it:
http://fmj-sf.net/ffmpeg-java/getting_started.php
Here is an example of how to compile various media sources into one video with FFMPEG:
http://howto-pages.org/ffmpeg/#multiple
And, finally, the docs:
http://ffmpeg.org/ffmpeg.html
I have a strange phenomenon when resuming a file transfer.
Look at the picture below and you can see the bad section.
This happens apparently at random, maybe every 10th time.
I'm sending the picture from my Android phone to a Java server over FTP.
What is it that I forgot here?
I see the connection is killed due to java.net.SocketTimeoutException.
The transfer resumes like this:
Resume at : 287609 Sending 976 bytes more
The byte count is always correct when the file is completely received, even for the picture below.
I don't know where to start debugging this, since it works most of the time.
Any suggestions or ideas would be great; I think I totally missed something here.
The device sender code (send loop only):
int count = 1;
// Sending N files, looping N times
while (count <= max) {
    String sPath = batchFiles.get(count - 1);
    fis = new FileInputStream(new File(sPath));
    bis = new BufferedInputStream(fis); // wrap for skip()/read()
    int fileSize = bis.available();
    out.writeInt(fileSize); // size
    String nextReply = in.readUTF();
    // if the file already exists on the server, move on
    if (nextReply.equals(Consts.SERVER_give_me_next)) {
        count++;
        continue;
    }
    long resumeLong = 0; // skip this many bytes
    int val = 0;
    buffer = new byte[1024];
    if (nextReply.equals(Consts.SERVER_file_exist)) {
        resumeLong = in.readLong();
    }
    // UPDATE FOR @Justin Breitfeller, Thanks
    long skiip = bis.skip(resumeLong);
    if (resumeLong != -1) {
        if (!(resumeLong == skiip)) {
            Log.d(TAG, "ERROR skip is not the same as resumeLong");
            skiip = bis.skip(resumeLong);
            if (!(resumeLong == skiip)) {
                Log.d(TAG, "ERROR ABORTING skip is not the same as resumeLong");
                return;
            }
        }
    }
    while ((val = bis.read(buffer, 0, 1024)) > 0) {
        out.write(buffer, 0, val);
        fileSize -= val;
        if (fileSize < 1024) {
            val = (int) fileSize;
        }
    }
    reply = in.readUTF();
    if (reply.equals(Consts.SERVER_file_receieved_ok)) {
        // check if all files are sent
        if (count == max) {
            break;
        }
    }
    count++;
}
The receiver code (very truncated):
// receiving N files, looping N times
while (count < totalNrOfFiles) {
    int ii = in.readInt(); // file size
    fileSize = (long) ii;
    String filePath = Consts.SERVER_DRIVE + Consts.PTPP_FILETRANSFER;
    filePath = filePath.concat(theBatch.getFileName(count));
    File path = new File(filePath);
    boolean resume = false;
    // if the file exists: skip it if done, or resume if not
    if (path.exists()) {
        if (path.length() == fileSize) { // does the file have the same size
            logger.info("File size same, skipping file: " + theBatch.getFileName(count));
            count++;
            out.writeUTF(Consts.SERVER_give_me_next);
            continue; // file is OK, don't upload it again
        } else {
            // resume the upload
            out.writeUTF(Consts.SERVER_file_exist);
            out.writeLong(path.length());
            resume = true;
            fileSize = fileSize - path.length();
            logger.info("Resume at : " + path.length() +
                    " Sending " + fileSize + " bytes more");
        }
    } else {
        out.writeUTF("lets go");
    }
    byte[] buffer = new byte[1024];
    // ***********************************
    // RECEIVE FROM PHONE
    // ***********************************
    int size = 1024;
    int val = 0;
    bos = new BufferedOutputStream(new FileOutputStream(path, resume));
    if (fileSize < size) {
        size = (int) fileSize;
    }
    while (fileSize > 0) {
        val = in.read(buffer, 0, size);
        if (val == -1)
            break; // stream closed before the full file arrived
        bos.write(buffer, 0, val);
        fileSize -= val;
        if (fileSize < size) {
            size = (int) fileSize;
        }
    }
    bos.flush();
    bos.close();
    out.writeUTF("file received ok");
    count++;
}
Found the error, and the problem was bad logic on my part; say no more.
I was sending pictures that were being resized just before they were sent.
The problem was that when the resume kicked in after a failed transfer,
the resized picture was not used; instead the code used the original
picture, which had a larger size.
I have now set up a short-lived cache that holds the resized temporary pictures.
In light of the complexity of the app I'm making, I simply forgot that the files during resume were not the same as the originals.
With a BufferedOutputStream and BufferedInputStream, you need to watch out for the following:
Create the BufferedOutputStream before the BufferedInputStream (on both client and server).
Flush just after create.
Flush after every write (not just before close).
That worked for me; see the sketch below for the ordering.
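A minimal sketch of that creation order, assuming an already-connected Socket and object streams (class and method names are illustrative; use the same order on both ends):

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.Socket;

class StreamSetup {
    static ObjectOutputStream out;
    static ObjectInputStream in;

    // Output stream first, flushed immediately, then the input stream;
    // reversing the order can deadlock both peers waiting on the stream header.
    static void openStreams(Socket socket) throws IOException {
        out = new ObjectOutputStream(new BufferedOutputStream(socket.getOutputStream()));
        out.flush(); // flush just after create, so the peer can read the header
        in = new ObjectInputStream(new BufferedInputStream(socket.getInputStream()));
    }
}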
Edited
Add sentRequestTime, receivedRequestTime, sentResponseTime, and receivedResponseTime to your packet payload. Use System.nanoTime() for these, run your server and client on the same host, use an ExecutorService to run multiple clients against that server, and plot (received - sent) for both request and response packets as a time-delay chart in Excel (some CSV format). Do this before and after adding the buffered streams. You will be pleased to see performance improve by around 100%. Plotting that graph made me very happy; it took about 45 minutes.
I have also heard that using custom buffers further improves performance.
Edited again
In my case I am using object I/O streams. I added a payload of four long variables to the object, and I initialize sentRequestTime when the client sends the packet, receivedRequestTime when the server receives it, and so forth for the response from server to client. I then take the difference between received and sent time to find the delay in the request and the response. Be careful to run this test on localhost. If you run it between different hardware/devices, their actual time difference may interfere with your results, since requestReceivedTime is stamped at the server end and requestSentTime at the client end. In other words, each device stamps its own local time, and two devices running the exact same time to the nanosecond is not possible. If you must run it between different devices, at least make sure you have NTP running to keep them synchronized. That said, you are comparing performance before and after buffered I/O (you don't really care about the actual time delays, right?), so time drift should not really matter. Comparing a set of results before buffering and after buffering is your actual interest.
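A minimal sketch of such a payload (class and field names are illustrative):

import java.io.Serializable;

class TimedPacket implements Serializable {
    long sentRequestTime;      // stamped by the client just before writing the request
    long receivedRequestTime;  // stamped by the server right after reading it
    long sentResponseTime;     // stamped by the server just before writing the response
    long receivedResponseTime; // stamped by the client right after reading it
    byte[] body;               // the actual payload being measured

    // Delays to plot, one row per packet in the CSV
    long requestDelay()  { return receivedRequestTime - sentRequestTime; }
    long responseDelay() { return receivedResponseTime - sentResponseTime; }
}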
Enjoy!!!