Passing buffer from SuperpoweredAndroidAudioIO to Java InputStream / Android NDK

I am using the amazing Superpowered library (SuperpoweredAndroidAudioIO) for low-latency recording of audio. While the basic concepts are clear to me, I want to pass the recorded audio (which arrives in a buffer) back to an InputStream in Java (without recording to a file), from which I can then read the recorded audio and process it.
I guess this question could also be asked more generally: how do you feed a Java InputStream from a periodically updated buffer in C++?

Well, the suggestion I received in a comment turned out to be a simple and working solution:
Creation of pipe in C++:
int pipefd[2];
if (pipe(pipefd) == -1) {
    __android_log_print(ANDROID_LOG_VERBOSE, "C++", "Error creating pipe");
}
...
Passing file descriptor to Java:
...
return pipefd[0];
...
Then in Java/Android:
private ParcelFileDescriptor.AutoCloseInputStream underlyingStream;

// getFD() is the native method above, returning pipefd[0] (the read end).
ParcelFileDescriptor pfd = ParcelFileDescriptor.adoptFd(getFD());
underlyingStream = new ParcelFileDescriptor.AutoCloseInputStream(pfd);
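The stream can then be consumed like any other InputStream. A minimal read loop (a sketch only; it assumes the native audio callback writes raw 16-bit PCM frames into pipefd[1], the write end):

byte[] buf = new byte[4096];
int n;
// read() blocks until the native side writes more audio into the pipe.
while ((n = underlyingStream.read(buf)) > 0) {
    // process n bytes of raw PCM here
}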
Worked well for me, but of course I'm still happy to receive other suggestions.

Related

Compressing audio stream data in Python and decompressing in Java

I successfully stream audio over TCP from Python running on a Raspberry Pi to an Android application, and I am trying to use compression. However, it only works with strings, not with data from the audio stream (only noise is heard).
Here is my Python code:
while True:
    data = stream.read(CHUNKS)
    outData = StringIO()
    d = GzipFile(mode="w", fileobj=outData)
    d.write(data)
    d.close()
    conn.send(outData.getvalue())
and my android code:
speaker = newAudioTrack(AudioManager.STREAM_VOICE_CALL,sampleRate,channelConfig,audioFormat,minBufSize,AudioTrack.MODE_STREAM);
speaker.play();
DataInputStream ds = new DataInputStream(connectionSocket.getInputStream());
GZIPInputStream gzip = new GZIPInputStream(ds);
while (true) {
gzip.read(buffer, 0, buffer.length);
speaker.write(buffer, 0, buffer.length);}
Does anybody have an idea why this is not working? Also, are there alternative ways to achieve what I'm trying to do?
The problem might not be in GZIP at all. AudioTrack handles only uncompressed audio, as mentioned here:
It allows streaming of PCM audio buffers to the audio sink
You might give MediaPlayer a try; for example:
MediaPlayer player = new MediaPlayer();
player.setAudioStreamType(AudioManager.STREAM_MUSIC);
player.setDataSource("http://...");
player.prepare();
player.start();
Yet this might not be what you need, as you are trying to use a Socket.
In this case you might use a temporary file, as mentioned in this post,
or use a local HTTP server to stream an InputStream to MediaPlayer on Android.
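A minimal sketch of the temporary-file route (assuming the socket delivers a complete file in a format MediaPlayer supports, such as MP3; in, context, and the file name here are placeholders, not from the question):

// Dump the socket stream to a temporary file, then hand it to MediaPlayer.
File temp = File.createTempFile("stream", ".mp3", context.getCacheDir());
FileOutputStream out = new FileOutputStream(temp);
byte[] buf = new byte[4096];
int n;
while ((n = in.read(buf)) > 0) {
    out.write(buf, 0, n);
}
out.close();
MediaPlayer player = new MediaPlayer();
player.setDataSource(temp.getAbsolutePath());
player.prepare();
player.start();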
Hope this can help, best of luck :)

Download part of a video file through a servlet

So I have a link to a video online (e.g. somewebsite.com/myVideo.mkv) and I want to download that video on the server through a servlet. The video file is served through a CDN, so basically any public user can just put the link into the browser and it will start playing. This is the code I have so far.
void downloadFile(URL myURL) throws IOException {
    InputStream input = myURL.openStream();
    File video = new File("/path-to-file/" + myURL.getFile());
    FileOutputStream output = new FileOutputStream(video);
    byte[] buffer = new byte[1024];
    int read;
    // Write full range.
    while ((read = input.read(buffer)) > 0) {
        output.write(buffer, 0, read);
    }
    output.close();
    input.close();
}
If I do that, it downloads the entire video file from the URL and the video plays back fine. However, if I specify a byte range, downloadFile(URL myURL, long startByte, long endByte), the video doesn't play back. I used input.skip() to skip forward to startByte, but I suspect it skips over some important header of the MKV format, which is why the player can't recognize the file. Does anyone know how to do this in Java?
There are 3 dominant HTTP streaming technologies: Apple HTTP Live Streaming, Microsoft Smooth Streaming, and Adobe HTTP Dynamic Streaming. Each of these technologies provides tools to convert video to the corresponding format. If you start with one large video file, the Apple and Adobe tools would create a number of small files containing, say, 10 seconds of video each, and a playlist file that gives the client a clue how to read them. I believe the Microsoft tools can actually generate a single file, but it would contain small video fragments internally.
With HTTP streaming, the "intelligence" lives in the client, which knows how to read the master playlist file and how to get around either numerous media files or numerous media file fragments. The HTTP server only has to serve a file or a file fragment specified by the Range header.
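If all you need is the raw byte range (keeping in mind that the fragment will usually not be independently playable for container formats like MKV), a sketch using an HTTP Range request via java.net.HttpURLConnection; the method name and destination path are illustrative:

void downloadRange(URL myURL, long startByte, long endByte) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) myURL.openConnection();
    // Ask the server/CDN for just this range; expect a 206 Partial Content reply.
    conn.setRequestProperty("Range", "bytes=" + startByte + "-" + endByte);
    InputStream input = conn.getInputStream();
    FileOutputStream output = new FileOutputStream("/path-to-file/fragment.part");
    byte[] buffer = new byte[8192];
    int read;
    while ((read = input.read(buffer)) > 0) {
        output.write(buffer, 0, read);
    }
    output.close();
    input.close();
}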

Xuggler not converting a .webm file?

I'm simply trying to convert a .mov file into .webm using Xuggler, which should work, as FFmpeg supports .webm files.
This is my code:
IMediaReader reader = ToolFactory.makeReader("/home/user/vids/2.mov");
reader.addListener(ToolFactory.makeWriter("/home/user/vids/2.webm", reader));
while (reader.readPacket() == null);
System.out.println( "Finished" );
On running this, I get this error:
[main] ERROR org.ffmpeg - [libvorbis @ 0x8d7fafe0] Specified sample_fmt is not supported.
[main] WARN com.xuggle.xuggler - Error: could not open codec (../../../../../../../csrc/com/xuggle/xuggler/StreamCoder.cpp:831)
Exception in thread "main" java.lang.RuntimeException: could not open stream com.xuggle.xuggler.IStream@-1921013728[index:1;id:0;streamcoder:com.xuggle.xuggler.IStreamCoder@-1921010088[codec=com.xuggle.xuggler.ICodec@-1921010232[type=CODEC_TYPE_AUDIO;id=CODEC_ID_VORBIS;name=libvorbis;];time base=1/44100;frame rate=0/0;sample rate=44100;channels=1;];framerate:0/0;timebase:1/90000;direction:OUTBOUND;]: Operation not permitted
at com.xuggle.mediatool.MediaWriter.openStream(MediaWriter.java:1192)
at com.xuggle.mediatool.MediaWriter.getStream(MediaWriter.java:1052)
at com.xuggle.mediatool.MediaWriter.encodeAudio(MediaWriter.java:830)
at com.xuggle.mediatool.MediaWriter.onAudioSamples(MediaWriter.java:1441)
at com.xuggle.mediatool.AMediaToolMixin.onAudioSamples(AMediaToolMixin.java:89)
at com.xuggle.mediatool.MediaReader.dispatchAudioSamples(MediaReader.java:628)
at com.xuggle.mediatool.MediaReader.decodeAudio(MediaReader.java:555)
at com.xuggle.mediatool.MediaReader.readPacket(MediaReader.java:469)
at com.mycompany.xugglertest.App.main(App.java:13)
Java Result: 1
Any ideas?
There's a funky thing going on with Xuggler where it doesn't always allow you to set the sample format of IAudioSamples. You'll need to use an IAudioResampler.
Took me a while to figure this out. This post by Marty helped a lot, though his code is outdated now.
Here's how you fix it.
Before encoding
I'm assuming here that the audio input has been properly set up, resulting in an IStreamCoder called audioCoder.
After that's done, you are probably creating an IMediaWriter and adding an audio stream like so:
final IMediaWriter oggWriter = ToolFactory.makeWriter(oggOutputFile);
// Using stream 1 because there is also a video stream.
// For an audio-only file you should use stream 0.
oggWriter.addAudioStream(1, 1, ICodec.ID.CODEC_ID_VORBIS,
        audioCoder.getChannels(), audioCoder.getSampleRate());
Now create an IAudioResampler:
IAudioResampler oggResampler = IAudioResampler.make(audioCoder.getChannels(),
        audioCoder.getChannels(),
        audioCoder.getSampleRate(),
        audioCoder.getSampleRate(),
        IAudioSamples.Format.FMT_FLT,
        audioCoder.getSampleFormat());
Then tell your IMediaWriter's stream coder to use that sample format:
// The stream 1 here is consistent with the stream we added earlier.
oggWriter.getContainer().getStream(1).getStreamCoder()
        .setSampleFormat(IAudioSamples.Format.FMT_FLT);
During encoding
You are probably creating an IAudioSamples and filling it with decoded audio data, like so:
IAudioSamples audioSample = IAudioSamples.make(512, audioCoder.getChannels(),
        audioCoder.getSampleFormat());
int bytesDecoded = audioCoder.decodeAudio(audioSample, packet, offset);
Now create an IAudioSamples for the resampled data:
IAudioSamples vorbisSample = IAudioSamples.make(512, audioCoder.getChannels(),
        IAudioSamples.Format.FMT_FLT);
Finally, resample the audio data and write the result:
oggResampler.resample(vorbisSample, audioSample, 0);
oggWriter.encodeAudio(1, vorbisSample);
Final thought
Just a hint to get your output files to play well:
If you use audio and video within the same container, then audio and video data packets should be written in such an order that the timestamp of each data packet is higher than that of the previous one. So you are almost certainly going to need some kind of buffering mechanism that alternates between writing audio and video; a schematic sketch follows.
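Schematically, the interleaving could look like this (a sketch only, using java.util.ArrayDeque; the queues, and the code that fills them with decoded media in timestamp order, are assumed to exist, and the stream indices 0/1 match the setup above):

// Always write whichever pending packet has the smaller timestamp next,
// so timestamps in the container never go backwards.
Deque<IVideoPicture> videoQueue = new ArrayDeque<IVideoPicture>();
Deque<IAudioSamples> audioQueue = new ArrayDeque<IAudioSamples>();
while (!videoQueue.isEmpty() && !audioQueue.isEmpty()) {
    if (videoQueue.peek().getTimeStamp() <= audioQueue.peek().getTimeStamp()) {
        oggWriter.encodeVideo(0, videoQueue.poll());
    } else {
        oggWriter.encodeAudio(1, audioQueue.poll());
    }
}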

Sending images through a socket using Qt and reading them using Java

I'm trying to send an image from a Qt server through a socket and display it in a client written in Java. Until now I have only transferred strings to communicate on both sides, and I have tried different examples for sending images, but with no results.
The code I used to transfer the image in Qt is:
QImage image;
image.load("../punton.png");
qDebug() << "Image loaded";
QByteArray ban;              // Construct a QByteArray object
QBuffer buffer(&ban);        // Construct a QBuffer object using the QByteArray
image.save(&buffer, "PNG");  // Save the QImage data into the QBuffer
socket->write(ban);
On the other end, the code to read in Java is:
BufferedInputStream in = new BufferedInputStream(socket.getInputStream(), 1);
File f = new File("C:\\Users\\CLOUDMOTO\\Desktop\\JAVA\\image.png");
System.out.println("Receiving...");
FileOutputStream fout = new FileOutputStream(f);
byte[] by = new byte[1];
for (int len; (len = in.read(by)) > 0;) {
    fout.write(by, 0, len);
    System.out.println("Done!");
}
The Java process gets stuck until I close the Qt server, and after that the generated file is corrupt.
I'll appreciate any help, because I need to get this working and I'm new to programming in both languages.
I've also used the following commands; the receiving process now ends and shows a message, but the file is still corrupt:
socket->write(ban+"-1");
socket->close(); in qt.
And in java:
System.out.println(by);
String received = new String(by, 0, by.length, "ISO8859_1");
System.out.println(received);
System.out.println("Done!");
You cannot transport a file over a socket in such a simple way. You are not giving the receiver any clue about how many bytes are coming. Read the Javadoc for InputStream.read() carefully. Your receiver is in an endless loop because it keeps waiting for the next byte until the stream is closed. You have partially fixed that by calling socket->close() at the sender side. Ideally, you need to write the length of ban into the socket before the buffer, read that length at the receiver side, and then receive only that amount of bytes. Also flush and close the receiver stream before trying to read the received file.
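A sketch of that receiver protocol (assuming the Qt sender first writes the image size as a 4-byte big-endian integer, for example via QDataStream, before the PNG bytes):

DataInputStream in = new DataInputStream(socket.getInputStream());
int length = in.readInt();        // the announced image size, sent first
byte[] data = new byte[length];
in.readFully(data);               // blocks until exactly 'length' bytes arrive
FileOutputStream fout = new FileOutputStream(f);
fout.write(data);
fout.flush();
fout.close();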
I have absolutely no idea what you wanted to achieve with socket->write(ban + "-1"). Your logged output starts with %PNG, which is correct, but I can see "-1" at the end, which means you appended characters to the binary image data and hence corrupted it. Why?
And no, a 1x1 PNG is not 1 byte in size. It is not even 4 bytes (red, green, blue, alpha). A PNG needs things like a header and control checksums. Have a look at the size of the file on the filesystem: that is the size your by array actually needs.

Java AudioSystem and TargetDataLine

I am trying to capture audio from the line-in on my PC; to do this I am using the AudioSystem class. There are two choices with the static AudioSystem.write method: write to a file, or write to a stream. I can get it to write to a file just fine, but whenever I try to write to a stream I get a java.io.IOException (stream length not specified). For my buffer I am using a ByteArrayOutputStream. Is there another kind of stream I am supposed to be using, or am I messing up somewhere else?
Also, on a related subject: one can sample the line-in (TargetDataLine) directly by calling read. Is this the preferred way of doing audio capture, or should one use AudioSystem?
Update
Source code that was requested:
final private TargetDataLine line;
final private AudioFormat format;
final private AudioFileFormat.Type fileType;
final private AudioInputStream audioInputStream;
final private ByteArrayOutputStream bos;
// Constructor, etc.
public void run()
{
    System.out.println("AudioWorker Started");
    try
    {
        line.open(format);
        line.start();
        // This commented part is regarding the second part
        // of my question
        // byte[] buff = new byte[512];
        // int bytes = line.read(buff, 0, buff.length);
        AudioSystem.write(audioInputStream, fileType, bos);
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
    System.out.println("AudioWorker Finished");
}
// Stack trace in console
AudioWorker Started
java.io.IOException: stream length not specified
at com.sun.media.sound.WaveFileWriter.write(Unknown Source)
at javax.sound.sampled.AudioSystem.write(Unknown Source)
at AudioWorker.run(AudioWorker.java:41)
AudioWorker Finished
From AudioSystem.write JavaDoc:
Writes a stream of bytes representing an audio file of the specified file type to the output stream provided. Some file types require that the length be written into the file header; such files cannot be written from start to finish unless the length is known in advance. An attempt to write a file of such a type will fail with an IOException if the length in the audio file type is AudioSystem.NOT_SPECIFIED.
Since the WAVE format requires the length to be written at the beginning of the file, the writer queries the getFrameLength method of your AudioInputStream. When this returns NOT_SPECIFIED (because you're recording "live" data of as-yet-unknown length), the writer throws the exception.
The File-oriented variant of AudioSystem.write works around this by writing dummy data to the length field, then re-opening the file when the write is complete and overwriting that area of the file.
Use an output format that doesn't need the length in advance (AU), use an AudioInputStream that returns a valid frame length, or use the File version of the API.
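The AU route is a one-line change (a sketch, reusing the audioInputStream and bos fields from the code above):

// AU stores no up-front length, so streaming to an arbitrary OutputStream works.
AudioSystem.write(audioInputStream, AudioFileFormat.Type.AU, bos);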
You should check out Richard Baldwin's tutorial on Java sound. There's a complete source listing at the bottom of the article, where he uses TargetDataLine's read to capture audio.
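The direct read approach looks roughly like this (a sketch; line and format are the asker's fields from above, and the stopped flag is a placeholder assumed to be set elsewhere):

line.open(format);
line.start();
byte[] buff = new byte[line.getBufferSize() / 5];
ByteArrayOutputStream captured = new ByteArrayOutputStream();
while (!stopped) {
    // read() blocks until the requested amount of data is available.
    int n = line.read(buff, 0, buff.length);
    captured.write(buff, 0, n);
}
line.stop();
line.close();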
You could also try looking into JMF, which is a bit hairy but works a bit better than the javax.sound.sampled stuff. There are quite a few tutorials on the JMF page describing how to record from the line-in or mic channels.
