Real-time FFT plotting - java

I have a bit of an odd question that I am hoping someone here can help me with.
BACKGROUND: I am trying to design a system that will take in continuous-time data from a VLF antenna/preamp system, run an FFT analysis on it (magnitude versus time), and plot the resulting FFT data as a real-time spectrogram. The project is what is known as a "hum sniffer", but specifically to see signal interference in the 15 - 35 kHz range. I have purchased a couple of "teach yourself Java" books and am in the process of reading them. I am an engineering student with limited programming experience in ANSI C and Matlab.
QUESTION: There are several applications on the Android market that perform a similar function using the microphone as the input source, and I have purchased all of them just to see how they operate. I have also purchased an Arduino Uno with a USB Host shield from Sparkfun, as well as an IOIO board from Sparkfun. I am really REALLY hoping that I can use a combination of the boards I have purchased in conjunction with the aforementioned antenna/preamp system to plot those real-time spectrograms in an Android program I have yet to create.
I am not looking for anyone to hold my hand through this process, but if anyone has experience with anything similar I would appreciate any insight. My major concern at this point is whether I need to design the external system to do the A/D conversion before feeding that data into the phone, or whether I might be able to send the continuous-time signal directly into Android and have the phone do both the A/D conversion and the FFT plots. Oh, and whether or not I can use the USB port to send data into the phone.
I am using my Nexus S 4G for all testing/applications.
Thanks in advance for any input.

Have you tried connecting your audio source to the phone's headset microphone input and using a sound recording app? Then you would get a file that you can read into Matlab and play around with to get an idea of the capabilities of the phone's audio input.
If the audio input is good enough, then writing an app to do real-time FFT and plotting shouldn't be too tricky. That way you avoid dealing with the Arduino and Android's USB accessory support.
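To give you an idea of the shape of that app, here is a minimal sketch of the capture-and-analyze loop using Android's AudioRecord plus a naive DFT (for a real spectrogram you'd swap in a proper radix-2 FFT such as JTransforms). The class name and buffer sizes are placeholders I picked, and you'll need the RECORD_AUDIO permission:

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class MicSpectrum {
    private static final int SAMPLE_RATE = 44100; // standard capture rate on most phones
    private static final int FFT_SIZE = 1024;     // ~43 Hz per bin at 44.1 kHz

    public void captureAndAnalyze() {
        int minBuf = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, Math.max(minBuf, FFT_SIZE * 2));
        short[] samples = new short[FFT_SIZE];
        recorder.startRecording();
        while (!Thread.currentThread().isInterrupted()) {
            int read = recorder.read(samples, 0, FFT_SIZE);
            if (read == FFT_SIZE) {
                double[] magnitudes = magnitudeSpectrum(samples);
                // hand the magnitudes to your plotting view here
            }
        }
        recorder.stop();
        recorder.release();
    }

    // Naive DFT for clarity; replace with a radix-2 FFT for real-time use.
    private double[] magnitudeSpectrum(short[] x) {
        int n = x.length;
        double[] mag = new double[n / 2];
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += x[t] * Math.cos(angle);
                im -= x[t] * Math.sin(angle);
            }
            mag[k] = Math.sqrt(re * re + im * im);
        }
        return mag;
    }
}
```

One caveat for your 15 - 35 kHz target: at a 44.1 kHz capture rate the Nyquist limit is about 22 kHz, so the phone's own audio path only covers the bottom of that band; anything above it would have to come in through external A/D hardware.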

The IOIO hardware is capable of 500 ksps. This is currently limited in firmware to 1 ksps per channel in order to bound the USB bandwidth being used. However, it is very easy to change (a single number and a firmware rebuild), provided you know what you're doing and won't overflow the USB channel.
A single sample on a single channel is a 3-byte message. At 40 kHz, that works out to 120 KB/s, which is within the effective bandwidth that has been achieved over ADB (the maximum is about 300 KB/s).
If you need help rebuilding the firmware, the ioio-users list is your friend.
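For reference, reading an analog pin with IOIOLib looks roughly like the sketch below. This is written from memory of the AnalogInput API, so treat the class names and the pin number as assumptions and check them against the current IOIOLib docs:

```java
import ioio.lib.api.AnalogInput;
import ioio.lib.api.exception.ConnectionLostException;
import ioio.lib.util.BaseIOIOLooper;

// Runs on the IOIO connection thread (returned from IOIOActivity.createIOIOLooper()).
class SamplerLooper extends BaseIOIOLooper {
    private AnalogInput input;

    @Override
    protected void setup() throws ConnectionLostException {
        // Pin 40 is arbitrary here; any analog-capable pin works.
        input = ioio_.openAnalogInput(40);
    }

    @Override
    public void loop() throws ConnectionLostException, InterruptedException {
        float v = input.read(); // blocking read, normalized 0.0 - 1.0
        // buffer v for the FFT; at the stock firmware rate this is ~1 ksps
    }
}
```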

Related

How to communicate with a device?

I am a web programmer and I am just wondering how software and hardware can communicate. I have basic knowledge of Java, but I am not an expert.
Let's make it simple. I have a device which is just a simple lamp that can be switched on and off, and it is connected via USB. My software has only one function: pressing Enter.
By pressing enter I want the software to communicate with the USB port and tell it to activate the device. How would that be possible? Where do I have to start and what do I need to learn?
I understand that my question and my example sound silly but I am just trying to understand how it works.
I appreciate any help!
I think that if you are using an Arduino or another prototyping board, you could use the RXTX serial library.
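If you go that route, the desktop side can be as small as opening the serial port with RXTX and writing one byte when Enter is pressed. The sketch below is just an illustration: the port name ("COM3") and the '1' command byte are assumptions that must match whatever your Arduino sketch listens for.

```java
import gnu.io.CommPortIdentifier;
import gnu.io.SerialPort;
import java.io.OutputStream;
import java.util.Scanner;

public class LampSwitch {
    public static void main(String[] args) throws Exception {
        // Port name is an assumption: "COM3" on Windows, something like "/dev/ttyUSB0" on Linux.
        CommPortIdentifier portId = CommPortIdentifier.getPortIdentifier("COM3");
        SerialPort port = (SerialPort) portId.open("LampSwitch", 2000);
        port.setSerialPortParams(9600, SerialPort.DATABITS_8,
                SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);
        OutputStream out = port.getOutputStream();

        System.out.println("Press Enter to toggle the lamp, Ctrl+C to quit.");
        Scanner in = new Scanner(System.in);
        while (true) {
            in.nextLine();   // wait for Enter
            out.write('1');  // the Arduino sketch must interpret '1' as "toggle the lamp"
            out.flush();
        }
    }
}
```

On the Arduino side you would simply read the serial port at 9600 baud and drive the lamp's output pin whenever that byte arrives.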
To answer simply: you need a microcontroller to do this yourself.
A microcontroller is a small computer on a single integrated circuit. A microcontroller contains one or more CPUs (processor cores) along with memory and programmable input/output peripherals. So there are input and output pins for the operations you want to perform on your physical hardware, as well as memory to store the program. In this case, the microcontroller is the third party that keeps communication active between the hardware and the software.
On the market, you can find many integrated microcontroller boards for building digital devices and interactive objects that can sense and control things in the physical world.
To make things easier as a beginner, I suggest you buy an Arduino board. If you Google it and search for Arduino tutorials on YouTube, you will see how they work. Hope this helps.

Editing Voice while calling - Android

I am a newbie in Android development. I have searched for this question but I didn't find an answer.
I want to know whether there is any way to edit the call audio in Android.
I mean, I want to add noise or change the caller's voice. Is it possible to change the sound in calls or add a new sound to it?
TL;DR: The answer is: not yet.
And it's not for lack of waiting. The first entry I can find is issue #3434, from July 31, 2009, and as of today (May 13, 2015) it still has not been assigned.
It's really hard to work on low-latency projects, audio recording, and of course voice changers when the platform can't do low-latency audio.
That's not to say there haven't been any workarounds: you could emulate the call yourself and add the voice effect (build your own dialer and work with that), but let me warn you: you probably won't get good performance in real-time applications. No low latency means poor efficiency when it comes to audio recording.
You'll have to wait then.
Your question can be partially resolved, depending on your usage model. The premises are:
you just want to inject some noise into your outgoing audio stream,
not into the incoming audio stream,
and you may use a third-party VoIP application to make the phone call.
Or, simply put: you just want the other party to hear a modified voice. That is feasible.
Normally, the native phone application on the Android platform uses the "Android audio system module" in the framework, the vendor-provided audio libraries, and the Linux ALSA audio libraries to transmit/receive the audio data. These .so and .a files are normally read-only and cannot be overwritten by the user, so you cannot inject data into this data chain.
But you have much more ability to manipulate the data if you use a VoIP application to make the phone call. Some VoIP applications can give you a real phone number (Fongo, for example), so you can receive a phone call at that number and the caller does not know you are using a VoIP application to speak.
So if I were assigned this project, here are my steps:
find a usable, open-source VoIP client for Android.
find the code that samples the audio data from the microphone, add code to manipulate the raw PCM data (see the sketch below), and send the result to the audio encoder.
build and run it on Android.
register or apply for a phone number for this VoIP client.
done.
Hope it helps.
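As an illustration of the "manipulate the raw PCM data" step, mixing noise into a captured frame before it reaches the encoder can be as simple as the method below. It is a standalone sketch, and the 16-bit mono PCM format and noise level are assumptions:

```java
import java.util.Random;

public class VoiceEffects {
    private static final Random RNG = new Random();

    // Mixes white noise into a 16-bit PCM frame in place.
    // 'amount' is the noise amplitude as a fraction of full scale (e.g. 0.05).
    static void addNoise(short[] pcmFrame, double amount) {
        for (int i = 0; i < pcmFrame.length; i++) {
            int noisy = pcmFrame[i]
                    + (int) ((RNG.nextDouble() * 2 - 1) * amount * Short.MAX_VALUE);
            // Clamp to the 16-bit range to avoid wrap-around distortion.
            pcmFrame[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, noisy));
        }
    }
}
```

You would call addNoise() on each frame the VoIP client reads from the microphone, right before the frame is handed to the audio encoder.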

Capture the soundcard's output?

I'm trying to get the data that the soundcard is outputting. Unfortunately, from my understanding of the Java Sound API, SourceDataLine does not support the read method, and there is no way to listen for raw data. I want to stick to Java for this, rather than C++, so if anyone knows how to listen for audio output on the soundcard that would be great.
Thanks very much!
Sorry if this post is confusing, just woke up.
I've researched this for a while and determined that any implementation using only Java Sound will not work reliably across multiple audio cards.
There are a few solutions though. Hopefully one of these helps you.
Bite the bullet, write some C++ code to allow this functionality on different operating systems.
Use Java Sound to capture audio from a virtual audio recorder adapter which loops back the system audio output (see the sketch after this list).
Create a loopback yourself using cables to feed a sound output port into a sound input port.
I recommend option 1 if you're developing this for a professional application as installation will be cleaner.
Go with option 2 if you have a short amount of time and you expect to spend more time with your users, or your users are tech savvy.
Use option 3 if this is just a hobby, or some one-off project for a client.
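For option 2, once a loopback / "stereo mix" style device is installed and set as the default recording device, capturing from it with Java Sound looks roughly like this; if it is not the default, you would have to select its Mixer explicitly via AudioSystem.getMixerInfo():

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.TargetDataLine;

public class LoopbackCapture {
    public static void main(String[] args) throws Exception {
        // 44.1 kHz, 16-bit, stereo, signed, little-endian PCM.
        AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);

        // Grabs the default capture device; with a loopback adapter installed,
        // make that adapter the default recording device (or pick its Mixer explicitly).
        TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
        line.open(format);
        line.start();

        byte[] buffer = new byte[4096];
        for (int i = 0; i < 500; i++) {          // read a few seconds of audio, then stop
            int n = line.read(buffer, 0, buffer.length);
            // process or write the n captured bytes here
        }
        line.stop();
        line.close();
    }
}
```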

Using DSLR Controller for a Secondary Display

I am trying to create a low-latency method to use an Android device as a secondary display for a PC. So far all I have found has been either wireless streaming or a slow USB connection (e.g., using iDisplay).
However, I found a DSLR camera controller app (https://play.google.com/store/apps/details?id=com.dslr.dashboard/) that is able to stream a live feed from the camera to an Android display via USB. Would it be possible to edit the source code of this application so it can read the video output of a PC via USB? If so, how would you go about this? Do you think this would be a low-latency alternative?
Thank you!
There is a lot of fantasy in your question. Have you ever seen a PC outputting data from one of its USB ports to another device? How are you supposed to do that? With a plain male-to-male USB cable, in case you find one? Sorry, but things don't work that way. To transfer data (files, or a network connection) via USB between two computers you need some proprietary/specific software. Of course, once you have accomplished that, it is technically possible to transfer the screen content. But you'd need to develop software that captures the computer screen, compresses it in real time, and sends it over USB with low enough latency to be usable. That's going to be resource intensive.
A better, easier approach would be, maybe, using some sort of remote desktop or VNC client on the Android device, with the computer acting as a server. That is at least far more feasible than trying to implement a similar protocol by yourself.
Sorry but what you are trying to achieve is flawed from the beginning.

Audio Stream Transcoding with Android

Let me first state that I do not know Java. I'm a .NET developer with solid C# skills, but I'm actually attempting to learn Java and the Android SDK at the same time (I know it's probably not ideal, but oh well, I'm adventurous :))
That said, my end goal is to write a streaming media player for Android that can accept Windows Media streams. I'm okay with restricting myself to Android 2.0 and greater if I need to. My current device is a Motorola Droid running Android 2.0.1. There is one online radio service I listen to religiously on my PC that only offers Windows Media streaming, and I'd like to transcode the stream so my Android device can play it.
Is such a thing possible? If so, would it be feasible (i.e., would it be too CPU intensive and kill the battery)? Should I be looking into doing this with the NDK in native code instead of Java? I'm not opposed to writing some sort of service in between that runs on a desktop computer (even in C#), but ideally I'd like to explore purely device-based options first. Where should I start?
Thanks in advance for any insight you can provide!
Having a proxy on your PC that captures the Windows audio output, encodes it, and sends it to your phone is perfectly possible. I had something like that 8 years ago on a Linux-based PDA (Sharp Zaurus). The trick is that you're not trying to decode or access the radio stream directly; you're simply capturing what is being sent to the speakers on your desktop and re-sending it. There will be a minor hit in audio quality due to the re-encode, but it shouldn't be too bad.
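On the phone side of such a proxy, playing a raw PCM stream received over a socket could look roughly like the sketch below. The host, port, and 44.1 kHz 16-bit mono format are assumptions that must match whatever the desktop capture side sends (a real proxy would send compressed audio to save bandwidth and decode it on the phone, but raw PCM keeps the example short):

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import java.io.InputStream;
import java.net.Socket;

public class PcmStreamPlayer {
    // Host/port and the 44.1 kHz 16-bit mono format are assumptions;
    // they must match the PC-side capture settings.
    public void play(String host, int port) throws Exception {
        int sampleRate = 44100;
        int bufSize = AudioTrack.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                bufSize, AudioTrack.MODE_STREAM);
        track.play();

        Socket socket = new Socket(host, port);
        InputStream in = socket.getInputStream();
        byte[] buffer = new byte[bufSize];
        int n;
        while ((n = in.read(buffer)) > 0) {
            track.write(buffer, 0, n); // blocks until the audio hardware drains the data
        }
        track.stop();
        track.release();
        socket.close();
    }
}
```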
I've done cloud-to-phone transcoding using an alpha version of Android Cloud Services. The transcoding is done transparently on a server and the resulting stream is streamed to the phone. Might be worth having a look: http://positivelydisruptive.blogspot.com/2010/08/streaming-m4a-files-using-android-cloud.html
