I am a newbie in Android development. I have searched for this question but didn't find my answer.
I want to know whether there is any way to edit call audio on Android.
I mean I want to add noise or change the caller's voice. Is it possible to change the sound in calls or to add a new sound to them?
TL;DR: The answer is "not yet".
And it's not for lack of waiting. The first entry I can find is from July 31, 2009, issue #3434, and as of today (May 13, 2015) it still has not been assigned.
It's really hard to work on low-latency projects, audio recording, and of course voice changers when the platform can't do low latency.
That's not to say there haven't been workarounds: you could emulate the call yourself and add the voice effect (build your own dialer and work with that), but let me warn you: you probably won't get good performance in real-time applications. No low latency means no efficiency when it comes to audio recording.
You'll have to wait then.
Your question can be resolved partially, depending on your usage model. The premises are:
you just want to inject some noise into your outgoing audio stream,
not into the incoming audio stream.
you may use a third-party VoIP application to make the phone call.
Or, simply put: you just want the peer to hear a modified voice. That is feasible.
Normally, a native phone application on the Android platform uses the Android audio system module in the framework, the vendor-provided audio libraries, and the Linux ALSA audio libraries to transmit and receive the audio data. These .so and .a files are normally read-only and cannot be overwritten by the user, so you cannot inject data into this chain.
But you have more capability to manipulate the data if you use a VoIP application to make the phone call. Some VoIP applications, like Fongo, can give you a real phone number; you can receive a phone call on that number, and the caller does not know you are using a VoIP application to speak.
So if I were assigned this project, here are my steps:
find a usable, open-source VoIP client for Android.
find the code that samples the audio data from the microphone, and add code to manipulate the raw PCM data before sending the result to the audio encoder (see the sketch after these steps).
build and run it on Android.
register or apply for a phone number for this VoIP client.
done.
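For the PCM-manipulation step, here is a minimal sketch of what injecting noise could look like, assuming the client hands you 16-bit PCM samples as a short[] before encoding; the class name, method name, and noise amplitude are all illustrative, not part of any particular VoIP client:

    import java.util.Random;

    // Illustrative only: mix white noise into a buffer of 16-bit PCM
    // samples before it is handed to the VoIP client's audio encoder.
    public final class PcmNoiseMixer {
        private static final Random RANDOM = new Random();

        // amplitude: noise level, e.g. 1000 out of the 16-bit range of +/-32767.
        public static void mixNoise(short[] pcm, int amplitude) {
            for (int i = 0; i < pcm.length; i++) {
                int noisy = pcm[i] + RANDOM.nextInt(2 * amplitude + 1) - amplitude;
                // Clamp to the 16-bit range to avoid wrap-around distortion.
                if (noisy > Short.MAX_VALUE) noisy = Short.MAX_VALUE;
                if (noisy < Short.MIN_VALUE) noisy = Short.MIN_VALUE;
                pcm[i] = (short) noisy;
            }
        }
    }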
Hope it helps.
I intend to make an app that streams live video from one Android phone to another via Bluetooth. I need a simple player, and there is no need to save the file, just play it.
My knowledge of streaming in Java is not enough and I really don't know where to start!
Please help me find any solution. Any help will be appreciated.
There is a sample Android project that streams live video and allows you to take photos and record videos from a remote phone via Bluetooth:
BluetoothCameraAndroid
Android lets you get camera frames as byte arrays; you can use that API to grab frames and send them across. The problem is throttling the sending rate, which has also been handled in that project.
On Marshmallow and above, you have to grant the permissions manually in Settings; this project does not include runtime permissions.
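If you want to experiment with the frame capture yourself, here is a rough sketch using the legacy android.hardware.Camera API (what a project of this vintage would use). The sendOverBluetooth() helper is hypothetical, and on most devices a preview surface must be attached via setPreviewDisplay() before frames are delivered:

    import android.hardware.Camera;

    Camera camera = Camera.open();
    // camera.setPreviewDisplay(surfaceHolder);  // required on most devices
    camera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera cam) {
            // 'data' is one preview frame, NV21-encoded by default.
            // Throttle here, e.g. drop frames while the previous send
            // is still in flight, then write to the Bluetooth socket.
            sendOverBluetooth(data);  // hypothetical helper
        }
    });
    camera.startPreview();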
Xuggler is an open-source Java library for streaming and modifying media on the fly. You can start with it at:
http://www.xuggle.com/xuggler/
I am at the beginning stages of a project in which I will be trying to make a hearing aid application for Android. I have written a few patches in Pure Data, Csound, and the basic Android sound library which basically take the input from the microphone and play it through headphones. No filtering or amplification.
While Csound gave the best performance, the latency made the tools unusable. I know Android L is supposed to help, but my goal is to create a low-cost hearing assist device, so older phones probably won't get it.
The next idea is to see if I can access the ADC and DAC values directly, then use C via the NDK to make my own versions of AudioTrack and AudioRecord, essentially pointing to the places in memory where these values arrive.
Is this possible? Also, what should I be researching? I can't find anything online about accessing the DAC and ADC directly.
Thank you for your time.
No. "Android" does not provide for direct access by apps to hardware at all.
The NDK does not change that, as you still lack permission to the audio hardware device nodes.
If you have a particular device on which you can install a customized build of Android, then you might be able to do something by adding new APIs or somehow giving your app or a special unix group access to the hardware nodes. But the details of how you might utilize that access would depend on the device chosen.
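As an illustration of the "unix group" route on a custom build, one conventional place to grant such access is a device-specific ueventd rc entry; the path and group here are illustrative and vary by device:

    # Illustrative ueventd.rc entry: give the "audio" group read/write
    # access to the ALSA PCM device nodes. Actual node paths vary by device.
    /dev/snd/* 0660 system audio

Your app would then still need to run with that group, which again requires platform-level changes rather than anything an ordinary installed app can do.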
I am trying to build an application which records audio from the microphone for later processing.
Everything works fairly well, except the following problem:
During a voice call (incoming or outgoing) the recorded file gets no audio data; it contains just nulls.
I am using AudioRecord and MediaRecorder; both have the same problem.
The question is whether this is normal API behavior or I am missing something.
Here some additional info:
Permissions:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
SDK: minSdkVersion="9" targetSdkVersion="15"
I am testing my application on a Nexus S with Jelly Bean.
Thanks in advance!
PS If somebody needs more specific details, please let me know.
EDIT
OK, all answers suggest that this is normal behavior, but I am still puzzled, since I can find a lot of applications that record voice calls using the microphone. Recording voice calls is not my intention, but I thought I could use the microphone even during a voice call.
Any suggestions?
EDIT
I just tested the application on a Galaxy S running Gingerbread and it worked! Now I am really puzzled and starting to understand what the word "fragmentation" means...
I would not call this normal, but you're entering platform-specific behavior here. Recording with AudioSource.MIC during a voice call will work on some devices, but not on others.
Just to name a few reasons I've run into myself for why this functionality might have been disabled:
Some platforms only support capturing microphone audio at a single sample rate at any given time. Since voice calls take priority over everything else (this is a phone, after all) and require either an 8 or 16 kHz sample rate, the decision about what to do with an ongoing recording when a voice call starts might be to simply mute the recording, so that it isn't filled with 8 kHz data when your app thinks it's getting, for example, 44.1 kHz.
If you leave a recorder idle (stopped but not released) during a voice call, some platforms could stop transferring audio data from the microphone, effectively muting the voice call uplink. To avoid this, the vendor might have decided that a recording during a voice call won't actually be routed to any input device, but will rather just be filled with zeroes.
Disclaimer: I have not worked with the Nexus S, so I don't know what the reason was on that specific device.
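If your app needs to cope with this at runtime, a crude sketch is to watch for buffers that come back all zeroes, which is what the muted-recording behaviour described above looks like from the app's side; the sample rate, buffer sizing, and threshold here are illustrative:

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    // Rough sketch: read from the mic and flag buffers that are all zeroes.
    int sampleRate = 8000;
    int minBuf = AudioRecord.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            sampleRate, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT, minBuf * 4);
    short[] buffer = new short[minBuf / 2];
    recorder.startRecording();
    int read = recorder.read(buffer, 0, buffer.length);
    boolean allZero = true;
    for (int i = 0; i < read; i++) {
        if (buffer[i] != 0) { allZero = false; break; }
    }
    // If allZero stays true across many consecutive buffers, the platform
    // is most likely suppressing the mic because of the ongoing call.
    recorder.stop();
    recorder.release();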
I believe this is normal behaviour. Audio data while a call is connected is never processed by the application processor on which Android is running; usually the audio chipset switches to the stream coming from the modem.
In theory, you can get a feed from the call itself (http://stackoverflow.com/questions/4194342/how-can-i-record-voice-and-record-call-in-android), but most manufacturers apparently don't support this.
I have a bit of an odd question that I am hoping someone here can help me with.
BACKGROUND: I am trying to design a system that will take in continuous-time data from a VLF antenna/preamp system, perform an FFT analysis on it (magnitude versus time), and plot the resulting FFT data as a real-time spectrogram. The project is what is known as a "hum sniffer", specifically to see signal interference in the 15-35 kHz range. I have purchased a couple of "teach yourself Java" books and am in the process of reading them. I am an engineering student with limited experience programming in ANSI C and Matlab.
QUESTION: There are several applications on the Android market that perform a similar function using the microphone as the input source, and I have purchased all of them just to see how they operate. I have also purchased an Arduino Uno with a USB Host shield from Sparkfun, as well as an IOIO board from Sparkfun. I am really REALLY hoping that I can use a combination of those boards in conjunction with the aforementioned antenna/preamp system to plot those real-time spectrograms in an Android program I have yet to create.
I am not looking for anyone to hold my hand through this process, but if anyone has experience with anything similar I would appreciate any insight. My major concern at this point is whether I need to design the external system to do the A/D conversion before feeding the data into the phone, or whether I might be able to send the CT signal data into Android directly and have the phone do both the A/D conversion and the FFT plots. Oh, and whether or not I can use the USB port to send data into the phone.
I am using my Nexus S 4G for all testing/applications.
Thanks in advance for any input.
Have you tried connecting your audio to the phone's headset microphone input and using a sound recording app? Then you would get a file that you can read into Matlab and play around with to get an idea of the capabilities of the audio input on the phone.
If the audio input is good enough, then writing an app to do real-time FFT and plotting shouldn't be too tricky (see the sketch below). That way you avoid dealing with the Arduino and the Android USB accessory support.
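As a starting point, here is a naive DFT magnitude computation for one frame of 16-bit PCM samples. It is O(n^2), so a real app would swap in a proper radix-2 FFT, but it shows the shape of the problem; frame size and scaling are up to you:

    // Naive DFT magnitude for one frame of 16-bit PCM samples.
    // Each output bin k corresponds to frequency k * sampleRate / n,
    // so for 15-35 kHz you also need a sample rate above 70 kHz
    // (or external A/D), per Nyquist.
    public static double[] spectrumMagnitude(short[] frame) {
        int n = frame.length;
        double[] mag = new double[n / 2];   // bins up to Nyquist
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += frame[t] * Math.cos(angle);
                im -= frame[t] * Math.sin(angle);
            }
            mag[k] = Math.sqrt(re * re + im * im);
        }
        return mag;
    }

Each successive frame then becomes one column of the spectrogram.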
The IOIO hardware is capable of 500 ksps. This is currently limited in firmware to 1 ksps per channel in order to bound the USB bandwidth used. However, it is super easy to change (a single number and a firmware rebuild) in case you know what you're doing and won't overflow the USB channel.
A single sample on a single channel is a 3-byte message. At 40 kHz, this would be 120 KB/s, which is within the effective bandwidth that has been reached over ADB (the maximum is about 300 KB/s).
If you need help rebuilding the firmware, the ioio-users list is your friend.
Let me first state that I do not know Java. I'm a .NET developer with solid C# skills, but I'm actually attempting to learn Java and the Android SDK at the same time (I know it's probably not ideal, but oh well, I'm adventurous :))
That said, my end goal is to write a streaming media player for Android that can accept Windows Media streams. I'm okay with restricting myself to Android 2.0 and greater if I need to. My current device is a Motorola Droid running Android 2.0.1. There is one online radio service I listen to religiously on my PC that only offers Windows Media streaming, and I'd like to transcode the stream so my Android device can play it.
Is such a thing possible? If so, would it be feasible (i.e., would it be too CPU intensive and kill the battery)? Should I be looking into doing this with the NDK in native code instead of Java? I'm not opposed to writing some sort of service in between that runs on a desktop computer (even in C#), but ideally I'd like to explore purely device-based options first. Where should I start?
Thanks in advance for any insight you can provide!
Having a proxy on your PC that captures Windows audio output, encodes it, and sends it to your phone is perfectly possible. I had something like that eight years ago on a Linux-based PDA (Sharp Zaurus). The trick is that you're not trying to decode or access the radio stream directly; you're simply capturing what is being sent to the speakers on your desktop and re-sending it. There will be a minor hit in audio quality due to the re-encode, but it shouldn't be too bad.
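As a rough illustration of that architecture, here is a bare-bones Java sketch that captures PCM from the default capture line and pushes it raw over TCP. Capturing what actually goes to the speakers depends on an OS-level loopback or "stereo mix" device being selected as the capture source; the port is arbitrary, and a real proxy would compress (e.g. to MP3/AAC) before sending rather than shipping raw PCM:

    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.TargetDataLine;

    public class AudioProxy {
        public static void main(String[] args) throws Exception {
            // 44.1 kHz, 16-bit, stereo, signed, little-endian PCM.
            AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
            TargetDataLine line = AudioSystem.getTargetDataLine(format);
            line.open(format);
            line.start();
            try (ServerSocket server = new ServerSocket(8000);
                 Socket phone = server.accept();
                 OutputStream out = phone.getOutputStream()) {
                byte[] buf = new byte[4096];
                while (true) {
                    int n = line.read(buf, 0, buf.length);
                    out.write(buf, 0, n);  // raw PCM, ~172 KB/s at this format
                }
            }
        }
    }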
I've done cloud-to-phone transcoding using an alpha version of Android Cloud Services. The transcoding is done transparently on a server and the resulting stream is played on the phone. Might be worth having a look: http://positivelydisruptive.blogspot.com/2010/08/streaming-m4a-files-using-android-cloud.html