Does anyone know how to capture voice, send it over a network, and then play it on another computer in Java?
I did some research on this about 10 years ago at a research lab, so I might be a bit out of date! At the time there was no standard for the whole process. You have to use the Java Sound API to record and play back, then any network protocol you want to send it.
If it is just for a person to listen to, then use something with good compression - something like the media streaming in the Java Media Framework. If you want to use speech recognition on the data you'll probably need something higher quality and closer to the raw data, and it might be worth looking at the Java Speech API.
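To make the capture-and-send half concrete, here is a minimal sketch using the Java Sound API and plain UDP. The peer address, port, class name, and audio format are assumptions of mine, not anything standard; the receiving side would mirror this by reading packets and writing them to a `SourceDataLine`.

```java
import javax.sound.sampled.*;
import java.net.*;

public class VoiceSender {
    // 8 kHz, 16-bit, mono, signed, little-endian PCM keeps bandwidth modest.
    static final AudioFormat FORMAT = new AudioFormat(8000f, 16, 1, true, false);

    public static void main(String[] args) throws Exception {
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, FORMAT);
        if (!AudioSystem.isLineSupported(info)) {
            System.out.println("No microphone line available in this format");
            return;
        }
        TargetDataLine mic = (TargetDataLine) AudioSystem.getLine(info);
        mic.open(FORMAT);
        mic.start();

        DatagramSocket socket = new DatagramSocket();
        InetAddress peer = InetAddress.getByName("192.168.1.20"); // placeholder peer
        byte[] buf = new byte[1024];
        while (true) {
            // Blocks until the buffer is filled from the microphone,
            // then ships the raw PCM bytes to the peer as one UDP packet.
            int n = mic.read(buf, 0, buf.length);
            socket.send(new DatagramPacket(buf, n, peer, 4444));
        }
    }
}
```

Raw PCM over UDP is wasteful; as the answer notes, for listening you'd want compression (or the Java Media Framework's streaming) on top of this skeleton.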
I am a newbie in Android development. I have searched for this question but I didn't find my answer.
I want to know: is there any way to edit the call audio in Android?
I mean I want to add noise or change the caller's voice. Is it possible to change the sound in calls or to add a new sound to them?
TL;DR: The answer is "not yet".
And it's not like we haven't been waiting. The first entry I can find is issue #3434 from July 31, 2009, and as of today (May 13, 2015) it still has not been assigned.
It's really hard to work on low-latency projects, audio recording and, of course, voice changers, when the platform can't do low latency.
That's not to say there haven't been any workarounds: you could emulate the call yourself and add the voice effect (build your own dialer and work with that), but let me warn you: you probably won't get good performance for real-time applications. No low latency means poor efficiency when it comes to audio recording.
You'll have to wait then.
Your question can be partially resolved, depending on your usage model. The premises are:
you just want to inject some noise into your outgoing audio stream,
not into the incoming audio stream.
you may use a third-party VoIP application to make the phone call.
Or, simply put: you just want the peer to hear a modified voice. That is feasible.
Normally, a native phone application on the Android platform uses the "Android audio system module" in the framework, the vendor-provided audio libraries, and the Linux ALSA audio libraries to transmit/receive the audio data. These .so and .a files are normally read-only and cannot be overwritten by the user, so you cannot inject data into this data chain.
But you have more ability to manipulate the data if you use a VoIP application to make the phone call. Some VoIP applications can give you a real phone number, like Fongo; you can receive a phone call on that number, and the caller does not know you are using a VoIP application to speak.
So if I were assigned this project, here are my steps:
find a usable, open-source VoIP client on Android.
find the code that samples the audio data from the microphone, add code to manipulate the raw PCM data, and send the result to the audio encoder.
build and run it on Android.
register or apply for a phone number for this VoIP client.
done.
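The PCM-manipulation step above can be sketched in plain Java. `PcmNoise` and its parameters are hypothetical names of mine; a real VoIP client would apply something like this to each capture buffer before handing it to the encoder.

```java
import java.util.Random;

public class PcmNoise {
    // Mix low-level white noise into 16-bit PCM samples in place.
    // 'amplitude' is the peak noise level; results are clamped to short range.
    public static void addNoise(short[] samples, int amplitude, long seed) {
        Random rng = new Random(seed);
        for (int i = 0; i < samples.length; i++) {
            int noise = rng.nextInt(2 * amplitude + 1) - amplitude; // in [-amp, +amp]
            int mixed = samples[i] + noise;
            samples[i] = (short) Math.max(Short.MIN_VALUE,
                         Math.min(Short.MAX_VALUE, mixed));
        }
    }
}
```

The same hook is where you'd put a pitch shifter or any other voice effect instead of noise.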
Hope it helps.
I'm trying to get the data that the soundcard is outputting. Unfortunately, from my understanding of the Java Sound API, SourceDataLine does not support the read method, and there is no way to listen for raw data. I want to stick to Java for this, rather than C++, so if anyone knows how to listen for audio output on the soundcard that would be great.
Thanks very much!
Sorry if this post is confusing, just woke up.
I've researched this for a while, and determined that any implementation using only Java Sound will not work reliably across multiple audio cards.
There are a few solutions though. Hopefully one of these helps you.
Bite the bullet, write some C++ code to allow this functionality on different operating systems.
Use Java Sound to capture audio from a virtual audio recorder adapter which loops back the system audio output.
Create a loopback yourself using cables to feed a sound output port into a sound input port.
I recommend option 1 if you're developing this for a professional application as installation will be cleaner.
Go with option 2 if you're short on time and expect to spend more time supporting your users, or if your users are tech-savvy.
Use option 3 if this is just a hobby, or some one-off project for a client.
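Option 2 can be sketched with Java Sound's mixer enumeration. A virtual loopback adapter (e.g. "Stereo Mix" on some Windows drivers, or a third-party tool) shows up as a capture mixer that can be opened like any microphone; the device name here is an assumption and varies by driver.

```java
import javax.sound.sampled.*;

public class LoopbackFinder {
    // Find a capture-capable mixer whose name contains the given fragment.
    public static Mixer findCaptureMixer(String nameFragment) {
        for (Mixer.Info info : AudioSystem.getMixerInfo()) {
            Mixer mixer = AudioSystem.getMixer(info);
            boolean canCapture = mixer.getTargetLineInfo().length > 0;
            if (canCapture
                    && info.getName().toLowerCase().contains(nameFragment.toLowerCase()))
                return mixer;
        }
        return null;
    }

    public static void main(String[] args) throws LineUnavailableException {
        Mixer loopback = findCaptureMixer("stereo mix"); // driver-specific name
        if (loopback == null) {
            System.out.println("No loopback capture device found");
            return;
        }
        AudioFormat fmt = new AudioFormat(44100f, 16, 2, true, false);
        TargetDataLine line = (TargetDataLine)
                loopback.getLine(new DataLine.Info(TargetDataLine.class, fmt));
        line.open(fmt);
        line.start(); // line.read(...) now returns whatever the system is playing
    }
}
```

This is exactly why option 2 depends on the adapter being installed: without it, no capture mixer carries the system output.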
I'm trying to build an app for BlackBerry and I need to be able to read data from hardware I'll plug into the audio jack. Is it possible on BlackBerry? Thanks
I don't think so. While it may somehow be possible, this is not what an audio jack is made for. It is an analog interface. While some jacks also have an input channel (e.g., for a microphone), it is made for capturing an analog audio signal, which goes through an analog-to-digital converter. This finally gives you a digital audio signal.
You could misuse it for your needs. For that, your external hardware has to encode the data as an audio stream and send it by itself. Your app has to decode the audio stream back into data. I guess there is SDK functionality to get the audio stream, but the rest you have to do yourself.
While all this might be technically possible, I can't emphasize enough what a really really really bad idea this is. Not to mention the effort.
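For illustration only (given the warning above), here is a toy sketch of the encode-data-as-audio idea: a crude two-tone FSK scheme in plain Java. The frequencies, bit rate, and zero-crossing decoder are arbitrary assumptions of mine; real hardware would additionally need synchronization, filtering, and error handling.

```java
public class AudioModem {
    static final int RATE = 8000, BIT_SAMPLES = 800; // 10 bits/sec: slow but robust
    static final double F0 = 400, F1 = 800;          // one tone per bit value

    // Encode each bit as a pure tone burst (phase reset at each bit boundary).
    public static double[] encode(int[] bits) {
        double[] out = new double[bits.length * BIT_SAMPLES];
        for (int b = 0; b < bits.length; b++) {
            double f = bits[b] == 0 ? F0 : F1;
            for (int i = 0; i < BIT_SAMPLES; i++)
                out[b * BIT_SAMPLES + i] = Math.sin(2 * Math.PI * f * i / RATE);
        }
        return out;
    }

    // Decode by counting zero crossings per bit window:
    // roughly 80 crossings for F0, roughly 160 for F1.
    public static int[] decode(double[] samples) {
        int nBits = samples.length / BIT_SAMPLES;
        int[] bits = new int[nBits];
        for (int b = 0; b < nBits; b++) {
            int crossings = 0;
            for (int i = 1; i < BIT_SAMPLES; i++) {
                double prev = samples[b * BIT_SAMPLES + i - 1];
                double cur = samples[b * BIT_SAMPLES + i];
                if (prev * cur < 0) crossings++;
            }
            bits[b] = crossings > 120 ? 1 : 0;
        }
        return bits;
    }
}
```

Hardware "modems" over the headset jack (as some card readers have done) use essentially this trick, only far more robustly.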
Working on a project requiring the analysis (speech-to-text) and recording of up to 10 real-time audio (microphone) input streams, simultaneously and separately. I'm most comfortable with Java, so my central question is whether this is possible with the Java Sound API (or another third-party library). But development is just beginning, so if there is a better tool for this job I'm open to suggestions.
Development platform is MBP OSX 10.8.
Regarding audio input, I'd assume a bunch of USB microphones, unless someone knows of an appropriate device that allows separate simultaneous addressing of inputs.
The Java Sound API isn't the easiest to learn, but it is reasonably low-level and powerful. I've mixed more than 10 tracks before, using .wav inputs, custom-made "Clips" (reading sound files from memory) and procedural FM synthesis, using software I was able to write as a Java programmer with intermediate skills and some basic knowledge about sound (but NOT an engineering-degree level of sound knowledge).
I've not tried recording multiple lines, or recording at all for that matter, except one "toy" program that takes an input wave and stores it for varispeed playback. That really wasn't much more complex than just loading a .wav, so I don't know the answers to your question. I do anticipate it will be worth looking at the TargetDataLine interface as a key tool.
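As a starting point for the multi-stream question, here is a sketch of enumerating capture-capable mixers (each USB microphone should appear as its own mixer) and running one `TargetDataLine` per device on its own thread. The class and method names are my own, and whether 10 simultaneous lines actually work depends on the OS and drivers.

```java
import javax.sound.sampled.*;
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

public class MultiMicCapture {
    static final AudioFormat FMT = new AudioFormat(16000f, 16, 1, true, false);

    // List every mixer that can supply a TargetDataLine in the format above.
    public static List<Mixer.Info> captureMixers() {
        List<Mixer.Info> result = new ArrayList<>();
        DataLine.Info want = new DataLine.Info(TargetDataLine.class, FMT);
        for (Mixer.Info info : AudioSystem.getMixerInfo())
            if (AudioSystem.getMixer(info).isLineSupported(want))
                result.add(info);
        return result;
    }

    // One thread per microphone, each filling its own sink independently.
    public static Thread captureThread(Mixer.Info info, ByteArrayOutputStream sink) {
        return new Thread(() -> {
            try {
                TargetDataLine line = (TargetDataLine) AudioSystem.getMixer(info)
                        .getLine(new DataLine.Info(TargetDataLine.class, FMT));
                line.open(FMT);
                line.start();
                byte[] buf = new byte[4096];
                while (!Thread.currentThread().isInterrupted()) {
                    int n = line.read(buf, 0, buf.length);
                    sink.write(buf, 0, n);
                }
                line.close();
            } catch (LineUnavailableException e) {
                e.printStackTrace();
            }
        });
    }
}
```

Each per-microphone sink could then feed a separate speech-to-text pipeline, keeping the streams isolated as the question requires.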
There are various sound engines written in Java. A contributor at Java-Gaming.org ("nsigma") has done some fine work on a media tool/system called "Praxis". AH -- just found this link:
http://neilcsmith.net/software
He often answers questions on the "Sound" topic at Java-gaming.org, and has spoken of "JAudioLibs" (linked on the page above).
Let me first state that I do not know Java. I'm a .NET developer with solid C# skills, but I'm actually attempting to learn Java and the Android SDK at the same time (I know it's probably not ideal, but oh well, I'm adventurous :))
That said, my end goal is to write a streaming media player for Android that can accept Windows Media streams. I'm okay with restricting myself to Android 2.0 and greater if I need to. My current device is a Motorola Droid running Android 2.0.1. There is one online radio service I listen to religiously on my PC that only offers Windows Media streaming, and I'd like to transcode the stream so my Android device can play it.
Is such a thing possible? If so, would it be feasible (i.e., would it be too CPU intensive and kill the battery)? Should I be looking into doing this with the NDK in native code instead of Java? I'm not opposed to writing some sort of service in between that runs on a desktop computer (even in C#), but ideally I'd like to explore purely device-based options first. Where should I start?
Thanks in advance for any insight you can provide!
Having a proxy on your PC that captures Windows audio output, encodes it, and sends it to your phone is perfectly possible. I had something like that 8 years ago on a Linux-based PDA (the Sharp Zaurus). The trick is that you're not trying to decode or access the XM radio stream directly; you're simply capturing what is being sent to the speakers on your desktop and re-sending it. There will be a minor hit in audio quality due to the re-encode, but it shouldn't be too bad.
I've done cloud-to-phone transcoding using an alpha version of Android Cloud Services. The transcoding is transparently done on a server and the resulting stream is streamed to the phone. Might be worth having a look. http://positivelydisruptive.blogspot.com/2010/08/streaming-m4a-files-using-android-cloud.html