Been banging my head against a brick wall on this...
I am looking to write a virtual MIDI driver - so Android MIDI apps see a MIDI port to send to, and I get to receive and pre-cook the messages before passing them on...
I can't find any starting point to get going(!).
Surely I just open a 'port' (using some library), the apps will see it, connect to it, send - and I get a callback for each message...
I am programming in Java/Android Studio - but could use anything else if appropriate - I have used Qt on Linux for another part of the setup, but thought native Android would be best for an Android driver(!).
EDIT
Seems Android apps hit the metal for MIDI work (well, as close to the metal as a USB driver is!), instead of using the Java MIDI libraries.
If so, my options would be:-
1) Write a USB driver that does the loopback - a 'virtual' device.
or
2) Attach a USB device and physically loop it back so my code can read the incoming data.
I don't know where to start with 1 (any pointers?). 2 would be easier, but partly defeats the point... the code I want to write is to trap MIDI communication and push it out over WiFi. Needing a cable/add-on is a bit naff - but would at least mean that the tablet isn't tethered to the main MIDI system...
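For reference: newer Android versions (API 23 / Android 6.0 and up, if targeting that level is an option) added an android.media.midi package that supports exactly this kind of virtual port via MidiDeviceService. A minimal sketch; the class name and the forwarding logic are placeholders, and the service still needs the usual manifest and device-info XML declaration:

```java
import android.media.midi.MidiDeviceService;
import android.media.midi.MidiReceiver;
import java.io.IOException;

// Sketch of a virtual MIDI input port. Other apps open this port and send to it;
// onSend() is where the messages can be pre-cooked / pushed out over WiFi.
public class LoopbackMidiService extends MidiDeviceService {

    private final MidiReceiver inputReceiver = new MidiReceiver() {
        @Override
        public void onSend(byte[] msg, int offset, int count, long timestamp)
                throws IOException {
            // msg[offset..offset+count) holds the raw MIDI bytes from the client app.
            // Forward, filter or rewrite them here (e.g. queue them for a WiFi sender).
        }
    };

    @Override
    public MidiReceiver[] onGetInputPortReceivers() {
        return new MidiReceiver[] { inputReceiver };
    }
}
```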
Related
I am a newbie in Android development; I have searched for this question but didn't find an answer.
I want to know: is there any way to edit the call audio in Android?
I mean I want to add noise or change the caller's voice. Is it possible to change the sound in calls or to add a new sound to them?
TL;DR: The answer is: not yet.
And it's not for lack of waiting. The first entry I can find is from July 31, 2009 (issue #3434), and as of today (May 13, 2015) it still has not been assigned.
It's really hard to work on low-latency projects, audio recording and of course voice changers, when you can't do low latency.
Not to say there haven't been any workarounds: you could emulate the call yourself and add the voice effect (build your own dialer and work with that), but let me warn you: you probably won't get good performance when it comes to real-time applications. No low latency means no efficiency when it comes to audio recording.
You'll have to wait then.
Your question can be partially resolved, depending on your usage model. The premises are:
you just want to inject some noise into your outgoing audio stream,
not into the incoming audio stream.
you may use a third-party VoIP application to make the phone call.
Simply put, you just want the peer to hear a modified voice. That is feasible.
Normally, a native phone application on the Android platform uses the Android audio system module in the framework, the vendor-provided audio libraries and the Linux ALSA audio libraries to transmit/receive the audio data. These .so and .a files are normally read-only and cannot be overwritten by the user, so you cannot inject data into this chain.
But you have much more room to manipulate the data if you use a VoIP application to make the phone call. Some VoIP applications, like Fongo, can give you a real phone number: you can receive a phone call on that number and the caller does not know you are using a VoIP application to speak.
So if I were assigned this project, here are my steps:
find a usable, open-source VoIP client for Android.
find the code that samples the audio data from the microphone, add code to manipulate the raw PCM data, and send the result to the audio encoder (see the sketch after this list).
build and run it on Android.
register or apply for a phone number for this VoIP client.
done.
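A rough sketch of step 2, assuming the chosen VoIP client gives you a place to feed PCM into its encoder (the sendToEncoder() hook below is a placeholder, and the app needs the RECORD_AUDIO permission):

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import java.util.Random;

// Capture raw PCM from the microphone, mix in some noise, then hand it to the encoder.
public class NoisyMicSource {
    private static final int SAMPLE_RATE = 16000; // common VoIP rate, just an example

    public void captureAndProcess() {
        int bufSize = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufSize);
        short[] pcm = new short[bufSize / 2];
        Random rng = new Random();

        recorder.startRecording();
        while (isRunning()) {
            int read = recorder.read(pcm, 0, pcm.length);
            for (int i = 0; i < read; i++) {
                // add low-level white noise and clamp to the 16-bit range
                int sample = pcm[i] + (rng.nextInt(2001) - 1000);
                pcm[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sample));
            }
            sendToEncoder(pcm, read); // hand the modified buffer to the VoIP client's encoder
        }
        recorder.stop();
        recorder.release();
    }

    private boolean isRunning() { return true; }          // placeholder: stop condition
    private void sendToEncoder(short[] pcm, int len) { }  // placeholder: VoIP client hook
}
```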
Hope it helps.
I am trying to create a low-latency method to use an Android device as a secondary display for a PC. So far all I have found has been either wireless streaming or a slow USB connection (e.g. using iDisplay).
However, I found a DSLR camera controller app (https://play.google.com/store/apps/details?id=com.dslr.dashboard/) that is able to stream a live feed of the camera to an Android display via USB. Would it be possible to edit the source code of this application so it can read the video output of a PC via USB? If so, how would you go about this? Do you think this would be a low-latency alternative?
Thank you!
Lots of fantasy in your question. Have you ever seen a PC outputting data from one of its USB ports to another device? How are you supposed to do that? With a plain male-to-male USB cable, in case you find one? Sorry, but things don't work that way. To transfer data (files, or a network) via USB between two computers you'd need some proprietary/specific software. Of course, once you have accomplished that, it is technically possible to transfer the screen content. But you'd need to develop software that captures the computer screen, compresses it in real time, and sends it over USB with low enough latency to be usable. That's going to be resource intensive.
A better, easier approach might be to use some sort of remote desktop or VNC client on the Android device, with the computer acting as the server. That is at least far more feasible than trying to implement a similar protocol yourself.
Sorry but what you are trying to achieve is flawed from the beginning.
Folks,
I am working on an applet that captures audio from the local computer and streams it up to the server. I am using a Java applet that currently hooks onto the default device and performs the upstream. Things are running well.
I now want to extend the functionality to allow users to choose the audio input device and to also show a sound level indicator of the chosen device in the web page.
I wrote a multithreaded utility that calls AudioSystem.getMixerInfo() periodically and looks for changes (sketched below). There is also a thread that reads from the chosen device and displays sound levels.
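Roughly, the polling loop looks like this (simplified sketch; the notification part is stubbed out):

```java
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Mixer;
import java.util.HashSet;
import java.util.Set;

// Take a snapshot of the mixer names every couple of seconds and compare
// it with the previous snapshot to spot newly attached or removed devices.
public class MixerWatcher implements Runnable {
    private Set<String> lastSeen = new HashSet<>();

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            Set<String> current = new HashSet<>();
            for (Mixer.Info info : AudioSystem.getMixerInfo()) {
                current.add(info.getName());
            }
            if (!current.equals(lastSeen)) {
                lastSeen = current;
                // notify the UI / level-meter thread that the device set changed
            }
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```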
My problem is that when I run my code and plug in a USB headset, the new device is not detected. However, if I shut down my code, then plug in the USB, the device does show up.
Is this a known and documented limitation of JavaSound, that it does not re-scan the device set once the process is running?
I am using OS X Lion.
Thanks for any insights.
-Raj
To check whether a 'switch' is open or closed and detect that in Java, I have the following plan: I won't use the data pins, just the USB 5V supply; if the switch is closed there is a current, which I should be able to detect in Java so my program can process it.
Would there be a simple solution for this, or do I need to find and try out a whole Java USB library just to use a tiny little bit of it?
Thanks in advance
This will not work the way you describe it. Have you ever connected a gadget like a USB lamp or USB fan? Then you would know that the software/OS does not even know about them.
The USB spec says you can draw up to 100 mA from a port without telling anyone about it, and 500 mA when declared via the USB protocol. Most USB HDDs actually draw quite a bit more than the allowed 500 mA USB 2.0 maximum.
To make your application work, you absolutely need a device which can talk over USB. This could be a USB-to-RS232 adapter (which your application can talk to using RXTX) or a HID device like a USB joystick. Joysticks can have buttons and switches.
You could try RXTX. This is a library written for serial communication with Java. http://users.frii.com/jarvi/rxtx/. You will have to use native libraries, and I don't know if it would be able to detect if there is a current or not on the USB.
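If you do go the USB-to-RS232 route, one approach (this is an assumption about the wiring, not something the bare USB port gives you) is to wire the switch between the adapter's RTS and CTS pins and poll the CTS line with RXTX, roughly like this:

```java
import gnu.io.CommPortIdentifier;
import gnu.io.SerialPort;

// Poll the CTS control line of a USB-to-RS232 adapter; the switch is assumed to be
// wired between RTS and CTS, and the port name is an example (it would be COMx on Windows).
public class SwitchPoller {
    public static void main(String[] args) throws Exception {
        CommPortIdentifier id = CommPortIdentifier.getPortIdentifier("/dev/ttyUSB0");
        SerialPort port = (SerialPort) id.open("SwitchPoller", 2000);
        port.setRTS(true); // drive RTS high so a closed switch pulls CTS high
        try {
            while (true) {
                System.out.println(port.isCTS() ? "switch closed" : "switch open");
                Thread.sleep(500);
            }
        } finally {
            port.close();
        }
    }
}
```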
Let me first state that I do not know Java. I'm a .NET developer with solid C# skills, but I'm actually attempting to learn Java and the Android SDK at the same time (I know it's probably not ideal, but oh well, I'm adventurous :))
That said, my end goal is to write a streaming media player for Android that can accept Windows Media streams. I'm okay with restricting myself to Android 2.0 and greater if I need to. My current device is a Motorola Droid running Android 2.0.1. There is one online radio service I listen to religiously on my PC that only offers Windows Media streaming, and I'd like to transcode the stream so my Android device can play it.
Is such a thing possible? If so, would it be feasible (i.e., would it be too CPU intensive and kill the battery)? Should I be looking into doing this with the NDK in native code instead of Java? I'm not opposed to writing some sort of service in between that runs on a desktop computer (even in C#), but ideally I'd like to explore purely device-based options first. Where should I start?
Thanks in advance for any insight you can provide!
Having a proxy on your PC that captures the Windows audio output, encodes it, and sends it to your phone is perfectly possible. I had something like that 8 years ago on a Linux-based PDA (a Sharp Zaurus). The trick is that you're not trying to decode or access the XM radio stream directly; you're simply capturing what is being sent to the speakers on your desktop and re-sending it. There will be a minor hit in audio quality due to the re-encode, but it shouldn't be too bad.
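For illustration, the desktop half of such a proxy can be quite small, assuming the OS exposes a loopback / "what you hear" device that JavaSound can open as the default capture line; the port number is arbitrary and real code would compress the PCM before sending:

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.TargetDataLine;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Capture whatever the default capture device provides and push the raw PCM
// to a phone that connects over TCP.
public class AudioRelay {
    public static void main(String[] args) throws Exception {
        AudioFormat fmt = new AudioFormat(44100f, 16, 2, true, false);
        TargetDataLine line = AudioSystem.getTargetDataLine(fmt);
        line.open(fmt);
        line.start();

        try (ServerSocket server = new ServerSocket(9000);
             Socket phone = server.accept();
             OutputStream out = phone.getOutputStream()) {
            byte[] buf = new byte[4096];
            while (true) {
                int n = line.read(buf, 0, buf.length);
                out.write(buf, 0, n); // uncompressed PCM; an encoder would go here
            }
        }
    }
}
```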
I've done cloud-to-phone transcoding using an alpha version of Android Cloud Services. The transcoding is transparently done on a server and the resulting stream is streamed to the phone. Might be worth having a look. http://positivelydisruptive.blogspot.com/2010/08/streaming-m4a-files-using-android-cloud.html