I want to make an app that allows the user to give remote access to a view of their phone (a first-person view of the user using the phone), kind of like how tech support can sometimes see what you're doing on your computer to help you with problems. "Remote desktop for a phone" is what I'd like to build.
Does anyone know if this would be possible?
My current idea for doing this is screen scraping - somehow taking a screenshot of the user's phone (like DDMS does) every few milliseconds. This seems terribly inefficient, though, and again I don't know if it's possible.
Note - the "receiver" of this first-person phone view could be a computer, a website, or whatever, as long as it connects remotely.
It would be possible (extremely difficult, but possible) to implement an Android Activity that uses a FrameLayout and hosts another Activity. You could then fetch the image buffer of the FrameLayout as shown here, feed that into a video encoder, and stream the encoder's output to a remote server. That might work.
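As a rough, untested sketch of just the capture step (the encoder and streaming parts are the hard bits and are left out), you could paint the FrameLayout into a Bitmap on each frame:

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.widget.FrameLayout;

// Untested sketch: snapshot the hosting FrameLayout's view hierarchy into a
// Bitmap. Each bitmap would then be queued into a video encoder (not shown).
public final class FrameGrabber {
    public static Bitmap grabFrame(FrameLayout layout) {
        Bitmap bitmap = Bitmap.createBitmap(
                layout.getWidth(), layout.getHeight(), Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(bitmap);
        layout.draw(canvas); // renders the whole child view tree into the bitmap
        return bitmap;
    }
}
```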
I actually wanted to create something like that as well...
First of all, it will only work on rooted devices, since access to the framebuffer (the raw "screenshot") is only allowed on rooted devices.
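For the framebuffer part, something along these lines might work on a rooted device (an untested sketch; it assumes the framebuffer is exposed at /dev/graphics/fb0 and that su grants a root shell, and the width/height/bytes-per-pixel must match the device's actual format):

```java
import java.io.DataInputStream;
import java.io.IOException;

// Untested sketch: read one raw frame from the framebuffer via a root shell.
// The pixel format still has to be decoded afterwards (often RGBA or RGB565).
public final class FramebufferReader {
    public static byte[] readRawFrame(int width, int height, int bytesPerPixel)
            throws IOException, InterruptedException {
        Process su = Runtime.getRuntime().exec(
                new String[] {"su", "-c", "cat /dev/graphics/fb0"});
        byte[] frame = new byte[width * height * bytesPerPixel];
        DataInputStream in = new DataInputStream(su.getInputStream());
        in.readFully(frame); // raw pixel bytes, no header
        su.waitFor();
        return frame;
    }
}
```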
Now, as for reading events from the controlling device - that's easy... The hard part would be generating those events on the controlled device. But I'm absolutely sure it's possible on rooted devices, as I've seen a similar program for Windows that controls the connected device (I can't remember its name).
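For injecting the events on a rooted device, one option I'd try is shelling out to the stock input tool (a sketch; the availability of "input tap" varies by Android version):

```java
import java.io.IOException;

// Sketch: inject a tap at (x, y) through a root shell using the "input"
// command-line tool that ships with Android. Error handling omitted.
public final class EventInjector {
    public static void tap(int x, int y) throws IOException, InterruptedException {
        Process su = Runtime.getRuntime().exec(
                new String[] {"su", "-c", "input tap " + x + " " + y});
        su.waitFor();
    }
}
```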
Anyway, the reasons I stopped thinking about this app were its complexity and the fact that only rooted devices could be controlled.
Hope this helps
I am trying to create a low-latency method to use an Android device as a secondary display for a PC. So far, all I have found has been either wireless streaming or a slow USB connection (e.g. using iDisplay).
However, I found a DSLR camera controller app (https://play.google.com/store/apps/details?id=com.dslr.dashboard/) that is able to stream a live feed from the camera to an Android display via USB. Would it be possible to edit the source code of this application so it can read the video output of a PC via USB? If so, how would you go about it? Do you think this would be a low-latency alternative?
Thank you!
There is a lot of fantasy in your question. Have you ever seen a PC output data from one of its USB ports to another device? How are you supposed to do that? With a plain male-to-male USB cable, in case you find one? Sorry, but things don't work that way. To transfer data (files, or a network connection) via USB between two computers, you need some proprietary/specific software. Of course, once you have accomplished that, it is technically possible to transfer the screen contents. But you'd need to develop software that captures the computer screen, compresses it in real time, and sends it over USB with low enough latency to be usable. That's going to be resource-intensive.
A better, easier approach might be some sort of remote desktop or VNC client on the Android device, with the computer acting as the server. That is at least far more feasible than trying to implement a similar protocol yourself.
Sorry, but what you are trying to achieve is flawed from the beginning.
I'm going to build a music player that works on both Android and desktops. It won't be anything special; I'm doing it more to train myself and to learn more or less what problems I might encounter if I want to build a real app/program one day. And since I'm already rather decent at web technologies, I'll try to use something else: Java.
My app/program will have to:
be able to read music files and play them (I'm planning on decoding the files myself, meaning that I only need to be able to play "raw" sound, WAV or such)
be able to write to music files (to change tags)
be able to communicate with another instance of the program on another device on the same network (I want to be able to use my phone as a remote control for my PC, and my PC as a remote control for my phone)
if possible, show some play/pause buttons on the screen even when it's locked (probably just on Android)
And this is where I need your help: what should I do to write as little "device-specific" code as possible?
It's obvious that I can reuse the classes used to encode/decode some music formats. Finding files, reading them, writing them, playing raw sound, and connecting to the network will be easy to abstract behind interfaces if needed.
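For example, I imagine the abstraction could look something like this (the names are just illustrative, not from any existing library):

```java
// Illustrative sketch: keep platform-specific code behind small interfaces
// defined in the shared module, implemented once per platform.
public interface AudioOutput {
    void open(int sampleRate, int channels);          // prepare the platform's sound output
    void write(short[] pcm, int offset, int length);  // push decoded samples
    void close();
}

// Desktop build: implement AudioOutput with javax.sound.sampled.SourceDataLine.
// Android build: implement AudioOutput with android.media.AudioTrack.
// Everything above the interface (decoding, playlists, networking) stays shared.
```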
But then there is the UI, and it looks like, if I don't plan carefully, I'll have to write it twice... I've seen libGDX, but they insist quite a lot on the fact that it's for games...
All I need is some way to build a simple UI (a few buttons, the album covers) that would work for both the desktop and the phone.
Should I use libGDX, the "normal" libraries (AWT/SWT, Swing, neither of which seems to be "compatible" with Android), or something else?
I'd also like to request as few permissions as possible, meaning that I'd like a base music player that only requests access to the SD card; features requiring additional permissions would then be added as other apps/programs or add-ons.
From what I understand, the only way to achieve this is to create a second app and make the user install it. I think I'll manage to make the two apps communicate (with an Intent?), but is it really the only solution?
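For example (all of the names here are hypothetical), I imagine the base app could hand things to the add-on with an explicit broadcast:

```java
import android.content.Context;
import android.content.Intent;

// Hypothetical sketch: the base player notifies a separately installed add-on
// app, which declares a matching <receiver> in its manifest and holds the
// extra permission itself.
public final class AddonBridge {
    public static void notifyTrackChanged(Context context, String title) {
        Intent intent = new Intent("com.example.player.action.TRACK_CHANGED");
        intent.setPackage("com.example.player.addon"); // deliver only to the add-on
        intent.putExtra("track_title", title);
        context.sendBroadcast(intent);
    }
}
```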
Thank you in advance for your answers.
Maybe you could consider building the app with something such as PhoneGap: http://phonegap.com/ This would let you play to your web-technology strengths and write a very slim layer of device-specific code, if any at all!
As for getting a PhoneGap app to run on the desktop, you could use something like http://ripple.incubator.apache.org/ to run it there. I know this is slightly different from what you asked, since you wanted to tackle writing something in Java, but this is the way mobile development is moving, so you may want to get started like this!
I'm building an application which needs to toggle the network mode, e.g. from 4G to 3G, to simulate an inter-MSC handover. Since this application will be private only, and the device is rooted, I am open to any ideas to achieve that.
The first thing I tried was reflection: I tried to get access to the BaseCommands class, which implements the CommandsInterface interface, but I was not successful with that.
Since the device is rooted and the app will never be published, I'm fine with any strategy or idea, even things like direct manipulation of system properties or other tricks like that.
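For instance, one root-only idea I'd consider (an untested sketch; the preferred_network_mode key and its numeric values vary across devices and Android builds, and the radio may only pick up the change when it re-reads the setting):

```java
import java.io.IOException;

// Untested sketch: write the global preferred_network_mode setting through a
// root shell using the stock "settings" command-line tool. The numeric values
// are device-dependent (e.g. on many AOSP builds 0 is WCDMA-preferred and 9
// is LTE/GSM/WCDMA).
public final class NetworkModeToggler {
    public static void setPreferredNetworkMode(int mode)
            throws IOException, InterruptedException {
        Process su = Runtime.getRuntime().exec(new String[] {
                "su", "-c", "settings put global preferred_network_mode " + mode});
        su.waitFor();
    }
}
```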
I hope someone has some good ideas :>
I'm doing my 8th-semester project in telecom engineering, and I'm planning to create a duplex communication app (I'm not confident whether it'd be full or half duplex) using Bluetooth and WiFi as channels - something more advanced than a simple walkie-talkie. I was wondering if this is possible for a one-man army, and whether it can be done on Android versions 2.2 and above. Can I program the Bluetooth settings in the app in such a way that it doesn't pop up asking for the user's permission to accept a voice message from the calling party?
And is there a possibility of creating multiple channels (one for the forward voice channel and one for the reverse voice channel) over Bluetooth or WiFi? Here's a list of the knowledge I possess:
Java: basics, some desktop GUI work, a few important classes, only SE6...
Wireless communication: learning it this semester - stuff like how a base station accepts an incoming mobile station request and redirects it to the destination; mostly 1G in our syllabus...
Operating systems: general; looking forward to learning Android and Linux...
C, C++, DSP, and some electronics...
Oh, and I would like to implement all of this well within a 7-month duration. People, please enlighten me with your wisdom and references to useful websites. My thanks and wishes to thee... :)
The first big problem I see is with using WiFi for this: as I understood it, this is some sort of (advanced) walkie-talkie app with no router in between the communicating phones, so you would have to use ad-hoc WLAN on your Android device, which is not supported by Android. You will need a rooted device for that. Implementing ad-hoc WLAN on Android is definitely possible (have a look at this code: http://code.google.com/p/android-wifi-tether/), but it is nothing easy (I have done it myself for a university project).
You also asked if you can avoid the permission pop-up for an incoming message: on an Android phone, activating Bluetooth or pairing with another device will always ask the user for permission.
I can't help with the multiple channels you were asking about.
As an answer to your big question, "Is it possible for a one-man army?": I would say generally yes, but it depends on how much other stuff you have to do. Since you wrote that this is a university project, I don't know whether it is your only project and whether you can invest a lot of time in it. If so, I guess it is possible, but it will be quite a big project, and you should be willing to work yourself relatively deep into networking.
On Google Code you can find some projects similar (at least for the WiFi part) to what you're thinking about; take a look at them.
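Once you have a network link between the phones, the raw voice path itself is not much code. Here is a bare-bones, untested sketch of one direction (no codec, no jitter buffer; the peer address, port, and sample rate are placeholders):

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Untested sketch: read raw PCM from the microphone and push it over UDP
// to a peer. Run on a background thread; needs the RECORD_AUDIO permission.
public final class VoiceSender {
    public static void stream(InetAddress peer, int port) throws Exception {
        int sampleRate = 8000; // placeholder; pick what both ends agree on
        int bufSize = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord mic = new AudioRecord(MediaRecorder.AudioSource.MIC,
                sampleRate, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufSize);
        DatagramSocket socket = new DatagramSocket();
        byte[] buf = new byte[bufSize];
        mic.startRecording();
        while (!Thread.currentThread().isInterrupted()) {
            int read = mic.read(buf, 0, buf.length); // blocking PCM read
            socket.send(new DatagramPacket(buf, read, peer, port));
        }
        mic.stop();
        mic.release();
        socket.close();
    }
}
```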
I know this is generally beyond the scope of SO, but I am looking for some basic yes/no info to see if it is even feasible to proceed... I am thinking about building an Android "note-taking/annotation" app that runs "over" other installed Android apps, such as the web browser.
Essentially, while the user is browsing, my app would be running in the background as a service. The user could then activate it, and it would intercept user input and translate it, on a transparent canvas over the web browser, into lines, shapes, etc. The user could then take a screen capture of their markings together with the underlying web page, which would be stored to the SD card.
This is a very good idea and a great question, but sadly, I do not believe it is possible.
The way Android is designed, only one Activity can have focus at a time. While a Service can run in the background, the user would not be able to interact with it; the user can only interact with the currently active Activity.
Again, love the idea, but it is sadly not supported.
You might be able to achieve this with the WindowManager service. You can use it to call addView() with layout params of type TYPE_SYSTEM_ALERT, or possibly TYPE_SYSTEM_OVERLAY (but see the notes in the documentation about taking input focus).
I haven't tried it myself, but I've seen several apps (often dictionary apps that translate whatever words you tap on) that do overlays, and they always seem to require the SYSTEM_ALERT_WINDOW permission.
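Roughly, the overlay call would look like this (an untested sketch; it requires the SYSTEM_ALERT_WINDOW permission in the manifest, and the annotation view itself is up to you):

```java
import android.content.Context;
import android.graphics.PixelFormat;
import android.view.View;
import android.view.WindowManager;

// Untested sketch: attach a full-screen transparent view on top of whatever
// app is in the foreground. FLAG_NOT_FOCUSABLE keeps keyboard focus with the
// underlying app; drop it if the annotation layer should capture input.
public final class OverlayHelper {
    public static void addOverlay(Context context, View canvasView) {
        WindowManager wm = (WindowManager)
                context.getSystemService(Context.WINDOW_SERVICE);
        WindowManager.LayoutParams params = new WindowManager.LayoutParams(
                WindowManager.LayoutParams.MATCH_PARENT,
                WindowManager.LayoutParams.MATCH_PARENT,
                WindowManager.LayoutParams.TYPE_SYSTEM_ALERT,
                WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE,
                PixelFormat.TRANSLUCENT);
        wm.addView(canvasView, params);
    }
}
```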