Light control using Java/JNI

I am working on a project in which I am going to control the lights of one floor of a building through a server PC on the same floor, using Java and C programming. I have almost finished the design, but I want to check whether it is up to standard.
I would like to know if there are any such products/projects on the market, or any research papers.
I am not asking about hobby project links; I want something that has been implemented on a larger scale.

What do you mean by controlling lights? On/off?
There are X10 devices available on the market that can do this sort of thing.
Most home automation systems use them, and they can easily communicate with a PC.
You can go through this for more details: http://en.wikipedia.org/wiki/X10_%28industry_standard%29
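For a rough idea of what driving such a controller from Java can look like, here is a minimal sketch assuming a serial-attached X10 interface and the jSerialComm library. The port index, baud rate, and command bytes are placeholders; a real controller (for example a CM11A) defines its own framing, which you would take from its documentation.

```java
import com.fazecast.jSerialComm.SerialPort;

public class X10Sender {
    public static void main(String[] args) {
        // Pick the serial port the X10 controller is attached to
        // (index 0 is just an assumption for this sketch).
        SerialPort port = SerialPort.getCommPorts()[0];
        port.setBaudRate(4800); // common for X10 serial interfaces; check your device
        if (!port.openPort()) {
            System.err.println("Could not open serial port");
            return;
        }
        try {
            // Placeholder command bytes: the real framing (address + function codes,
            // checksums, acknowledgements) comes from the controller's protocol spec.
            byte[] command = {0x04, 0x66};
            port.writeBytes(command, command.length);
        } finally {
            port.closePort();
        }
    }
}
```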

Related

Getting user behavior on the Android Phone (App History, Browse History etc)

Is it possible to get the user's behavior on the phone (for example, Alpesh has an Android phone and he uses multiple apps, browses YouTube, etc.)? Whatever he is doing on the phone, I want to capture in the background: which apps he has installed, which app he opens, and what he searches for on the phone. I want to get all of this data programmatically, so what can actually be retrieved on Android?
For now I am aware that the list of installed apps can be retrieved easily, but I also want the usage history and everything else he does on the mobile.
This is not a code solution, but an answer to your question, so you can get started somewhere.
In my opinion your question title is asking about two things:
(part 1) Getting User Behavior on the Android Phone (part 2) (App History, Browse History etc.)
1- First part Getting User Behavior on the Android Phone:
There is a concept called context awareness. Described briefly, it is about gathering different information from the phone, such as the light sensor, motion sensor, sound, location, or even user behavior, and then, depending on your app's requirements and the gathered information:
You could send this information to a cloud data store for statistical usage
You could make your phone do different things (behave differently) depending on location, motion, or whatever.
etc.
Context awareness is an open area of pervasive computing research, and it is not just a few lines of code to write; it is typically a complete solution shaped by your requirements. For example, I built a context awareness application, inspired by this framework, to gather noise levels collected by phones at different locations for research purposes, but I am pretty sure you can find other frameworks or even build your own, as I did in my case.
The mentioned framework has some examples.
2- The second part is about App History, Browse History etc.:
This is possible, but you still need to build a piece of software (an app) to collect all this information (logs) from the phone. After that you can make the phone act on different conditions and/or send the data through a RESTful API to a cloud data store; there is no limit to it.
The problem is that there is nothing out of the box for your requirement. Even if you find frameworks, you still need to research them and build further on top of them.
You can find different examples for your requirement. For instance, to collect browser history, there is an SO question here:
Get browser history and search result in android
Or to get the list of installed applications:
How to get a list of installed android applications and pick one to run
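As a rough illustration of that second link, a minimal sketch using the standard PackageManager API follows; this covers only the easy piece you already mentioned (the installed apps list), not the usage history, and on recent Android versions additional package-visibility restrictions may apply.

```java
import android.content.Context;
import android.content.pm.ApplicationInfo;
import android.content.pm.PackageManager;
import android.util.Log;
import java.util.List;

public class InstalledAppsLogger {
    // Call with any Context, e.g. from an Activity: InstalledAppsLogger.logInstalledApps(this);
    public static void logInstalledApps(Context context) {
        PackageManager pm = context.getPackageManager();
        List<ApplicationInfo> apps = pm.getInstalledApplications(PackageManager.GET_META_DATA);
        for (ApplicationInfo info : apps) {
            // Human-readable label plus the package name, e.g. "YouTube (com.google.android.youtube)"
            String label = pm.getApplicationLabel(info).toString();
            Log.d("InstalledApps", label + " (" + info.packageName + ")");
        }
    }
}
```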
My point here is that you need to solve small goals one at a time and put your knowledge together at the end.
Parts 1 and 2 can also be related to each other, depending on what you want to achieve.
Conclusion
Set a goal for your project.
Define the main requirements and tasks of your project.
Research your options (technology, cost, target audience, what data you can or should not collect, what is possible to collect, what the limits are, privacy issues, etc.).
Split your project into small assets and try to solve small problems/goals.
Finally, you will be able to put the puzzle pieces together and build your final application.
but i want to get usage history and what he do all on mobile
This is not possible and shouldn't ever be possible. Each app is sandboxed by Android so apps cannot inspect what other apps are doing. Think about it, you wouldn't want apps to be able to intercept private information such as banking details.
Every app is isolated from the others. Unless you develop a system-signed app, you will not be able to gather all that data.
What you could do is develop your own Android ROM in which you implement the data collection exactly the way you want. Then you need to distribute your ROM, which is another story...

Building a music player for both android and Desktop at the same time

I'm going to build a music player that works on both Android and desktop. It won't be anything special; I'm doing it mostly to train myself and learn more or less what problems I might encounter if I want to build a real app/program one day. Therefore, since I'm already rather decent with web technologies, I'll try something else: Java.
My app/program will have to:
be able to read music files and play them (I'm planning on reading the files myself, meaning that I only need to be able to read "raw" sound, WAV or such)
be able to write to music files (to change tags)
be able to communicate with another instance of the program on another device on the same network (I want to be able to use my phone as a remote control for my PC and my PC as a remote control for my phone)
if possible, show some play/pause buttons on the screen even when it's locked (probably just on Android)
And this is where I need your help: what should I do to write as little device-specific code as possible?
It's obvious I can reuse the classes used to encode/decode some music formats. Finding the files, reading them, writing them, playing raw sound, and connecting to the network will be easy to abstract if needed.
But then there is the UI and it looks like if I don't plan carefully, I'll have to do it twice... I've seen libGDX but they kinda insist a lot on the fact it's for games...
All I need is some way to build a simple UI (a few buttons, the cover of the albums) that'd work for both the desktop and the phone.
Should I use libGDX, the "normal" libraries (AWT/SWT, Swing, none of which seem to be "compatible" with Android), or something else?
I'd also like to request as few permissions as possible, meaning that I'd like a base music player that only requests access to the SD card, with features requiring additional permissions added as other apps/programs or add-ons.
From what I understand, the only way to achieve this is to create a second app and make the user install it. I think I'll manage to make the two apps communicate (with an Intent?), but is it really the only solution?
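For reference, a rough sketch of such an Intent hand-off between the base player and an add-on app is below; the package, class, and extra names are hypothetical placeholders.

```java
import android.app.Activity;
import android.content.ComponentName;
import android.content.Intent;

public class AddonLauncher {
    // Call from the base player's Activity. The add-on package/activity names
    // are made up for illustration; a real add-on would document its own.
    public static void openAddon(Activity activity, String trackPath) {
        Intent intent = new Intent();
        intent.setComponent(new ComponentName(
                "com.example.player.addon",
                "com.example.player.addon.LyricsActivity"));
        intent.putExtra("trackPath", trackPath); // data handed to the add-on
        activity.startActivity(intent); // throws ActivityNotFoundException if the add-on is missing
    }
}
```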
Thank you in advance for your answers.
Maybe you could consider building the app with something such as PhoneGap: http://phonegap.com/. This would let you play to your web technology strengths and write a very slim layer of device-specific code, if any at all!
As for getting a PhoneGap app to run on the desktop, you could use something like http://ripple.incubator.apache.org/. I know this is slightly different and you wanted to tackle writing something in Java; however, this is the way mobile development is moving, so you may want to get started like this!

What's the right approach for creating an Android app?

I have a great idea for an Android app, but as I'm only familiar with php/js, I'm uncertain of which approach I should choose for creating it. The app will be based on a google map with a lot of position markers. There won't be any fancy animations or other heavy resource-demanding activities.
As I see it there are three different options:
Read up on Java and program the whole thing in Java
Create the map activity in Java as a mapview and then use webviews for the other activities (which can easily be scripted as html5 webpages.)
Script everything as a webapp (not really an option, as this is not a real mobile app IMHO).
I'm most keen on using no. 2 as I'm quite familiar with html/php/js/mysql. Have to read up on the html5 specifics, though. Questions:
I need access to GPS and camera hardware. Is that achievable in webviews?
How complicated is it to pass variables between js in webview activities and java in other activities?
How big a difference in performance can I expect if I use option 1 vs option 2?
Other thoughts?
Kind regards,
Anders
You can choose number 2, but as we are talking about an Android phone, you probably want really accurate coordinates for your map, and you can only achieve that by accessing the phone's GPS. Through webviews, the best you can get is the location derived from the device's internet IP address, which does not give a very accurate geoposition.
The best choice is a 100% Java application, in my opinion.
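As a rough illustration of the native route, this is what requesting GPS fixes through LocationManager can look like, assuming ACCESS_FINE_LOCATION is declared in the manifest; the update interval and distance values are arbitrary.

```java
import android.app.Activity;
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;
import android.util.Log;

public class GpsHelper {
    // Requires ACCESS_FINE_LOCATION in the manifest.
    public static void startGpsUpdates(Activity activity) {
        LocationManager lm = (LocationManager) activity.getSystemService(Context.LOCATION_SERVICE);
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 5000, 10, new LocationListener() {
            @Override public void onLocationChanged(Location location) {
                // Feed the fix into the map/marker logic.
                Log.d("GpsHelper", "lat=" + location.getLatitude() + " lon=" + location.getLongitude());
            }
            @Override public void onStatusChanged(String provider, int status, Bundle extras) {}
            @Override public void onProviderEnabled(String provider) {}
            @Override public void onProviderDisabled(String provider) {}
        });
    }
}
```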
1) Yes it's possible, but as commented it will be less accurate and probably slow.
2) Not complicated, but painful if you need lots of interaction between a webview and the native app. You use a JavaScript interface that is set up from the native app; you can basically inject JavaScript into a webview's HTML (see the sketch after this list).
3) Performance varies a lot from device to device. Because your implementation will be based on the device's browser, you can expect really sluggish behavior on older devices. Anything to do with HTML events (dragging, tabbing...) takes a noticeable hit on most devices, in my experience.
4) As #vodich comments, there are other third-party frameworks. My benchmarking of PhoneGap and other JS-based options is that they're a waste of time if you are looking at developing a professional app. I haven't developed on Adobe AIR, but I find it a pain to have to install plugins to get native functionality (access to sensors, camera, etc.). Mobile is all about fast, responsive behaviour: the HDI is your finger, the user is fast, so the app needs to be fast.
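On point 2, here is a minimal sketch of that JavaScript interface using WebView.addJavascriptInterface; the class, exposed object name, page, and method names are illustrative assumptions.

```java
import android.webkit.JavascriptInterface;
import android.webkit.WebView;

public class WebBridge {
    // Object exposed to the page; annotated methods become callable from JS.
    public static class JsBridge {
        @JavascriptInterface
        public void showPosition(double lat, double lon) {
            // Receive coordinates sent from the page's JavaScript.
        }
    }

    // Call from an Activity with its WebView.
    public static void wire(WebView webView) {
        webView.getSettings().setJavaScriptEnabled(true);
        // The JS side can now call: Android.showPosition(59.3, 18.1);
        webView.addJavascriptInterface(new JsBridge(), "Android");
        webView.loadUrl("file:///android_asset/map.html"); // hypothetical local page

        // Native -> JS: inject a call into the loaded page (onNativeEvent is hypothetical).
        webView.loadUrl("javascript:onNativeEvent('marker-added')");
    }
}
```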
EDIT: So hell yeah! Java FTW!
Albert.
4. Other thoughts?
Yes: if you really want to make a great Android app, you should use only Android-specific UI components and give it a native look and feel. Regarding 1 and 2, yes, it is possible, and I would say not so complicated to just integrate them, but I think you'll eventually run into big problems.
Learn Java and write your application natively.
Webviews might allow you to use your PHP skills to present something to the user, but it's entirely one-way; you'll not be able to interact with what's inside.
The Android developer site offers fantastic documentation and jumping from PHP to Java isn't greatly difficult, though you'll need to get used to strict typing and "real" OOP.
Other thoughts? Don't go down the PhoneGap/cross-platform toolkit road. It might allow you to write applications for multiple platforms using your current skills, but in the end you get a subpar app that doesn't feel right on either platform and doesn't fare well as future versions of iOS and Android are released.

kinect API for controlling web applications

I would like to connect a kinect (sorry) to a PC so that users can interact with my webapp via gestures. I don't have a clear idea about what level of programming is involved in order to achieve this, but a JavaScript API would be ideal (Java would also be tolerable).
I've had a look at DepthJS, but the installation/setup alone has almost defeated me. At a minimum I need the user to be able to move the cursor and click, but ideally I'd also like them to be able to use smartphone gestures such as pinching.
Is there an API available that provides these features, can be installed/setup relatively easily, and can be programmed with JavaScript? I don't know if this makes any difference, but I'll be doing the development on Ubuntu.
Kinesis leverages the web technologies developers already know best: HTML/CSS/JavaScript. So you can reuse your existing code and existing team to build gesture-enabled applications on top of the Kinect for Windows SDK.
Zigfu provides a browser plugin called ZigJS for Kinect and will enable HTML/JavaScript Kinect apps using hand gestures.
OpenKinect is an open community of people interested in making use of the amazing Xbox Kinect hardware with our PCs and other devices. They are working on free, open source libraries that will enable the Kinect to be used with Windows, Linux, and Mac.
Don't worry about that:
At a minimum I need the user to be able to move the cursor and click, but ideally I'd also like them to be able to use smartphone gestures such as pinching.
You'll find many examples of mouse cursor tracking. I think connecting the mouse to the Kinect is one of the first things a Kinect developer tries to achieve. It is a very simple thing: you just bind the cursor to one joint, track it, and scale its position to the monitor resolution :)
But I'm not sure that is what you want, even as a minimum. I remember watching a video on Channel 9 about websites controlled by Kinect. This technology definitely exists and it's pretty stable, so you just need to look there.
IMO, focus on APIs/frameworks for that, because connecting the mouse to the Kinect just to use it on websites has many disadvantages.
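To make the "scale a joint to the monitor resolution" idea concrete, here is the mapping in plain Java. Where the normalized hand coordinates come from depends entirely on which Kinect SDK/framework you use, so treat the inputs as assumptions; only the scaling step is shown.

```java
// Maps a hand joint position, normalized to [0,1] by whatever Kinect framework
// you use, onto screen pixel coordinates.
public class CursorMapper {
    private final int screenWidth;
    private final int screenHeight;

    public CursorMapper(int screenWidth, int screenHeight) {
        this.screenWidth = screenWidth;
        this.screenHeight = screenHeight;
    }

    public int[] toScreen(double normalizedX, double normalizedY) {
        // Clamp first so jittery tracking near the edges cannot leave the screen.
        double x = Math.max(0.0, Math.min(1.0, normalizedX));
        double y = Math.max(0.0, Math.min(1.0, normalizedY));
        return new int[] { (int) Math.round(x * (screenWidth - 1)),
                           (int) Math.round(y * (screenHeight - 1)) };
    }
}
```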
Microsoft released the new SDK 1.8 with the Kinect.js library last September. I'm sure this is what you need: http://blogs.msdn.com/b/kinectforwindows/archive/2013/09/16/updated-sdk-with-html5-kinect-fusion-improvements-and-more.aspx

communication project using android

I'm doing my project in the 8th semester of telecom engineering, and I'm planning to create a duplex (not confident whether it'd be full or half) communication app using Bluetooth and Wi-Fi as channels, something more advanced than a simple walkie-talkie. I was wondering whether this is possible for a one-man army, and whether it can be done with Android versions 2.2 and above. Can I program the Bluetooth settings in the app in such a way that it doesn't pop up asking for user permission to accept a voice message from the calling party?
Is there a possibility of creating multiple channels (one for a forward voice channel and one for a reverse voice channel) using Bluetooth or Wi-Fi? Here's a list of the knowledge I possess:
Java: basics, done some GUI work on desktops, know some important classes, only SE 6...
Wireless communication: learning it this semester, stuff like how a base station accepts an incoming mobile station request and redirects it to the destination, mostly 1G in our syllabus...
Operating systems: general, looking forward to learning Android and Linux...
C, C++, DSP, and some electronics...
Oh, and I would like to implement this well within a 7-month duration...
Please enlighten me with your wisdom and references to useful websites ASAP...
My thanks and wishes to thee... :)
The first big problem I see is with using Wi-Fi for this. As I understand it, this is some sort of (advanced) walkie-talkie app with no router between the communicating phones, so you have to implement ad-hoc WLAN on your Android device, which is not supported by Android; you will need a rooted device for that. Implementing ad-hoc WLAN on Android is definitely possible (have a look at this code: http://code.google.com/p/android-wifi-tether/) but nothing easy (I have done it myself for a university project).
You also asked whether you can avoid the permission pop-up for an incoming message, but on an Android phone, activating Bluetooth or pairing with another device will always ask for permission from the user.
I can't help with the multiple channels you were asking for.
As an answer to your big question, "is it possible for a one-man army?", I would say generally yes, but it depends on how much other stuff you have to do. Since you wrote that this is a university project, I don't know whether it is your only project and whether you can invest a lot of time in it. If so, I guess it is possible, but it will be quite a big project and you should be willing to work yourself relatively deep into networking.
On Google Code you can find some projects similar (at least the Wi-Fi part) to what you are thinking about doing; take a look at them...
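On the forward/reverse channel question, it may help to know that a single classic Bluetooth RFCOMM socket already gives two independent streams, one in each direction, so full duplex does not require a second channel. A minimal server-side sketch using the standard Android Bluetooth API follows; the UUID and service name are placeholders, and the BLUETOOTH/BLUETOOTH_ADMIN permissions are assumed.

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothServerSocket;
import android.bluetooth.BluetoothSocket;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.UUID;

public class VoiceLink {
    // Placeholder UUID: both phones must agree on it.
    private static final UUID SERVICE_UUID =
            UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

    public void acceptAndTalk() throws Exception {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        BluetoothServerSocket server =
                adapter.listenUsingRfcommWithServiceRecord("VoiceLink", SERVICE_UUID);
        BluetoothSocket socket = server.accept(); // blocks until the other phone connects
        server.close();

        // One socket, two streams: read incoming audio while writing outgoing audio.
        InputStream in = socket.getInputStream();    // reverse voice channel
        OutputStream out = socket.getOutputStream(); // forward voice channel
        // In a real app you would pump AudioRecord data into 'out' and play
        // bytes from 'in' with AudioTrack, each on its own thread.
    }
}
```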
