I have implemented Google Maps in my app and I do reverse geocoding when the user taps the map. I'm using this API, but it gives me more than one result for a single tap. I understand why Google behaves this way.
Then I looked into the PlacePicker widget for Android, but I'm unable to use that widget without opening another map screen to pick a place.
What I want is to use my own Google Map screen to get the tapped place instead of opening the PlacePicker screen. In other words, I want PlacePicker to work with my already-implemented map, i.e. respond to a map click and return a place without opening its own widget. If this is not possible, please point me to another solution that returns exactly one correct place on a map tap. Thank you in advance.
Updated
Why am I getting close votes on this question? Is it really too broad? Simply put, I want to know whether it is possible to use a Google Map (MapView or SupportMapFragment) as PlacePicker's main screen. Have a look at the code below:
int PLACE_PICKER_REQUEST = 1;
PlacePicker.IntentBuilder builder = new PlacePicker.IntentBuilder();
// I don't want this line; I don't want PlacePicker to open another map screen for the user to pick a place.
startActivityForResult(builder.build(this), PLACE_PICKER_REQUEST);
Now look at the code below:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == PLACE_PICKER_REQUEST) {
        if (resultCode == RESULT_OK) {
            Place place = PlacePicker.getPlace(data, this);
            String toastMsg = String.format("Place: %s", place.getName());
            Toast.makeText(this, toastMsg, Toast.LENGTH_LONG).show();
        }
    }
}
The code above executes when the user picks a place from the PlacePicker screen and that screen returns an Intent with the picked place. Is there any way to build that kind of Intent from a Google Map tap with lat/lng, so that I can get the place details? I don't want to use reverse geocoding because it gives me many results.
I don't know anything about the Google geo API or the PlacePicker API.
The Google-independent, standard Android way would be to use "ACTION_PICK" with a geo URI.
For more info see https://github.com/k3b/k3b-geoHelper/wiki/Android-Geo-howto.
This only works if you have a geo-pick-aware app installed that supports geo pick.
Note: you do not have to use the library. All you need to do is decode the geo URI returned by the activity result.
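A minimal sketch of that flow, assuming a geo-pick-aware app is installed (the request code and the URI parsing are my own, not part of the linked library):

private static final int PICK_GEO_REQUEST = 42; // arbitrary request code

private void pickLocation() {
    // Ask any installed geo-pick-aware app to let the user pick a location.
    startActivityForResult(
            new Intent(Intent.ACTION_PICK, Uri.parse("geo:0,0")), PICK_GEO_REQUEST);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == PICK_GEO_REQUEST && resultCode == RESULT_OK && data != null) {
        Uri geoUri = data.getData(); // e.g. "geo:52.52,13.40?q=..."
        if (geoUri != null && "geo".equals(geoUri.getScheme())) {
            // Decode "lat,lng" from the scheme-specific part.
            String latLng = geoUri.getSchemeSpecificPart().split("\\?")[0];
            // ... parse latitude/longitude from latLng
        }
    }
}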
I have found a solution to my problem. To get rid of the many reverse-geocoding results and to get the clicked place, I use the following code:
googleMap.setOnPoiClickListener(new GoogleMap.OnPoiClickListener() {
    @Override
    public void onPoiClick(PointOfInterest pointOfInterest) {
        isPOI = true;
        poiPlaceID = pointOfInterest.placeId;
        // Then I call the Google Place Details API with this placeId;
        // it gives me exactly the place that was clicked.
    }
});
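As a sketch of that last step, here is roughly how the lookup could look using the Places SDK for Android client (a substitute for calling the Place Details web API directly; the field list, initialization, and logging are my assumptions):

// Assumes the Places SDK was initialized earlier, e.g.:
// Places.initialize(getApplicationContext(), "YOUR_API_KEY");
PlacesClient placesClient = Places.createClient(this);

List<Place.Field> fields = Arrays.asList(Place.Field.ID, Place.Field.NAME, Place.Field.LAT_LNG);
FetchPlaceRequest request = FetchPlaceRequest.newInstance(poiPlaceID, fields);

placesClient.fetchPlace(request)
        .addOnSuccessListener(response -> {
            Place place = response.getPlace();
            // Exactly one place, matching the tapped POI
            Log.d(TAG, "Place: " + place.getName());
        })
        .addOnFailureListener(e -> Log.e(TAG, "Place details lookup failed", e));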
Related
I'm looking to handle a deep link from the Google Assistant. As I only have an emulator at the moment, I am having trouble testing it (from what I have read, it requires a real device). That said, I was wondering if I am handling it the correct way. I am unfamiliar with Kotlin, and my code was turning into spaghetti while trying to integrate it, so I put this together in my existing launcher activity just to get it bootstrapped for now. The manifest and actions.xml were set up like the fitness-app tutorial.
Am I doing this correctly?
if (mAuth.getCurrentUser() != null) {
    data = this.getIntent().getData();
    if (data != null && data.isHierarchical()) {
        uriData = data.toString();
        containsStart = containsIgnoreCase(uriData, "start");
        containsRun = containsIgnoreCase(uriData, "run");
        if (containsStart && containsRun) {
            Intent intent = new Intent(getApplication(), RunActivity.class);
            intent.putExtra("runStart", true);
            startActivity(intent);
        }
    } else {
        checkUserAccType();
    }
    // Else, if there is no current user, start the Authentication activity
}
A few observations and recommendations about your code:
Instead of using containsIgnoreCase, use getPath() and match the path.
Also, for the activity parameter, read a URL query parameter instead of using containsIgnoreCase. A sketch of both points follows below.
Starting the activity or fragment: I assume startActivity and checkUserAccType will handle that part.
The // Else... comment should go one line below, outside the outer if block.
Authentication looks fine, and judging by the getCurrentUser call, you're using Firebase.
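Here is that sketch; the path ("/start") and the query parameter name ("activity") are assumptions for illustration:

Uri data = getIntent().getData();
if (data != null && data.isHierarchical()) {
    // Match the structured path instead of substring search,
    // e.g. for a deep link like https://example.com/start?activity=run
    if ("/start".equals(data.getPath())) {
        String activity = data.getQueryParameter("activity"); // query param, not contains()
        if ("run".equals(activity)) {
            Intent intent = new Intent(this, RunActivity.class);
            intent.putExtra("runStart", true);
            startActivity(intent);
        }
    }
} else {
    checkUserAccType();
}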
I am working on a stripe-terminal-android app that connects to a BBPOS 2X reader device, and I want to click an item from a list (a RecyclerView).
What I am trying to do: when the list of devices (readers) appears, check if readers.size() == 1; if so, click the first device in the list, else show the RecyclerView.
I have very little Android experience (coming from JS and Python). :)
I stepped through the program with the debugger (F8 / step over) to understand the flow, and found where the value is assigned and converted to a displayable format in the adapter, here:
public ReaderAdapter(@NotNull DiscoveryViewModel viewModel) {
    super();
    this.viewModel = viewModel;
    if (viewModel.readers.getValue() == null) {
        readers = new ArrayList<>();
    } else {
        readers = viewModel.readers.getValue();
        if (readers.size() == 1) {
            Log.e(TAG, "readers.size() is 1: " + readers.size());
        }
    }
}
Then, in the ReaderHolder file, values are bound in bind():
void bind(@NotNull Reader reader) {
    binding.setItem(reader);
    binding.setHandler(clickListener);
    binding.executePendingBindings();
}
I tried assigning a button and manually clicking when only one device appears, by clicking on reader[0], but I can't do that via findViewById inside the adapter file to call the onClick() method manually.
I tried another Stack Overflow answer but didn't understand it, from here.
The main fragment is the discovery fragment.
How can I check readers.size() == 1 and then programmatically trigger the first device's onClick()? (A rough sketch of one approach follows the extra info below.)
My final goal is to automate the whole Stripe terminal payment process on Android.
Extra info:
I am fetching data from a Python Odoo server, then the app is opened via a URL through the browser (this part is done). The device should then be selected automatically, since there will never be more than one device present,
so the app should select it automatically from the RecyclerView and proceed.
I have asked for help in detail on GitHub issues, and I have started learning Android concepts for this app (by customizing Stripe's demo app, which works great, but I want to avoid manually clicking/selecting devices).
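A rough sketch of one approach, given the code above: observe the reader list in the discovery fragment and, when there is exactly one reader, invoke the same handler the row's click binding would call, instead of simulating a view click. The onClick(reader) signature is an assumption based on the bind() code; the demo app's actual listener interface may differ:

// In the discovery fragment, once viewModel.readers is populated:
viewModel.readers.observe(getViewLifecycleOwner(), readers -> {
    if (readers != null && readers.size() == 1) {
        // "Click" the only reader programmatically, bypassing the RecyclerView.
        clickListener.onClick(readers.get(0)); // hypothetical handler signature
    } else {
        recyclerView.setAdapter(new ReaderAdapter(viewModel));
    }
});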
I tried finding something similar on the net, but couldn't. What I want, specifically, is the ability to have a button paste some text that originated in some other app rather than the one I'm making. So, say you copy some text from the Google Chrome app via the regular long-tap and copy. Then, you open this app and press a button, and it fetches the text from the clipboard and pastes it into a TextView. I understood this wasn't possible with the ClipboardManager, since all the examples I've seen show it as an object that stores information from within the app.
No, ClipboardManager is a system service, providing access to a device-wide clipboard.
Part of the reason why many examples might show both copying and pasting to the clipboard is so that the example is self-contained.
So, you get a ClipboardManager from getSystemService(), get the current contents via getPrimaryClip(), and use the ClipData as you see fit.
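For the plain-text case in your question, a minimal sketch might look like this (the TextView id is a placeholder of mine):

ClipboardManager clipboard = (ClipboardManager) getSystemService(CLIPBOARD_SERVICE);
ClipData clip = clipboard.getPrimaryClip();
if (clip != null && clip.getItemCount() > 0) {
    // coerceToText() turns whatever is on the clipboard into displayable text.
    CharSequence pasted = clip.getItemAt(0).coerceToText(this);
    TextView target = findViewById(R.id.pasted_text); // hypothetical view id
    target.setText(pasted);
}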
For example, this sample project contains two apps: drag/ and drop/. Mostly, this is to illustrate cross-app drag-and-drop operations on Android 7.0. But, drop/ supports a "Paste" action bar item (with associated keyboard shortcut), where I grab whatever is on the clipboard and, if it has a Uri, use it:
@Override
public boolean onOptionsItemSelected(MenuItem item) {
  if (item.getItemId()==R.id.paste) {
    boolean handled=false;
    ClipData clip=
      getSystemService(ClipboardManager.class)
        .getPrimaryClip();

    if (clip!=null) {
      ClipData.Item clipItem=clip.getItemAt(0);

      if (clipItem!=null) {
        imageUri=clipItem.getUri();

        if (imageUri!=null) {
          showThumbnail();
          handled=true;
        }
      }
    }

    if (!handled) {
      Toast
        .makeText(this, "Could not paste an image!", Toast.LENGTH_LONG)
        .show();
    }

    return(handled);
  }

  return(super.onOptionsItemSelected(item));
}
There is no code in this app to put stuff on the clipboard, though the associated drag/ app has code for that.
I think what you want to achieve is available in this open-source library: https://github.com/heruoxin/Clip-Stack
The idea is that it keeps track of clipboard entries in its own internal database while running a service (in your case, a floating button) and then pastes from that.
I am trying to add a button to my application that starts Google Voice Typing (or the default speech recognition). I have tried following this tutorial, but it is incredibly confusing to me. I imported the .jar and added the necessary permissions, services, and activities to my manifest, but I can't seem to figure out how to "put it all together". I'm wondering:
Am I supposed to call the inputMethodService from my button click in my Main Activity? Or does my inputMethodService essentially become my Main Activity?
What does IME mean? I tried to Google it, but the definitions it gave me didn't help my understanding.
When I try to copy and paste the whole DemoInputMethodService code into my current activity, I get an error saying I cannot extend InputMethodService inside this activity. (Which leads back to question one.)
How can I get this to work?
If you want to follow the tutorial that you mention, then you need to implement an IME (input method editor) first; see http://developer.android.com/guide/topics/text/creating-input-method.html
This IME can have a regular keyboard look-and-feel or contain just a microphone button.
The user of your app will first have to click on a text field to launch the IME. (Note that there can be several IMEs installed on the device and they have to be explicitly enabled in the Settings.) Then the user will have to click on the microphone button to trigger the speech recognition.
The tutorial provides a jar that lets you directly call Google's recognizer. It would be nicer if instead you called the recognizer via the SpeechRecognizer interface (http://developer.android.com/reference/android/speech/SpeechRecognizer.html); that way the user can decide whether to use Google's recognizer or something else.
The SpeechRecognizer is given a listener which supports the method onPartialResults, which allows you to monitor the recognition hypotheses while the user is speaking. It's up to you how you display them. Note however that the specification of SpeechRecognizer does not promise that this method gets called. This depends on the implementation of the recognizer service. Regarding Google's implementation: what it supports keeps changing unannounced, it does not have a public API nor even release notes.
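For completeness, a minimal sketch of that SpeechRecognizer route (this requires the RECORD_AUDIO permission; the unused callbacks are stubbed because the interface requires them, and as noted above, onPartialResults may never fire):

SpeechRecognizer recognizer = SpeechRecognizer.createSpeechRecognizer(this);
recognizer.setRecognitionListener(new RecognitionListener() {
    @Override public void onPartialResults(Bundle partialResults) {
        // May or may not be called, depending on the recognizer service.
        ArrayList<String> hypotheses =
                partialResults.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        // ... display the hypotheses as you see fit
    }
    @Override public void onResults(Bundle results) {
        ArrayList<String> matches =
                results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        // ... handle the final result
    }
    // The remaining callbacks are required by the interface but unused here.
    @Override public void onReadyForSpeech(Bundle params) {}
    @Override public void onBeginningOfSpeech() {}
    @Override public void onRmsChanged(float rmsdB) {}
    @Override public void onBufferReceived(byte[] buffer) {}
    @Override public void onEndOfSpeech() {}
    @Override public void onError(int error) {}
    @Override public void onEvent(int eventType, Bundle params) {}
});

Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);
recognizer.startListening(intent);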
You might be able to reuse my project Kõnele (http://kaljurand.github.io/K6nele/about/), which contains two implementations of SpeechRecognizer and an IME that uses them. One of the implementations offers continuous recognition of arbitrarily long audio input, using the Kaldi GStreamer server (https://github.com/alumae/kaldi-gstreamer-server). You would need to set up your own instance of the server porting it to the language that you want to recognize (unless you want to use the Estonian server that Kõnele uses by default).
Voice recognition samples can be found where the Android SDK is installed.
Example:
$ find $SDK_ROOT/samples -name "*recogni*"
./android-19/legacy/VoiceRecognitionService/res/xml/recognizer.xml
./android-19/legacy/VoiceRecognitionService/src/com/example/android/voicerecognitionservice
./android-19/legacy/ApiDemos/res/layout/voice_recognition.xml
./android-18/legacy/VoiceRecognitionService/res/xml/recognizer.xml
./android-18/legacy/VoiceRecognitionService/src/com/example/android/voicerecognitionservice
./android-18/legacy/ApiDemos/res/layout/voice_recognition.xml
./android-21/legacy/VoiceRecognitionService/res/xml/recognizer.xml
./android-21/legacy/VoiceRecognitionService/src/com/example/android/voicerecognitionservice
./android-21/legacy/ApiDemos/res/layout/voice_recognition.xml
Any one of those services should show how to handle a RecognizerIntent.
The ApiDemos sample seems to include a use of RecognizerIntent; check the source for that one. Otherwise look into the services and carve them up into an intent.
I had the same issue, but after a long time looking for continuous voice dictation in an activity, I solved the problem using pocketsphinx.
I couldn't find a way to integrate Google Voice Typing into an activity, only into an input method by following that tutorial. If it confuses you, just download this demo and modify it.
Good luck!
You can trigger an intent from a button listener:
Intent checkIntent = new Intent();
checkIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
startActivityForResult(checkIntent, MY_DATA_CHECK_CODE);
And the result can be retrieved like this:
private TextToSpeech mTts;

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == MY_DATA_CHECK_CODE) {
        if (resultCode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
            // success, create the TTS instance
            mTts = new TextToSpeech(this, this);
        } else {
            // missing data, install it
            Intent installIntent = new Intent();
            installIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
            startActivity(installIntent);
        }
    }
}
Refer to this link for more info.
Can someone tell me if creating a barcode scanner app for Android is difficult? Is the OpenCV library a good starting point? Where can I find an algorithm that clearly explains how to read barcodes? I would appreciate any good materials on this topic!
Thanks in advance!
The ZXing project provides a standalone barcode reader application which — via Android's intent mechanism — can be called by other applications who wish to integrate barcode scanning.
The easiest way to do this is to call the ZXing SCAN Intent from your application, like this:
public Button.OnClickListener mScan = new Button.OnClickListener() {
    public void onClick(View v) {
        Intent intent = new Intent("com.google.zxing.client.android.SCAN");
        intent.putExtra("SCAN_MODE", "QR_CODE_MODE");
        startActivityForResult(intent, 0);
    }
};

public void onActivityResult(int requestCode, int resultCode, Intent intent) {
    if (requestCode == 0) {
        if (resultCode == RESULT_OK) {
            String contents = intent.getStringExtra("SCAN_RESULT");
            String format = intent.getStringExtra("SCAN_RESULT_FORMAT");
            // Handle successful scan
        } else if (resultCode == RESULT_CANCELED) {
            // Handle cancel
        }
    }
}
Pressing the button linked to mScan would launch directly into the ZXing barcode scanner screen (or crash if ZXing isn't installed). Once a barcode has been recognised, you'll receive the result in your Activity, here in the contents variable.
To avoid the crashing and simplify things for you, ZXing have provided a utility class which you could integrate into your application to make the installation of ZXing smoother, by redirecting the user to the Android Market if they don't have it installed already.
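That utility class is IntentIntegrator; a rough sketch of how it is typically wired up (method names follow the classic ZXing integration code and may differ between versions):

// In a button handler: IntentIntegrator prompts the user to install
// the ZXing scanner app (via the Market) if it is missing.
IntentIntegrator integrator = new IntentIntegrator(this);
integrator.initiateScan();

// Then, in the same Activity:
@Override
public void onActivityResult(int requestCode, int resultCode, Intent intent) {
    IntentResult scanResult =
            IntentIntegrator.parseActivityResult(requestCode, resultCode, intent);
    if (scanResult != null && scanResult.getContents() != null) {
        String contents = scanResult.getContents(); // the decoded barcode text
        String format = scanResult.getFormatName();
        // Handle successful scan
    }
}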
Finally, if you want to integrate barcode scanning directly into your application without relying on having the separate ZXing application installed, well then it's an open source project and you can do so! :)
You can use the existing Zebra Crossing barcode scanner for Android, available at: http://code.google.com/p/zxing/. Typically the idea is that you would invoke it via intents, like in the example here: http://code.google.com/p/zxing/wiki/ScanningViaIntent.
Zebra Crossing is the best-documented Java 1D/2D barcode decoder or encoder around. Lots of people use it, and it has become the de facto standard for Android. There's a healthy buzz about it on here too.
RedLaser has an API, but you'll have to pay to use it in production. When I tried it out, I didn't find it to be a spectacular improvement over Zebra Crossing, certainly not for the price.
jjil does barcodes, but there are only three committers on the project, and I've never used it myself, so I don't know what to tell you about it. Its source is certainly readable.
Once you start reading, you'll find readers are tricky things to implement due to blurry images, noise, distortion, weird angles, and so forth. So if you want something reliable, you probably want to go with a community-maintained library.
You can use the zbar library. Download it from:
http://sourceforge.net/projects/zbar/files/AndroidSDK/
I think it is faster and more accurate than ZXing.