Due to the recent changes in the Google Play Games Services API, I'm forced to replace all the deprecated code in my Android app. I'm following the Google guide at https://developers.google.com/games/services/android/savedgames and it's not clear to me how to pass the snapshot to this function that writes the data to be saved:
private Task<SnapshotMetadata> writeSnapshot(Snapshot snapshot, byte[] data, Bitmap coverImage, String desc) {
    // Set the data payload for the snapshot
    snapshot.getSnapshotContents().writeBytes(data);

    // Create the change operation
    SnapshotMetadataChange metadataChange = new SnapshotMetadataChange.Builder()
            .setCoverImage(coverImage)
            .setDescription(desc)
            .build();

    SnapshotsClient snapshotsClient =
            Games.getSnapshotsClient(this, GoogleSignIn.getLastSignedInAccount(this));

    // Commit the operation
    return snapshotsClient.commitAndClose(snapshot, metadataChange);
}
Can you help me? I think an example of how to use this function should be added to the documentation, to make everything clearer and to help developers who need to learn this from scratch.
OK, I figured out how to do it. Basically, when you open the snapshots client, you must use continueWith and obtain the Snapshot from the task.
Assuming you have a proper cover image and description, and a signed-in Google account obtained with
mAccount = GoogleSignIn.getLastSignedInAccount(activity);
this is the code:
SnapshotsClient snapshotsClient = Games.getSnapshotsClient(activity, mAccount);
int conflictResolutionPolicy = SnapshotsClient.RESOLUTION_POLICY_MOST_RECENTLY_MODIFIED;

snapshotsClient.open(getSaveFileName(), true, conflictResolutionPolicy)
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                Log.e(TAG, "Error", e);
            }
        })
        .continueWith(new Continuation<SnapshotsClient.DataOrConflict<Snapshot>, byte[]>() {
            @Override
            public byte[] then(@NonNull Task<SnapshotsClient.DataOrConflict<Snapshot>> task)
                    throws Exception {
                Snapshot snapshot = task.getResult().getData();
                snapshot.getSnapshotContents().writeBytes(getSaveGameData());

                SnapshotMetadataChange metadataChange = new SnapshotMetadataChange.Builder()
                        .setCoverImage(coverImage)
                        .setDescription(desc)
                        .build();

                SnapshotsClient snapshotsClient = Games.getSnapshotsClient(activity, mAccount);
                snapshotsClient.commitAndClose(snapshot, metadataChange);
                return null;
            }
        });
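For completeness, loading works with the same open-and-continue pattern. Here is a minimal sketch, reusing getSaveFileName() and conflictResolutionPolicy from above; it relies on SnapshotContents.readFully(), which reads the entire snapshot contents into a byte array:

snapshotsClient.open(getSaveFileName(), true, conflictResolutionPolicy)
        .continueWith(new Continuation<SnapshotsClient.DataOrConflict<Snapshot>, byte[]>() {
            @Override
            public byte[] then(@NonNull Task<SnapshotsClient.DataOrConflict<Snapshot>> task)
                    throws Exception {
                Snapshot snapshot = task.getResult().getData();
                // readFully() returns the saved bytes; it can throw IOException
                return snapshot.getSnapshotContents().readFully();
            }
        });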
Related
How to attach a PDF file to a POST API with two parameters? I am using the Fast Android Networking library.
I am able to call the API when the user touches a button, but my API has three parameters, like this:
message = "Test"
receiver_Email = "@gmail.com"
File = text.PDF;
My API only accepts a PDF file together with the message and email. I am using the Fast Android Networking library. I tried to call the API, but I am not able to do it.
I also looked at some examples, but they couldn't help me out.
Just call this method from your onCreate. This is the easy way to call an API with a file and parameters. I hope it helps you.
// Method used to call the API to send an email with a PDF attachment
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP_MR1)
public void call_Api() {
    final String key = "file";
    final File file = new File(Environment.getExternalStorageDirectory().getAbsolutePath(), "test.pdf");

    AndroidNetworking.initialize(this);
    AndroidNetworking.upload("your API")
            .setPriority(Priority.HIGH)
            .addMultipartParameter("message", "test")
            .addMultipartParameter("receiverEmail", "testrg0017@gmail.com")
            .addMultipartFile(key, file)
            .build()
            .getAsJSONObject(new JSONObjectRequestListener() {
                @Override
                public void onResponse(JSONObject response) {
                    Log.d("res ", response.toString());
                    if (file.exists()) {
                        Toast.makeText(PdfGeneration.this, "API call successful",
                                Toast.LENGTH_SHORT).show();
                    }
                }

                @Override
                public void onError(ANError anError) {
                    anError.printStackTrace();
                    Log.d("res12 ", anError.toString());
                    if (!file.exists()) {
                        Toast.makeText(PdfGeneration.this, "file not available",
                                Toast.LENGTH_SHORT).show();
                    }
                }
            });
}
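For reference, a minimal way to wire this up from onCreate, as suggested above (the layout name is hypothetical; the version guard matches the @RequiresApi annotation on call_Api()):

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_pdf_generation); // hypothetical layout
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP_MR1) {
        call_Api();
    }
}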
I am trying to setup an ImageAnalyzer with my Android app so I can run object classification using Google's ML Kit API. The issue I am currently facing, as the title suggests, is constantly seeing the error "Failed to initialize detector".
I've reread this tutorial about three times now and followed this post about someone facing the same error (although for a different reason) to no avail. I've also made sure everything with the CameraX API (except the ImageAnalyzer code that I will show in a second) works as expected.
As mentioned in the ML Kit documentation, here is the code I have regarding setting up a LocalModel, a CustomObjectDetectorOptions, and an ObjectDetector:
LocalModel localModel = new LocalModel.Builder()
        .setAssetFilePath("mobilenet_v1_1.0_224_quantized_1_metadata_1.tflite")
        .build();

CustomObjectDetectorOptions customObjectDetectorOptions =
        new CustomObjectDetectorOptions.Builder(localModel)
                .setDetectorMode(CustomObjectDetectorOptions.STREAM_MODE)
                .enableClassification()
                .setClassificationConfidenceThreshold(0.5f)
                .setMaxPerObjectLabelCount(3)
                .build();

ObjectDetector objectDetector = ObjectDetection.getClient(customObjectDetectorOptions);
Here is the ImageAnalyzer code I have, which basically makes a call to the ML Kit API by way of the processImage helper method:
// Creates an ImageAnalysis for analyzing the camera preview feed
ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
        .setTargetResolution(new Size(224, 224))
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build();

imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this),
        new ImageAnalysis.Analyzer() {
            @Override
            public void analyze(@NonNull ImageProxy imageProxy) {
                @SuppressLint("UnsafeExperimentalUsageError") Image mediaImage =
                        imageProxy.getImage();
                if (mediaImage != null) {
                    Log.i(TAG, "Obtained ImageProxy object");
                    processImage(mediaImage, imageProxy)
                            .addOnCompleteListener(new OnCompleteListener<List<DetectedObject>>() {
                                @Override
                                public void onComplete(@NonNull Task<List<DetectedObject>> task) {
                                    imageProxy.close();
                                }
                            });
                }
            }
        });
Here is the processImage helper method, where I call objectDetector.process(...), the line of code that actually runs the tflite model.
private Task<List<DetectedObject>> processImage(Image mediaImage, ImageProxy imageProxy) {
    InputImage image =
            InputImage.fromMediaImage(mediaImage,
                    imageProxy.getImageInfo().getRotationDegrees());

    return objectDetector.process(image)
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(@NonNull Exception e) {
                    String error = "Failed to process. Error: " + e.getMessage();
                    Log.i(TAG, error);
                }
            })
            .addOnSuccessListener(new OnSuccessListener<List<DetectedObject>>() {
                @Override
                public void onSuccess(List<DetectedObject> results) {
                    String success = "Object(s) detected successfully!";
                    Log.i(TAG, success);
                    for (DetectedObject detectedObject : results) {
                        Rect boundingBox = detectedObject.getBoundingBox();
                        Integer trackingId = detectedObject.getTrackingId();
                        for (DetectedObject.Label label : detectedObject.getLabels()) {
                            String text = label.getText();
                            int index = label.getIndex();
                            float confidence = label.getConfidence();
                            Log.i(TAG, "Object detected: " + text + "; "
                                    + "Confidence: " + confidence);
                        }
                    }
                }
            });
}
Essentially, once I run the app, logcat just keeps logging these two lines on repeat. I know it means the ImageAnalyzer is continuously trying to analyze the image input, but for some reason the LocalModel just cannot process the input:
2021-01-21 22:02:24.020 9328-9328/com.example.camerax I/MainActivity: Obtained ImageProxy object
2021-01-21 22:02:24.036 9328-9328/com.example.camerax I/MainActivity: Failed to process. Error: Failed to initialize detector.
I have only just started to work with Android, especially ML in Android, so any sort of help would be appreciated!
I managed to fix my issue before anyone answered, but in case it helps anyone who has just started to learn Android like me, I'll leave my solution here.
Basically, remember to create the assets folder in the /src/main directory rather than the /src/androidTest directory :P
Once I did that, the model loaded correctly and now I just have to figure out how to display the results in my application.
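For reference, the model file should end up at a path like this (the module name app is an assumption; the file name is the one used above):

app/src/main/assets/mobilenet_v1_1.0_224_quantized_1_metadata_1.tflite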
Add this block to your app-level build.gradle, inside the android section:

aaptOptions {
    // Do NOT compress tflite model files (need to call this out to developers!)
    noCompress "tflite"
}
I'm using ML Kit cloud text recognition from Java, and it works perfectly for all languages except Gujarati.
I can't understand what's wrong. I also added the "gu" language to the recognition options, but it didn't matter.
What's wrong?
FirebaseVisionImage visionImage = FirebaseVisionImage.fromBitmap(myBitmap);

FirebaseVisionCloudTextRecognizerOptions options = new FirebaseVisionCloudTextRecognizerOptions.Builder()
        .setLanguageHints(Arrays.asList("gu"))
        .build();

FirebaseVisionTextRecognizer detector = FirebaseVision.getInstance()
        .getCloudTextRecognizer(options);

Task<FirebaseVisionText> result =
        detector.processImage(visionImage)
                .addOnSuccessListener(new OnSuccessListener<FirebaseVisionText>() {
                    @Override
                    public void onSuccess(FirebaseVisionText firebaseVisionText) {
                        Log.e("Recognition", "Text : " + firebaseVisionText.getText());
                    }
                })
                .addOnFailureListener(new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception e) {
                        Log.e(TAG, "Recognition failed : " + e.getMessage());
                    }
                });
I had communications with cloud support, and it turned out that the problem is on their side, and they are working on it.
Have you tried the SPARSE_MODEL without the language hint? It should automatically detect the language. There is a known internal issue with 'gu' hint for SPARSE_MODEL, and we are working on it.
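For reference, dropping the hint entirely would look like this (SPARSE_MODEL should be the default model type, so setting it explicitly is optional):

FirebaseVisionCloudTextRecognizerOptions options = new FirebaseVisionCloudTextRecognizerOptions.Builder()
        .setModelType(FirebaseVisionCloudTextRecognizerOptions.SPARSE_MODEL)
        .build();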
Alternatively, you could try to use DENSE_MODEL instead of SPARSE_MODEL with the language hint:
FirebaseVisionCloudTextRecognizerOptions options = new FirebaseVisionCloudTextRecognizerOptions.Builder()
        .setLanguageHints(Arrays.asList("gu"))
        .setModelType(FirebaseVisionCloudTextRecognizerOptions.DENSE_MODEL)
        .build();
I am attempting to integrate Android Pay into my application and I am following the tutorial provided by Google. However, I am stuck at the point where the IsReadyToPayRequest is executed:
IsReadyToPayRequest request =
        IsReadyToPayRequest.fromJson(getIsReadyToPayRequest().toString());

Task<Boolean> task = mPaymentsClient.isReadyToPay(request);
task.addOnCompleteListener(
        new OnCompleteListener<Boolean>() {
            @Override
            public void onComplete(@NonNull Task<Boolean> task) {
                try {
                    boolean result = task.getResult(ApiException.class);
                    if (result) {
                        // show Google Pay as a payment option
                    }
                } catch (ApiException e) {
                }
            }
        });
I am getting the error: cannot resolve method 'fromJson(java.lang.String)'.
I am using com.google.android.gms:play-services:12.0.1
Any help would be greatly appreciated.
The fromJson method is relatively new, as you can find here.
According to this, you need a newer library version, or you can use the old Builder if you want to stick to your current version.
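If you stay on 12.0.1, the old Builder route looks roughly like this (a sketch of the pre-fromJson API; the WalletConstants payment-method constants are from memory, so verify them against your version):

IsReadyToPayRequest request = IsReadyToPayRequest.newBuilder()
        .addAllowedPaymentMethod(WalletConstants.PAYMENT_METHOD_CARD)
        .addAllowedPaymentMethod(WalletConstants.PAYMENT_METHOD_TOKENIZED_CARD)
        .build();
Task<Boolean> task = mPaymentsClient.isReadyToPay(request);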
I am trying to retrieve data from an Azure Mobile Service in my Android application; I have been using the Mobile Service client to do so. Here is my code.
try {
    mClient = new MobileServiceClient(
            "URL", "Key",
            this);

    ListenableFuture<MyObject> result = mClient.invokeApi("CCOOutageHistoryData", "GET", null, MyObject.class);
    Futures.addCallback(result, new FutureCallback<MyObject>() {
        @Override
        public void onFailure(Throwable exc) {
            //createAndShowDialog((Exception) exc, "Error");
        }

        @Override
        public void onSuccess(MyObject result) {
            String Incdenti = result.getCount();
            //createAndShowDialog(result.getCount() + " item(s) marked as complete.", "Completed Items");
            //refreshItemsFromTable();
        }
    });
} catch (MalformedURLException e) {
    // the MobileServiceClient constructor can throw MalformedURLException
    e.printStackTrace();
}
However, I don't get any error, but when checking with a breakpoint it never reaches the onSuccess or onFailure methods within Futures.addCallback.
I am trying to retrieve the JSON data after invoking the Azure Mobile Service API. Please help.
Try with String[]
try {
    mClient = new MobileServiceClient(
            "URL", "Key",
            this);

    ListenableFuture<String[]> result = mClient.invokeApi("CCOOutageHistoryData", "GET", null, (new String[0]).getClass());
    Futures.addCallback(result, new FutureCallback<String[]>() {
        @Override
        public void onFailure(Throwable exc) {
            //createAndShowDialog((Exception) exc, "Error");
        }

        @Override
        public void onSuccess(String[] result) {
        }
    });
} catch (MalformedURLException e) {
    e.printStackTrace();
}
There are some samples below that may help you learn the custom API for Android with the .NET backend of Mobile Services.
For a test sample of the custom API in Android, see https://github.com/Azure/azure-mobile-apps-android-client/blob/master/e2etest/src/main/java/com/microsoft/windowsazure/mobileservices/zumoe2etestapp/tests/CustomApiTests.java
For a sample of the .NET backend of Mobile Services, see https://github.com/Azure-Samples/app-service-mobile-dotnet-backend-quickstart.
Hope it helps. Best Regards.