How to use a reprocessCaptureRequest with camera2 API - java

I'm trying to update a camera project to Android N, and as a consequence I'm moving my old CameraCaptureSession to a ReprocessableCaptureSession. That works fine, and with this new feature I can use the CameraDevice.TEMPLATE_ZERO_SHUTTER_LAG template on my device and reprocess frames with a reprocess CaptureRequest.
Here is where my problem appears: I can't find any example, and I don't really understand the sparse documentation about how to use a reprocess CaptureRequest:
Each reprocess CaptureRequest processes one buffer from CameraCaptureSession's input Surface to all output Surfaces included in the reprocess capture request. The reprocess input images must be generated from one or multiple output images captured from the same camera device. The application can provide input images to camera device via queueInputImage(Image). The application must use the capture result of one of those output images to create a reprocess capture request so that the camera device can use the information to achieve optimal reprocess image quality. For camera devices that support only 1 output Surface, submitting a reprocess CaptureRequest with multiple output targets will result in a CaptureFailure.
I had a look at the camera CTS tests in the Google sources, but they do the same thing I do: use multiple ImageReaders, save the TotalCaptureResult of the pictures in a LinkedBlockingQueue&lt;TotalCaptureResult&gt;, and later just call:
TotalCaptureResult totalCaptureResult = state.captureCallback.getTotalCaptureResult();
CaptureRequest.Builder reprocessCaptureRequest = cameraStore.state().cameraDevice.createReprocessCaptureRequest(totalCaptureResult);
reprocessCaptureRequest.addTarget(state.yuvImageReader.getSurface());
sessionStore.state().session.capture(reprocessCaptureRequest.build(), null, this.handlers.bg());
But it always throws a RuntimeException:
java.lang.RuntimeException: Capture failed: Reason 0 in frame 170,
I just want to know what the right way to work with a ReprocessableCaptureSession is, because I have already tried everything and I don't know what I'm doing wrong.

I finally found the solution to make my reprocessable capture session work.
I use a Flux architecture, so don't be confused when you see Dispatcher.dispatch(action); just treat it as a callback. Here is my code.
First, how the session is created:
//Configure preview surface
Size previewSize = previewState.previewSize;
previewState.previewTexture.setDefaultBufferSize(previewSize.getWidth(), previewSize.getHeight());

ArrayList<Surface> targets = new ArrayList<>();
for (SessionOutputTarget outputTarget : state.outputTargets) {
    Surface surface = outputTarget.getSurface();
    if (surface != null) targets.add(surface);
}
targets.add(previewState.previewSurface);

CameraCharacteristics cameraCharacteristics = cameraStore.state().availableCameras.get(cameraStore.state().selectedCamera);
Size size = CameraCharacteristicsUtil.getYuvOutputSizes(cameraCharacteristics).get(0);
InputConfiguration inputConfiguration = new InputConfiguration(size.getWidth(),
        size.getHeight(), ImageFormat.YUV_420_888);

CameraCaptureSession.StateCallback sessionStateCallback = new CameraCaptureSession.StateCallback() {
    @Override
    public void onConfigured(@NonNull CameraCaptureSession session) {
        if (sessionId != currentSessionId) {
            Timber.e("Session opened for an old open request, skipping. Current %d, Request %d", currentSessionId, sessionId);
            //performClose(session);
            return;
        }
        try {
            session.getInputSurface();
            //This call is irrelevant; however, the session might have closed and this
            //will throw an IllegalStateException. This happens if another camera app
            //(or this one in another PID) takes control of the camera while it is opening.
        } catch (IllegalStateException e) {
            Timber.e("Another process took control of the camera while creating the session, aborting!");
        }
        Dispatcher.dispatchOnUi(new SessionOpenedAction(session));
    }

    @Override
    public void onConfigureFailed(@NonNull CameraCaptureSession session) {
        if (sessionId != currentSessionId) {
            Timber.e("Configure failed for an old open request, skipping. Current %d, request %d", currentSessionId, sessionId);
            return;
        }
        Timber.e("Failed to configure the session");
        Dispatcher.dispatchOnUi(new SessionFailedAction(session, new IllegalStateException("onConfigureFailed")));
    }
};

try {
    if (state.outputMode == OutputMode.PHOTO) {
        cameraState.cameraDevice.createReprocessableCaptureSession(inputConfiguration, targets, sessionStateCallback, handlers.bg());
    } else if (state.outputMode == OutputMode.VIDEO) {
        cameraState.cameraDevice.createCaptureSession(targets, sessionStateCallback, handlers.bg());
    }
} catch (IllegalStateException | IllegalArgumentException e) {
    Timber.e(e, "Something went wrong trying to start the session");
} catch (CameraAccessException e) {
    //The camera will throw a CameraAccessException if we try to open / close the
    //session very quickly.
    Timber.e("Failed to access camera, it was closed");
}
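As an aside, the YUV, JPEG and RAW output targets collected into targets above come from ImageReader instances that are not shown in this answer. A minimal sketch of how they might be created follows; the reader names, sizes and buffer counts here are my assumptions, not the project's actual code:
//Hypothetical creation of the readers behind state.outputTargets.
//MAX_REPROCESS_IMAGES is the same constant later used for the ImageWriter.
StreamConfigurationMap map =
        cameraCharacteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size yuvSize = CameraCharacteristicsUtil.getYuvOutputSizes(cameraCharacteristics).get(0);
Size jpegSize = map.getOutputSizes(ImageFormat.JPEG)[0];

ImageReader yuvImageReader = ImageReader.newInstance(
        yuvSize.getWidth(), yuvSize.getHeight(), ImageFormat.YUV_420_888, MAX_REPROCESS_IMAGES);
ImageReader jpegImageReader = ImageReader.newInstance(
        jpegSize.getWidth(), jpegSize.getHeight(), ImageFormat.JPEG, 2);
//A RAW reader would be created the same way with ImageFormat.RAW_SENSOR and a RAW_SENSOR size.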
The photo session has been created with 4 surfaces (preview, YUV (input), JPEG and RAW). After that, I configure my ImageWriter:
Dispatcher.subscribe(Dispatcher.VERY_HIGH_PRIORITY, SessionOpenedAction.class)
.filter(a -> isInPhotoMode())
.subscribe(action -> {
PhotoState newState = new PhotoState(state());
newState.zslImageWriter = ImageWriter.newInstance(action.session.getInputSurface(), MAX_REPROCESS_IMAGES);
setState(newState);
});
OK, now we have the ImageWriter and the session created. Now we start streaming with the repeating request:
CaptureRequest.Builder captureRequestBuilder =
cameraStore.state().cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_ZERO_SHUTTER_LAG);
captureRequestBuilder.addTarget(previewStore.state().previewSurface);
captureRequestBuilder.addTarget(photoStore.state().yuvImageReader.getSurface());
state.session.setRepeatingRequest(captureRequestBuilder.build(), state.zslCaptureCallback, handlers.bg());
To avoid adding a lot of code, let's just say that zslCaptureCallback is a custom callback which saves the last X TotalCaptureResults in a LinkedBlockingQueue&lt;TotalCaptureResult&gt;. I do the same with the yuvImageReader (the input one), saving the last X images in a queue. A rough sketch of both is shown below.
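Something like the following would match how they are used in the take-photo method; the class names and the eviction policy are my assumptions, only getCaptureResult(timestamp), drain() and getImage() mirror the actual usage:
//Sketch of the ZSL capture callback: keeps the last MAX_REPROCESS_IMAGES results
//and lets the take-photo code look one up by sensor timestamp.
class ZslCaptureCallback extends CameraCaptureSession.CaptureCallback {
    private final LinkedBlockingQueue<TotalCaptureResult> results =
            new LinkedBlockingQueue<>(MAX_REPROCESS_IMAGES);

    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                   @NonNull CaptureRequest request,
                                   @NonNull TotalCaptureResult result) {
        if (results.remainingCapacity() == 0) results.poll(); //drop the oldest result
        results.offer(result);
    }

    TotalCaptureResult getCaptureResult(long timestamp) throws InterruptedException {
        for (TotalCaptureResult result : results) {
            Long resultTimestamp = result.get(CaptureResult.SENSOR_TIMESTAMP);
            if (resultTimestamp != null && resultTimestamp == timestamp) return result;
        }
        return results.take(); //no exact match: fall back to the oldest queued result
    }

    void drain() {
        results.clear();
    }
}

//Sketch of the yuvImageReader listener: keeps the last MAX_REPROCESS_IMAGES input images,
//closing evicted ones so the reader does not run out of buffers.
class ZslImageReaderListener implements ImageReader.OnImageAvailableListener {
    private final LinkedBlockingQueue<Image> images =
            new LinkedBlockingQueue<>(MAX_REPROCESS_IMAGES);

    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireNextImage();
        if (image == null) return;
        if (images.remainingCapacity() == 0) {
            Image oldest = images.poll();
            if (oldest != null) oldest.close();
        }
        images.offer(image);
    }

    Image getImage() throws InterruptedException {
        return images.take();
    }
}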
Finally here is my "take photo" method:
try {
    //Retrieve the last image stored by the zslImageReaderListener
    Image image = zslImageReaderListener.getImage();
    //Retrieve the matching TotalCaptureResult from the zslCaptureCallback and create a reprocess capture request with it
    TotalCaptureResult captureResult = sessionStore.state().zslCaptureCallback.getCaptureResult(image.getTimestamp());
    CaptureRequest.Builder captureRequest = cameraStore.state().cameraDevice.createReprocessCaptureRequest(captureResult);
    //Add the desired target and values to the captureRequest
    captureRequest.addTarget(state().jpegImageReader.getSurface());
    //Queue the image back to the ImageWriter so the camera can consume it as the reprocess input
    state.zslImageWriter.queueInputImage(image);
    //Drain all the unused, queued CaptureResults from the CaptureCallback
    sessionStore.state().zslCaptureCallback.drain();
    //Capture the desired frame
    CaptureRequest futureCaptureResult = captureRequest.build();
    sessionStore.state().session.capture(futureCaptureResult, new CameraCaptureSession.CaptureCallback() {
        @Override
        public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                       @NonNull CaptureRequest request,
                                       @NonNull TotalCaptureResult result) {
            Dispatcher.dispatchOnUi(new PhotoStatusChangedAction(PhotoState.Status.SUCCESS));
        }

        @Override
        public void onCaptureFailed(@NonNull CameraCaptureSession session,
                                    @NonNull CaptureRequest request,
                                    @NonNull CaptureFailure failure) {
            super.onCaptureFailed(session, request, failure);
            Exception captureFailedException = new RuntimeException(
                    String.format("Capture failed: Reason %s in frame %d, was image captured? -> %s",
                            failure.getReason(),
                            failure.getFrameNumber(),
                            failure.wasImageCaptured()));
            Timber.e(captureFailedException, "Cannot take mediaType, capture failed!");
            Dispatcher.dispatchOnUi(new PhotoStatusChangedAction(PhotoState.Status.ERROR, captureFailedException));
        }
    }, this.handlers.bg());
    //Capture did not blow up, we are taking the photo now.
    newState.status = PhotoState.Status.TAKING;
} catch (CameraAccessException | InterruptedException | IllegalStateException | IllegalArgumentException | SecurityException e) {
    Timber.e(e, "Cannot take picture, capture error!");
    newState.status = PhotoState.Status.ERROR;
}
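For completeness, the reprocessed JPEG arrives on the jpegImageReader. A minimal sketch of a listener that pulls the bytes out of it might look like this; PhotoBytesAvailableAction is a made-up name for illustration, the real project dispatches its own actions:
state().jpegImageReader.setOnImageAvailableListener(reader -> {
    try (Image image = reader.acquireNextImage()) {
        if (image == null) return;
        ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        byte[] jpegBytes = new byte[buffer.remaining()];
        buffer.get(jpegBytes);
        //Hand the JPEG bytes to whatever persists or displays the photo.
        Dispatcher.dispatchOnUi(new PhotoBytesAvailableAction(jpegBytes));
    }
}, handlers.bg());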

Related

Keep receiving this error "Failed to initialize detector". Am I not loading the tflite model correctly?

I am trying to setup an ImageAnalyzer with my Android app so I can run object classification using Google's ML Kit API. The issue I am currently facing, as the title suggests, is constantly seeing the error "Failed to initialize detector".
I've reread this tutorial about three times now and followed this post about someone facing the same error (although for a different reason) to no avail. I've also made sure everything with the CameraX API (except the ImageAnalyzer code that I will show in a second) works as expected.
As mentioned in the ML Kit documentation, here is the code I have regarding setting up a LocalModel, a CustomObjectDetectorOptions, and an ObjectDetector:
LocalModel localModel = new LocalModel.Builder()
.setAssetFilePath("mobilenet_v1_1.0_224_quantized_1_metadata_1.tflite")
.build();
CustomObjectDetectorOptions customObjectDetectorOptions =
new CustomObjectDetectorOptions.Builder(localModel)
.setDetectorMode(CustomObjectDetectorOptions.STREAM_MODE)
.enableClassification()
.setClassificationConfidenceThreshold(0.5f)
.setMaxPerObjectLabelCount(3)
.build();
ObjectDetector objectDetector = ObjectDetection.getClient(customObjectDetectorOptions);
Here is the ImageAnalyzer code I have, which basically makes a call to the ML Kit API by way of the processImage helper method:
// Creates an ImageAnalysis for analyzing the camera preview feed
ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
.setTargetResolution(new Size(224, 224))
.setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
.build();
imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this),
new ImageAnalysis.Analyzer() {
@Override
public void analyze(@NonNull ImageProxy imageProxy) {
@SuppressLint("UnsafeExperimentalUsageError") Image mediaImage =
imageProxy.getImage();
if (mediaImage != null) {
Log.i(TAG, "Obtained ImageProxy object");
processImage(mediaImage, imageProxy)
.addOnCompleteListener(new OnCompleteListener<List<DetectedObject>>() {
@Override
public void onComplete(@NonNull Task<List<DetectedObject>> task) {
imageProxy.close();
}
});
}
}
});
Here is the processImage helper method, where I actually call objectDetector.process(...), the line of code that actually runs the tflite model.
private Task<List<DetectedObject>> processImage(Image mediaImage, ImageProxy imageProxy) {
InputImage image =
InputImage.fromMediaImage(mediaImage,
imageProxy.getImageInfo().getRotationDegrees());
return objectDetector.process(image)
.addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception e) {
String error = "Failed to process. Error: " + e.getMessage();
Log.i(TAG, error);
}
})
.addOnSuccessListener(new OnSuccessListener<List<DetectedObject>>() {
@Override
public void onSuccess(List<DetectedObject> results) {
String success = "Object(s) detected successfully!";
Log.i(TAG, success);
for (DetectedObject detectedObject : results) {
Rect boundingBox = detectedObject.getBoundingBox();
Integer trackingId = detectedObject.getTrackingId();
for (DetectedObject.Label label : detectedObject.getLabels()) {
String text = label.getText();
int index = label.getIndex();
float confidence = label.getConfidence();
Log.i(TAG, "Object detected: " + text + "; "
+ "Confidence: " + confidence);
}
}
}
});
}
Essentially, once I run the app, logcat just keeps logging these two lines on repeat. I know it means the ImageAnalyzer is continuously trying to analyze the image input, but for some reason the LocalModel just cannot process it:
2021-01-21 22:02:24.020 9328-9328/com.example.camerax I/MainActivity: Obtained ImageProxy object
2021-01-21 22:02:24.036 9328-9328/com.example.camerax I/MainActivity: Failed to process. Error: Failed to initialize detector.
I have only just started to work with Android, especially ML in Android, so any sort of help would be appreciated!
I managed to fix my issue before anyone answered, but in case it helps anyone else who has just started learning Android like me, I'll leave my solution here.
Basically, remember to create the assets folder in the /src/main directory (i.e. app/src/main/assets/, so that setAssetFilePath() can find the model) rather than in the /src/androidTest directory :P
Once I did that, the model loaded correctly and now I just have to figure out how to display the results in my application.
Also, add this block to the app-level build.gradle, under the android tag, so the .tflite model file is not compressed:
// Do NOT compress tflite model files (need to call this out to developers!)
aaptOptions {
    noCompress "tflite"
}

My List<FirebaseVisionFace> faces is always empty

I am implementing the MLKit face detection library in a simple application. The application is a facial monitoring system, so I am setting up a preview feed from the front camera and attempting to detect a face. I am using the camera2 API. In my ImageReader.OnImageAvailableListener, I want to run the Firebase face detection on each image that is read in. After creating my FirebaseVisionImage and running the FirebaseVisionFaceDetector, I get an empty faces list; it should contain the detected faces, but I always get a list of size 0 even though a face is in the image.
I have tried other ways of creating my FirebaseVisionImage. Currently, I am creating it from a byte array which I build following the ML Kit docs. I have also tried to create a FirebaseVisionImage using the media Image object.
private final ImageReader.OnImageAvailableListener onPreviewImageAvailableListener = new ImageReader.OnImageAvailableListener() {
/**Get Image convert to Byte Array **/
@Override
public void onImageAvailable(ImageReader reader) {
//Get latest image
Image mImage = reader.acquireNextImage();
if(mImage == null){
return;
}
else {
byte[] newImg = convertYUV420888ToNV21(mImage);
FirebaseApp.initializeApp(MonitoringFeedActivity.this);
FirebaseVisionFaceDetectorOptions highAccuracyOpts =
new FirebaseVisionFaceDetectorOptions.Builder()
.setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
.setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
.setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
.build();
int rotation = getRotationCompensation(frontCameraId,MonitoringFeedActivity.this, getApplicationContext() );
FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
.setWidth(480) // 480x360 is typically sufficient for
.setHeight(360) // image recognition
.setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
.setRotation(rotation)
.build();
FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(newImg, metadata);
FirebaseVisionFaceDetector detector = FirebaseVision.getInstance()
.getVisionFaceDetector(highAccuracyOpts);
Task<List<FirebaseVisionFace>> result =
detector.detectInImage(image)
.addOnSuccessListener(
new OnSuccessListener<List<FirebaseVisionFace>>() {
@Override
public void onSuccess(List<FirebaseVisionFace> faces) {
// Task completed successfully
if (faces.size() != 0) {
Log.i(TAG, String.valueOf(faces.get(0).getSmilingProbability()));
}
}
})
.addOnFailureListener(
new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception e) {
// Task failed with an exception
// ...
}
});
mImage.close();
The aim is to have the resulting faces list contain the detected faces in each processed image.
byte[] newImg = convertYUV420888ToNV21(mImage);
FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(newImg, metadata);
These two lines are important: make sure the conversion is producing a proper VisionImage. A rough sketch of such a conversion is shown below.
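For reference, a commonly used minimal version of that helper looks like the following. This is only a sketch of what convertYUV420888ToNV21 might do: it assumes the U/V planes are stored interleaved (pixel stride 2) with no row padding, which is the common case; a robust version must check getPixelStride() and getRowStride() for each plane.
private static byte[] convertYUV420888ToNV21(Image image) {
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();  //Y plane
    ByteBuffer vuBuffer = image.getPlanes()[2].getBuffer(); //V plane; already VUVU... when interleaved
    int ySize = yBuffer.remaining();
    int vuSize = vuBuffer.remaining();
    byte[] nv21 = new byte[ySize + vuSize];
    yBuffer.get(nv21, 0, ySize);
    vuBuffer.get(nv21, ySize, vuSize);
    return nv21;
}
It is also worth checking that the width and height set on the FirebaseVisionImageMetadata match the actual Image dimensions rather than a hard-coded 480x360, otherwise the detector is handed a distorted frame.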
Check out my project for the full functionality: MLKIT demo

Speeding up image capture from camera2

I'm working on an Android app, which takes a picture via camera2 api. The problem I'm facing is a huge delay between hitting the "Take picture" button and the actual image capture - somewhere around ~1800-2000ms, which I personally think isn't acceptable.
I'd really appreciate if someone could point out the way to improve that. For what it's worth, I later process the JPEG output image into a bitmap, if that makes a difference.
The "picture-taking" class is displayed below.
protected void takePicture() {
if (null == cameraDevice) {
Log.e(TAG, "cameraDevice is null");
return;
}
debugTime(1,"");
try {
Log.e(TAG, "Taking a picture");
int width, height;
Size trySize = defineSize();
width = trySize.getWidth();
height = trySize.getHeight();
debugTime(3,"Defining sizes took ");
ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 1);
List<Surface> outputSurfaces = new ArrayList<>(2);
outputSurfaces.add(reader.getSurface());
outputSurfaces.add(new Surface(textureView.getSurfaceTexture()));
debugTime(3,"Defining output surfaces took ");
final CaptureRequest.Builder captureBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
captureBuilder.addTarget(reader.getSurface());
captureBuilder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO);
debugTime(3,"Creating capture request took ");
// Orientation
int rotation = getWindowManager().getDefaultDisplay().getRotation();
captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, ORIENTATIONS.get(rotation));
debugTime(3,"Defining rotation took ");
final File file = new File(Environment.getExternalStorageDirectory() + "/pic.jpg");
debugTime(3,"Creating new file took ");
ImageReader.OnImageAvailableListener readerListener = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
Log.e(TAG, "Running method onImageAvailable");
Image image = null;
try {
debugTime(3,"Image becoming available took ");
image = reader.acquireLatestImage();
debugTime(2,"");
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
bytes = new byte[buffer.capacity()];
buffer.get(bytes); //My final output
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} finally {
if (image != null) {
image.close();
}
}
}
};
reader.setOnImageAvailableListener(readerListener, mBackgroundHandler);
final CameraCaptureSession.CaptureCallback captureListener = new CameraCaptureSession.CaptureCallback() {
@Override
public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request, TotalCaptureResult result) {
super.onCaptureCompleted(session, request, result);
Log.e(TAG, "Invoking running method sendToScan");
sendToScan();
}
};
cameraDevice.createCaptureSession(outputSurfaces, new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(CameraCaptureSession session) {
Log.e(TAG, "Running method onConfigured");
try {
session.capture(captureBuilder.build(), captureListener, mBackgroundHandler);
} catch (CameraAccessException e) {
e.printStackTrace();
}
debugTime(3,"Configuring capture session took ");
}
@Override
public void onConfigureFailed(CameraCaptureSession session) {
}
}, mBackgroundHandler);
//stopBackgroundThread();
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
And here's the log
E/CameraActivity: Taking a picture
E/CameraActivity: Defining Size
E/CameraActivity: CHOOSING at size 720x1280
E/DEBUG_TIME: Defining sizes took 25 ms
E/DEBUG_TIME: Defining output surfaces took 20 ms
E/DEBUG_TIME: Creating capture request took 4 ms
E/DEBUG_TIME: Defining rotation took 1 ms
E/DEBUG_TIME: Creating new file took 5 ms
I/RequestQueue: Repeating capture request cancelled.
I/CameraDeviceState: Legacy camera service transitioning to state IDLE
I/CameraDeviceState: Legacy camera service transitioning to state CONFIGURING
I/RequestThread-0: Configure outputs: 2 surfaces configured.
D/Camera: app passed NULL surface
I/RequestThread-0: configureOutputs - set take picture size to 1280x720
I/CameraDeviceState: Legacy camera service transitioning to state IDLE
E/CameraActivity: Running method onConfigured
I/Choreographer: Skipped 38 frames! The application may be doing too much work on its main thread.
W/art: Long monitor contention event with owner method=void java.lang.Object.wait!() from Object.java:4294967294 waiters=0 for 563ms
W/LegacyRequestMapper: convertRequestMetadata - control.awbRegions setting is not supported, ignoring value
W/LegacyRequestMapper: Only received metering rectangles with weight 0.
E/DEBUG_TIME: Configuring capture session took 586 ms
E/BufferQueueProducer: [SurfaceTexture-0-17825-3] dequeueBuffer: min undequeued buffer count (2) exceeded (dequeued=13 undequeued=0)
E/BufferQueueProducer: [SurfaceTexture-0-17825-3] dequeueBuffer: min undequeued buffer count (2) exceeded (dequeued=12 undequeued=1)
I/CameraDeviceState: Legacy camera service transitioning to state CAPTURING
I/RequestThread-0: Received jpeg.
I/RequestThread-0: Producing jpeg buffer...
E/CameraActivity: Running method onImageAvailable
E/DEBUG_TIME: Image becoming available took 1110 ms
D/ImageReader_JNI: ImageReader_lockedImageSetup: Receiving JPEG in HAL_PIXEL_FORMAT_RGBA_8888 buffer.
W/ImageReader_JNI: Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
E/DEBUG_TIME: Taking picture took 1752 ms in TOTAL
According to it, it takes ~500-600 ms to configure the capture session (!), and what's even worse, it takes almost ~1100-1200 ms for the image to become available in the ImageReader!
There is clearly something wrong, and I can't figure it out. I'd be glad for any assistance.
P.S. For the record, I've made an attempt to save the result in YUV_420_888 format, but that only sped things up to ~1000 ms in total.

Parse: Duplicate records for a single saveInBackground call

I have a single incident where a complete duplicate of an entry was made in the database (the same user comment appeared twice). They had different object IDs but were otherwise exactly the same. The posting was slower than usual to finish and it only happened once out of dozens of comments, so I want to say it was a Parse issue during the saveInBackground call. Even so, I expect a service like Parse to be a little more robust. Since this is my first time working with Android, though, I also can't be sure nothing is wrong on my end. Any help? Also, any criticisms? This is the method called when the user hits the comment submission button:
private void submitComment() {
String text = commentText.getText().toString().trim();
Intent intent = getIntent();
String ID = intent.getStringExtra("imageID");
String parentID = intent.getStringExtra("parent");
// Set up a progress dialog
final ProgressDialog loadingDialog = new ProgressDialog(CommentSubmitActivity.this);
loadingDialog.setMessage(getString(R.string.publishing_comment));
loadingDialog.show();
Comment comment = new Comment();
comment.setText(text);
comment.setUser((ParseUser.getCurrentUser()));
if (ID.equals("#child")) {
comment.setParent(parentID);
comment.setImage("#child");
ParseQuery<ParseObject> query = ParseQuery.getQuery("Comment");
query.getInBackground(parentID, new GetCallback<ParseObject>() {
public void done(ParseObject parentComment, ParseException e) {
if (e == null) {
int numChild = parentComment.getInt("numChild");
parentComment.put("numChild", ++numChild);
parentComment.saveInBackground();
} else {
Log.d("numChild: ", "error");
}
}
});
} else {
comment.setImage(ID);
comment.put("numChild", 0);
ParseQuery<ParseObject> query = ParseQuery.getQuery("ImageUpload");
query.getInBackground(ID, new GetCallback<ParseObject>() {
public void done(ParseObject image, ParseException e) {
if (e == null) {
int numComments = image.getInt("numComments");
image.put("numComments", ++numComments);
image.saveInBackground();
} else {
Log.d("numComments: ", "error");
}
}
});
}
comment.saveInBackground(new SaveCallback() {
@Override
public void done(ParseException e) {
if (e == null) {
loadingDialog.dismiss();
finish();
}
}
});
}
I encountered a problem similar to yours.
I created an app where the user can create an account and add a photo and a list of objects (friends, in my case) to it.
Once, when I was testing it, the user was created twice.
I went through my code, and my suspicion is that it is connected with the async calls.
I see that you use the asynchronous Parse API in your application, so no piece of code waits for the response and blocks the rest of the operations.
You cannot control when the Parse server will respond.
What I did was put all the synchronous requests in my own async code (an AsyncTask in Android); see the sketch below.
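For example, a rough sketch of that idea applied to the code from the question might look like this (not the exact code I used; it only covers the child-comment branch, and increment() is used for brevity):
//Run the blocking save() calls on one background thread instead of firing
//several saveInBackground()/getInBackground() calls that may interleave.
new AsyncTask<Void, Void, ParseException>() {
    @Override
    protected ParseException doInBackground(Void... params) {
        try {
            comment.save(); //synchronous save, runs on the background thread
            ParseObject parentComment = ParseQuery.getQuery("Comment").get(parentID);
            parentComment.increment("numChild");
            parentComment.save();
            return null;
        } catch (ParseException e) {
            return e;
        }
    }

    @Override
    protected void onPostExecute(ParseException e) {
        loadingDialog.dismiss();
        if (e == null) finish();
    }
}.execute();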
I hope my answer somehow meets your expectations.

New Facebook SDK + Android: add an image to the Facebook wall

I cannot find a working way to post a picture to the wall.
Here is my code, which does not work:
Bundle postParams = new Bundle();
postParams.putByteArray("image", byteArray);
postParams.putString("message", "A wall picture");
Session session = Session.getActiveSession();
if (session != null) {
Log.e("Session", "don t null");
Request request = new Request(session, "me/feed", postParams,
HttpMethod.POST);
RequestAsyncTask task = new RequestAsyncTask(request);
task.execute();
I've never sent pictures from the phone, only from a server using an image link. I'll post this code in case it helps you, since most of the logic is similar.
final Bundle parameters = new Bundle();
parameters.putString("name", getString(R.string.app_name));
parameters.putString("caption", "haha");
parameters.putString("link", "www.google.com");
parameters.putByteArray("picture", byteArray);//I took this one from your code. My key is "picture" instead of "image"
Session.openActiveSession(this, true, new StatusCallback() {
@Override
public void call(Session session, SessionState state, Exception exception) {
if (session.isOpened()) {
new FeedDialogBuilder(EndGameActivity.this, session, parameters).build().show();
//you can try this one instead of the one above if you want, but both work well
//Request.newMeRequest(session, new Request.GraphUserCallback() {
//
// @Override
// public void onCompleted(GraphUser user, Response response) {
// final Session session = Session.getActiveSession();
// new FeedDialogBuilder(EndGameActivity.this, session, parameters).build().show();
// }
//}).executeAsync();
}
}
});
This code will only work with the latest Facebook SDK, 3.5, since Request.newMeRequest was recently introduced and should be used instead of Request.executeMeRequestAsync, which has been deprecated.
Also notice that the key I use is "picture" instead of "image". Maybe that's the problem with your code; a sketch of that change applied to your snippet is below.
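An untested sketch of the question's request with only the key name changed (everything else kept as in the question):
Bundle postParams = new Bundle();
postParams.putByteArray("picture", byteArray); //"picture" instead of "image"
postParams.putString("message", "A wall picture");
Request request = new Request(session, "me/feed", postParams, HttpMethod.POST);
new RequestAsyncTask(request).execute();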
But I do it inside an onClick event, when the user touches a button. Why do you need it in your onCreate method?
