I'm working on an Android app that takes a picture via the camera2 API. The problem I'm facing is a huge delay between hitting the "Take picture" button and the actual image capture - somewhere around ~1800-2000 ms, which I personally think isn't acceptable.
I'd really appreciate it if someone could point out a way to improve that. For what it's worth, I later process the JPEG output image into a bitmap, if that makes a difference.
The picture-taking method is shown below.
protected void takePicture() {
if (null == cameraDevice) {
Log.e(TAG, "cameraDevice is null");
return;
}
debugTime(1,"");
try {
Log.e(TAG, "Taking a picture");
int width, height;
Size trySize = defineSize();
width = trySize.getWidth();
height = trySize.getHeight();
debugTime(3,"Defining sizes took ");
ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 1);
List<Surface> outputSurfaces = new ArrayList<>(2);
outputSurfaces.add(reader.getSurface());
outputSurfaces.add(new Surface(textureView.getSurfaceTexture()));
debugTime(3,"Defining output surfaces took ");
final CaptureRequest.Builder captureBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
captureBuilder.addTarget(reader.getSurface());
captureBuilder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO);
debugTime(3,"Creating capture request took ");
// Orientation
int rotation = getWindowManager().getDefaultDisplay().getRotation();
captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, ORIENTATIONS.get(rotation));
debugTime(3,"Defining rotation took ");
final File file = new File(Environment.getExternalStorageDirectory() + "/pic.jpg");
debugTime(3,"Creating new file took ");
ImageReader.OnImageAvailableListener readerListener = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
Log.e(TAG, "Running method onImageAvailable");
Image image = null;
try {
debugTime(3,"Image becoming available took ");
image = reader.acquireLatestImage();
debugTime(2,"");
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
bytes = new byte[buffer.capacity()];
buffer.get(bytes); //My final output
} catch (Exception e) {
e.printStackTrace();
} finally {
if (image != null) {
image.close();
}
}
}
};
reader.setOnImageAvailableListener(readerListener, mBackgroundHandler);
final CameraCaptureSession.CaptureCallback captureListener = new CameraCaptureSession.CaptureCallback() {
@Override
public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request, TotalCaptureResult result) {
super.onCaptureCompleted(session, request, result);
Log.e(TAG, "Invoking running method sendToScan");
sendToScan();
}
};
cameraDevice.createCaptureSession(outputSurfaces, new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(CameraCaptureSession session) {
Log.e(TAG, "Running method onConfigured");
try {
session.capture(captureBuilder.build(), captureListener, mBackgroundHandler);
} catch (CameraAccessException e) {
e.printStackTrace();
}
debugTime(3,"Configuring capture session took ");
}
@Override
public void onConfigureFailed(CameraCaptureSession session) {
}
}, mBackgroundHandler);
//stopBackgroundThread();
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
And here's the log
E/CameraActivity: Taking a picture
E/CameraActivity: Defining Size
E/CameraActivity: CHOOSING at size 720x1280
E/DEBUG_TIME: Defining sizes took 25 ms
E/DEBUG_TIME: Defining output surfaces took 20 ms
E/DEBUG_TIME: Creating capture request took 4 ms
E/DEBUG_TIME: Defining rotation took 1 ms
E/DEBUG_TIME: Creating new file took 5 ms
I/RequestQueue: Repeating capture request cancelled.
I/CameraDeviceState: Legacy camera service transitioning to state IDLE
I/CameraDeviceState: Legacy camera service transitioning to state CONFIGURING
I/RequestThread-0: Configure outputs: 2 surfaces configured.
D/Camera: app passed NULL surface
I/RequestThread-0: configureOutputs - set take picture size to 1280x720
I/CameraDeviceState: Legacy camera service transitioning to state IDLE
E/CameraActivity: Running method onConfigured
I/Choreographer: Skipped 38 frames! The application may be doing too much work on its main thread.
W/art: Long monitor contention event with owner method=void java.lang.Object.wait!() from Object.java:4294967294 waiters=0 for 563ms
W/LegacyRequestMapper: convertRequestMetadata - control.awbRegions setting is not supported, ignoring value
W/LegacyRequestMapper: Only received metering rectangles with weight 0.
E/DEBUG_TIME: Configuring capture session took 586 ms
E/BufferQueueProducer: [SurfaceTexture-0-17825-3] dequeueBuffer: min undequeued buffer count (2) exceeded (dequeued=13 undequeued=0)
E/BufferQueueProducer: [SurfaceTexture-0-17825-3] dequeueBuffer: min undequeued buffer count (2) exceeded (dequeued=12 undequeued=1)
I/CameraDeviceState: Legacy camera service transitioning to state CAPTURING
I/RequestThread-0: Received jpeg.
I/RequestThread-0: Producing jpeg buffer...
E/CameraActivity: Running method onImageAvailable
E/DEBUG_TIME: Image becoming available took 1110 ms
D/ImageReader_JNI: ImageReader_lockedImageSetup: Receiving JPEG in HAL_PIXEL_FORMAT_RGBA_8888 buffer.
W/ImageReader_JNI: Unable to acquire a lockedBuffer, very likely client tries to lock more than maxImages buffers
E/DEBUG_TIME: Taking picture took 1752 ms in TOTAL
According to it, it takes ~500-600 ms to configure the capture session (!), but what's even worse is that it takes almost 1100-1200 ms for the image to become available in the ImageReader!
There is clearly something wrong, and I can't figure it out. I'd be glad for any assistance.
P.S. For the record, I've made an attempt to save the result in YUV_420_888 format, but that only sped things up to ~1000 ms in total.
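Based on the timing above, one thing I'm considering is creating the ImageReader and the capture session once, during preview setup, and reusing them so that takePicture() only has to submit the capture request. A rough, untested sketch (mStillReader/mStillSession are made-up fields, and readerListener would have to be hoisted out of takePicture()):

// Hypothetical: build the still-capture pipeline once, when the preview starts.
private ImageReader mStillReader;
private CameraCaptureSession mStillSession;

private void createReusableSession(Size stillSize) throws CameraAccessException {
    mStillReader = ImageReader.newInstance(stillSize.getWidth(), stillSize.getHeight(),
            ImageFormat.JPEG, 2);
    mStillReader.setOnImageAvailableListener(readerListener, mBackgroundHandler);
    List<Surface> surfaces = Arrays.asList(
            new Surface(textureView.getSurfaceTexture()), mStillReader.getSurface());
    cameraDevice.createCaptureSession(surfaces, new CameraCaptureSession.StateCallback() {
        @Override
        public void onConfigured(CameraCaptureSession session) {
            mStillSession = session; // keep the session, do not recreate it per shot
        }
        @Override
        public void onConfigureFailed(CameraCaptureSession session) { }
    }, mBackgroundHandler);
}

// takePicture() would then only do:
// mStillSession.capture(captureBuilder.build(), captureListener, mBackgroundHandler);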
Related
I am implementing the MLKit face detection library with a simple application. The application is a facial monitoring system, so I am setting up a preview feed from the front camera and attempting to detect a face. I am using the camera2 API. In my ImageReader.onImageAvailableListener, I want to run the Firebase face detection on each image that is read in. After creating my FirebaseVisionImage and running the FirebaseVisionFaceDetector I am getting an empty faces list; it should contain the detected faces, but I always get a list of size 0 even though a face is in the image.
I have tried other forms of creating my FirebaseVisionImage. Currently, I am creating it through the use of a byte array which I created following the ML Kit docs. I have also tried to create a FirebaseVisionImage using the media Image object.
private final ImageReader.OnImageAvailableListener onPreviewImageAvailableListener = new ImageReader.OnImageAvailableListener() {
/** Get the Image and convert it to a byte array **/
@Override
public void onImageAvailable(ImageReader reader) {
//Get latest image
Image mImage = reader.acquireNextImage();
if(mImage == null){
return;
}
else {
byte[] newImg = convertYUV420888ToNV21(mImage);
FirebaseApp.initializeApp(MonitoringFeedActivity.this);
FirebaseVisionFaceDetectorOptions highAccuracyOpts =
new FirebaseVisionFaceDetectorOptions.Builder()
.setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
.setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
.setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
.build();
int rotation = getRotationCompensation(frontCameraId,MonitoringFeedActivity.this, getApplicationContext() );
FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
.setWidth(480) // 480x360 is typically sufficient for
.setHeight(360) // image recognition
.setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
.setRotation(rotation)
.build();
FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(newImg, metadata);
FirebaseVisionFaceDetector detector = FirebaseVision.getInstance()
.getVisionFaceDetector(highAccuracyOpts);
Task<List<FirebaseVisionFace>> result =
detector.detectInImage(image)
.addOnSuccessListener(
new OnSuccessListener<List<FirebaseVisionFace>>() {
@Override
public void onSuccess(List<FirebaseVisionFace> faces) {
// Task completed successfully
if (faces.size() != 0) {
Log.i(TAG, String.valueOf(faces.get(0).getSmilingProbability()));
}
}
})
.addOnFailureListener(
new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception e) {
// Task failed with an exception
// ...
}
});
mImage.close();
}
}
};
The aim is to have the resulting faces list contain the detected faces in each processed image.
byte[] newImg = convertYUV420888ToNV21(mImage);
FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(newImg, metadata);
These two lines are important. Make sure it's creating a proper FirebaseVisionImage.
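If you suspect the NV21 conversion, one quick sanity check (just a sketch, not something from my project) is to build the FirebaseVisionImage straight from the media Image the reader delivers and see whether the detector starts returning faces:

// Hypothetical alternative: let ML Kit consume the YUV_420_888 Image directly.
// 'rotation' is the same compensation value computed with getRotationCompensation().
FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mImage, rotation);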
Check out my project for all the functionality:
MLKIT demo
I'm trying to update a camera project to Android N, and as a consequence I'm moving my old CameraCaptureSession to a ReprocessableCaptureSession. I did it and it is working fine, and with this new feature I can use the CameraDevice.TEMPLATE_ZERO_SHUTTER_LAG template on my device and reprocess frames with a reprocess CaptureRequest.
Here is where my problem appears: I can't find any examples, and I don't really understand the little documentation there is about how to use a reprocess CaptureRequest:
Each reprocess CaptureRequest processes one buffer from CameraCaptureSession's input Surface to all output Surfaces included in the reprocess capture request. The reprocess input images must be generated from one or multiple output images captured from the same camera device. The application can provide input images to camera device via queueInputImage(Image). The application must use the capture result of one of those output images to create a reprocess capture request so that the camera device can use the information to achieve optimal reprocess image quality. For camera devices that support only 1 output Surface, submitting a reprocess CaptureRequest with multiple output targets will result in a CaptureFailure.
I tried to have a look at the camera CTS tests in google.sources, but they do the same as me: using multiple ImageReaders, saving the TotalCaptureResult of the pictures in a LinkedBlockingQueue<TotalCaptureResult>, and later just calling:
TotalCaptureResult totalCaptureResult = state.captureCallback.getTotalCaptureResult();
CaptureRequest.Builder reprocessCaptureRequest = cameraStore.state().cameraDevice.createReprocessCaptureRequest(totalCaptureResult);
reprocessCaptureRequest.addTarget(state.yuvImageReader.getSurface());
sessionStore.state().session.capture(reprocessCaptureRequest.build(), null, this.handlers.bg());
But it always throws a RuntimeException:
java.lang.RuntimeException: Capture failed: Reason 0 in frame 170,
I just want to know the right way to work with the ReprocessableCaptureSession, because I've already tried everything and I don't know what I'm doing wrong.
Finally I found the solution to make my reprocessable capture session work.
I use a Flux architecture, so don't be confused when you see Dispatcher.dispatch(action); just treat it as a callback. So, here is my code:
First, how the session is created:
//Configure preview surface
Size previewSize = previewState.previewSize;
previewState.previewTexture.setDefaultBufferSize(previewSize.getWidth(), previewSize.getHeight());
ArrayList<Surface> targets = new ArrayList<>();
for (SessionOutputTarget outputTarget : state.outputTargets) {
Surface surface = outputTarget.getSurface();
if (surface != null) targets.add(surface);
}
targets.add(previewState.previewSurface);
CameraCharacteristics cameraCharacteristics = cameraStore.state().availableCameras.get(cameraStore.state().selectedCamera);
Size size = CameraCharacteristicsUtil.getYuvOutputSizes(cameraCharacteristics).get(0);
InputConfiguration inputConfiguration = new InputConfiguration(size.getWidth(),
size.getHeight(), ImageFormat.YUV_420_888);
CameraCaptureSession.StateCallback sessionStateCallback = new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(@NonNull CameraCaptureSession session) {
if (sessionId != currentSessionId) {
Timber.e("Session opened for an old open request, skipping. Current %d, Request %d", currentSessionId, sessionId);
//performClose(session);
return;
}
try {
session.getInputSurface();
//This call is irrelevant,
//however session might have closed and this will throw an IllegalStateException.
//This happens if another camera app (or this one in another PID) takes control
//of the camera while it's opening
} catch (IllegalStateException e) {
Timber.e("Another process took control of the camera while creating the session, aborting!");
}
Dispatcher.dispatchOnUi(new SessionOpenedAction(session));
}
@Override
public void onConfigureFailed(@NonNull CameraCaptureSession session) {
if (sessionId != currentSessionId) {
Timber.e("Configure failed for an old open request, skipping. Current %d, request %d", currentSessionId, sessionId);
return;
}
Timber.e("Failed to configure the session");
Dispatcher.dispatchOnUi(new SessionFailedAction(session, new IllegalStateException("onConfigureFailed")));
}
};
if (state.outputMode == OutputMode.PHOTO) {
cameraState.cameraDevice.createReprocessableCaptureSession(inputConfiguration, targets, sessionStateCallback, handlers.bg());
} else if (state.outputMode == OutputMode.VIDEO) {
cameraState.cameraDevice.createCaptureSession(targets, sessionStateCallback, handlers.bg());
}
} catch (IllegalStateException | IllegalArgumentException e) {
Timber.e(e, "Something went wrong trying to start the session");
} catch (CameraAccessException e) {
//Camera will throw CameraAccessException if we try to open / close the
//session very fast.
Timber.e("Failed to access camera, it was closed");
}
The photo session has been created with 4 surfaces (Preview, YUV (input), JPEG and RAW). After that, I configure my ImageWriter:
Dispatcher.subscribe(Dispatcher.VERY_HIGH_PRIORITY, SessionOpenedAction.class)
.filter(a -> isInPhotoMode())
.subscribe(action -> {
PhotoState newState = new PhotoState(state());
newState.zslImageWriter = ImageWriter.newInstance(action.session.getInputSurface(), MAX_REPROCESS_IMAGES);
setState(newState);
});
OK, now we have the ImageWriter and the session created. Now we start the streaming with the repeating request:
CaptureRequest.Builder captureRequestBuilder =
cameraStore.state().cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_ZERO_SHUTTER_LAG);
captureRequestBuilder.addTarget(previewStore.state().previewSurface);
captureRequestBuilder.addTarget(photoStore.state().yuvImageReader.getSurface());
state.session.setRepeatingRequest(captureRequestBuilder.build(), state.zslCaptureCallback, handlers.bg());
To avoid adding a lot of code, suffice it to say that the zslCaptureCallback is a custom callback which saves the last X TotalCaptureResults in a LinkedBlockingQueue<TotalCaptureResult>. I also do the same with the yuvImageReader (the input one), saving the last X images in a queue.
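The zslCaptureCallback itself isn't shown here; a rough sketch of what such a callback could look like (field and method names are illustrative, MAX_REPROCESS_IMAGES is the same constant used for the ImageWriter):

class ZslCaptureCallback extends CameraCaptureSession.CaptureCallback {
    private final LinkedBlockingQueue<TotalCaptureResult> results =
            new LinkedBlockingQueue<>(MAX_REPROCESS_IMAGES);

    @Override
    public void onCaptureCompleted(CameraCaptureSession session,
                                   CaptureRequest request,
                                   TotalCaptureResult result) {
        // Keep only the most recent results; drop the oldest one when the queue is full.
        if (!results.offer(result)) {
            results.poll();
            results.offer(result);
        }
    }

    // Match a buffered result to an Image by its sensor timestamp.
    TotalCaptureResult getCaptureResult(long imageTimestamp) throws InterruptedException {
        for (TotalCaptureResult result : results) {
            Long ts = result.get(CaptureResult.SENSOR_TIMESTAMP);
            if (ts != null && ts == imageTimestamp) {
                return result;
            }
        }
        return results.take(); // fallback: oldest available result
    }

    void drain() {
        results.clear();
    }
}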
Finally here is my "take photo" method:
try {
//Retrieve the last image stored by the zslImageReader
Image image = zslImageReaderListener.getImage();
//Retrieve the last totalCaptureResult from the zslCaptureCallback and create a reprocessableCaptureRequest with it
TotalCaptureResult captureResult = sessionStore.state().zslCaptureCallback.getCaptureResult(image.getTimestamp());
CaptureRequest.Builder captureRequest = cameraStore.state().cameraDevice.createReprocessCaptureRequest(captureResult);
//Add the desired target and values to the captureRequest
captureRequest.addTarget(state().jpegImageReader.getSurface());
//Queued back to ImageWriter for future consumption.
state.zslImageWriter.queueInputImage(image);
//Drain all the unused and queued CapturedResult from the CaptureCallback
sessionStore.state().zslCaptureCallback.drain();
//Capture the desired frame
CaptureRequest futureCaptureResult = captureRequest.build();
sessionStore.state().session.capture(futureCaptureResult, new CameraCaptureSession.CaptureCallback() {
@Override
public void onCaptureCompleted(@NonNull CameraCaptureSession session,
@NonNull CaptureRequest request,
@NonNull TotalCaptureResult result) {
Dispatcher.dispatchOnUi(new PhotoStatusChangedAction(PhotoState.Status.SUCCESS));
}
@Override
public void onCaptureFailed(@NonNull CameraCaptureSession session,
@NonNull CaptureRequest request,
@NonNull CaptureFailure failure) {
super.onCaptureFailed(session, request, failure);
Exception captureFailedException = new RuntimeException(
String.format("Capture failed: Reason %s in frame %d, was image captured? -> %s",
failure.getReason(),
failure.getFrameNumber(),
failure.wasImageCaptured()));
Timber.e(captureFailedException, "Cannot take mediaType, capture failed!");
Dispatcher.dispatchOnUi(new PhotoStatusChangedAction(PhotoState.Status.ERROR, captureFailedException));
}
}, this.handlers.bg());
//Capture did not blow up, we are taking the photo now.
newState.status = PhotoState.Status.TAKING;
} catch (CameraAccessException | InterruptedException| IllegalStateException | IllegalArgumentException | SecurityException e) {
Timber.e(e, "Cannot take picture, capture error!");
newState.status = PhotoState.Status.ERROR;
}
I have a requirement in my project where video is being recorded and uploaded to the server, but since mobile networks are not reliable, what I decided to do at the beginning was, every 30 secs:
stop the recorder
reset the recorder state
retrieve the file written to by the recorder and upload (multipart form data) it in a different thread.
change the outfile of the recorder to a new file based on the hash of the current timestamp.
repeat process every 30 secs
Doing this suits my needs perfectly, as each 30-sec video file is no more than 1 MB and the upload happens smoothly.
But the problem I am facing is that every time the media recorder stops and starts again there is a delay of about 500 ms, so the video that I receive at the server has these 500 ms breaks every 30 secs, which is really bad for my current situation. So I was wondering whether it would be possible to just change the file that the recorder is writing to on the fly?
Relevant code:
GenericCallback onTickListener = new GenericCallback() {
@Override
public void execute(Object data) {
int timeElapsedInSecs = (int) data;
if (timeElapsedInSecs % pingIntervalInSecs == 0) {
new API(getActivity().getApplicationContext()).pingServer(objInterviewQuestion.getCurrentAccessToken(),
new NetworkCallback() {
@Override
public void execute(int response_code, Object result) {
// TODO: HANDLE callback
}
});
}
if (timeElapsedInSecs % uploadIntervalInSecs == 0 && timeElapsedInSecs < maxTimeInSeconds) {
if (timeElapsedInSecs / uploadIntervalInSecs >= 1) {
if(stopAndResetRecorder()) {
openConnectionToUploadQueue();
uploadQueue.add(
new InterviewAnswer(0,
objInterviewQuestion.getQid(),
objInterviewQuestion.getAvf(),
objInterviewQuestion.getNext(),
objInterviewQuestion.getCurrentAccessToken()));
objInterviewQuestion.setAvf(MiscHelpers.getOutputMediaFilePath());
initializeAndStartRecording();
}
}
}
}
};
Here is initializeAndStartRecording():
private boolean initializeAndStartRecording() {
Log.i("INFO", "initializeAndStartRecording");
if (mCamera != null) {
try {
mMediaRecorder = CameraHelpers.initializeRecorder(mCamera,
mCameraPreview,
desiredVideoWidth,
desiredVideoHeight);
mMediaRecorder.setOutputFile(objInterviewQuestion.getAvf());
mMediaRecorder.prepare();
mMediaRecorder.start();
img_recording.setVisibility(View.VISIBLE);
is_recording = true;
return true;
} catch (Exception ex) {
MiscHelpers.showMsg(getActivity(),
getString(R.string.err_cannot_start_recorder),
AppMsg.STYLE_ALERT);
return false;
}
} else {
MiscHelpers.showMsg(getActivity(), getString(R.string.err_camera_not_available),
AppMsg.STYLE_ALERT);
return false;
}
}
Here is stopAndResetRecorder:
boolean stopAndResetRecorder() {
boolean success = false;
try {
if (mMediaRecorder != null) {
try {
//stop recording
mMediaRecorder.stop();
mMediaRecorder.reset();
mMediaRecorder.release();
mMediaRecorder = null;
Log.d("MediaRecorder", "Recorder Stopped");
success = true;
} catch (Exception ex) {
if (ex.getMessage() != null && !ex.getMessage().isEmpty()) {
Crashlytics.log(Log.ERROR, "Failed to stop MediaRecorder", ex.getMessage());
Crashlytics.logException(ex);
}
success = false;
} finally {
mMediaRecorder = null;
is_recording = false;
}
}
} catch (Exception ex) {
success = false;
}
Log.d("MediaRecorder", "Success = " + String.valueOf(success));
return success;
}
You can speed it up slightly by not calling the release() method and all of the rest of the destruction that you do in stopAndResetRecorder() (see the documentation for the MediaRecorder state machine).
You also don't need to call both stop() and reset().
You could instead have an intermediate resetRecorder() function which just performs reset(), and then call initializeAndStartRecording(). When you finish all of your recording, you could then call stopRecorder(), which would perform the destruction of your mMediaRecorder.
As I say, this will save you some time, but whether the extra overhead you currently have of destroying and re-initialising the MediaRecorder is a significant portion of the delay I don't know. Give that a try, and if it doesn't fix your problem, I'd be interested to know how much time it did/didn't save.
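As a rough, untested sketch of that split (reusing the names from the question; whether reset() alone finalizes each chunk file acceptably is something you'd have to verify):

// Between chunks: only reset, keep the same MediaRecorder instance alive.
private boolean resetRecorder() {
    try {
        mMediaRecorder.reset();
        return true;
    } catch (Exception ex) {
        return false;
    }
}

// Only when recording is finished for good: tear the recorder down.
private void stopRecorder() {
    if (mMediaRecorder != null) {
        mMediaRecorder.reset();
        mMediaRecorder.release();
        mMediaRecorder = null;
    }
}

initializeAndStartRecording() would then have to reconfigure the existing mMediaRecorder instead of creating a new instance each time.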
It seems to me, judging from MediaRecorder's source, that setOutputFile calls a native method, so I don't think there's an easy way to write into separate files at the same time.
What about uploading it in one chunk at the end, but allowing the user to do anything else after starting the upload process? Then the user wouldn't notice how much time the upload takes, and you can notify them later when the upload succeeds/fails.
[Edit:] Try a streaming upload to the server, where the server does the chunking into separate files. Here you can find a brief explanation of how to do so.
Apparently MediaRecorder.setOutputFile() also accepts a FileDescriptor.
So, if you were programming at a low level (JNI) you could have represented a process's input stream as a file descriptor and, in turn, had that process write to different files when desired. But that would involve managing that native "router" process from Java.
Unfortunately, on the Java API side, you are out of luck.
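Just to show the shape of that overload (a sketch only; something would still have to read from the pipe and do the splitting, and the usual container formats expect a seekable output, so this doesn't really get around the limitation from Java):

// Hypothetical helper: hand the recorder a pipe's write end instead of a file path.
private ParcelFileDescriptor attachPipeToRecorder(MediaRecorder recorder) throws IOException {
    ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
    recorder.setOutputFile(pipe[1].getFileDescriptor()); // write side
    return pipe[0]; // read side, for whatever consumes and splits the stream
}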
I'm just starting out in Android development.
I've been trying for some time to extract frames from video files I have on my phone like this:
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(getApplicationContext(),linkToVideo); // linkToVideo is Uri
ImageView frameView = (ImageView) findViewById(R.id.video_frame);
frameView.setImageBitmap(retriever.getFrameAtTime(frameTime,MediaMetadataRetriever.OPTION_CLOSEST)); // frameTime in microseconds
This works for 640x480 .mp4 videos, but not for the 1280x720 .3gp files my camera has recorded. It just takes an awfully long time and eventually the app stops responding. When I use OPTION_CLOSEST_SYNC everything runs smoothly, however I'm interested in getting more than the sync frames.
Any ideas on how I can solve this? I was trying to avoid video encoding but if there is no other option I'll resort to that.
Thanks in advance for the time you take to help me.
Try FFmpegMediaMetadataRetriever:
import wseemann.media.FFmpegMediaMetadataRetriever;
...
FFmpegMediaMetadataRetriever retriever = new FFmpegMediaMetadataRetriever();
retriever.setDataSource(getApplicationContext(),linkToVideo); // linkToVideo is Uri
ImageView frameView = (ImageView) findViewById(R.id.video_frame);
frameView.setImageBitmap(retriever.getFrameAtTime(frameTime,MediaMetadataRetriever.OPTION_CLOSEST)); // frameTime in microseconds
Multiply the MediaPlayer.currentPosition by 1000, since getFrameAtTime() expects microseconds:
SNAPSHOT_DURATION_IN_MILLIS * 1000
Example:
val mediaMetadataRetriever = MediaMetadataRetriever()
return try {
println("mediaPlayer.currentPosition.toLong(): ${mediaPlayer?.currentPosition?.toLong()}")
mediaMetadataRetriever.setDataSource(context, uri)
val bitmap = mediaPlayer?.let { mediaMetadataRetriever.getFrameAtTime(it.currentPosition.toLong()*1000, MediaMetadataRetriever.OPTION_CLOSEST) }
bitmap
} catch (t: Throwable) {
// TODO log
null
} finally {
try {
mediaMetadataRetriever.release()
} catch (e: RuntimeException) {
//
}
}
I have a problem with the Android MediaPlayer when changing the dataSource of the player. According to the documentation of the MediaPlayer (http://developer.android.com/reference/android/media/MediaPlayer.html) I have to reset the player when changing the dataSource. This works fine, but as soon as the channelChanged method is called twice in quick succession, the MediaPlayer.reset freezes the UI. I profiled the code as seen here:
public void channelChanged(String streamingUrl)
{
long m1 = System.currentTimeMillis();
mMediaPlayer.reset();
long m2 = System.currentTimeMillis();
try
{
mMediaPlayer.setDataSource(streamingUrl);
}
catch (IOException e)
{
e.printStackTrace();
}
long m3 = System.currentTimeMillis();
mMediaPlayer.prepareAsync();
long m4 = System.currentTimeMillis();
Log.d("MEDIAPLAYER", "reset: " + (m2 - m1));
Log.d("MEDIAPLAYER", "setDataSource: " + (m3 - m2));
Log.d("MEDIAPLAYER", "preparing: " + (m4 - m3));
}
reset: 3
setDataSource: 1
preparing: 0
reset: 3119
setDataSource: 2
preparing: 1
So apparently the reset is blocked by the asynchronous preparing of the first call (when I wait until the first stream starts and then call channelChanged() again, everything is fine).
Any ideas how to solve the problem? Should I execute the whole method in a separate thread? Basically I want to avoid that, because it doesn't seem like good coding style and could cause further issues, e.g. when the user tries to start the player again while the player is still in the reset method, which in turn seems to wait for the async prepare. It is not clear how the player would behave...
Is there any other good solution?
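For illustration, this is roughly what I mean by the separate-thread variant (an untested sketch; playerThread and playerHandler are made-up names for a dedicated HandlerThread started in onCreate()):

// Push the potentially blocking reset()/setDataSource() off the UI thread.
private HandlerThread playerThread; // started in onCreate(): playerThread.start();
private Handler playerHandler;      // new Handler(playerThread.getLooper());

public void channelChanged(final String streamingUrl) {
    playerHandler.post(new Runnable() {
        @Override
        public void run() {
            mMediaPlayer.reset(); // may still block, but no longer freezes the UI
            try {
                mMediaPlayer.setDataSource(streamingUrl);
            } catch (IOException e) {
                e.printStackTrace();
            }
            mMediaPlayer.prepareAsync();
        }
    });
}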
MediaPlayer is a tricky bastard. I recommend you take a look at the sample app, where the MediaPlayer's bad design is made evident by the mess of code you have to write around it to get a consistent media playback experience.
If anything, after looking at the sample, you see that when they want to skip a track, they essentially reset and release…
mPlayer.reset();
mPlayer.release();
…and later when they are ready to load a new track…
try {
mPlayer.reset();
mPlayer.setDataSource(someUrl);
mPlayer.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
@Override
public void onPrepared(MediaPlayer mediaPlayer) {
//bam!
}
});
mPlayer.prepareAsync();
} catch (IllegalStateException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} catch (IllegalArgumentException e) {
e.printStackTrace();
}
I have added the try/catch because on some devices/OS versions the MediaPlayer is worse than on others and sometimes it just does weird stuff. You should have an interface/listener that is capable of reacting to these situations.
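Such a listener can be as small as this (the names are made up, just to illustrate the idea):

// Hypothetical callback for surfacing MediaPlayer failures to whoever is driving playback.
public interface PlaybackErrorListener {
    void onPlaybackError(Exception error);
}

// e.g. from the catch blocks above: listener.onPlaybackError(e);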
UPDATE:
This is a method I use when I stop (or pause) my music playback (mostly taken from the sample app; it runs in a service and has been modified to suit my own app, but still).
The first method is used by both stop and pause; the former passes true, the latter false.
/**
* Releases resources used by the service for playback. This includes the "foreground service"
* status and notification, the wake locks and possibly the MediaPlayer.
*
* @param releaseMediaPlayer Indicates whether the Media Player should also be released or not
*/
void relaxResources(boolean releaseMediaPlayer) {
stopForeground(true);
stopMonitoringPlaybackProgress();
// stop and release the Media Player, if it's available
if (releaseMediaPlayer && mPlayer != null) {
mPlayer.reset();
mPlayer.release();
mPlayer = null;
}
// we can also release the Wifi lock, if we're holding it
if (mWifiLock.isHeld()) {
mWifiLock.release();
}
}
This is part of the processPauseRequest():
if (mState == State.Playing) {
// Pause media player and cancel the 'foreground service' state.
mState = State.Paused;
mPlayer.pause();
dispatchBroadcastEvent(ServiceConstants.EVENT_AUDIO_PAUSE);//notify broadcast receivers
relaxResources(false); // while paused, we always retain the mp and notification
And this is part of the processStopRequest() (simplified):
void processStopRequest(boolean force, final boolean stopSelf) {
if (mState == State.Playing || mState == State.Paused || force) {
mState = State.Stopped;
// let go of all resources...
relaxResources(true);
currentTrackNotification = null;
giveUpAudioFocus();
}
}
Now the core part is the next/skip…
This is what I do…
void processNextRequest(final boolean isSkipping) {
processStopRequest(true, false); // THIS IS IMPORTANT, WE RELEASE THE MP HERE
mState = State.Retrieving;
dispatchBroadcastEvent(ServiceConstants.EVENT_TRACK_INFO_LOAD_START);
// snipped but here you retrieve your next track and when it's ready…
// you just processPlayRequest() and "start from scratch"
This is how the MediaPlayer sample does it (found in the samples folder) and I haven't had problems with it.
That being said, I know what you mean when you say you get the whole thing blocked; I've seen it, and it's the MP's bugginess. If you get an ANR I'd like to see the log for it.
For the record, here's how I "begin playing" (a lot of custom code has been omitted, but you get to see the MP stuff):
/**
* Starts playing the next song.
*/
void beginPlaying(Track track) {
mState = State.Stopped;
relaxResources(false); // release everything except MediaPlayer
try {
if (track != null) {
createMediaPlayerIfNeeded();
mPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
mPlayer.setDataSource(track.audioUrl);
} else {
processStopRequest(true, false); // stop everything!
return;
}
mState = State.Preparing;
setUpAsForeground(); //service
/* STRIPPED ALL CODE FROM REMOTECONTROLCLIENT, AS IT ADDS A LOT OF NOISE :) */
// starts preparing the media player in the background. When it's done, it will call
// our OnPreparedListener (that is, the onPrepared() method on this class, since we set
// the listener to 'this').
// Until the media player is prepared, we *cannot* call start() on it!
mPlayer.prepareAsync();
// We are streaming from the internet, we want to hold a Wifi lock, which prevents
// the Wifi radio from going to sleep while the song is playing.
if (!mWifiLock.isHeld()) {
mWifiLock.acquire();
}
} catch (IOException ex) {
Log.e("MusicService", "IOException playing next song: " + ex.getMessage());
ex.printStackTrace();
}
}
As a final note, I've noticed that the "media player blocking everything" happens when the audio stream or source is unavailable or unreliable.
Good luck! Let me know if there's anything specific you'd like to see.
The newest phones and Android APIs work much better; the reset method takes only 5-20 ms when fast-switching between songs (next or prev).
So there is no solution for older phones; it's just how it works.