I have tried everything and I can't find the reason why my camera app is throwing a dead service exception.
Here is the case: I'm using an HDR JNI library, which I have already checked and it works fine. It's not a native memory leak, and it's not a JNI problem. So the problem must be in my code.
I'm simply waiting for the CaptureResult to return an AE_CONVERGED state, to check that the sensor has already metered the correct exposure, and then I call my method:
Log.performanceEnd("YUV capture");
Log.d(TAG, "[onImageAvailable] YUV capture, mBurstCount: " + mBurstCount);
Image image = imageReader.acquireNextImage();
if (mBackgroundHandler != null) {
    mBackgroundHandler.post(new YuvCopy(image, mBurstCount));
}
mBurstCount++;
if (mBurstState == BURST_STATE_HDR) {
    switch (mBurstCount) {
        case 1:
            mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, HDR_EXPOSURE_COMPENSATION_VALUE_HIGH);
            break;
        case 2:
            mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, HDR_EXPOSURE_COMPENSATION_VALUE_LOW);
            break;
        case 3:
            // Restore the exposure compensation value
            mCaptureCallback = mPhotoCaptureCallback;
            mSettingsManager.setExposureCompensation(mPreviewRequestBuilder);
            mActivity.runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    onPictureCaptured();
                }
            });
            unlockFocus();
            break;
    }
    if (mBurstCount != 3) {
        updatePreviewSession();
    }
    // Finish the HDR session
    if (mBurstCount < YUV_BURST_LIMIT) mHdrState = STATE_PICTURE_TAKEN;
}
Here is my YUV method:
/**
 * Transform YUV420 frames into NV21-readable byte arrays.
 */
private class YuvCopy implements Runnable {

    private final Image mImage;
    private final int mPictureIndex;

    public YuvCopy(Image image, int index) {
        mImage = image;
        mPictureIndex = index;
    }

    @Override
    public void run() {
        if (mImage != null) {
            if (mImage.getWidth() * mImage.getHeight() > 0) {
                long startCopy = System.currentTimeMillis();
                int width = mImage.getWidth();
                int height = mImage.getHeight();
                int ySize = width * height;
                ByteBuffer yBuffer = mImage.getPlanes()[0].getBuffer();
                ByteBuffer uvBuffer = mImage.getPlanes()[1].getBuffer();
                ByteBuffer vuBuffer = mImage.getPlanes()[2].getBuffer();
                byte[] mData = new byte[ySize + (ySize / 2)];
                yBuffer.get(mData, 0, ySize);
                vuBuffer.get(mData, ySize, (ySize / 2) - 1);
                mData[mData.length - 1] = uvBuffer.get(uvBuffer.capacity() - 1);
                mImage.close(); // release the buffer slot as soon as the copy is done
                mHdrCaptureArray[mPictureIndex] = mData;
                Log.i(TAG, "[YuvCopy|run] Time to copy data: " + (System.currentTimeMillis() - startCopy) + "ms");
                if (mPictureIndex == YUV_BURST_LIMIT - 1) {
                    startHdrProcessing();
                }
            }
        }
    }
}
I take a total of three photos and then call the merge method of my JNI library. I tried commenting out all the JNI code and it still happens, so I think the problem must be here, in my YUV method, or maybe in the burst HDR call.
Finally, here is my error log from when it happens:
01-01 12:30:27.531 21945-21957/com.myCamera W/AudioSystem: AudioFlinger server died!
01-01 12:30:27.532 21945-22038/com.myCamera W/AudioSystem: AudioPolicyService server died!
01-01 12:30:27.903 21945-21978/com.myCamera I/CameraManagerGlobal: Connecting to camera service
01-01 12:30:27.903 21945-21978/com.myCamera E/CameraManagerGlobal: Camera service is unavailable
01-01 12:30:27.903 21945-21978/com.myCamera W/System.err: android.hardware.camera2.CameraAccessException: Camera service is currently unavailable
01-01 12:30:29.103 21945-21945/com.myCamera W/System.err: android.hardware.camera2.CameraAccessException: Process hosting the camera service has died unexpectedly
Sometimes it takes just 2 photos, and sometimes 300, but in the end it still happens. Also, a lot of the time the whole device is almost dead and nothing works properly, so I need to reboot my phone.
Finally, the problem turned out to be a wrong configuration of my ImageReaders: depending on the hardware level of the phone, the camera allows different combinations of ImageReaders with different sizes for each one.
For example, a device with INFO_SUPPORTED_HARDWARE_LEVEL == FULL doesn't support a JPEG ImageReader configured to the device's maximum size together with another one in YUV format above the current preview size. Anyway, sometimes it can work, and sometimes it fails.
If an application tries to create a session using a set of targets that exceed the limits described in the below tables, one of three possibilities may occur. First, the session may be successfully created and work normally. Second, the session may be successfully created, but the camera device won't meet the frame rate guarantees as described in getOutputMinFrameDuration(int, Size). Or third, if the output set cannot be used at all, session creation will fail entirely, with onConfigureFailed(CameraCaptureSession) being invoked.
Quote from: https://developer.android.com/reference/android/hardware/camera2/CameraDevice.html
That means my device can't have a YUV ImageReader configured to 4608x3456 when my JPEG ImageReader is configured to the same size too; the YUV stream can only go up to my preview size (1920x1080). You can check all the possible configurations in this link.
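For anyone hitting the same wall, here is a minimal sketch of how you might check the hardware level and pick a safe YUV size before creating the readers. The helper name, TAG, and the size policy are my own assumptions, not a fixed rule:
// Sketch: pick a safe YUV size given the device's hardware level.
// Assumes a maximum-size JPEG reader is also attached to the same session.
private ImageReader createYuvReader(Context context, String cameraId, Size previewSize)
        throws CameraAccessException {
    CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
    CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);

    // LEGACY < LIMITED < FULL < LEVEL_3; each level guarantees different stream combinations.
    Integer hwLevel = chars.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
    StreamConfigurationMap map = chars.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
    Size[] yuvSizes = map.getOutputSizes(ImageFormat.YUV_420_888);
    Log.d(TAG, "HW level: " + hwLevel + ", YUV sizes: " + Arrays.toString(yuvSizes));

    // On FULL, the guaranteed combination is a preview-sized YUV stream plus a
    // maximum-size JPEG, so cap the YUV reader at the preview size (e.g. 1920x1080).
    Size yuvSize = previewSize;

    return ImageReader.newInstance(yuvSize.getWidth(), yuvSize.getHeight(),
            ImageFormat.YUV_420_888, /*maxImages*/ 3);
}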
Related
I am taking a series of pictures using the Android Camera2 API for real-time pose estimation and environment reconstruction (the SLAM problem). Currently I simply save all of these pictures to my SD card for offline processing.
I set up the processing pipeline according to Google's Camera2Basic, using a TextureView as well as an ImageReader, which are both set as target surfaces of a repeating preview request.
mButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        if (mIsShooting) {
            try {
                mCaptureSession.stopRepeating();
                mPreviewRequestBuilder.removeTarget(mImageReader.getSurface());
                mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), mCaptureCallback, mBackgroundHandler);
                mIsShooting = false;
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        } else {
            try {
                mCaptureSession.stopRepeating();
                mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
                mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), mCaptureCallback, mBackgroundHandler);
                mIsShooting = true;
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }
    }
});
The ImageReader is added/removed when pressing the button. The ImageReader's OnImageAvailableListener is implemented as follows:
private ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image img = reader.acquireLatestImage();
        if (null == img) {
            return;
        }
        if (img.getTimestamp() <= mLatestFrameTime) {
            Log.i(Tag, "disorder detected!");
            img.close(); // drop the out-of-order frame; without this the buffer slot leaks
            return;
        }
        mLatestFrameTime = img.getTimestamp();
        ImageSaver saver = new ImageSaver(img, img.getTimestamp());
        saver.run();
    }
};
I use acquireLatestImage (with the buffer size set to 2) to discard old frames, and I have also checked the images' timestamps to make sure they are monotonically increasing.
The reader does receive images at an acceptable rate (about 25 fps). However, a closer look at the saved image sequence shows they are not always saved in chronological order.
The following pictures come from a long sequence shot by the program (sorry for not being able to post pictures directly :( ):
(Image 1, Image 2, Image 3: sample frames omitted.)
Such disorder does not occur very often, but it can occur at any time and does not seem to be an initialization problem. I suppose it has something to do with the ImageReader's buffer size, as with a larger buffer fewer "flashbacks" occur. Does anyone have the same problem?
I finally found that the disorder disappears when setting the ImageReader's format to YUV_420_888 in its constructor. Originally I had set this field to JPEG.
Using the JPEG format incurs not only a large processing delay but also the disorder. I guess the conversion from image sensor data to the desired format uses other hardware such as a DSP or the GPU, which does not guarantee chronological order.
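For reference, the change is a single argument in the ImageReader constructor (the width, height, and maxImages values here are placeholders):
// Before: JPEG output; frames could come back out of order.
// mImageReader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 2);

// After: YUV_420_888 output; frames arrive in chronological order.
mImageReader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2);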
Are you using TEMPLATE_STILL_CAPTURE for the capture requests when you enable the ImageReader, or just TEMPLATE_PREVIEW? What devices are you seeing issues with?
If you're using STILL_CAPTURE, make sure you check if the device supports the ENABLE_ZSL flag, and set it to false. When it is set to true (generally the default on devices that support it, for the STILL_CAPTURE template), images may be returned out of order since there's a zero-shutter-lag queue in place within the camera device.
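If that turns out to be the cause, the check-and-disable step could look roughly like this (chars and stillCaptureBuilder are assumed to be your CameraCharacteristics and still-capture request builder):
// Disable the zero-shutter-lag queue so results come back in capture order.
// CONTROL_ENABLE_ZSL only appears in the request keys on devices that support it.
if (chars.getAvailableCaptureRequestKeys().contains(CaptureRequest.CONTROL_ENABLE_ZSL)) {
    stillCaptureBuilder.set(CaptureRequest.CONTROL_ENABLE_ZSL, false);
}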
I'm building a sort of custom version of Wireshark with jnetpcap v1.4r1425. I just want to open offline pcap files and display them in my TableView, which works great except for the speed.
The files I open are around 100 MB with 700k packets.
public ObservableList<Frame> readOfflineFiles1(int numFrames) {
    ObservableList<Frame> frameData = FXCollections.observableArrayList();
    if (numFrames == 0) {
        numFrames = Pcap.LOOP_INFINITE;
    }
    final StringBuilder errbuf = new StringBuilder();
    final Pcap pcap = Pcap.openOffline(FileAddress, errbuf);
    if (pcap == null) {
        System.err.println(errbuf); // Error is stored in errbuf if any
        return null;
    }
    JPacketHandler<StringBuilder> packetHandler = new JPacketHandler<StringBuilder>() {
        public void nextPacket(JPacket packet, StringBuilder errbuf) {
            if (packet.hasHeader(ip)) {
                sourceIpRaw = ip.source();
                destinationIpRaw = ip.destination();
                sourceIp = org.jnetpcap.packet.format.FormatUtils.ip(sourceIpRaw);
                destinationIp = org.jnetpcap.packet.format.FormatUtils.ip(destinationIpRaw);
            }
            if (packet.hasHeader(tcp)) {
                protocol = tcp.getName();
                length = tcp.size();
                int payloadOffset = tcp.getOffset() + tcp.size();
                int payloadLength = tcp.getPayloadLength();
                buffer.peer(packet, payloadOffset, payloadLength); // No copies, by native reference
                info = buffer.toHexdump();
            } else if (packet.hasHeader(udp)) {
                protocol = udp.getName();
                length = udp.size();
                int payloadOffset = udp.getOffset() + udp.size();
                int payloadLength = udp.getPayloadLength();
                buffer.peer(packet, payloadOffset, payloadLength); // No copies, by native reference
                info = buffer.toHexdump();
            }
            if (packet.hasHeader(payload)) {
                infoRaw = payload.getPayload();
                length = payload.size();
            }
            frameData.add(new Frame(packet.getCaptureHeader().timestampInMillis(), sourceIp, destinationIp, protocol, length, info));
            // System.out.print(i + "\n");
            // i = i + 1;
        }
    };
    pcap.loop(numFrames, packetHandler, errbuf);
    pcap.close();
    return frameData;
}
This code is very fast for maybe the first 400k packets, but after that it slows down a lot: it needs around 1 minute for the first 400k packets and around 10 minutes for the rest. What is the issue here?
It's not that the list is getting too time-consuming to work with, is it? The list's add method is O(1), isn't it?
I asked about this on the official jnetpcap forums too, but it's not very active.
Edit:
It turns out it slows down massively because of the heap usage. Is there a way to reduce this?
As the profiler showed you, you're running low on memory, and the JVM starts to slow down once the garbage collector is constantly running.
Either give it more memory with -Xmx or don't load all the packets into memory at once.
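For the -Xmx route, the flag goes on the java command that launches the app (the 2g value and the jar name here are just examples):
java -Xmx2g -jar my-pcap-viewer.jar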
I am relatively new to Android (to give context to my skill level).
Summary (question stated at the bottom): my app uses a lot of CPU (sounds basic, sorry) and I think it is because I'm graphing 3000 data points.
Context: I am making an app that takes bytes from a Bluetooth LE service:
private void broadcastUpdate(final String action, final BluetoothGattCharacteristic characteristic) {
    final Intent intent = new Intent(action);
    // For all other profiles, write the data formatted in hex.
    final byte[] data = characteristic.getValue();
    Log.i(TAG, "data" + characteristic.getValue());
    if (data != null && data.length > 0) {
        final StringBuilder stringBuilder = new StringBuilder(data.length);
        for (byte byteChar : data)
            stringBuilder.append(String.format("%02X ", byteChar));
        Log.d(TAG, String.format("%s", new String(data)));
        // Getting cut off when longer; need to push onto a new line, 0A
        intent.putExtra(EXTRA_DATA, String.format("%s", new String(data)));
    }
    sendBroadcast(intent);
}
In another activity, I append the data to the graph using data points (displayData just displays the data I'm getting in a TextView).
else if (BluetoothLeService.ACTION_DATA_AVAILABLE.equals(action)) {
    // cnt starts at 0
    String data = intent.getStringExtra(mBluetoothLeService.EXTRA_DATA);
    double y = Double.parseDouble(data); // double version of the data
    if (cnt == 3000) { // if the graph maxes out, regenerate new data
        cnt = 0; // set the count back to zero
        clearUI(); // clear the TextView that displays the string version
        series.resetData(new DataPoint[] { new DataPoint(cnt + 1, y) });
    } else { // if there is no need to reset the graph
        series.appendData(new DataPoint(cnt + 1, y), false, 3000); // append the data to the series
    }
    displayData(data, y); // display cnt, data (string), and y (double)
    cnt++;
}
Using a serial port tester, sending the number "12.5" every 60 ms, the graphing process works fine until I reach about 300 data points. After that, the app becomes laggy: Logcat states that 30-34 frames are being skipped. By 3000 data points, Logcat states that 62-74 frames are being skipped.
Question: What can I do to structure my data better and/or make the app run smoother?
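One direction worth trying (a sketch, not a verified fix: the 250 ms interval and the names pending, uiHandler, and flush are my own, and GraphView's redraw behavior per appendData call is an assumption) is to decouple data arrival from rendering, buffering samples as they arrive and only pushing them to the series a few times per second:
// Sketch: buffer incoming samples and flush them to the graph a few times per
// second instead of on every 60 ms broadcast. Everything here runs on the main
// thread (broadcasts and the handler), so no extra locking is needed.
private final List<DataPoint> pending = new ArrayList<>();
private final Handler uiHandler = new Handler(Looper.getMainLooper());

private final Runnable flush = new Runnable() {
    @Override
    public void run() {
        for (DataPoint p : pending) {
            series.appendData(p, false, 3000);
        }
        pending.clear();
        uiHandler.postDelayed(this, 250); // redraw in bursts, ~4x per second
    }
};

// In the broadcast receiver, replace the direct appendData call with:
//     pending.add(new DataPoint(cnt + 1, y));
// and start the timer once, e.g. in onResume():
//     uiHandler.postDelayed(flush, 250);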
I have created an application which uses the camera intent to capture photographs. The photographs are captured fine and saved to their respective folders. The issue is that, only on Samsung S series devices, the images are always in portrait mode, even if the image was captured in landscape mode. Because of this, I have tried to get the orientation of the captured images so I can then change them to my requirements, but the orientation always returns zero.
I am trying to use this method:
public static int getRotation(Context context, Uri selectedImage) {
    int rotation = 0;
    ContentResolver content = context.getContentResolver();
    Cursor mediaCursor = content.query(MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
            new String[] { "orientation", "date_added" }, null, null, "date_added desc");
    if (mediaCursor != null && mediaCursor.getCount() != 0) {
        while (mediaCursor.moveToNext()) {
            rotation = mediaCursor.getInt(0);
            break;
        }
        mediaCursor.close(); // close inside the null check to avoid an NPE
    }
    return rotation;
}
The method always returns 0, no matter what the orientation of the image is. Where am I going wrong? What should I do to address the issue?
Samsung likes to break stuff; I have had a similar experience before.
But the camera intent should have saved the orientation info as EXIF in the JPG file.
As you are using the camera intent, you should have the file Uri. You can then use ExifInterface to extract the rotation directly from the file.
http://developer.android.com/reference/android/media/ExifInterface.html
Sample code
ExifInterface exif = new ExifInterface(file.getAbsolutePath());
int orientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);
switch (orientation) {
    case ExifInterface.ORIENTATION_ROTATE_90:
        // do what you want
        break;
    case ExifInterface.ORIENTATION_ROTATE_180:
        // do what you want
        break;
    // ...
}
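To actually correct the image, the usual pattern (a sketch; the decode step assumes the file fits in memory) is to map the EXIF value to degrees and rotate the bitmap with a Matrix:
// Map the EXIF orientation to degrees, then rotate the decoded bitmap.
int degrees = 0;
switch (orientation) {
    case ExifInterface.ORIENTATION_ROTATE_90:  degrees = 90;  break;
    case ExifInterface.ORIENTATION_ROTATE_180: degrees = 180; break;
    case ExifInterface.ORIENTATION_ROTATE_270: degrees = 270; break;
}
if (degrees != 0) {
    Bitmap src = BitmapFactory.decodeFile(file.getAbsolutePath());
    Matrix matrix = new Matrix();
    matrix.postRotate(degrees);
    Bitmap rotated = Bitmap.createBitmap(src, 0, 0, src.getWidth(), src.getHeight(), matrix, true);
}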
I have a strange phenomenon when resuming a file transfer.
Look at the picture below; you can see the bad section.
This happens apparently at random, maybe every 10th time.
I'm sending the picture from my Android phone to a Java server over FTP.
What is it that I forgot here?
I see the connection is killed due to a java.net.SocketTimeoutException.
The transfer resumes like this:
Resume at : 287609 Sending 976 bytes more
The byte count is always correct when the file is completely received, even for the picture below.
I don't know where to start debugging this since it works most of the time.
Any suggestions or ideas would be great; I think I totally missed something here.
The device (sender) code, only the send loop:
int count = 1;
// Sending N files, looping N times
while (count <= max) {
    String sPath = batchFiles.get(count - 1);
    fis = new FileInputStream(new File(sPath));
    bis = new BufferedInputStream(fis); // bis must wrap fis before it is read below
    int fileSize = bis.available();
    out.writeInt(fileSize); // size
    String nextReply = in.readUTF();
    // if the file exists,
    if (nextReply.equals(Consts.SERVER_give_me_next)) {
        count++;
        continue;
    }
    long resumeLong = 0; // skip this many bytes
    int val = 0;
    buffer = new byte[1024];
    if (nextReply.equals(Consts.SERVER_file_exist)) {
        resumeLong = in.readLong();
    }
    // UPDATE FOR @Justin Breitfeller, thanks
    long skiip = bis.skip(resumeLong);
    if (resumeLong != -1) {
        if (!(resumeLong == skiip)) {
            Log.d(TAG, "ERROR skip is not the same as resumeLong");
            skiip = bis.skip(resumeLong);
            if (!(resumeLong == skiip)) {
                Log.d(TAG, "ERROR ABORTING skip is not the same as resumeLong");
                return;
            }
        }
    }
    while ((val = bis.read(buffer, 0, 1024)) > 0) {
        out.write(buffer, 0, val);
        fileSize -= val;
        if (fileSize < 1024) {
            val = (int) fileSize;
        }
    }
    reply = in.readUTF();
    if (reply.equals(Consts.SERVER_file_receieved_ok)) {
        // check if all files are sent
        if (count == max) {
            break;
        }
    }
    count++;
}
The receiver code (very truncated):
// Receiving N files, looping N times
while (count < totalNrOfFiles) {
    int ii = in.readInt(); // file size
    fileSize = (long) ii;
    String filePath = Consts.SERVER_DRIVE + Consts.PTPP_FILETRANSFER;
    filePath = filePath.concat(theBatch.getFileName(count));
    File path = new File(filePath);
    boolean resume = false;
    // If the file exists: skip it if done, or resume it if not
    if (path.exists()) {
        if (path.length() == fileSize) { // does the file have the same size?
            logger.info("File size same, skipping file: " + theBatch.getFileName(count));
            count++;
            out.writeUTF(Consts.SERVER_give_me_next);
            continue; // file is OK, don't upload it again
        } else {
            // Resume the upload
            out.writeUTF(Consts.SERVER_file_exist);
            out.writeLong(path.length());
            resume = true;
            fileSize = fileSize - path.length();
            logger.info("Resume at : " + path.length() +
                    " Sending " + fileSize + " bytes more");
        }
    } else {
        out.writeUTF("lets go");
    }
    byte[] buffer = new byte[1024];
    // ***********************************
    // RECEIVE FROM PHONE
    // ***********************************
    int size = 1024;
    int val = 0;
    bos = new BufferedOutputStream(new FileOutputStream(path, resume));
    if (fileSize < size) {
        size = (int) fileSize;
    }
    while (fileSize > 0) {
        val = in.read(buffer, 0, size);
        bos.write(buffer, 0, val);
        fileSize -= val;
        if (fileSize < size) {
            size = (int) fileSize;
        }
    }
    bos.flush();
    bos.close();
    out.writeUTF("file received ok");
    count++;
}
Found the error, and the problem was bad logic on my part. Say no more.
I was sending pictures that were being resized just before they were sent. The problem was that when the resume kicked in after a failed transfer, the resized picture was not used; instead the code used the original picture, which had a larger size.
I have now set up a short-lived cache that holds the resized temporary pictures.
In light of the complexity of the app I'm making, I simply forgot that the files during resume were not the same as the originals.
With a BufferedOutputStream and BufferedInputStream, you need to watch out for the following:
Create the BufferedOutputStream before the BufferedInputStream (on both client and server), and flush just after creation.
Flush after every write (not just before close).
That worked for me.
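Here's a minimal sketch of that setup order on one endpoint (assuming object streams over a socket, as mentioned in the edit below; the same order applies on the other end):
// Create the output stream first and flush immediately, so the peer's
// ObjectInputStream can read the stream header without blocking.
ObjectOutputStream out = new ObjectOutputStream(new BufferedOutputStream(socket.getOutputStream()));
out.flush();
ObjectInputStream in = new ObjectInputStream(new BufferedInputStream(socket.getInputStream()));

out.writeObject(packet); // some Serializable payload
out.flush();             // flush after every write, per the advice above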
Edited:
Add sentRequestTime, receivedRequestTime, sentResponseTime, and receivedResponseTime fields to your packet payload. Use System.nanoTime() for these, run your server and client on the same host, use an ExecutorService to run multiple clients against that server, and plot the (received - sent) time delay for both request and response packets on an Excel chart (from some CSV format). Do this before and after adding the buffered IO streams. You will be pleased to see that performance has improved by 100%. Plotting that graph made me very happy; it took about 45 minutes.
I have also heard that using custom buffer sizes further improves performance.
Edited again:
In my case I am using Object IOStreams, and I added a payload of four long variables to the object. I initialize sentRequestTime when the client sends the packet, initialize receivedRequestTime when the server receives it, and so forth for the response from server to client. I then take the difference between the received and sent times to find the delay for the request and the response.
Be careful to run this test on localhost. If you run it between different hardware/devices, their actual clock difference may interfere with your results, since requestReceivedTime is stamped at the server end while requestSentTime is stamped at the client end; in other words, each side stamps its own local time (obviously), and having both devices agree to the nanosecond is not possible. If you must run it between different devices, at least make sure you have NTP running to keep them synchronized.
That said, you are comparing performance before and after buffered IO (you don't really care about the actual time delays, right?), so clock drift should not really matter; comparing one set of results before buffering and one after is your actual interest.
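A minimal sketch of the payload described above (the class and method names are mine):
import java.io.Serializable;

// Hypothetical timing payload carried by each packet.
class TimedPacket implements Serializable {
    long sentRequestTime;      // stamped by the client before writeObject
    long receivedRequestTime;  // stamped by the server after readObject
    long sentResponseTime;     // stamped by the server before replying
    long receivedResponseTime; // stamped by the client after the reply

    long requestDelayNanos() {
        return receivedRequestTime - sentRequestTime; // valid on the same host
    }

    long responseDelayNanos() {
        return receivedResponseTime - sentResponseTime;
    }
}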
Enjoy!!!