Failing Android MediaCodec Decoding from Unity3d - java

I've got a problem similar to the question/answer posed here: https://stackoverflow.com/a/22461014. The difference is that I'm trying to decode from the UnityMain thread (Unity's main loop). Calling from Update(), I pass a byte array and a textureId to MediaCodec's decoder.
public void decodeFrameToTexture(BytePointer pixels, int len, int textureID) {
    if (this.textureID != textureID) {
        Log.d(TAG, "TextureID changed: " + textureID);
        this.textureID = textureID;
        SurfaceTexture surfaceTexture = new SurfaceTexture(textureID);
        mSurface = new Surface(surfaceTexture);
        outputSurface = new CodecOutputSurface(width, height, textureID);
    }
... then we do the decoding (essentially the code from http://bigflake.com/mediacodec/ExtractMpegFramesTest.java.txt, but without the while loop, since this runs frame-by-frame; CodecOutputSurface and its supporting classes were copied basically verbatim).
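For context, one pass of that decode step, run per frame, looks roughly like the sketch below. It follows the ExtractMpegFramesTest structure; frameData, len, presentationTimeUs, and TIMEOUT_USEC are stand-ins for the asker's actual inputs, and getInputBuffer(int) assumes API 21+.

// Sketch of a single frame-by-frame decode iteration.
// 'decoder' is a configured MediaCodec, 'info' a MediaCodec.BufferInfo.
int inputIndex = decoder.dequeueInputBuffer(TIMEOUT_USEC);
if (inputIndex >= 0) {
    ByteBuffer inputBuf = decoder.getInputBuffer(inputIndex);
    inputBuf.clear();
    inputBuf.put(frameData, 0, len); // the byte[] handed over from Unity
    decoder.queueInputBuffer(inputIndex, 0, len, presentationTimeUs, 0);
}

int decoderStatus = decoder.dequeueOutputBuffer(info, TIMEOUT_USEC);
if (decoderStatus >= 0) {
    // Render to the output surface only when a real frame was produced;
    // negative statuses (TRY_AGAIN_LATER, FORMAT_CHANGED, ...) are skipped here.
    decoder.releaseOutputBuffer(decoderStatus, info.size != 0);
}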
Finally we have this code:
decoder.releaseOutputBuffer(decoderStatus, info.size != 0 && outputSurface != null /*render*/);
if (outputSurface != null) {
    outputSurface.awaitNewImage();
    outputSurface.drawImage(true);
}
The trouble is that awaitNewImage() always times out without receiving a frame, leading back to the problem in the referenced question: the onFrameAvailable() callback never gets called.
For reference, the UnityMain thread does not have a Looper of its own. When running this code:
Looper looper;
if ((looper = Looper.myLooper()) != null) {
    mEventHandler = new EventHandler(looper);
} else if ((looper = Looper.getMainLooper()) != null) {
    mEventHandler = new EventHandler(looper);
} else {
    mEventHandler = null;
}
the looper assigned from that thread is the main looper. Any ideas would be appreciated. As stated by @fadden, "the trick is to make sure that frame-available events arrive on a different thread from the one sitting in awaitNewImage()". Given that we're running
mSurfaceTexture.setOnFrameAvailableListener(this);
from UnityMain, I think this satisfies that requirement? The callback should be called from the "main" thread?
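For what it's worth, one way to make fadden's condition hold no matter which thread Unity drives the decode from is to pin the frame-available callback to a dedicated handler thread. A rough sketch: the mFrameSyncObject/mFrameAvailable names mirror the bigflake CodecOutputSurface, the HandlerThread setup is an assumption, and the two-argument listener overload requires API 21+.

// Deliver onFrameAvailable() on its own thread so it can fire while
// another thread is blocked inside awaitNewImage().
HandlerThread frameThread = new HandlerThread("FrameAvailable");
frameThread.start();
Handler frameHandler = new Handler(frameThread.getLooper());

surfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        synchronized (mFrameSyncObject) {
            mFrameAvailable = true;
            mFrameSyncObject.notifyAll(); // wakes the waiter in awaitNewImage()
        }
    }
}, frameHandler);

With this in place it no longer matters that UnityMain reports the main looper; the callback arrives on frameThread rather than on the thread sitting in awaitNewImage().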

Related

FFmpeg error on Java in the "av_image_copy_to_buffer" method while decoding an H.264 stream

I'm trying to decode an H.264 stream, which is sent over a Socket from an Android application to a computer, and I also want to show the decoded stream using JavaFX. I searched for a long time and decided to use JavaCV / FFmpeg. However, I get an error from FFmpeg. (I was inspired by this code.)
Questions:
Why does FFmpeg produce an error?
Is this the correct way to convert an AVFrame to a javafx.scene.image.Image?
I'm using:
javacv-platform 1.4.4
ffmpeg-platform 4.1-1.4.4
Code:
These are the relevant imports, class fields, and the method that runs once at startup. (The body of initialize() is actually wrapped in a try/catch.)
import javafx.scene.image.Image;

private avcodec.AVCodec avCodec;
private avcodec.AVCodecContext avCodecContext;
private avutil.AVDictionary avDictionary;
private avutil.AVFrame avFrame;

public void initialize() {
    avCodec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (avCodec == null) {
        throw new RuntimeException("Can't find decoder");
    }
    avCodecContext = avcodec_alloc_context3(avCodec);
    if (avCodecContext == null) {
        throw new RuntimeException("Can't allocate decoder context");
    }
    int result = avcodec_open2(avCodecContext, avCodec, (AVDictionary) null);
    if (result < 0) {
        throw new RuntimeException("Can't open decoder");
    }
    avFrame = av_frame_alloc();
    if (avFrame == null) {
        throw new RuntimeException("Can't allocate frame");
    }
}
And this is the method that is called every time I receive a packet from Android; byte[] data is the packet data, starting with 0x00, 0x00, 0x00, 0x01.
The error shows up at number_of_written_bytes: it is always < 0.
private void decode(byte[] data) {
    AVPacket avPacket = new AVPacket();
    av_init_packet(avPacket);
    avPacket.pts(AV_NOPTS_VALUE);
    avPacket.dts(AV_NOPTS_VALUE);
    BytePointer bytePointer = new BytePointer(data);
    bytePointer.capacity(data.length);
    avPacket.data(bytePointer);
    avPacket.size(data.length);
    avPacket.pos(-1);
    avcodec_send_packet(avCodecContext, avPacket);
    int result = avcodec_receive_frame(avCodecContext, avFrame);
    if (result >= 0) {
        int bufferOutputSize = av_image_get_buffer_size(avFrame.format(), avFrame.width(), avFrame.height(), 16);
        Pointer pointer = av_malloc(bufferOutputSize);
        BytePointer outputPointer = new BytePointer(pointer);
        int number_of_written_bytes = av_image_copy_to_buffer(outputPointer, bufferOutputSize, avFrame.data(), avFrame.linesize(), avFrame.chroma_location(), avFrame.width(), avFrame.height(), 1);
        if (number_of_written_bytes < 0) {
            // The process always comes here.
            throw new RuntimeException("Can't copy image to buffer");
        }
        System.out.println("decode success");
        Image image = new Image(new ByteArrayInputStream(outputPointer.asBuffer().array()));
    } else {
        System.out.println("decode failed");
    }
}
Any help is appreciated. Thanks.
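For what it's worth, one likely culprit is the fifth argument of av_image_copy_to_buffer: the function expects the frame's pixel format (an AVPixelFormat value), but the code above passes avFrame.chroma_location(), a different enum that happens to compile because both map to int in JavaCV. A hedged sketch of the corrected call, reusing the variable names from the question:

// av_image_copy_to_buffer takes (dst, dst_size, src_data, src_linesize,
// pix_fmt, width, height, align). Use the same alignment for the size
// query and the copy so the buffer size matches.
int bufferOutputSize = av_image_get_buffer_size(
        avFrame.format(), avFrame.width(), avFrame.height(), 1);
Pointer pointer = av_malloc(bufferOutputSize);
BytePointer outputPointer = new BytePointer(pointer);
int written = av_image_copy_to_buffer(
        outputPointer, bufferOutputSize,
        avFrame.data(), avFrame.linesize(),
        avFrame.format(), // pixel format, e.g. AV_PIX_FMT_YUV420P
        avFrame.width(), avFrame.height(), 1);

Note also that even after a successful copy the buffer holds raw YUV pixels, while the javafx.scene.image.Image constructor expects an encoded stream (BMP/GIF/JPEG/PNG), so the frame would still need converting, for example with sws_scale to RGB plus a JavaFX PixelWriter, before it can be displayed.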

Processing: Label text slow to update via network events

I'm working on a sketch that is receiving network events from an external program (specifically, an OpenFrameworks sketch), using the processing.net library.
Inside the draw method, I have the following code to parse the incoming data and assign the appropriate text to a label:
void draw()
{
  // check for incoming data
  Client client = server.available();
  if (client != null) {
    // check for a full line of incoming data
    String line = client.readStringUntil('\n');
    if (line != null) {
      //println(line);
      int val = int(trim(line)); // extract the predicted class
      //println(val);
      if (val == 1) {
        messageText = "EVENT 1";
      } else if (val == 2) {
        messageText = "EVENT 2";
      } else if (val == 3) {
        messageText = "EVENT 3";
      }
    }
  }
  // draw
  background(0);
  textFont(f, 64);
  fill(255);
  textAlign(CENTER);
  text(messageText, width/2, height/2);
}
Through logging, I have verified that the data is being received properly.
However, I'm experiencing a very annoying bug: the text of my messageText label is VERY slow to update. After a new event has occurred (and logging shows that it has), messageText will still display the value of the last event for several seconds.
Anyone have any pointers on how to speed up performance here?
Thanks!
EDIT: Below is the full, complete sketch code:
import processing.net.*; // include the networking library

Server server; // will receive predictions
String messageText;
PFont f;

void setup()
{
  fullScreen();
  //size(200,200);
  server = new Server(this, 5204); // listen on port 5204
  messageText = "NO HAND";
  f = createFont("Arial", 16, true); // Arial, 16 point, anti-aliasing on
}

void draw()
{
  // check for incoming data
  Client client = server.available();
  if (client != null) {
    // check for a full line of incoming data
    String line = client.readStringUntil('\n');
    if (line != null) {
      //println(line);
      int val = int(trim(line)); // extract the predicted class
      //println(val);
      if (val == 1) {
        messageText = "EVENT 1";
      } else if (val == 2) {
        messageText = "EVENT 2";
      } else if (val == 3) {
        messageText = "EVENT 3";
      }
    }
  }
  // draw
  background(0);
  textFont(f, 64);
  fill(255);
  textAlign(CENTER);
  text(messageText, width/2, height/2);
}
EDIT2: As Kevin pointed out below, my solution is rather hacky. I'm attempting to use the Message Events methods from the networking library, rather than stuffing all my networking code inside the draw() method.
So I tried implementing the clientEvent method as shown below. However, I think I may be misunderstanding something: even though my original, hacky code now seems to work OK, the new code using this delegate method doesn't seem to work at all. Basically, I have to run my sketch first, which creates a server that my external program connects to. That program then sends out data that's received by my Processing sketch.
Here's what my full sketch looks like - anyone know where my misunderstanding may be coming from?
import processing.net.*; // include the networking library

Server server; // will receive predictions
Client client;
String messageText;
int dataIn;
PFont f;

void setup() {
  fullScreen(P3D);
  frameRate(600);
  server = new Server(this, 5204); // listen on port 5204
  client = server.available();
  messageText = "NO HAND";
  textAlign(CENTER);
  fill(255);
  f = createFont("Arial", 48, true); // Arial, 48 point, anti-aliasing on
  textFont(f, 120);
}

void draw() {
  // draw
  background(0);
  text(messageText, width/2, height/2);
}

// If there is information available to read
// this event will be triggered
void clientEvent(Client client) {
  String msg = client.readStringUntil('\n');
  // The value of msg will be null until the
  // end of the String is reached
  if (msg != null) {
    int val = int(trim(msg)); // extract the predicted class
    println(val);
    if (val == 1) {
      messageText = "A";
    } else if (val == 2) {
      messageText = "B";
    } else if (val == 3) {
      messageText = "C";
    } else if (val == 4) {
      messageText = "D";
    }
  }
}
So, the answer ended up having to do with the framerate and renderer used in the project. Since the network update code was being called in my sketch's draw method, the speed at which it was called depended on the framerate and renderer used.
After a bit of experimentation and trial-and-error, changing the sketch to use the FX2D renderer with a framerate of 600 significantly improved performance, to the degree I needed.
void setup()
{
  fullScreen(FX2D);
  frameRate(600);
  ...
}
EDIT:
After a conversation with one of the Processing core team members, I'm considering my networking code correct and complete. Changing the renderer to FX2D significantly improved performance.
In my very specific use case, I'm running the sketch full-screen on a MacBook Pro with a Retina display. Bumping the framerate value up and changing the renderer gave me the performance I required for my quick prototype sketches.
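For reference, a minimal consolidated sketch combining the clientEvent callback with the FX2D renderer might look like the following (assembled from the snippets above; the port and message strings are the asker's, the combination itself is an illustration):

import processing.net.*;

Server server;
String messageText = "NO HAND";
PFont f;

void setup() {
  fullScreen(FX2D);  // JavaFX renderer: much faster text, esp. on Retina
  frameRate(600);    // loop far more often than the default 60 fps
  server = new Server(this, 5204);
  f = createFont("Arial", 48, true);
  textFont(f, 120);
  textAlign(CENTER);
  fill(255);
}

void draw() {
  background(0);
  text(messageText, width/2, height/2);
}

// Called by the net library whenever a client has data available.
void clientEvent(Client client) {
  String msg = client.readStringUntil('\n');
  if (msg != null) {
    int val = int(trim(msg));
    if (val == 1) messageText = "EVENT 1";
    else if (val == 2) messageText = "EVENT 2";
    else if (val == 3) messageText = "EVENT 3";
  }
}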

Process hosting the camera service has died unexpectedly

I have tried everything and I can't find a reason why my Camera app is throwing a dead-service exception.
Here is the case: I'm using an HDR JNI library, which I have already checked and it works fine. It's not a native memory leak, and it's not a JNI problem. So the problem must be in my code:
I'm just waiting for the CaptureResult to return an AE_CONVERGED state, to check that the sensor has already settled on the correct exposure, and then I call my method:
Log.performanceEnd("YUV capture");
Log.d(TAG, "[onImageAvailable] YUV capture, mBurstCount: " + mBurstCount);
Image image = imageReader.acquireNextImage();
if (mBackgroundHandler != null) {
mBackgroundHandler.post(new YuvCopy(image, mBurstCount));
}
mBurstCount++;
if (mBurstState == BURST_STATE_HDR) {
switch (mBurstCount) {
case 1:
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, HDR_EXPOSURE_COMPENSATION_VALUE_HIGH);
break;
case 2:
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, HDR_EXPOSURE_COMPENSATION_VALUE_LOW);
break;
case 3:
//Restore exposure compensation value
mCaptureCallback = mPhotoCaptureCallback;
mSettingsManager.setExposureCompensation(mPreviewRequestBuilder);
mActivity.runOnUiThread(new Runnable() {
#Override
public void run() {
onPictureCaptured();
}
});
unlockFocus();
break;
}
if (mBurstCount != 3) {
updatePreviewSession();
}
//Finish HDR session
if (mBurstCount < YUV_BURST_LIMIT) mHdrState = STATE_PICTURE_TAKEN;
}
Here is my YUV method:
/**
* Transform YUV420 to NV21 readable frames
*/
private class YuvCopy implements Runnable {
private final Image mImage;
private final int mPictureIndex;
public YuvCopy(Image image, int index) {
mImage = image;
mPictureIndex = index;
}
#Override
public void run() {
if (mImage != null) {
if (mImage.getWidth() * mImage.getHeight() > 0) {
Image.Plane[] planes = mImage.getPlanes();
long startCopy = System.currentTimeMillis();
int width = mImage.getWidth();
int height = mImage.getHeight();
int ySize = width * height;
ByteBuffer yBuffer = mImage.getPlanes()[0].getBuffer();
ByteBuffer uvBuffer = mImage.getPlanes()[1].getBuffer();
ByteBuffer vuBuffer = mImage.getPlanes()[2].getBuffer();
byte[] mData = new byte[ySize + (ySize / 2)];
yBuffer.get(mData, 0, ySize);
vuBuffer.get(mData, ySize, (ySize / 2) - 1);
mData[mData.length - 1] = uvBuffer.get(uvBuffer.capacity() - 1);
mImage.close();
mHdrCaptureArray[mPictureIndex] = mData;
Log.i(TAG, "[YuvCopy|run] Time to Copy data: " + (System.currentTimeMillis() - startCopy) + "ms");
if (mPictureIndex == YUV_BURST_LIMIT - 1) {
startHdrProcessing();
} else {
mImage.close();
}
}
}
}
I take a total of three photos and then call the merge method of my JNI library. I tried commenting out all the JNI code and it still happens, so I think the problem must be here, in my YUV method, or maybe in the burst HDR call.
Finally, here is my log when the error happens:
01-01 12:30:27.531 21945-21957/com.myCamera W/AudioSystem: AudioFlinger server died!
01-01 12:30:27.532 21945-22038/com.myCamera W/AudioSystem: AudioPolicyService server died!
01-01 12:30:27.903 21945-21978/com.myCamera I/CameraManagerGlobal: Connecting to camera service
01-01 12:30:27.903 21945-21978/com.myCamera E/CameraManagerGlobal: Camera service is unavailable
01-01 12:30:27.903 21945-21978/com.myCamera W/System.err: android.hardware.camera2.CameraAccessException: Camera service is currently unavailable
01-01 12:30:29.103 21945-21945/com.myCamera W/System.err: android.hardware.camera2.CameraAccessException: Process hosting the camera service has died unexpectedly
Sometimes it takes just 2 photos, and sometimes 300, but in the end it still happens. Also, a lot of the time the whole device ends up almost dead and nothing works properly, so I need to reboot my phone.
Finally, the problem turned out to be a wrong configuration of my ImageReaders: depending on the hardware level of the phone, the camera allows different combinations of ImageReaders, with different sizes for each one.
For example, a device with INFO_SUPPORTED_HARDWARE_LEVEL == FULL doesn't support a JPEG ImageReader configured to the maximum size of the device together with another one in YUV format above the current preview size. Even so, sometimes it can work, and sometimes it fails.
If an application tries to create a session using a set of targets that exceed the limits described in the below tables, one of three possibilities may occur. First, the session may be successfully created and work normally. Second, the session may be successfully created, but the camera device won't meet the frame rate guarantees as described in getOutputMinFrameDuration(int, Size). Or third, if the output set cannot be used at all, session creation will fail entirely, with onConfigureFailed(CameraCaptureSession) being invoked.
Quote from: https://developer.android.com/reference/android/hardware/camera2/CameraDevice.html
That means my device can't have a YUV ImageReader configured to 4608x3456 when my JPEG ImageReader is configured to the same size; it can only support my preview size (1920x1080). You can check all the possible configurations in this link.
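A sketch of how one might sanity-check the chosen sizes up front, by reading the hardware level and the supported output sizes before creating any ImageReader (names like manager and cameraId are assumptions, not the asker's code):

// Query what the camera actually guarantees before creating ImageReaders.
void checkStreamLimits(CameraManager manager, String cameraId)
        throws CameraAccessException {
    CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);

    // LEGACY / LIMITED / FULL / LEVEL_3: selects which of the guaranteed
    // stream-combination tables in the CameraDevice docs applies.
    Integer hwLevel = chars.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);

    StreamConfigurationMap map =
            chars.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);

    // Sizes the device can emit per format. Pick a YUV size that fits the
    // guaranteed table for this level (e.g. the PREVIEW size) instead of
    // assuming the maximum JPEG size also works for a simultaneous YUV stream.
    Size[] yuvSizes = map.getOutputSizes(ImageFormat.YUV_420_888);
    Size[] jpegSizes = map.getOutputSizes(ImageFormat.JPEG);
}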

Delay on loop without locking the screen

I've been working on an app where I need to get the location of the device. The thing is, I want to try at least 5 or 10 times, with a delay between each try, so I can show an error message on the screen after each attempt (FailedLocationMSG). The reason I want this delay is that sometimes it takes a while for the GPS to start working and get the actual location. I've tried many things and I can't make it work. The idea is to have an interface similar to a 'terminal' that displays a message (Error, trying again 1/5... Error 2/5...) after each try. The problem is, I've tried using Handler and Thread.sleep, but my screen always locks up and I can't see the error message displayed after each try.
This is the method where I get the location:
int breakloop = 0;

private void GetLocation() {
    locationmanager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
    Criteria cri = new Criteria();
    String provider = locationmanager.getBestProvider(cri, false);
    if (provider != null && !provider.equals("")) { // && so the null check short-circuits
        // Get location
        final Location location = locationmanager.getLastKnownLocation(provider);
        locationmanager.requestLocationUpdates(provider, 2000, 10, this);
        while (breakloop < 10) {
            breakloop++;
            if (location != null)
                onLocationChanged(location);
            else
                FailedLocationMSG(breakloop);
        }
    } else {
        Toast.makeText(getApplicationContext(), "Provider is null", Toast.LENGTH_LONG).show();
    }
}
You could use a FutureTask and a Callable for your location check and wait for its return value.
public class LocationChecker implements Callable<Location> {
    @Override
    public Location call() throws Exception {
        // do your stuff for the location check
        return location;
    }
}
In your method you then do something like:
LocationChecker lc = new LocationChecker();
FutureTask<Location> ft = new FutureTask<Location>(lc);
ExecutorService es = Executors.newCachedThreadPool();
es.execute(ft);

Location loc = null; // must be initialized before the first read
int attempts = 0;
while (attempts < 10) {
    loc = ft.get(); // blocks until the Callable has returned
    if (loc != null) {
        break; // got a fix
    }
    // no fix yet: submit a fresh task and try again
    ft = new FutureTask<Location>(lc);
    es.execute(ft);
    attempts++;
}
es.shutdown();
This will run your location check and wait for its result, up to ten times.
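Since the original complaint was that Thread.sleep locks up the screen, another option is to schedule the retries on the main-thread Handler with postDelayed, so the UI keeps repainting between attempts. A sketch under assumptions: statusText is a TextView in the layout, and tryGetLocation() is a hypothetical helper wrapping the asker's provider logic that returns null on failure.

// Retry with a visible delay without ever blocking the UI thread.
private static final int MAX_ATTEMPTS = 10;
private static final long RETRY_DELAY_MS = 2000;
private final Handler handler = new Handler(Looper.getMainLooper());
private int attempt = 0;

private void scheduleLocationAttempt() {
    handler.postDelayed(new Runnable() {
        @Override
        public void run() {
            attempt++;
            Location loc = tryGetLocation(); // hypothetical helper
            if (loc != null) {
                onLocationChanged(loc);
            } else if (attempt < MAX_ATTEMPTS) {
                statusText.setText("Error, trying again " + attempt + "/" + MAX_ATTEMPTS);
                scheduleLocationAttempt(); // re-arm; screen stays responsive
            } else {
                statusText.setText("Failed after " + MAX_ATTEMPTS + " attempts");
            }
        }
    }, RETRY_DELAY_MS);
}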

Running multiple URLs at app start on BlackBerry

My requirement is to parse two URLs at application start; these URLs contain data that needs to be displayed in my application. I am doing this by keeping the two URLs in an array, running a for loop over them in a background thread, and then inserting the values into the database on that same thread. Is this the correct way of approaching the problem?
I have posted my code below; help of any kind is welcome :)
public StartConnecton(SplashScreen splashScreen)
{
    urls = new String[2];
    urls[0] = "http:xxxxxx.com";
    urls[1] = "http:yyy.com";
    _dbIRef = new ClassDatabase(1);
    _dbIRef.setSID(46);
    _splashScreen = (SplashScreen) splashScreen;
    _classDatabase = new ClassDatabase();
}
public void run()
{
    int size = urls.length;
    for (int i = 0; i < size; i++)
    {
        if (i == 0)
        {
            _id = 1;
        } else if (i == 1)
        {
            _id = 0;
        }
        try {
            String conn = this.getConnectionString();
            con = (HttpConnection) Connector.open(urls[i] + getConnectionString());
            con.setRequestMethod(HttpConnection.GET);
            con.setRequestProperty("User-Agent", "Profile/MIDP-1.0 Configuration/CLDC-1.0");
            System.out.println("CONNECTION!!!!!!!!!!!" + con);
            code = con.getResponseCode();
            System.out.println("CODE!!!!!!!!!!!" + code + "ID" + _id);
            if (code == HttpConnection.HTTP_OK)
            {
                is = con.openInputStream();
                int length = (int) con.getLength();
                new Parser(is, _id);
                is.close();
                con.close();
            }
        } catch (Exception e)
        {
            System.out.println("EXCEPTION!!!!!!!!!!" + e);
        }
    }
    _classDatabase.delete("Delete from topnews where sid = 46");
    _classDatabase.insertTopNews();
    _classDatabase.insertTabBar();
    _classDatabase.insertGalleryInfo();
    _topNewsScreen = new TopNewsScreen("TopNews");
    _splashScreen.swapScreen(_topNewsScreen);
}
The problems you have at the moment are:
1. The connections are instantiated sequentially. If the first one fails (server not there, BlackBerry MDS servers down, etc.) then you'll have to wait around 30 seconds for the Connector.open request to time out before the second connection is tried.
2. The UI will freeze during connection attempts. I'm guessing you're doing this on the event thread as well, which means the app will freeze while Connector.open is running, because this method blocks.
The solution to both of the above problems is to wrap each connection attempt in a separate Thread. Here's a nice example: http://mnarinsky.blogspot.com/2011/03/blackberry-sending-http-request-in.html (there is also a sketch after this list).
3. Redundant code. What is that if (i == 0) block of code doing? If all you're trying to do is make _id = 1 when i == 0, then just do _id = (i == 0) ? 1 : 0;. Alternatively, reverse the order in which you put the URLs into your array, just use i, and remove the _id variable entirely.
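A minimal sketch of that per-connection thread approach (the FetchThread class and its fields are illustrative, not the asker's code; Parser is the asker's existing class):

import java.io.InputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

// One thread per URL: a slow or failed connection no longer delays the
// other one, and the event thread never blocks in Connector.open.
class FetchThread extends Thread {
    private final String url;
    private final int id;

    FetchThread(String url, int id) {
        this.url = url;
        this.id = id;
    }

    public void run() {
        HttpConnection con = null;
        InputStream is = null;
        try {
            con = (HttpConnection) Connector.open(url);
            con.setRequestMethod(HttpConnection.GET);
            if (con.getResponseCode() == HttpConnection.HTTP_OK) {
                is = con.openInputStream();
                new Parser(is, id); // the asker's existing parser
            }
        } catch (Exception e) {
            System.out.println("EXCEPTION " + e);
        } finally {
            try { if (is != null) is.close(); } catch (Exception ignored) { }
            try { if (con != null) con.close(); } catch (Exception ignored) { }
        }
    }
}

Usage would be to start both fetches in parallel and join before the database writes, e.g. t0.start(); t1.start(); t0.join(); t1.join(); and only then run the insert calls and the screen swap.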
