Efficiently Implementing Java Native Interface Webcam Feed

I'm working on a project that takes video input from a webcam and displays regions of motion to the user. My "beta" attempt at this project was to use the Java Media Framework to retrieve the webcam feed. Through some utility functions, JMF conveniently returns webcam frames as BufferedImages, which I built a significant amount of framework around to process. However, I soon realized that JMF isn't well supported by Sun/Oracle anymore, and some of the higher webcam resolutions (720p) are not accessible through the JMF interface.
I'd like to continue processing frames as BufferedImages, and use OpenCV (C++) to source the video feed. Using OpenCV's framework alone, I've found that OpenCV does a good job of efficiently returning high-def webcam frames and painting them to screen.
I figured it would be pretty straightforward to feed this data into Java and achieve the same efficiency. I just finished writing the JNI DLL to copy this data into a BufferedImage and return it to Java. However, I'm finding that the amount of data copying I'm doing is really hindering performance. I'm targeting 30 FPS, but it takes roughly 100 msec alone to even copy the data from the char array returned by OpenCV into a Java BufferedImage. Instead, I'm seeing about 2-5 FPS.
When returning a frame capture, OpenCV provides a pointer to a 1D char array. This data needs to be handed to Java, and it appears I can't afford the time it takes to copy it.
I need a better solution to get these frame captures into a BufferedImage. A few solutions I'm considering, none of which I think are very good (fairly certain they would also perform poorly):
(1) Override BufferedImage, and return pixel data from various BufferedImage methods by making native calls to the DLL. (Instead of doing the array copying at once, I return individual pixels as requested by the calling code). Note that calling code typically needs all pixels in the image to paint the image or process it, so this individual pixel-grab operation would be implemented in a 2D for-loop.
(2) Instruct the BufferedImage to use a java.nio.ByteBuffer to somehow directly access data in the char array returned by OpenCV. Would appreciate any tips as to how this is done.
(3) Do everything in C++ and forget Java. Well well, yes this does sound like the most logical solution, however I will not have time to start this many-month project from scratch.
As of now, my JNI code has been written to return the BufferedImage, however at this point I'm willing to accept the return of a 1D char array and then put it into a BufferedImage.
By the way... the question here is: What is the most efficient way to copy a 1D char array of image data into a BufferedImage?
Provided is the (inefficient) code that I use to source an image from OpenCV and copy it into a BufferedImage:
JNIEXPORT jobject JNICALL Java_graphicanalyzer_ImageFeedOpenCV_getFrame
  (JNIEnv * env, jobject jThis, jobject camera)
{
    //get the memory address of the CvCapture device, the value of which is encapsulated in the camera jobject
    jclass cameraClass = env->FindClass("graphicanalyzer/Camera");
    jfieldID fid = env->GetFieldID(cameraClass,"pCvCapture","I");
    //get the address of the CvCapture device
    int a_pCvCapture = (int)env->GetIntField(camera, fid);
    //get a pointer to the CvCapture device
    CvCapture *capture = (CvCapture*)a_pCvCapture;
    //get a frame from the CvCapture device
    IplImage *frame = cvQueryFrame( capture );
    //get a handle on the BufferedImage class
    jclass bufferedImageClass = env->FindClass("java/awt/image/BufferedImage");
    if (bufferedImageClass == NULL)
    {
        return NULL;
    }
    //get a handle on the BufferedImage(int width, int height, int imageType) constructor
    jmethodID bufferedImageConstructor = env->GetMethodID(bufferedImageClass,"<init>","(III)V");
    //get the field ID of BufferedImage.TYPE_INT_RGB
    jfieldID imageTypeFieldID = env->GetStaticFieldID(bufferedImageClass,"TYPE_INT_RGB","I");
    //get the int value from the BufferedImage.TYPE_INT_RGB field
    jint imageTypeIntRGB = env->GetStaticIntField(bufferedImageClass,imageTypeFieldID);
    //create a new BufferedImage
    jobject ret = env->NewObject(bufferedImageClass, bufferedImageConstructor, (jint)frame->width, (jint)frame->height, imageTypeIntRGB);
    //get a handle on the method BufferedImage.getRaster()
    jmethodID getWritableRasterID = env->GetMethodID(bufferedImageClass, "getRaster", "()Ljava/awt/image/WritableRaster;");
    //call the BufferedImage.getRaster() method
    jobject writableRaster = env->CallObjectMethod(ret,getWritableRasterID);
    //get a handle on the WritableRaster class
    jclass writableRasterClass = env->FindClass("java/awt/image/WritableRaster");
    //get a handle on the WritableRaster.setPixel(int x, int y, int[] rgb) method
    jmethodID setPixelID = env->GetMethodID(writableRasterClass, "setPixel", "(II[I)V"); //void setPixel(int, int, int[])
    //iterate through the frame we got above and set each pixel within the WritableRaster
    jintArray rgbArray = env->NewIntArray(3);
    jint rgb[3];
    char *px;
    for (jint x=0; x < frame->width; x++)
    {
        for (jint y=0; y < frame->height; y++)
        {
            px = frame->imageData+(frame->widthStep*y+x*frame->nChannels);
            rgb[0] = abs(px[2]); // OpenCV returns BGR byte order
            rgb[1] = abs(px[1]); // OpenCV returns BGR byte order
            rgb[2] = abs(px[0]); // OpenCV returns BGR byte order
            //copy jint array into jintArray
            env->SetIntArrayRegion(rgbArray,0,3,rgb); //take values in rgb and move to rgbArray
            //call setPixel() - this is a copy operation
            env->CallVoidMethod(writableRaster,setPixelID,x,y,rgbArray);
        }
    }
    return ret; //return the BufferedImage
}

There is another option if you wish to make your code really fast and still use Java. The AWT windowing toolkit has a direct native interface you can use to draw to an AWT surface using C or C++. Thus, there would be no need to copy anything to Java, as you could render directly from the buffer in C or C++. I am not sure of the specifics on how to do this because I have not looked at it in a while, but I know that it is included in the standard JRE distribution. Using this method, you could probably approach the FPS limit of the camera if you wished, rather than struggling to reach 30 FPS.
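To give a flavour of what this involves, here is a minimal sketch of the JAWT locking sequence in C (the class graphicanalyzer.VideoPanel and its native paintNative method are hypothetical names; error handling is trimmed):
#include <jawt.h>
#include <jawt_md.h>

JNIEXPORT void JNICALL Java_graphicanalyzer_VideoPanel_paintNative
  (JNIEnv *env, jobject canvas)
{
    JAWT awt;
    awt.version = JAWT_VERSION_1_4;
    if (JAWT_GetAWT(env, &awt) == JNI_FALSE)
        return;
    //obtain and lock the drawing surface of the AWT component
    JAWT_DrawingSurface *ds = awt.GetDrawingSurface(env, canvas);
    if (ds == NULL)
        return;
    if ((ds->Lock(ds) & JAWT_LOCK_ERROR) == 0)
    {
        JAWT_DrawingSurfaceInfo *dsi = ds->GetDrawingSurfaceInfo(ds);
        //dsi->platformInfo is platform specific, e.g. a JAWT_Win32DrawingSurfaceInfo*
        //exposing an HDC on Windows; blit the OpenCV frame buffer into it here
        ds->FreeDrawingSurfaceInfo(dsi);
        ds->Unlock(ds);
    }
    awt.FreeDrawingSurface(ds);
}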
If you want to research this further I would start here and here.
Happy Programming!

I would construct the RGB int array required by BufferedImage and then use a single call to
void setRGB(int startX, int startY, int w, int h, int[] rgbArray, int offset, int scansize)
to set the entire image data array at once. Or at least, large portions of it.
Without having timed it, I would suspect that it's the per-pixel calls to
env->SetIntArrayRegion(rgbArray,0,3,rgb);
env->CallVoidMethod(writableRaster,setPixelID,x,y,rgbArray);
which are taking the lion's share of the time.
EDIT: It is likely the method invocations, rather than the manipulation of memory per se, that are taking the time. So build the data in your JNI code and copy it to the Java image in blocks, or in a single hit. Once you create and pin a Java int[] you can access it via native pointers. Then one call to setRGB will copy the array into your image.
Note: You do still have to copy the data at least once, but doing all pixels in one hit via 1 function call will be vastly more efficient than doing them individually via 2 x N function calls.
EDIT 2:
Reviewing my JNI code, I have only ever used byte arrays, but the principles are the same for int arrays. Use:
NewIntArray
to create an int array, and
GetIntArrayElements
to pin it and get a pointer, and when you are done,
ReleaseIntArrayElements
to release it, remembering to use the flag to copy data back to Java's memory heap.
Then, you should be able to use your Java int array handle to invoke the setRGB function.
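A rough sketch of that approach, assuming the question's getFrame is reworked to return a packed int[] (the method name getFramePixels is made up for illustration, and capture is obtained as in the question):
JNIEXPORT jintArray JNICALL Java_graphicanalyzer_ImageFeedOpenCV_getFramePixels
  (JNIEnv * env, jobject jThis, jobject camera)
{
    IplImage *frame = cvQueryFrame( capture );
    jintArray pixels = env->NewIntArray(frame->width * frame->height);
    jint *buf = env->GetIntArrayElements(pixels, NULL); //pin and get a native pointer
    for (jint y = 0; y < frame->height; y++)
    {
        unsigned char *row = (unsigned char*)(frame->imageData + y * frame->widthStep);
        for (jint x = 0; x < frame->width; x++)
        {
            unsigned char *px = row + x * frame->nChannels;
            //pack the BGR bytes into one TYPE_INT_RGB pixel
            buf[y * frame->width + x] = (px[2] << 16) | (px[1] << 8) | px[0];
        }
    }
    env->ReleaseIntArrayElements(pixels, buf, 0); //0 = copy back to the Java heap and release
    return pixels;
}
The Java side then needs only a single call: img.setRGB(0, 0, width, height, pixels, 0, width).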
Remember also that this is actually setting RGBA pixels, so 4 channels including alpha, not just three (the RGB names in Java seem to predate the alpha channel, but most of the so-named methods are compatible with a 32-bit value).

As a secondary consideration, if the only difference between the image data array returned by OpenCV and what Java requires is BGR vs. RGB ordering, then
px = frame->imageData+(frame->widthStep*y+x*frame->nChannels);
rgb[0] = abs(px[2]); // OpenCV returns BGR byte order
rgb[1] = abs(px[1]); // OpenCV returns BGR byte order
rgb[2] = abs(px[0]); // OpenCV returns BGR byte order
is a relatively inefficient way to convert them. Instead you could do something like:
unsigned char *px = (unsigned char*)(frame->imageData + (frame->widthStep*y + x*frame->nChannels));
javaArray[ofs] = (px[2] << 16) | (px[1] << 8) | px[0]; //swap the BGR bytes into one packed RGB int
(note my C code is rusty, so this might not be entirely valid, but it shows what is needed).

Managed to speed up the process using an NIO ByteBuffer.
On the C++ JNI side...
JNIEXPORT jobject JNICALL Java_graphicanalyzer_ImageFeedOpenCV_getFrame
  (JNIEnv * env, jobject jThis, jobject camera)
{
    //...
    IplImage *frame = cvQueryFrame(pCaptureDevice);
    //wrap OpenCV's internal frame buffer; note the buffer is only valid
    //until the next cvQueryFrame() call, so consume it promptly on the Java side
    jobject byteBuf = env->NewDirectByteBuffer(frame->imageData, frame->imageSize);
    return byteBuf;
}
and on the Java side...
void getFrame(Camera cam)
{
    ByteBuffer frameData = cam.getFrame(); //NATIVE call
    byte[] imgArray = new byte[frameData.capacity()];
    frameData.get(imgArray); //although it seems like an array copy, this call returns very quickly
    DataBufferByte frameDataBuf = new DataBufferByte(imgArray,imgArray.length);
    //determine image sample model characteristics
    int dataType = DataBuffer.TYPE_BYTE;
    int width = cam.getFrameWidth();
    int height = cam.getFrameHeight();
    int pixelStride = cam.getPixelStride();
    int scanlineStride = cam.getScanlineStride();
    int[] bandOffsets = new int[] {2,1,0}; //BGR
    //create a WritableRaster with the DataBufferByte
    PixelInterleavedSampleModel pism = new PixelInterleavedSampleModel
    (
        dataType,
        width,
        height,
        pixelStride,
        scanlineStride,
        bandOffsets
    );
    //ImgFeedWritableRaster is a small WritableRaster subclass that exposes the protected constructor
    WritableRaster raster = new ImgFeedWritableRaster( pism, frameDataBuf, new Point(0,0) );
    //create the BufferedImage
    ColorSpace cs = ColorSpace.getInstance(ColorSpace.CS_sRGB);
    ComponentColorModel cm = new ComponentColorModel(cs, false, false, Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
    BufferedImage newImg = new BufferedImage(cm,raster,false,null);
    handleNewImage(newImg);
}
Using the java.nio.ByteBuffer, I can quickly address the char array returned by the OpenCV code without (apparently) doing much gruesome array copying.

Related

Fast transformation of BGR BufferedImage to YUV using FFMpeg

I wanted to transform a TYPE_3BYTE_BGR BufferedImage in Java to YUV using the sws_scale function of FFMpeg through JNI. I first extract the data of my image from the BufferedImage as
byte[] imgData = ((DataBufferByte) myImage.getRaster().getDataBuffer()).getData();
byte[] output = processImage(imgData, 0);
Then I pass it to the processImage function which is a native function. The C++ side looks like this:
JNIEXPORT jbyteArray JNICALL Java_jni_JniExample_processData
  (JNIEnv *env, jobject obj, jbyteArray data, jint index)
{
    jboolean isCopy;
    uint8_t *test = (uint8_t *)env->GetPrimitiveArrayCritical(data, &isCopy);
    uint8_t *inData[1]; // RGB24 has one plane
    inData[0] = test;
    SwsContext *ctx = sws_getContext(width, height, AV_PIX_FMT_BGR24, (int)width, (int)width,
                                     AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
    int lumaPlaneSize = width * height;
    uint8_t *yuv[3];
    yuv[0] = new uint8_t[lumaPlaneSize];
    yuv[1] = new uint8_t[lumaPlaneSize/4];
    yuv[2] = new uint8_t[lumaPlaneSize/4];
    int inLinesize[1] = { 3*nvEncoder->width }; // RGB stride
    int outLinesize[3] = { 3*width, 3*width, 3*width }; // YUV stride
    sws_scale(ctx, inData, inLinesize, 0, height, yuv, outLinesize);
However, after running the code, I get the warning [swscaler @ 0x7fb598659480] Warning: data is not aligned! This can lead to a speedloss, and everything crashes on the last line. Am I doing things properly in terms of passing the correct arguments to sws_scale? (especially the strides).
Update:
There was a separate bug here: the destination dimensions passed to sws_getContext were (int)width, (int)width, which should have been (int)width, (int)height.
The first problem I see - wrong strides for output image:
yuv[0] = new uint8_t[lumaPlaneSize];
yuv[1] = new uint8_t[lumaPlaneSize/4];
yuv[2] = new uint8_t[lumaPlaneSize/4];
int inLinesize[1] = { 3*nvEncoder->width }; // RGB stride
int outLinesize[3] = { 3*width ,3*width ,3*width }; // YUV stride
// ^^^^^^^ ^^^^^^^ ^^^^^^^
The allocated planes are not large enough for the passed strides. YUV420 uses one byte per sample in each plane, so the factor of 3 is wrong and leads to out-of-bounds access, because the rescaler skips a large stretch of memory whenever it advances to the next line. Also, the actual chroma width is half the luma width, so if you want tightly packed luma and chroma planes without gaps at the ends of lines, use:
int outLinesize[3] = { width , width / 2 , width / 2 }; // YUV stride
Allocation sizes remain the same.
Looking at the source, in particular around line 321, you get that warning message if your system supports AVX2 instructions and the various pointers and sizes are not multiples of 16. The crash is probably occurring because the arrays you pass in, inData, inLineSize, and outLinesize, are not the right size. The pointer arrays need to have at least 3 elements, and the stride arrays need 4. Somewhere in sws_scale it is accessing inData[1] which is outside the bounds of your array resulting in a bad pointer.
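Putting the two answers together, here is a corrected sketch of the buffer setup (plane sizes per the first answer, defensively sized arrays per the second; width and height assumed even, other names from the question):
int lumaPlaneSize = width * height;
uint8_t *yuv[3];
yuv[0] = new uint8_t[lumaPlaneSize];     // Y: width x height
yuv[1] = new uint8_t[lumaPlaneSize / 4]; // U: (width/2) x (height/2)
yuv[2] = new uint8_t[lumaPlaneSize / 4]; // V: (width/2) x (height/2)

uint8_t *inData[4]      = { test, NULL, NULL, NULL };         // BGR24 has a single plane
int      inLinesize[4]  = { 3 * width, 0, 0, 0 };             // 3 bytes per pixel
uint8_t *outData[4]     = { yuv[0], yuv[1], yuv[2], NULL };
int      outLinesize[4] = { width, width / 2, width / 2, 0 }; // 1 byte per sample per plane

sws_scale(ctx, inData, inLinesize, 0, height, outData, outLinesize);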

How to use ScriptIntrinsic3DLUT with a .cube file?

First, I'm new to image processing in Android. I have a .cube file that was "Generated by Resolve" with LUT_3D_SIZE 33. I'm trying to use android.support.v8.renderscript.ScriptIntrinsic3DLUT to apply the lookup table to process an image. I assume that I should use ScriptIntrinsic3DLUT and NOT android.support.v8.renderscript.ScriptIntrinsicLUT, correct?
I'm having problems finding sample code to do this so this is what I've pieced together so far. The issue I'm having is how to create an Allocation based on my .cube file?
...
final RenderScript renderScript = RenderScript.create(getApplicationContext());
final ScriptIntrinsic3DLUT scriptIntrinsic3DLUT = ScriptIntrinsic3DLUT.create(renderScript, Element.U8_4(renderScript));
// How to create an Allocation from .cube file?
//final Allocation allocationLut = Allocation.createXXX();
scriptIntrinsic3DLUT.setLUT(allocationLut);
Bitmap bitmapIn = selectedImage;
Bitmap bitmapOut = selectedImage.copy(bitmapIn.getConfig(),true);
Allocation aIn = Allocation.createFromBitmap(renderScript, bitmapIn);
Allocation aOut = Allocation.createTyped(renderScript, aIn.getType());
scriptIntrinsic3DLUT.forEach(aIn, aOut); //apply the LUT
aOut.copyTo(bitmapOut);
imageView.setImageBitmap(bitmapOut);
...
Any thoughts?
Parsing the .cube file
First, you should parse the .cube file.
OpenColorIO shows how to do this in C++. It has code to parse LUT files like .cube, .lut, etc.
For example, FileFormatIridasCube.cpp shows how to process a .cube file. You can easily get the size through LUT_3D_SIZE.
I have contacted an image processing algorithm engineer. This is what he said:
Generally in the industry a 17^3 cube is considered preview, 33^3 normal and 65^3 for highest quality output.
Note that in a .cube file, we can get 3*LUT_3D_SIZE^3 floats.
The key point is what to do with the float array; we cannot hand this array to the cube in ScriptIntrinsic3DLUT's Allocation as-is. We need to process the float array first.
Handle the data in .cube file
As we know, each RGB component is an 8-bit int at 8-bit depth. R is in the high 8 bits, G in the middle, and B in the low 8 bits, so a 24-bit int can contain all three components at once.
In a .cube file, each data line contains 3 floats.
Please note: the blue component goes first!!!
I get this conclusion from trial and error. (Or someone can give a more accurate explanation.)
Each float represents the coefficient of the component relative to 255, so we need to calculate the real value from these three components:
int getRGBColorValue(float b, float g, float r) {
    //clamp restricts a value to [0, 1], e.g. Math.min(Math.max(v, 0.f), 1.f)
    int bcol = (int) (255 * clamp(b, 0.f, 1.f));
    int gcol = (int) (255 * clamp(g, 0.f, 1.f));
    int rcol = (int) (255 * clamp(r, 0.f, 1.f));
    return bcol | (gcol << 8) | (rcol << 16);
}
So we can get one integer from each data line of 3 floats. Finally, we get an integer array whose length is LUT_3D_SIZE^3; this is the array to be applied to the cube.
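A minimal parser sketch along those lines (assumptions: the file follows the Resolve .cube layout described above, DOMAIN_MIN/DOMAIN_MAX handling is omitted, and getRGBColorValue is the method shown above):
int[] parseCubeFile(java.io.BufferedReader reader) throws java.io.IOException {
    int[] lut = null;
    int i = 0;
    String line;
    while ((line = reader.readLine()) != null) {
        line = line.trim();
        if (line.isEmpty() || line.startsWith("#")) continue; //skip blanks and comments
        if (line.startsWith("LUT_3D_SIZE")) {
            int size = Integer.parseInt(line.split("\\s+")[1]);
            lut = new int[size * size * size];
        } else if (lut != null && (Character.isDigit(line.charAt(0)) || line.charAt(0) == '-')) {
            //a data line: 3 floats in R G B order
            String[] parts = line.split("\\s+");
            float r = Float.parseFloat(parts[0]);
            float g = Float.parseFloat(parts[1]);
            float b = Float.parseFloat(parts[2]);
            lut[i++] = getRGBColorValue(b, g, r); //pack one data line into one int
        }
    }
    return lut;
}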
ScriptIntrinsic3DLUT
RsLutDemo shows how to apply ScriptIntrinsic3DLUT.
RenderScript mRs;
Bitmap mBitmap;
Bitmap mLutBitmap;
ScriptIntrinsic3DLUT mScriptlut;
Bitmap mOutputBitmap;
Allocation mAllocIn;
Allocation mAllocOut;
Allocation mAllocCube;
...
int redDim, greenDim, blueDim;
int[] lut;
if (mScriptlut == null) {
    mScriptlut = ScriptIntrinsic3DLUT.create(mRs, Element.U8_4(mRs));
}
if (mBitmap == null) {
    mBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.bugs);
    mOutputBitmap = Bitmap.createBitmap(mBitmap.getWidth(), mBitmap.getHeight(), mBitmap.getConfig());
    mAllocIn = Allocation.createFromBitmap(mRs, mBitmap);
    mAllocOut = Allocation.createFromBitmap(mRs, mOutputBitmap);
}
...
// get the expected lut[] from .cube file.
...
Type.Builder tb = new Type.Builder(mRs, Element.U8_4(mRs));
tb.setX(redDim).setY(greenDim).setZ(blueDim);
Type t = tb.create();
mAllocCube = Allocation.createTyped(mRs, t);
mAllocCube.copyFromUnchecked(lut);
mScriptlut.setLUT(mAllocCube);
mScriptlut.forEach(mAllocIn, mAllocOut);
mAllocOut.copyTo(mOutputBitmap);
Demo
I have finished a demo to show the work.
You can view it on Github.
Thanks.
With a 3D LUT, yes, you have to use the core framework version, as there is no support library version of ScriptIntrinsic3DLUT at this time. Your 3D LUT allocation would have to be created by parsing the file appropriately; there is no built-in support for .cube files (or any other 3D LUT format).

ImageJ 16-bit Signed BufferedImage

My question is: how do I get a 16-bit BufferedImage from an ij.ImagePlus? If I try to get one using ShortProcessor, it changes my signed image to unsigned, so I am not getting the original image.
How does ImageJ display a 16-bit signed image in its viewer, when we can only get an 8-bit BufferedImage or a 16-bit unsigned BufferedImage? How can I get a 16-bit signed BufferedImage? Thanks in advance; can anyone provide a solution?
ImageJ can represent a signed 16-bit type using a special Calibration function. The isSigned16Bit() method indicates when that specific calibration function is in use—it is a linear m*x+b calibration where m=1 and b=-32768; this can be seen in the ImageJ source code.
ImageJ provides a way to obtain a BufferedImage from an ImagePlus via the getImage() method. However, this always returns an 8-bit BufferedImage.
So the next approach is to create your own BufferedImage with type DataBuffer.TYPE_SHORT, which wraps the same short[] array that backs the original ImagePlus object. Unfortunately, due to ImageJ's internal representation of signed 16-bit data, the values will be off by a constant offset of 32768—e.g., a raw value of -444 will be stored in ImageJ's short[] array as 32324. Due to this fact, you must manually adjust all your values before wrapping as a BufferedImage.
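For illustration, here is a sketch of that manual wrap for a single-band grayscale image using the standard java.awt.image classes (assuming w, h, and the offset-adjusted pix array from the example below):
// wrap an offset-adjusted short[] as a signed 16-bit grayscale BufferedImage
DataBufferShort buffer = new DataBufferShort(pix, pix.length);
SampleModel sm = new ComponentSampleModel(DataBuffer.TYPE_SHORT, w, h, 1, w, new int[] {0});
WritableRaster raster = Raster.createWritableRaster(sm, buffer, null);
ColorModel cm = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_GRAY),
        false, false, Transparency.OPAQUE, DataBuffer.TYPE_SHORT);
BufferedImage image = new BufferedImage(cm, raster, false, null);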
Here is some example code:
import io.scif.gui.AWTImageTools;
...
final ImagePlus imp = IJ.openImage("http://imagej.net/images/ct.dcm.zip");
// get pixels array reference
final short[] pix = (short[]) imp.getProcessor().getPixels();
final int w = imp.getWidth();
final int h = imp.getHeight();
final boolean signed = imp.getCalibration().isSigned16Bit();
if (signed) {
    // adjust raw pixel values
    for (int i=0; i<pix.length; i++) {
        pix[i] -= 32768;
    }
}
// convert to BufferedImage
final BufferedImage image = AWTImageTools.makeImage(pix, w, h, signed);
For the actual conversion of short[] to BufferedImage, this code makes use of the SCIFIO library's AWTImageTools.makeImage utility methods. SCIFIO is included with the Fiji distribution of ImageJ. Alternately, it is only a few lines of code which would be easy to copy and paste from the routine in question.

Using DataBuffer of BufferedImage to set pixels

I am trying to use the underlying DataBufferByte of a BufferedImage of type TYPE_3BYTE_BGR to set pixel values as quick as possible.
Perhaps I am not understanding, but when I do the following...
byte[] imgBytes = ((DataBufferByte) img.getData().getDataBuffer()).getData();
... it seems as though I am getting a copy of the byte[] and not a reference. For example, if I run...
System.out.println(System.identityHashCode(imgBytes));
System.out.println(System.identityHashCode(((DataBufferByte) img.getData().getDataBuffer()).getData()));
... I get two clearly different object hashes. If I'm not mistaken, this indicates that I am not getting a reference to the underlying byte[] but rather a copy. If this is the case, how am I supposed to edit the DataBufferByte directly???
Or perhaps I am just setting the pixels wrong... When I set pixels in the imgBytes it doesn't seem to do anything to the BufferedImage. Once I get the byte[], I set each pixel value like so:
imgBytes[intOffset] = byteBlue;
imgBytes[intOffset+1] = byteGreen;
imgBytes[intOffset+2] = byteRed;
To me, this all seems fine. I can read pixels just fine this way so it seems I should be able to write them the same way!
I had the same problem. Don't use getData(); use getRaster() instead. BufferedImage.getData() returns a copy of the image's raster, whereas getRaster() returns the live raster, whose backing array you can write through.
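A minimal sketch of the difference, assuming a TYPE_3BYTE_BGR image as in the question:
BufferedImage img = new BufferedImage(640, 480, BufferedImage.TYPE_3BYTE_BGR);
// getRaster() exposes the live raster; writes through its buffer show up in the image
byte[] live = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
int offset = 0; // pixel (0,0); TYPE_3BYTE_BGR stores blue first
live[offset]     = (byte) 255; // blue
live[offset + 1] = 0;          // green
live[offset + 2] = 0;          // red
// getData() returns a copy; writes to this array do NOT affect the image
byte[] copy = ((DataBufferByte) img.getData().getDataBuffer()).getData();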
I once played around with pixel manipulations for Images in Java. Instead of directly answering your question I will offer an alternative solution to your problem. You can do the following to create an array of pixels to manipulate:
final int width = 800;
final int height = 600;
final int[] pixels = new int[width * height]; // 0xAARRGGBB
MemoryImageSource source = new MemoryImageSource(width, height, pixels, 0, width);
source.setAnimated(true);
source.setFullBufferUpdates(true);
Image image = Toolkit.getDefaultToolkit().createImage(source);
image.setAccelerationPriority(1f);
Then to draw the image, you can simply call the drawImage method from the Graphics class.
There are a few other ways to achieve what you are looking for, but this method was the simplest to me.
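One follow-up worth noting: because the source was created with setAnimated(true), you can push each new frame to the Image after mutating the pixels array by calling the standard MemoryImageSource update method:
// after writing new values into pixels[]
source.newPixels();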
Here is how DataBufferByte.getData() is implemented in JDK 7; it returns the array by reference, so you may have an error somewhere else if this doesn't work for you.
public byte[] getData() {
    theTrackable.setUntrackable();
    return data;
}

Android memory management calling Java class from native JNI and declaring data (for image conversion)

I am writing native code for Android where I wish to uncompress a block of data. I am calling a Java method from a native JNI function. This Java method calls BitmapFactory and then tries to allocate some memory with:
int[] pixels = new int[width * height];
at which point the program crashes or stalls, giving me "spin on suspend", followed by much spew in logcat before the VM shuts down.
Just to give the big picture before diving into details, I'm trying to turn a byte array of a compressed image into a width*height*bpp uncompressed image to give back to native code. I am trying to use the Android Java BitmapFactory to do the decompression which seems to work ok. The compressed block of bytes is of a JPG or PNG image that was loaded into native code earlier (and not readable from disk in my system).
The sequence is a bit complicated, I'm not sure what's relevant so I'll say it all:
The class MyRenderer implements GLSurfaceView.Renderer; its onSurfaceCreated method calls native init(), which is declared in the MyRenderer class:
static {
    System.loadLibrary("glprog");
}
public native void init(int dummy_variable);
Down inside native code, this is the function called:
JNIEXPORT void JNICALL Java_com_pitransviewersingleimageres_MyRenderer_init(JNIEnv * env, jobject obj, jint dummy_variable) {
    glprog_init(dummy_variable);
}
which then calls another native function, char glprog_init(int dummy_variable), which itself calls the native function
wrapper_uncompress_image_by_os(overlay_compressed_data, overlay_compressed_data_len, ...
here is the native function:
char wrapper_uncompress_image_by_os(unsigned char *compressed_data, int compressed_len, //input compressed data
                                    char *data_name_string, char *suffix, //name and type
                                    int slot_num) {
    jstring jnamestring = (*preset_env)->NewStringUTF(preset_env, data_name_string);
    jstring jsuffix = (*preset_env)->NewStringUTF(preset_env, suffix);
    //create byte[] and copy data in
    jbyteArray retArray = (*preset_env)->NewByteArray(preset_env, compressed_len);
    if (retArray == NULL) { __android_log_write(ANDROID_LOG_INFO, "NATIVE", "ERROR wrapper_uncompress_image_by_os: calling NewByteArray()"); return -1; }
    jbyte *javaptr = (*preset_env)->GetPrimitiveArrayCritical(preset_env, (jarray)retArray, 0);
    if (javaptr == NULL) { __android_log_write(ANDROID_LOG_INFO, "NATIVE", "ERROR wrapper_uncompress_image_by_os: calling GetPrimitiveArrayCritical()"); return -1; }
    memcpy(javaptr, compressed_data, compressed_len);
    (*preset_env)->ReleasePrimitiveArrayCritical(preset_env, retArray, javaptr, 0);
    if ((preset_javaobject != NULL) && (preset_method_id_convertimagefromnative != NULL))
    {
        jthrowable exception;
        //call java function
        (*preset_env)->CallVoidMethod(preset_env, preset_javaobject, preset_method_id_convertimagefromnative,
                                      retArray, compressed_len,
                                      jnamestring, jsuffix, slot_num);
This calls the method convertImageFromNative() in the same Java class, MyRenderer, that called native init().
public void convertImageFromNative(final byte[] data, int data_len, final String data_name_string, final String suffix, final int slot_num) {
    //writeFile(data,data_name_string);
    Bitmap bitmapqq = bytes2bitmap(data);
}
This appears to work so far; if I uncomment writeFile() it writes all the data to my SD card correctly. It is inside bytes2bitmap() that I have problems.
The bytes2bitmap() method can be called from 'regular' Java just fine, but when it is called from this path, which was invoked from native code, it crashes at the int[] pixels = new int[width * height]; line.
So finally here is the code that causes the crash:
Bitmap bytes2bitmap(byte[] bytes) {
    Log.d("startup", "Entered MyRenderer.bytes2bitmap()");
    BitmapFactory.Options opt = new BitmapFactory.Options();
    opt.inDither = true;
    opt.inPreferredConfig = Bitmap.Config.ARGB_8888;
    Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, opt);
    if (bitmap == null) Log.d("running", "ERROR: GLESActivity.java: BitmapFactory() returns null");
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    bytes = null; //not sure if I should be freeing memory here, but it seems to work
    int[] pixels = new int[width * height]; //crash happens here
    //crashes before doing the following
    bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
    pixels = null;
    return bitmap;
}
I have not yet got to the part of passing the uncompressed data properly back down into native code, but thought I should address this first.
I could create an int[] array down in native code to pass up; the problem is that I don't know the uncompressed image size.
Thanks in advance,
Mark
I fixed my problem; it was a mistake in code I didn't show.
The problem was in the (*preset_env)->CallVoidMethod(preset_env, preset_javaobject, preset_method_id_convertimagefromnative, ...) call in native code: the variables preset_env, preset_javaobject, and preset_method_id_convertimagefromnative were set incorrectly. I set them once at startup, to be reused later, using the native function _setupnative2java(). I have three classes in my project; my mistake was calling _setupnative2java() from my first Activity class, and not from the MyRenderer class that actually contains the Java method bytes2bitmap(). During debugging I had also put another copy of this function, with the same name, in the first Activity class, so it sort of worked but had its pointers wrong. Strangely, modifications and logcat calls in the MyRenderer copy would still work.
I actually did call it through the MyRenderer instance, but from within the startup Activity class (instancename.MyRenderer.setupnative2java), and perhaps because it was called from that other class it didn't pick up the pointers for MyRenderer. I'm also not sure how 'initialized' MyRenderer really was at that point. My apologies if you spent brainpower on this.
void Java_com_pitransviewersingleimageres_MyRenderer_setupnative2java(JNIEnv* env, jobject javaThis, jint unused_variable)
{
    jthrowable exception;
    jclass class_id;
    //note: caching env and javaThis raw like this is fragile; see the sketch below
    preset_env = env;
    preset_javaobject = javaThis;
    class_id = (*env)->GetObjectClass(env, javaThis);
    if (class_id != NULL)
    {
        preset_method_id_convertimagefromnative = (*env)->GetMethodID(env, class_id, "convertImageFromNative", "([BILjava/lang/String;Ljava/lang/String;I)V");
    }
    ...
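One caution on this pattern: a jobject argument is only a local reference that dies when the native call returns, and a JNIEnv* is only valid on the thread that produced it. A safer caching sketch (the JNI_OnLoad / JavaVM* portion is an assumption, not from the question):
static JavaVM *cached_jvm;
static jobject preset_javaobject;

jint JNI_OnLoad(JavaVM *vm, void *reserved)
{
    cached_jvm = vm; //a JavaVM* stays valid for the life of the process
    return JNI_VERSION_1_6;
}

void Java_com_pitransviewersingleimageres_MyRenderer_setupnative2java(JNIEnv *env, jobject javaThis, jint unused_variable)
{
    //promote the local ref to a global ref so it survives after this call returns
    preset_javaobject = (*env)->NewGlobalRef(env, javaThis);
}

//on any later call or thread, re-obtain a valid JNIEnv instead of caching one
JNIEnv *getEnv(void)
{
    JNIEnv *env = NULL;
    (*cached_jvm)->AttachCurrentThread(cached_jvm, &env, NULL);
    return env;
}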
