Launching Java main method from C with JNI causes OpenGL error

I'm creating a program in Java that will interface with a C library to take images from hardware and display them with OpenGL (using JOGL). So the workflow is this:
Hardware -> C -> disk image file -> Java -> JOGL
I have the Java -> JOGL part working fine: the images are displayed fully and I can load multiple images at a time. The Hardware -> C part is also working, and a temporary viewer in C shows that the images are being created just fine.
The crux of the problem is this: I want to be able to launch the main() method of the Java program from C and then display the image using only JNI code in C (using static methods I've created). However, when I do this, the image is truncated: I only get the top 20 or so rows. I know that I'm loading the entire image, because I can check the value of every pixel in it. Only the display is truncated, and the same number of rows is shown for every image I load.
This is the C code in a nutshell:
int main () { ...
    HMODULE * jvm_dll;
    //HWND hWnd = GetConsoleWindow();
    //ShowWindow( hWnd, SW_HIDE );

    env = create_vm(&jvm, jvm_dll);
    if (env == NULL) { return 1; }

    //Create a new String array with one blank String
    cls  = (*env)->FindClass(env, "java/lang/String");
    arg  = (*env)->NewStringUTF(env, "");
    args = (jobjectArray)(*env)->NewObjectArray(env, 1, cls, arg);

    //Call the main method with the String array argument
    cls = (*env)->FindClass(env, "path/to/package/program");
    mID = (*env)->GetStaticMethodID(env, cls, "main", "([Ljava/lang/String;)V");
    (*env)->CallStaticVoidMethod(env, cls, mID, args);
    PrintStackTrace(env);

    blockAndClose(jvm, env, jvm_dll);
    return ret;
}
int blockAndClose() { ...
    int ret = 0;
    if (jvm == 0 || env == 0) {
        FreeLibrary(*jvm_dll);
        return 1;
    }
    ret = (*jvm)->DestroyJavaVM(jvm);
    if (jvm_dll) {
        FreeLibrary(*jvm_dll);
        jvm_dll = 0;
    }
    env = 0;
    jvm = 0;
    return ret;
}
I know that I've only posted the C portion, but the Java portion works when I run it purely from Java. At this point the C portion is simply a "launcher" of sorts, so I'm wondering why it affects how the code runs. Any suggestions?
EDIT:
Here's the code for loading images. I use JAI to load the image as a PlanarImage (the TiledImage class is a subclass).
tempI = JAI.create("fileload", path);
DataBufferUShort dbOut = null;
ColorModel CM = PlanarImage.getDefaultColorModel(DataBuffer.TYPE_USHORT, tempI.getNumBands());
SampleModel SM = CM.createCompatibleSampleModel(tempI.getWidth(), tempI.getHeight());

//Ensure that the buffer is in the internal format (USHORT)
if (tempI.getData().getDataBuffer().getDataType() != DataBuffer.TYPE_USHORT) {
    DataBuffer dBIn = tempI.getData().getDataBuffer();
    short[] dBPixels = new short[dBIn.getSize()];
    switch (dBIn.getDataType()) {
        case DataBuffer.TYPE_BYTE:
            DataBufferByte dBByte = (DataBufferByte) dBIn;
            byte[] pByte = dBByte.getData();
            //mask with 0xFF so values above 127 are treated as unsigned before scaling
            for (int b = 0; b < pByte.length; b++) {
                dBPixels[b] = (short) ((double) (pByte[b] & 0xFF) / 0xFF * 0xFFFF);
            }
            dbOut = new DataBufferUShort(dBPixels, dBPixels.length);
            break;
        case DataBuffer.TYPE_SHORT:
            DataBufferShort dBShort = (DataBufferShort) dBIn;
            dBPixels = dBShort.getData();
            dbOut = new DataBufferUShort(dBPixels, dBPixels.length);
            break;
    } //SWITCH DATA TYPE --END
    WritableRaster rs = Raster.createWritableRaster(SM, dbOut, new Point(0, 0));
    tempI = new TiledImage(0, 0, tempI.getWidth(), tempI.getHeight(), 0, 0, SM, CM);
    ((TiledImage) tempI).setData(rs);
}
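Note that Java bytes are signed, so the byte case above masks with 0xFF before scaling; without the mask, any pixel value above 127 would come out negative and scale to a bogus 16-bit value. A standalone illustration (plain Java, no JAI needed):

public class SignedByteDemo {
    public static void main(String[] args) {
        byte pixel = (byte) 200;              //stored as -56 in a signed Java byte
        int unmasked = pixel;                 //-56: scaling this gives a negative result
        int masked = pixel & 0xFF;            //200: the intended unsigned intensity
        short scaled = (short) ((double) masked / 0xFF * 0xFFFF);
        System.out.println(unmasked);         //-56
        System.out.println(masked);           //200
        System.out.println(scaled & 0xFFFF);  //51400, i.e. 200 expanded to 16 bits
    }
}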

Related

I have to put a 2D array of 30*126 into a tflite model. How do I convert a 2D array of floats to a ByteBuffer in Java?

I'm Nguyen, a Vietnamese high school student working on a sign language translation app that uses computer vision and AI.
In my app I use an LSTM model; when I converted it to a tflite model, I saw this sample code:
try {
    SignLangModel model = SignLangModel.newInstance(context);

    // Creates inputs for reference.
    TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, 30, 126}, DataType.FLOAT32);
    inputFeature0.loadBuffer(byteBuffer);

    // Runs model inference and gets result.
    SignLangModel.Outputs outputs = model.process(inputFeature0);
    TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();

    // Releases model resources if no longer used.
    model.close();
} catch (IOException e) {
    // TODO Handle the exception
}
This is what my 2D array looks like:
[[ 0.62733257, 0.44471735, -0.69024068, ..., 0.40363967, 0.28696212, -0.06274992],
[ 0.62688404, 0.4438577 , -0.73676074, ..., 0.40629318, 0.28771287, -0.05781016],
[ 0.62661999, 0.44294813, -0.7216031 , ..., 0.40591961, 0.28609812, -0.06014785],
...
[ 0.62216419, 0.43501934, -0.69985718, ..., 0.38580206, 0.29433241, -0.05569796]]
I'm wondering how to convert a 2D float array to a ByteBuffer.
You can try the conversion as it is shown here (this is C#):
public byte[] ToByteArray(float[,] nmbs)
{
    byte[] nmbsBytes = new byte[nmbs.GetLength(0) * nmbs.GetLength(1) * 4];
    int k = 0;
    for (int i = 0; i < nmbs.GetLength(0); i++)
    {
        for (int j = 0; j < nmbs.GetLength(1); j++)
        {
            byte[] array = BitConverter.GetBytes(nmbs[i, j]);
            for (int m = 0; m < array.Length; m++)
            {
                nmbsBytes[k++] = array[m];
            }
        }
    }
    return nmbsBytes;
}
using of course floats wherever the code has bytes... and then wrapping the resulting byte array (note that ByteBuffer.wrap takes a byte[], not a float[]):
byte[] array = returnedArrayFromAbove;
ByteBuffer buffer = ByteBuffer.wrap(array);
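If you'd rather do the conversion directly in Java instead of porting the C# snippet, here is a minimal sketch using java.nio (the [30][126] shape and the class name are just for illustration; TensorFlow Lite expects a direct buffer in native byte order):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class FloatsToBuffer {
    //Packs a 2D float array into a direct ByteBuffer, row by row (4 bytes per float).
    public static ByteBuffer toByteBuffer(float[][] values) {
        int rows = values.length;
        int cols = values[0].length;
        ByteBuffer buffer = ByteBuffer.allocateDirect(rows * cols * 4)
                                      .order(ByteOrder.nativeOrder());
        for (float[] row : values) {
            for (float v : row) {
                buffer.putFloat(v);
            }
        }
        buffer.rewind(); //reset the position so the consumer reads from the start
        return buffer;
    }
}

The resulting buffer can be passed straight to inputFeature0.loadBuffer(byteBuffer) in the generated code above.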
BUT I think you can follow the TensorFlow Lite guide here, using the appropriate dependencies in your build.gradle file, and then:
try (Interpreter interpreter = new Interpreter(file_of_a_tensorflowlite_model)) {
    interpreter.run(input, output);
}
using Interpreter.run(), where you can pass in your 2D array directly.
Generally the Interpreter.run() method will give you more flexibility than the code generated by Android Studio, and you can find many examples of using the Interpreter directly in that guide.
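For example, a sketch of the Interpreter route (the model file name and the output shape are assumptions; check your model's actual output tensor):

import org.tensorflow.lite.Interpreter;
import java.io.File;

public class RunSignLangModel {
    public static void main(String[] args) {
        float[][][] input = new float[1][30][126]; //batch of 1, filled with your keypoints
        float[][] output = new float[1][10];       //assumed: 10 output classes
        try (Interpreter interpreter = new Interpreter(new File("sign_lang_model.tflite"))) {
            interpreter.run(input, output);        //accepts multidimensional arrays directly
        }
        System.out.println(java.util.Arrays.toString(output[0]));
    }
}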
Tag me if you need more help.

Getting 'UnsatisfiedLinkError' using JNI to pass a string from C++ to Java with OpenCV

The picture linked below showed the specific exception I'm getting. I'm not quite sure why I'm having this particular issue, as I've built everything in the same directory, so the library file is there. From what I understand, this has something to do with what I'm returning to my main method from my C++ function.
What I'm essentially trying to do is pass the name (printId) of the recognized person, as a string, from my C++ function to Java.
Picture of command line: (screenshot not included; it showed the UnsatisfiedLinkError)
Here's my C++ code:
#include <jni.h>
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/opencv.hpp"
#include "opencv2/objdetect.hpp"
#include "opencv2/face.hpp"
#include "opencv2/face/facerec.hpp"
#include <vector>
#include <string>
#include "recognitionJNI.h"
#include <fstream>
#include <sstream>
using namespace cv;
using namespace std;
String face_cascade_name = "/Users/greg/Downloads/opencv-3.4.2/data/haarcascades/haarcascade_frontalface_alt.xml";
CascadeClassifier face_cascade;
String fn_csv = "/Users/greg/Desktop/faceBuild/faceRecognition/faceRecognition/csv.txt";
//User Defined Function for reading csv
static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels, char separator = ';') {
    ifstream file(filename.c_str(), ifstream::in); //opens file for reading
    if (!file) {
        cout << "ERROR: There was a problem loading the csv file" << endl;
    }
    string line, path, classlabel;
    while (getline(file, line)) {
        stringstream liness(line);
        getline(liness, path, separator); //read stream object up to the semicolon
        getline(liness, classlabel);      //read the rest of stream object up to null terminated character
        //make sure that the filepath and userID are not empty
        if (!path.empty() && !classlabel.empty()) {
            images.push_back(imread(path, 0));          //appends grayscale image to images vector
            labels.push_back(atoi(classlabel.c_str())); //appends userID to labels vector
        }
    }
}
JNIEXPORT jstring JNICALL Java_testJNIString_userName(JNIEnv *env, jobject thisObj, jstring inJNIStr) {
    const char *inCStr = env->GetStringUTFChars(inJNIStr, NULL);
    if (NULL == inCStr) return NULL;
    string outCppStr;
    cout << "In C++, the received string is: " << inCStr << endl;
    env->ReleaseStringUTFChars(inJNIStr, inCStr);

    string printId;
    vector<Mat> images; //This vector will hold the images
    vector<int> labels; //This vector will hold the userIDs

    //read the csv file containing image paths and userIDs
    try {
        read_csv(fn_csv, images, labels);
    } catch (Exception& e) {
        cerr << "Error opening file \"" << fn_csv << "\". Reason: " << e.msg << endl;
        exit(1);
    }

    //we'll resize all faces to a common size;
    //these two lines capture the width and height of the first Mat object
    int im_width = images[0].cols;
    int im_height = images[0].rows;
    for (int j = 0; j < images.size(); j++) {
        resize(images[j], images[j], Size(im_width, im_height), 1.0, 1.0, INTER_CUBIC);
    }

    //int numComponents = 2;
    //double threshold = 10.0;
    //creates a faceRecognizer to train with the given images
    Ptr<cv::face::FisherFaceRecognizer> model = cv::face::FisherFaceRecognizer::create();
    model->train(images, labels);

    string camera_msg = "No camera found";
    Mat webcam;          //creates Mat object to store frames
    VideoCapture cap(0); //opens default webcam
    if (!cap.isOpened()) {
        return env->NewStringUTF(camera_msg.c_str());
    }

    face_cascade.load(face_cascade_name); //loads xml file into classifier

    //read frames from the capture device into the Mat object
    while (cap.read(webcam)) {
        vector<Rect> faces;
        Mat frame_gray; //will be used to store a grayscale copy of webcam
        cvtColor(webcam, frame_gray, COLOR_BGR2GRAY); //converts frames into grayscale
        equalizeHist(frame_gray, frame_gray); //maps the input distribution to a more uniform distribution

        //locate the faces in the frame
        face_cascade.detectMultiScale(frame_gray, faces, 1.1, 5, 0 | CASCADE_SCALE_IMAGE, Size(30, 30));
        for (size_t i = 0; i < faces.size(); i++) {
            Rect face_i = faces[i];        //process faces by frame
            Mat face = frame_gray(face_i); //takes the face from the live images

            //resize faces for prediction
            Mat face_resized;
            resize(face, face_resized, Size(im_width, im_height), 1.0, 1.0, INTER_CUBIC);
            int prediction = model->predict(face_resized); //predict based on the resized face
            if (prediction == 1) {
                printId = "Matthew";
            } else if (prediction == 2) {
                printId = "Greg";
                return env->NewStringUTF(printId.c_str());
            } else {
                printId = "Unknown";
            }

            rectangle(webcam, face_i, CV_RGB(0, 255, 0), 1); //draws a rectangle around the face
            string box_text = "Prediction = " + printId;
            int pos_x = std::max(face_i.tl().x - 10, 0);
            int pos_y = std::max(face_i.tl().y - 10, 0);
            putText(webcam, box_text, Point(pos_x, pos_y), FONT_HERSHEY_PLAIN, 1.0, CV_RGB(0, 255, 0), 1);
        }
        imshow("Webcam", webcam);
        waitKey(1);
        destroyAllWindows();
    }
    return env->NewStringUTF(printId.c_str());
}
Here's my Java code:
public class recognitionJNI {
    static {
        System.loadLibrary("recogjni");
    }

    private native String userName(String msg);

    public static void main(String args[]) {
        String result = new recognitionJNI().userName("Pass arg from c++ function");
        System.out.println(result);
    }
}
Try regenerating the header file; it looks like you changed your class name in the meantime, and the name in the C++ source is no longer up to date. The name I get from that Java class is:
Java_recognitionJNI_userName
But you have
Java_testJNIString_userName
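For reference, the JVM resolves each native method to an exported symbol named Java_<fully qualified class name>_<method name>, with dots replaced by underscores. A minimal sketch of how that plays out here:

//A class with no package maps its native methods to Java_<ClassName>_<methodName>,
//so the JVM looks for Java_recognitionJNI_userName inside librecogjni.
//If the library still exports Java_testJNIString_userName, System.loadLibrary
//succeeds, but the first call to userName throws UnsatisfiedLinkError.
public class recognitionJNI {
    private native String userName(String msg);
}

Regenerating the header (javah on older JDKs, javac -h on JDK 9 and later) and rebuilding the C++ side keeps the two in sync.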

What is the best way to override image parsing in com.android.volley?

I developed my own REST API using C# ServiceStack on Mono. It works as expected except when it comes to file download: I noticed it appends some bytes at the start of the file (the example image is not included here).
I filed a bug with Mono's Bugzilla. Meanwhile, I want to override the image response on my client to strip the prepended bytes and make the image work. I tried it in a C# client by editing the received stream before saving it to a file, and it works fine.
I need to know how and where I can override the Volley library to get clean images rather than malformed ones, with the best performance.
Update 04:37 PM:
I believe I need to modify com.android.volley.toolbox.ImageRequest. I will try it and post the solution if it works for me.
Regards,
Shaheen
I modified the method doParse in com.android.volley.toolbox.ImageRequest:
private Response<Bitmap> doParse(NetworkResponse response) {
    byte[] data = response.data;
    byte[] pattern = fromHexString("FFD8FFE000");
    int position = matchPosition(data, pattern);
    if (position > 0)
        data = Arrays.copyOfRange(data, position, data.length); //keep everything from the marker onward
    ....
    ....
    ....
    ....
}
Here are the helper methods I used:
public static int matchPosition(byte[] a, byte[] b) {
    int matchLength = 0;
    int maxSearch = 30 > a.length ? a.length : 30; //only scan the first 30 bytes
    for (int i = 0; i < maxSearch; i++) {
        if (a[i] == b[0] && i + b.length < a.length) {
            for (int j = 0; j < b.length; j++) {
                if ((i + j) == a.length - 1)
                    return -1;
                if (a[i + j] == b[j])
                    matchLength++;
            }
            if (matchLength == b.length)
                return i;
            else
                matchLength = 0;
        }
    }
    return -1;
}
private static byte[] fromHexString(final String encoded) {
    if ((encoded.length() % 2) != 0)
        throw new IllegalArgumentException("Input string must contain an even number of characters");
    final byte result[] = new byte[encoded.length() / 2];
    final char enc[] = encoded.toCharArray();
    for (int i = 0; i < enc.length; i += 2) {
        StringBuilder curr = new StringBuilder(2);
        curr.append(enc[i]).append(enc[i + 1]);
        result[i / 2] = (byte) Integer.parseInt(curr.toString(), 16);
    }
    return result;
}
and this workaround resolved the issue explained above!
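For context, FFD8 is the JPEG SOI marker and FFE0 is the APP0/JFIF marker, which is why that five-byte pattern marks the start of the real image data. A quick sanity check of the helpers (a sketch that assumes matchPosition and fromHexString above are in scope):

public class MatchPositionTest {
    public static void main(String[] args) {
        //Three junk bytes, then the first five bytes of a typical JFIF file,
        //then two bytes standing in for the rest of the APP0 segment.
        byte[] data = {
                0x0A, 0x0D, 0x00,
                (byte) 0xFF, (byte) 0xD8, (byte) 0xFF, (byte) 0xE0, 0x00,
                0x00, 0x10
        };
        byte[] pattern = fromHexString("FFD8FFE000");
        System.out.println(matchPosition(data, pattern)); //prints 3
    }
}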

I can't get any output from the sample face detection and recognition code using JavaCV on Eclipse (Juno)

I was practicing with some face recognition and detection code using JavaCV on Eclipse Juno. The thing is, I was trying to run the sample code below, but I can't get the expected result or output. The sample code is as follows:
import com.googlecode.javacpp.Loader;
import com.googlecode.javacv.*;
import com.googlecode.javacv.cpp.*;
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_calib3d.*;
import static com.googlecode.javacv.cpp.opencv_objdetect.*;

public class Demo {
    public static void main(String[] args) throws Exception {
        String classifierName = null;
        if (args.length > 0) {
            classifierName = args[0];
        } else {
            System.err.println("C://opencv/data/haarcascades\"haarcascade_frontalface_alt.xml\".");
            System.exit(1);
        }

        // Preload the opencv_objdetect module to work around a known bug.
        Loader.load(opencv_objdetect.class);

        // We can "cast" Pointer objects by instantiating a new object of the desired class.
        CvHaarClassifierCascade classifier = new CvHaarClassifierCascade(cvLoad(classifierName));
        if (classifier.isNull()) {
            System.err.println("Error loading classifier file \"" + classifierName + "\".");
            System.exit(1);
        }

        // CanvasFrame is a JFrame containing a Canvas component, which is hardware accelerated.
        // It can also switch into full-screen mode when called with a screenNumber.
        CanvasFrame frame = new CanvasFrame("Some Title");

        // OpenCVFrameGrabber uses opencv_highgui, but other more versatile FrameGrabbers
        // include DC1394FrameGrabber, FlyCaptureFrameGrabber, OpenKinectFrameGrabber,
        // PS3EyeFrameGrabber, VideoInputFrameGrabber, and FFmpegFrameGrabber.
        FrameGrabber grabber = new OpenCVFrameGrabber(0);
        grabber.start();

        // FAQ about IplImage:
        // - For custom raw processing of data, getByteBuffer() returns an NIO direct
        //   buffer wrapped around the memory pointed by imageData.
        // - To get a BufferedImage from an IplImage, you may call getBufferedImage().
        // - The createFrom() factory method can construct an IplImage from a BufferedImage.
        // - There are also a few copy*() methods for BufferedImage<->IplImage data transfers.
        IplImage grabbedImage = grabber.grab();
        int width = grabbedImage.width();
        int height = grabbedImage.height();
        IplImage grayImage = IplImage.create(width, height, IPL_DEPTH_8U, 1);
        IplImage rotatedImage = grabbedImage.clone();

        // Let's create some random 3D rotation...
        CvMat randomR = CvMat.create(3, 3), randomAxis = CvMat.create(3, 1);
        // We can easily and efficiently access the elements of CvMat objects
        // with the set of get() and put() methods.
        randomAxis.put((Math.random() - 0.5) / 4, (Math.random() - 0.5) / 4, (Math.random() - 0.5) / 4);
        cvRodrigues2(randomAxis, randomR, null);
        double f = (width + height) / 2.0;
        randomR.put(0, 2, randomR.get(0, 2) * f);
        randomR.put(1, 2, randomR.get(1, 2) * f);
        randomR.put(2, 0, randomR.get(2, 0) / f);
        randomR.put(2, 1, randomR.get(2, 1) / f);
        System.out.println(randomR);

        // Objects allocated with a create*() or clone() factory method are automatically
        // released by the garbage collector, but may still be explicitly released by
        // calling release(). You shall NOT call cvReleaseImage(), cvReleaseMemStorage(),
        // etc. on objects allocated this way.
        CvMemStorage storage = CvMemStorage.create();

        // We can allocate native arrays using constructors taking an integer as argument.
        CvPoint hatPoints = new CvPoint(3);

        // Again, FFmpegFrameRecorder also exists as a more versatile alternative.
        FrameRecorder recorder = new OpenCVFrameRecorder("output.avi", width, height);
        recorder.start();

        while (frame.isVisible() && (grabbedImage = grabber.grab()) != null) {
            cvClearMemStorage(storage);

            // Let's try to detect some faces! but we need a grayscale image...
            cvCvtColor(grabbedImage, grayImage, CV_BGR2GRAY);
            CvSeq faces = cvHaarDetectObjects(grayImage, classifier, storage,
                    1.1, 3, CV_HAAR_DO_CANNY_PRUNING);
            int total = faces.total();
            for (int i = 0; i < total; i++) {
                CvRect r = new CvRect(cvGetSeqElem(faces, i));
                int x = r.x(), y = r.y(), w = r.width(), h = r.height();
                cvRectangle(grabbedImage, cvPoint(x, y), cvPoint(x + w, y + h), CvScalar.RED, 1, CV_AA, 0);
                // To access the elements of a native array, use the position() method.
                hatPoints.position(0).x(x - w / 10).y(y - h / 10);
                hatPoints.position(1).x(x + w * 11 / 10).y(y - h / 10);
                hatPoints.position(2).x(x + w / 2).y(y - h / 2);
                cvFillConvexPoly(grabbedImage, hatPoints.position(0), 3, CvScalar.GREEN, CV_AA, 0);
            }

            // Let's find some contours! but first some thresholding...
            cvThreshold(grayImage, grayImage, 64, 255, CV_THRESH_BINARY);

            // To check if an output argument is null we may call either isNull() or equals(null).
            CvSeq contour = new CvSeq(null);
            cvFindContours(grayImage, storage, contour, Loader.sizeof(CvContour.class),
                    CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
            while (contour != null && !contour.isNull()) {
                if (contour.elem_size() > 0) {
                    CvSeq points = cvApproxPoly(contour, Loader.sizeof(CvContour.class),
                            storage, CV_POLY_APPROX_DP, cvContourPerimeter(contour) * 0.02, 0);
                    cvDrawContours(grabbedImage, points, CvScalar.BLUE, CvScalar.BLUE, -1, 1, CV_AA);
                }
                contour = contour.h_next();
            }

            cvWarpPerspective(grabbedImage, rotatedImage, randomR);
            frame.showImage(rotatedImage);
            recorder.record(rotatedImage);
        }
        recorder.stop();
        grabber.stop();
        frame.dispose();
    }
}
The output I am getting is a single line printed in red:
C://opencv/data/haarcascades"haarcascade_frontalface_alt.xml".
Can anybody show me what I missed?
I am also new to image processing, so could anyone point me to good tutorials and sample source code that teach how to use the built-in functions in JavaCV? I am working on my final year project and really need your help on this one.
With lots of respect,
Sisay
haarcascade_frontalface_alt.xml is a trained classifier for detecting frontal faces. It is usually present in the opencv_installation_folder/opencv/data/haarcascades folder. You can give the direct path to your classifier instead of taking it from the command line, as in:
classifierName = "opencv_installation_folder/opencv/data/haarcascades/haarcascade_frontalface_alt.xml";
That demo expects you to give it the cascade file as an argument; it just stops if it does not get one. Maybe you want to change the beginning like this:
public class Demo {
    public static void main(String[] args) throws Exception {
        String classifierName = "C:/opencv/data/haarcascades/haarcascade_frontalface_alt.xml";
        if (args.length > 0) {
            classifierName = args[0];
        }
Like that, it takes an argument from the command line if present; otherwise it takes the default value.

Creating opaque datasets in a group

I fail to understand why my code is giving me HDF5 library exceptions. It points at the createScalarDS method as the source of the error, but I believe this method does exist. Can anyone tell me why this code is unable to create an opaque dataset? What should the modification(s) be? Thanks.
public static void createFile(Message message) throws Exception {
    // retrieve an instance of H5File
    FileFormat fileFormat = FileFormat.getFileFormat(FileFormat.FILE_TYPE_HDF5);
    if (fileFormat == null) {
        System.err.println("Cannot find HDF5 FileFormat.");
        return;
    }

    // create a new file with a given file name.
    H5File testFile = (H5File) fileFormat.create(fname);
    if (testFile == null) {
        System.err.println("Failed to create file:" + fname);
        return;
    }

    // open the file and retrieve the root group
    testFile.open();
    Group root = (Group) ((javax.swing.tree.DefaultMutableTreeNode) testFile
            .getRootNode()).getUserObject();
    Group g1 = testFile.createGroup("byte arrays", root);

    // obtaining the serialized object
    byte[] b = serializer.serialize(message);
    int len = b.length;
    byte[] dset_data = new byte[len + 1];

    // Initialize data.
    int indx = 0;
    for (int jndx = 0; jndx < len; jndx++)
        dset_data[jndx] = b[jndx];
    dset_data[len] = (byte) (indx);

    // create opaque dataset ---- error here…
    Datatype dtype = testFile.createDatatype(Datatype.CLASS_OPAQUE,
            (len * 4), Datatype.NATIVE, Datatype.NATIVE);
    Dataset dataset = testFile.createScalarDS("byte array", g1, dtype,
            dims1D, null, null, 0, dset_data); // error shown in this line

    // close file resource
    testFile.close();
}
I don't have a strong grip on HDF5, but you cannot directly use CLASS_OPAQUE.
An opaque data type is a user-defined data type that can be used in the same way as a built-in data type. To create an opaque type, check this link:
http://idlastro.gsfc.nasa.gov/idl_html_help/Opaque_Datatypes.html
To create an array datatype object (the linked docs use IDL syntax):
Result = H5T_ARRAY_CREATE(Datatype_id, Dimensions)
Example:
http://idlastro.gsfc.nasa.gov/idl_html_help/H5F_CREATE.html
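If CLASS_OPAQUE keeps throwing, one workaround with the Java HDF object layer is to store the serialized bytes as a plain one-byte unsigned integer dataset instead. A minimal, untested sketch reusing testFile, g1, and dset_data from the question:

// Hypothetical alternative: a 1-byte unsigned integer dataset sized to the array.
long[] dims = { dset_data.length };
Datatype byteType = testFile.createDatatype(
        Datatype.CLASS_INTEGER, 1, Datatype.NATIVE, Datatype.SIGN_NONE);
Dataset dataset = testFile.createScalarDS(
        "byte array", g1, byteType, dims, null, null, 0, dset_data);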
