I have a 3x3 matrix in OpenCV format (org.opencv.core.Mat) that I want to copy into an android.graphics.Matrix. Any idea how?
[EDIT]
Here's the final version, as inspired by @elmiguelao's answer below. The source matrix is from OpenCV and the destination matrix is from Android.
static void transformMatrix(Mat src, Matrix dst) {
    int columns = src.cols();
    int rows = src.rows();
    float[] values = new float[columns * rows];
    int index = 0;
    // Mat.get(row, col) returns the element as a double[], one entry per channel.
    for (int row = 0; row < rows; row++) {
        for (int col = 0; col < columns; col++) {
            double[] value = src.get(row, col);
            values[index] = (float) value[0];
            index++;
        }
    }
    dst.setValues(values);
}
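For example, a minimal usage sketch (Mat.eye is just a placeholder here; any 3x3 Mat whose elements fit in a float works the same way):

// Copy a 3x3 OpenCV Mat into an android.graphics.Matrix.
Mat src = Mat.eye(3, 3, CvType.CV_32F); // identity, stands in for a real transform
Matrix dst = new Matrix();
transformMatrix(src, dst);
// dst now holds the same nine values in row-major order.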
Something along these lines:
cv.Mat opencv_matrix;
Matrix android_matrix;
if (opencv_matrix.isContiguous()) {
    android_matrix.setValues(cv.MatOfFloat(opencv_matrix.ptr()).toArray());
} else {
    float[] opencv_matrix_values = new float[9];
    // Undocumented .get(row, col, float[]), but seems to be a bulk copy.
    opencv_matrix.get(0, 0, opencv_matrix_values);
    android_matrix.setValues(opencv_matrix_values);
}
This function also respects the Mat's data type (float or double):
static Matrix cvMat2Matrix(Mat source) {
    if (source == null || source.empty()) {
        return null;
    }
    float[] matrixValuesF = new float[source.cols() * source.rows()];
    if (CvType.depth(source.type()) == CvType.CV_32F) {
        source.get(0, 0, matrixValuesF);
    } else {
        // The bulk get() will throw a java.lang.UnsupportedOperationException
        // if the type is not CvType.CV_64F.
        double[] matrixValuesD = new double[matrixValuesF.length];
        source.get(0, 0, matrixValuesD);
        for (int i = 0; i < matrixValuesD.length; i++) {
            matrixValuesF[i] = (float) matrixValuesD[i];
        }
    }
    Matrix result = new Matrix();
    result.setValues(matrixValuesF);
    return result;
}
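For instance, a small usage sketch (the homography Mat is a hypothetical stand-in for whatever 3x3 CV_64F matrix OpenCV gave you, e.g. from Calib3d.findHomography):

// A 3x3 double Mat is converted even though it is not CV_32F.
Mat homography = Mat.eye(3, 3, CvType.CV_64F); // placeholder for a real result
Matrix androidMatrix = cvMat2Matrix(homography);
if (androidMatrix != null) {
    canvas.drawBitmap(bitmap, androidMatrix, null); // assuming a Canvas and Bitmap in scope
}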
Good morning. I'm a developer trying to put a TensorFlow model into an Android app.
While trying to fix several other errors, I've encountered one I've never seen before: the java.nio.BufferOverflowException I'm facing now didn't happen before, but suddenly started appearing.
My code uses a byte array, but I cannot pin down which part is the problem.
The source below takes a float array as input and returns an array with 10 classes after passing it through the model.
The returned values are softmax values.
public float[] hypothesis(float[] inputFloats, int nFeatures, int nClasses, Context context) {
    try {
        int nInstance = inputFloats.length / nFeatures;
        // FloatBuffer.wrap(inputFloats);
        Toast.makeText(context, "", Toast.LENGTH_LONG).show();
        inferenceInterface.feed(INPUT_NODE, FloatBuffer.wrap(inputFloats), INPUT_SIZE);
        inferenceInterface.run(OUTPUT_NODES_HYPO);
        float[] result = new float[nInstance * nClasses];
        inferenceInterface.fetch(OUTPUT_NODE_HYPO, result);
        return result;
    } catch (Exception e) {
        Toast.makeText(context, e + " ...", Toast.LENGTH_LONG).show();
        return null;
    }
}
The length of inputFloats is 720 and nFeatures is 720; nClasses is 10.
Although the values were not correct, it worked before.
The e in the catch block prints java.nio.BufferOverflowException.
Could there be a problem somewhere in converting the byte array to a float array?
Related source:
public float[] bytetofloat(byte[] array) {
    int[] returnArr = new int[array.length / 4];
    float[] returnArr1 = new float[array.length / 4];
    for (int i = 0; i < returnArr.length; i++) {
        //array[i] = 0;
        returnArr[i] = array[i * 4] & 0xFF; // takes only the first byte of each 4-byte group
        if (returnArr[i] < 0 || returnArr[i] > 255)
            Log.d("ARRAY", returnArr[i] + " ");
        returnArr1[i] = (float) returnArr[i];
    }
    return returnArr1;
}
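For comparison, if the byte array actually contained packed 32-bit floats (rather than pixel channel bytes), the conversion would go through a ByteBuffer; a sketch, where the byte order is an assumption about whatever produced the bytes:

// Reinterpret raw bytes as IEEE-754 floats, 4 bytes per value.
public float[] bytesToFloats(byte[] bytes) {
    java.nio.FloatBuffer fb = java.nio.ByteBuffer.wrap(bytes)
            .order(java.nio.ByteOrder.nativeOrder()) // assumption: producer wrote native order
            .asFloatBuffer();
    float[] floats = new float[fb.remaining()];
    fb.get(floats);
    return floats;
}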
public Bitmap RGB2GRAY(Bitmap image) {
    int width = image.getWidth();
    int height = image.getHeight();
    Bitmap bmOut = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_4444);
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            int pixel = image.getPixel(x, y);
            int A = Color.alpha(pixel);
            int R = Color.red(pixel);
            int G = Color.green(pixel);
            int B = Color.blue(pixel);
            // BT.709 luma weights
            R = G = B = (int) (0.2126 * R + 0.7152 * G + 0.0722 * B);
            bmOut.setPixel(x, y, Color.argb(A, R, G, B));
        }
    }
    return bmOut;
}
private void activityPrediction(float[] inputArray) {
    try {
        float[] result = activityInference.hypothesis(inputArray, 20 * 36, 10, getApplicationContext());
        predictionView.setText(Arrays.toString(result));
    } catch (Exception e) {
        Toast.makeText(getApplicationContext(), e.getMessage(), Toast.LENGTH_LONG).show();
    }
}
private byte[] bitmapToByteArray(Bitmap bitmap) {
    int chunkNumbers = 10;
    int bitmapSize = bitmap.getRowBytes() * bitmap.getHeight();
    byte[] imageBytes = new byte[bitmapSize];
    int rows, cols;
    int chunkHeight, chunkWidth;
    rows = cols = (int) Math.sqrt(chunkNumbers);
    chunkHeight = bitmap.getHeight() / rows;
    chunkWidth = bitmap.getWidth() / cols;
    int yCoord = 0;
    int bitmapsSizes = 0;
    for (int x = 0; x < rows; x++) {
        int xCoord = 0;
        for (int y = 0; y < cols; y++) {
            Bitmap bitmapChunk = Bitmap.createBitmap(bitmap, xCoord, yCoord, chunkWidth, chunkHeight);
            byte[] bitmapArray = getBytesFromBitmapChunk(bitmapChunk);
            System.arraycopy(bitmapArray, 0, imageBytes, bitmapsSizes, bitmapArray.length);
            bitmapsSizes = bitmapsSizes + bitmapArray.length;
            xCoord += chunkWidth;
            bitmapChunk.recycle();
            bitmapChunk = null;
        }
        yCoord += chunkHeight;
    }
    return imageBytes;
}
private byte[] getBytesFromBitmapChunk(Bitmap bitmap) {
    int bitmapSize = bitmap.getRowBytes() * bitmap.getHeight();
    ByteBuffer byteBuffer = ByteBuffer.allocate(bitmapSize);
    bitmap.copyPixelsToBuffer(byteBuffer);
    byteBuffer.rewind();
    return byteBuffer.array();
}
The e.printStackTrace() result:
at com.example.leehanbeen.platerecognize.ActivityInference.hypothesis(ActivityInference.java:58)
at com.example.leehanbeen.platerecognize.MainActivity.activityPrediction(MainActivity.java:148)
at com.example.leehanbeen.platerecognize.MainActivity.access$100(MainActivity.java:28)
at com.example.leehanbeen.platerecognize.MainActivity$2.onClick(MainActivity.java:69)
around MainActivity.java:69
byte[] byteArrayRes = bitmapToByteArray(image_bitmap);
float[] inputArray = bytetofloat(byteArrayRes);
activityPrediction(inputArray);
MainActivity.java:28
public class MainActivity extends AppCompatActivity {
MainActivity.java:148
float[] result = activityInference.hypothesis(inputArray, 20*36, 10, getApplicationContext());
around ActivityInference.java:58
float[] result = new float[nInstance * nClasses];
inferenceInterface.fetch(OUTPUT_NODE_HYPO, result);
I have created a sample SWT application into which I upload a few images. I have to resize all images above 16x16 (width x height) resolution and save them to a separate location.
For this reason I scale the image and save the scaled image to my destination location. Below is the piece of code I am using to do that.
I use getImageData() to get the image data, and the ImageLoader save() method to save it.
final Image mySampleImage = ImageResizer.scaleImage(img, 16, 16);
final ImageLoader imageLoader = new ImageLoader();
imageLoader.data = new ImageData[] { mySampleImage.getImageData() };
final String fileExtension = inputImagePath.substring(inputImagePath.lastIndexOf(".") + 1);
if ("GIF".equalsIgnoreCase(fileExtension)) {
imageLoader.save(outputImagePath, SWT.IMAGE_GIF);
} else if ("PNG".equalsIgnoreCase(fileExtension)) {
imageLoader.save(outputImagePath, SWT.IMAGE_PNG);
}
imageLoader.save(outputImagePath, SWT.IMAGE_GIF); throws the exception below when I try to save a few specific images (in GIF or PNG format).
org.eclipse.swt.SWTException: Unsupported color depth
at org.eclipse.swt.SWT.error(SWT.java:4533)
at org.eclipse.swt.SWT.error(SWT.java:4448)
at org.eclipse.swt.SWT.error(SWT.java:4419)
at org.eclipse.swt.internal.image.GIFFileFormat.unloadIntoByteStream(GIFFileFormat.java:427)
at org.eclipse.swt.internal.image.FileFormat.unloadIntoStream(FileFormat.java:124)
at org.eclipse.swt.internal.image.FileFormat.save(FileFormat.java:112)
at org.eclipse.swt.graphics.ImageLoader.save(ImageLoader.java:218)
at org.eclipse.swt.graphics.ImageLoader.save(ImageLoader.java:259)
at mainpackage.ImageResizer.resize(ImageResizer.java:55)
at mainpackage.ImageResizer.main(ImageResizer.java:110)
Let me know if there is any other way to do the same, or if there is any way to resolve this issue.
Finally I got a solution by referring to the existing Eclipse bug Unsupported color depth.
In the code below I create a PaletteData from the most frequent RGB values and update my ImageData.
My updateImagedata() method takes the scaled image and returns properly updated ImageData whenever the image depth is greater than 8 (for example 32-bit images).
private static ImageData updateImagedata(Image image) {
    ImageData data = image.getImageData();
    if (!data.palette.isDirect && data.depth <= 8)
        return data;
    // compute a histogram of color frequencies
    HashMap<RGB, ColorCounter> freq = new HashMap<>();
    int width = data.width;
    int[] pixels = new int[width];
    int[] maskPixels = new int[width];
    for (int y = 0, height = data.height; y < height; ++y) {
        data.getPixels(0, y, width, pixels, 0);
        for (int x = 0; x < width; ++x) {
            RGB rgb = data.palette.getRGB(pixels[x]);
            ColorCounter counter = freq.get(rgb);
            if (counter == null) {
                counter = new ColorCounter();
                counter.rgb = rgb;
                freq.put(rgb, counter);
            }
            counter.count++;
        }
    }
    // sort colors by most frequently used
    ColorCounter[] counters = new ColorCounter[freq.size()];
    freq.values().toArray(counters);
    Arrays.sort(counters);
    // pick the most frequently used 256 (or fewer), and make a palette
    ImageData mask = null;
    if (data.transparentPixel != -1 || data.maskData != null) {
        mask = data.getTransparencyMask();
    }
    int n = Math.min(256, freq.size());
    RGB[] rgbs = new RGB[n + (mask != null ? 1 : 0)];
    for (int i = 0; i < n; ++i)
        rgbs[i] = counters[i].rgb;
    if (mask != null) {
        rgbs[rgbs.length - 1] = data.transparentPixel != -1
                ? data.palette.getRGB(data.transparentPixel)
                : new RGB(255, 255, 255);
    }
    PaletteData palette = new PaletteData(rgbs);
    ImageData newData = new ImageData(width, data.height, 8, palette);
    if (mask != null)
        newData.transparentPixel = rgbs.length - 1;
    for (int y = 0, height = data.height; y < height; ++y) {
        data.getPixels(0, y, width, pixels, 0);
        if (mask != null)
            mask.getPixels(0, y, width, maskPixels, 0);
        for (int x = 0; x < width; ++x) {
            if (mask != null && maskPixels[x] == 0) {
                pixels[x] = rgbs.length - 1;
            } else {
                RGB rgb = data.palette.getRGB(pixels[x]);
                pixels[x] = closest(rgbs, n, rgb);
            }
        }
        newData.setPixels(0, y, width, pixels, 0);
    }
    return newData;
}
To find the index of the closest palette color:
static int closest(RGB[] rgbs, int n, RGB rgb) {
    int minDist = 256 * 256 * 3;
    int minIndex = 0;
    for (int i = 0; i < n; ++i) {
        RGB rgb2 = rgbs[i];
        int da = rgb2.red - rgb.red;
        int dg = rgb2.green - rgb.green;
        int db = rgb2.blue - rgb.blue;
        int dist = da * da + dg * dg + db * db;
        if (dist < minDist) {
            minDist = dist;
            minIndex = i;
        }
    }
    return minIndex;
}
The ColorCounter class:
class ColorCounter implements Comparable<ColorCounter> {
    RGB rgb;
    int count;

    public int compareTo(ColorCounter o) {
        return o.count - count; // descending by frequency
    }
}
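With that in place, the save path from the question only needs to route the image data through updateImagedata() before saving; a minimal sketch reusing mySampleImage and outputImagePath from above:

final ImageLoader imageLoader = new ImageLoader();
// Quantizing down to an 8-bit palette avoids the unsupported-depth path in GIFFileFormat.
imageLoader.data = new ImageData[] { updateImagedata(mySampleImage) };
imageLoader.save(outputImagePath, SWT.IMAGE_GIF);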
Edit: Updated the code. The code below now correctly draws rectangles around multiple shapes, but it still has a minor issue of sometimes creating multiple rectangles on one single shape.
I have 2 images that I compare pixel by pixel, and I want my program to create rectangles around the areas of difference (multiple rectangles for multiple instances of differences). So far I managed to do this with a single rectangle, so if I had multiple instances, they'd all end up in one big rectangle. Now I'm trying to make the program create multiple rectangles, but I run into an IndexOutOfBoundsException.
The program itself overlays the 2 images being compared with opacity and outputs the resulting overlaid image, along with the rectangles, into a new file. Both images being compared have the same width and height.
I'm calling the rectangles I want drawn "regions" within the code.
The region list is continuously updated while the comparison is running.
The first question I asked myself was: when does a point of difference (a pixel difference) belong to a region?
My attempt was to define a "tolerance": as long as the pixel being compared is within the tolerance of the last found point of difference, it belongs to the same region (see the sketch right below). I quickly realized that this doesn't work as soon as I have a shape in the form of a giant U on my image, with the top points far enough apart to be outside the tolerance. Now I'm kind of stuck, because I feel like I'm on the wrong path.
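To make the tolerance idea concrete, the membership test I have in mind is essentially this (a sketch using java.awt.Rectangle, separate from the actual code below):

// A difference pixel (x, y) belongs to a region if it falls inside
// the region's bounds grown by the tolerance in every direction.
static boolean belongsToRegion(Rectangle region, int x, int y, int tolerance) {
    Rectangle grown = new Rectangle(region);
    grown.grow(tolerance, tolerance); // expand by tolerance on all sides
    return grown.contains(x, y);
}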
Below is the code I have so far:
private void compareImages() throws IOException {
    BufferedImage img1;
    BufferedImage img2;
    try {
        img1 = ImageIO.read(new File(path_to_img1));
        img2 = ImageIO.read(new File(path_to_img2));
    } catch (Throwable e) {
        System.out.println("Unable to load the Images!");
        return;
    }
    BufferedImage dest = new BufferedImage(img1.getWidth(), img1.getHeight(), BufferedImage.TYPE_INT_ARGB);
    Graphics2D gfx = dest.createGraphics();
    gfx.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.65f));
    // Compare images pixel by pixel
    int sX = 9999; // Start X
    int sY = 9999; // Start Y
    int eX = 0;    // End X
    int eY = 0;    // End Y
    boolean isDrawable = false;
    boolean loadedRegion = false;
    List<Rectangle> regions = new ArrayList<>();
    List<Rectangle> check_regions = new ArrayList<>();
    Rectangle tmp_comparison;
    int regionID = 0;
    int tolerance = 25;
    for (int i = 0; i < img1.getHeight(); i++) {
        for (int j = 0; j < img1.getWidth(); j++) {
            loadedRegion = false;
            regionID = 0;
            sX = 9999;
            sY = 9999;
            eX = 0;
            eY = 0;
            if (img1.getRGB(j, i) != img2.getRGB(j, i)) {
                isDrawable = true;
                if (regions.size() != 0) {
                    // Attempting to locate a matching existing region
                    tmp_comparison = new Rectangle(j, i, 1, 1);
                    for (int trID = 0; trID < regions.size(); trID++) {
                        if (tmp_comparison.intersects(check_regions.get(trID).getBounds())) {
                            // Region found
                            sX = (int) regions.get(trID).getX();
                            sY = (int) regions.get(trID).getY();
                            eX = (int) regions.get(trID).getWidth();
                            eY = (int) regions.get(trID).getHeight();
                            regionID = trID;
                            loadedRegion = true;
                            break;
                        }
                    }
                }
                // Update region dimensions
                if (j < sX) {
                    sX = j;
                }
                if (j > eX) {
                    eX = j;
                }
                if (i < sY) {
                    sY = i;
                }
                if (i > eY) {
                    eY = i;
                }
                if (regions.size() == 0 || loadedRegion == false) {
                    regions.add(new Rectangle(sX, sY, eX, eY));
                    check_regions.add(new Rectangle(sX - tolerance, sY - tolerance, eX - sX + (tolerance * 2), eY - sY + (tolerance * 2)));
                } else {
                    regions.set(regionID, new Rectangle(sX, sY, eX, eY));
                    check_regions.set(regionID, new Rectangle(sX - tolerance, sY - tolerance, eX - sX + (tolerance * 2), eY - sY + (tolerance * 2)));
                }
            }
        }
    }
    // If there are any differences, draw the regions.
    // Regions are 10px bigger in all directions than the actual rectangles of difference.
    if (isDrawable) {
        gfx.setPaint(Color.red);
        for (int i = 0; i < regions.size(); i++) {
            int dsX = 0;
            int dsY = 0;
            int deX = 0;
            int deY = 0;
            sX = (int) regions.get(i).getX();
            sY = (int) regions.get(i).getY();
            eX = (int) regions.get(i).getWidth();
            eY = (int) regions.get(i).getHeight();
            if (sX >= 10) { dsX = sX - 10; }
            if (eX <= img1.getWidth() - 10) { deX = eX - sX + 20; }
            if (sY >= 10) { dsY = sY - 10; }
            if (eY <= img1.getHeight() - 10) { deY = eY - sY + 20; }
            gfx.draw(new Rectangle2D.Double(dsX, dsY, deX, deY));
        }
    }
    gfx.drawImage(img1, 0, 0, null);
    gfx.drawImage(img2, 0, 0, null);
    gfx.dispose();
    File out = new File("C:\\output.png");
    ImageIO.write(dest, "PNG", out);
}
Below is the code that creates one big rectangle around all the differences found in the images being compared.
private void oneRectangle() throws IOException {
    BufferedImage img1;
    BufferedImage img2;
    try {
        img1 = ImageIO.read(new File(path_to_img1));
        img2 = ImageIO.read(new File(path_to_img2));
    } catch (Throwable e) {
        System.out.println("Unable to load the Images!");
        return;
    }
    BufferedImage dest = new BufferedImage(img1.getWidth(), img1.getHeight(), BufferedImage.TYPE_INT_ARGB);
    Graphics2D gfx = dest.createGraphics();
    gfx.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.65f));
    // Compare images pixel by pixel
    boolean isDrawable = false;
    int sX = 9999;
    int sY = 9999;
    int eX = 0;
    int eY = 0;
    for (int i = 0; i < img1.getHeight(); i++) {
        for (int j = 0; j < img1.getWidth(); j++) {
            if (img1.getRGB(j, i) != img2.getRGB(j, i)) {
                isDrawable = true;
                if (j < sX) {
                    sX = j;
                }
                if (j > eX) {
                    eX = j;
                }
                if (i < sY) {
                    sY = i;
                }
                if (i > eY) {
                    eY = i;
                }
            }
        }
    }
    // Draw a rectangle if there are any differences
    if (isDrawable) {
        gfx.setPaint(Color.red);
        int dsX = 0;
        int dsY = 0;
        int deX = 0;
        int deY = 0;
        if (sX >= 10) { dsX = sX - 10; }
        if (eX <= img1.getWidth() - 10) { deX = eX - sX + 20; }
        if (sY >= 10) { dsY = sY - 10; }
        if (eY <= img1.getHeight() - 10) { deY = eY - sY + 20; }
        gfx.fill(new Rectangle2D.Double(dsX, dsY, deX, deY));
    }
    gfx.drawImage(img1, 0, 0, null);
    gfx.drawImage(img2, 0, 0, null);
    gfx.dispose();
    File out = new File("C:\\output.png");
    ImageIO.write(dest, "PNG", out);
}
I have to create an Android app for image registration. I created a 2D array for each image after cropping them, computed an FFT using JTransforms, and then tried to create a cross-correlation matrix. Searching this matrix for the coordinates of the maximum value, I expected to get the X and Y values for shifting my image, but these values are wrong and I can't find the error.
public void Registration(Bitmap image, Bitmap image2) {
    int square, x, y;
    int Min2, Min1, Min;
    Min1 = min(image.getHeight(), image2.getHeight());
    Min2 = min(image.getWidth(), image2.getWidth());
    if (Min1 < Min2)
        Min = Min1;
    else
        Min = Min2;
    if (Min > 1024)
        square = 1024;
    else {
        if (Min > 512)
            square = 512;
        else {
            if (Min < 256)
                square = 128;
            else
                square = 256;
        }
    }
    Bitmap crop = Bitmap.createBitmap(image, 0, 0, square, square);
    Bitmap crop2 = Bitmap.createBitmap(image2, 0, 0, square, square);
    float[][] array = new float[square - 1][square - 1];
    float[][] array2 = new float[square - 1][square - 1];
    float[][] array3 = new float[square - 1][square - 1];
    for (x = 0; x < square - 1; x++) {
        int p = crop.getPixel(x, x);
        int p1 = crop2.getPixel(x, x);
        array[x][x] = (Color.red(p) + Color.green(p) + Color.blue(p)) / 3;
        array2[x][x] = (Color.red(p1) + Color.green(p1) + Color.blue(p1)) / 3;
    }
    for (y = square - 1; y < (2 * square) - 1; y++) {
        for (x = 0; x < square - 1; x++) {
            array[x][y] = 0;
            array2[x][y] = 0;
        }
    }
    FloatFFT_2D a = new FloatFFT_2D(square, square);
    FloatFFT_2D b = new FloatFFT_2D(square, square);
    a.complexForward(array);
    b.complexForward(array2);
    for (y = 0; y < (2 * square) - 1; y++) {
        for (x = 0; x < square - 1; x++) {
            if (y >= square) {
                array2[x][y] = -array2[x][y];
            }
            array3[x][y] = array[x][y] * array2[x][y];
        }
    }
    FloatFFT_2D c = new FloatFFT_2D(square - 1, square - 1);
    c.complexInverse(array3, false);
    Max(array3, (square), (2 * square));
}
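For clarity, what I am trying to compute is the cross-power term: the element-wise product of one spectrum with the complex conjugate of the other. With JTransforms' interleaved layout (Re/Im pairs along each row, so complexForward expects square x 2*square arrays), I understand the product should look roughly like this (a sketch, not the code above):

// Spectra stored interleaved: [x][2*y] = Re, [x][2*y + 1] = Im.
for (int xx = 0; xx < square; xx++) {
    for (int yy = 0; yy < square; yy++) {
        float re1 = array[xx][2 * yy],  im1 = array[xx][2 * yy + 1];
        float re2 = array2[xx][2 * yy], im2 = array2[xx][2 * yy + 1];
        // (re1 + i*im1) * (re2 - i*im2)
        array3[xx][2 * yy]     = re1 * re2 + im1 * im2;
        array3[xx][2 * yy + 1] = im1 * re2 - re1 * im2;
    }
}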
When I rotate a model matrix in OpenGL ES with the function I created, the model matrix makes the model smaller while rotating, and I don't understand why.
Here is the code of the rotation function (rotating around the Z axis only):
public void rotateZ(float angle) {
    float cos = (float) Math.cos(Math.toRadians(angle));
    float sin = (float) Math.sin(Math.toRadians(angle));
    Matrix4x4 ret = IdentityM();
    ret.setValue(cos, 0, 0);
    ret.setValue(-sin, 0, 1);
    ret.setValue(sin, 1, 0);
    ret.setValue(cos, 1, 1);
    Multiply(ret);
}
And here is the code of the multiplication function:
public void Multiply(Matrix4x4 m) {
    float[][] m1 = m.toFloatMat();
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
            float value = 0f;
            for (int t = 0; t < 4; t++) {
                value += matrix[i][t] * m1[t][j];
            }
            matrix[i][j] = value;
        }
    }
}
And the setValue function:
public void setValue(float v, int i, int j) {
    matrix[i][j] = v;
}
The object is only getting smaller and I don't understand why ><
In your Multiply function, you are overwriting the original matrix while calculating the product. Make a temporary matrix to store the result, and then write it back to the class member matrix.
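A minimal sketch of that fix, keeping your names (only the temporary result array is new):

public void Multiply(Matrix4x4 m) {
    float[][] m1 = m.toFloatMat();
    float[][] result = new float[4][4]; // temporary target so reads still see the original matrix
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
            float value = 0f;
            for (int t = 0; t < 4; t++) {
                value += matrix[i][t] * m1[t][j]; // matrix[i][t] is still unmodified here
            }
            result[i][j] = value;
        }
    }
    for (int i = 0; i < 4; i++) {
        matrix[i] = result[i]; // copy the product back into the member matrix
    }
}

Without the temporary array, matrix[i][j] is overwritten before later products in the same row read it, so each call slightly distorts the matrix, which is why the model shrinks a little on every rotation.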