Plotting mesh grid surface in Java - java

I have a 40x40 array filled with double values that correspond to a mesh grid composed of 2 matrices in Java.
I would like to plot a surface out of those values in 3D, and found JZY3D library that seems appropriate, but I don't know where to start and how to code this kind of plot.
Has anyone worked with this library who can give advice on where to start?

This is essentially jzy3d's SurfaceDemo. The difference is that you need to build the surface from your own grid of polygons rather than calling buildOrthonormal (line 36 in SurfaceDemo.java).
Related answer: https://stackoverflow.com/a/8339474
The approach mirrors MATLAB's surf: https://www.mathworks.com/help/matlab/ref/surf.html
// jzy3d imports (package paths as of jzy3d 0.9.x; they may differ in other versions)
import java.util.ArrayList;
import java.util.List;
import org.jzy3d.chart.Chart;
import org.jzy3d.chart.ChartLauncher;
import org.jzy3d.colors.Color;
import org.jzy3d.colors.ColorMapper;
import org.jzy3d.colors.colormaps.ColorMapRainbow;
import org.jzy3d.maths.Coord3d;
import org.jzy3d.plot3d.primitives.Point;
import org.jzy3d.plot3d.primitives.Polygon;
import org.jzy3d.plot3d.primitives.Shape;

double[][] Z = new double[40][40];
...
// Build one quad per grid cell from the 40x40 height values
List<Polygon> polygons = new ArrayList<Polygon>();
for (int i = 0; i < Z.length - 1; i++) {
    for (int j = 0; j < Z[0].length - 1; j++) {
        Polygon polygon = new Polygon();
        polygon.add(new Point(new Coord3d(i, j, Z[i][j])));
        polygon.add(new Point(new Coord3d(i, j + 1, Z[i][j + 1])));
        polygon.add(new Point(new Coord3d(i + 1, j + 1, Z[i + 1][j + 1])));
        polygon.add(new Point(new Coord3d(i + 1, j, Z[i + 1][j])));
        polygons.add(polygon);
    }
}
// Wrap the polygons in a Shape and color it by height
final Shape surface = new Shape(polygons);
surface.setColorMapper(new ColorMapper(new ColorMapRainbow(), surface.getBounds().getZmin(), surface.getBounds().getZmax(), new Color(1, 1, 1, .5f)));
surface.setFaceDisplayed(true);
surface.setWireframeDisplayed(true);
// Create a chart and add the surface
Chart chart = new Chart();
chart.getAxeLayout().setMainColor(Color.WHITE);
chart.getView().setBackgroundColor(Color.BLACK);
chart.getScene().add(surface);
ChartLauncher.openChart(chart);
Result: (screenshot of the rendered surface omitted)

Related

Raytracing from scratch

I made a 3D renderer that parses .obj files (ASCII) and projects them onto a 2D plane.
At first glance the projection seems fine, except for one thing: the rendered model looks a bit odd.
Screenshot: https://i.stack.imgur.com/iaLOu.png
All polygons are being drawn, including the ones at the back of the model, which I should definitely not be able to see.
A quick search on Wikipedia suggests this is the "Sichtbarkeitsproblem" (hidden-surface determination):
(DE): https://de.wikipedia.org/wiki/Sichtbarkeitsproblem
(EN): https://en.wikipedia.org/wiki/Hidden-surface_determination
The article mentions that this is a common problem in computer graphics and that there are many different ways to perform the "Verdeckungsberechnung" (occlusion calculation).
It mentions things like using a z-buffer and ray tracing.
Now I don't really know a lot about ray tracing, but it seems quite applicable since I later want to add a light source.
I'm not sure how ray tracing works, but if I just send out a ray for every pixel on screen, in the direction from the camera through that pixel, and check which polygon it hits first, wouldn't I end up with entire polygons missing just because one of their vertices happens to be covered?
How do other ray tracers work? Do they remove the whole polygon when it gets no hit? Remove only one or more vertices (which I believe would cause massive distortion of the shape)? Or do they just render all the polygons sorted by their minimum distance to the camera? (I guess that would be very bad for performance.)
Please help me implement this in my code or give me a hint; it would mean a lot to me.
My code is below, and the link to the projection model (see image no. 1) is here:
https://drive.google.com/file/d/10dpjcL2d2QB15qqTSu5p6kQ534hNOzCz/view?usp=sharing
(Note that the 3d-model and code must be in same folder in order to work)
// 12.11.2022
// See "Rotation matrix" on Wikipedia
// View space: the world-space vertex positions relative to the view of the camera
/* Hidden-surface determination is necessary to render a 3D scene correctly, because surfaces
   that are not visible to the viewer should not be drawn.
*/
// -> https://de.wikipedia.org/wiki/Sichtbarkeitsproblem
// TODO: ray tracing / hidden-surface determination
// TODO: texture mapping
import java.util.Arrays;
import java.awt.Robot;
import java.nio.ByteBuffer;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.ArrayList;
byte b[];
int amount = 0;
String lines[];
PVector[][] vertices;
int[] faces;
float a = 0;
PVector cam, cam_angle, cam_move, cam_speed;
float angle = 0.0;
void setup() {
size(800,600);
frameRate(60);
noCursor();
cam = new PVector(0, 100, -500);
cam_angle = new PVector(0, 0, 0);
cam_move = new PVector(0, 0, 0);
cam_speed = new PVector(50, 50, 50);
lines = loadStrings("UM2_SkullPile28mm.obj");
println("File loaded. Now scanning contents...");
println();
Pattern numbers = Pattern.compile("(-?\\d+)");
ArrayList<PVector> vertices_ = new ArrayList<PVector>();
ArrayList<ArrayList> faces_ = new ArrayList<ArrayList>();
int parsed_lines = 0;
for(String i:lines) {
switch(i.charAt(0)) {
// Find faces
case 'f':
ArrayList<Integer> values = new ArrayList<Integer>();
for(Matcher m = numbers.matcher(i); m.find(); values.add(Integer.parseInt(m.group())));
faces_.add(values);
break;
// Find Vectors
case 'v':
String s[] = i.trim().split("\\s+");
vertices_.add(new PVector(Float.parseFloat(s[1])*20, Float.parseFloat(s[2])*20, Float.parseFloat(s[3])*20));
break;
};
if(++parsed_lines % (lines.length/6) == 0 || parsed_lines == lines.length) println((int)(map(parsed_lines, 0, lines.length, 0, 100)), "%");
}
println();
println("Done. Found", vertices_.size(), "Vertices and", faces_.size(), "faces");
int i=0;
vertices = new PVector[faces_.size()][];
for(ArrayList<Integer> f_:faces_) {
vertices[i] = new PVector[f_.size()];
int j = 0;
for(int f: f_) {
PVector v = vertices_.get(f-1);
vertices[i][j] = Rotate3d_x(v, -90);
j++;
}
i++;
}
}
PVector Rotate2d(PVector p, float a) {
// a = angle
float[][] m2 = {
{cos(a), -sin(a)},
{sin(a), cos(a)}
};
float[][] rotated = matmul(m2, new float[][] {
{ p.x },
{ p.y }
});
return new PVector(rotated[0][0], rotated[1][0]);
}
PVector Rotate3d(PVector p, float[][] m2) {
float[][] rotated = matmul(m2, new float[][] {
{ p.x },
{ p.y },
{ p.z }
});
return new PVector(rotated[0][0], rotated[1][0], rotated[2][0]);
}
PVector Rotate3d_x(PVector p, float a) {
return Rotate3d(p,
new float[][] {
{1, 0, 0},
{0, cos(a), -sin(a)},
{0, sin(a), cos(a)}
});
};
PVector Rotate3d_y(PVector p, float a) {
return Rotate3d(p,
new float[][] {
{cos(a), 0, sin(a)},
{0, 1, 0},
{-sin(a), 0, cos(a)}
});
}
PVector Rotate3d_z(PVector p, float a) {
return Rotate3d(p,
new float[][] {
{cos(a), -sin(a), 0},
{sin(a), cos(a), 0},
{0, 0, 1}
});
}
PVector Rotate3d(PVector p, PVector a) {
return Rotate3d_z( Rotate3d_y(Rotate3d_x(p, a.x), a.y), a.z );
}
// Matrix multiplication
float[][] matmul(float[][] m1, float[][] m2) {
int cols_m1 = m1.length,
rows_m1 = m1[0].length;
int cols_m2 = m2.length,
rows_m2 = m2[0].length;
try {
if (rows_m1 != cols_m2) throw new Exception("Rows of m1 must match Columns of m2!");
}
catch(Exception e) {
println(e);
}
float[][] res = new float[cols_m2][rows_m2];
for (int c=0; c < cols_m1; c++) {
for (int r2=0; r2 < rows_m2; r2++) {
float sum = 0;
float[] buf = new float[rows_m1];
// Multiply rows of m1 with columns of m2 and store in buf
for (int r=0; r < rows_m1; r++) {
buf[r] = m1[c][r]* m2[r][r2];
}
// Add up all entries into sum
for (float entry : buf) {
sum += entry;
}
res[c][r2] = sum;
}
}
return res;
}
PVector applyPerspective(PVector p) {
PVector d = applyViewTransform(p);
return applyPerspectiveTransform(d);
}
PVector applyViewTransform(PVector p) {
// c = camera position
// co = camera orientation / camera rotation
PVector c = cam;
PVector co = cam_angle;
// dx, dy, dz https://en.wikipedia.org/wiki/3D_projection : Mathematical Formula
float[][] dxyz = matmul(
matmul(new float[][]{
{1, 0, 0},
{0, cos(co.x), sin(co.x)},
{0, -sin(co.x), cos(co.x)}
}, new float[][]{
{cos(co.y), 0, -sin(co.y)},
{0, 1, 0},
{sin(co.y), 0, cos(co.y)}
}),
matmul(new float[][]{
{cos(co.z), sin(co.z), 0},
{-sin(co.z), cos(co.z), 0},
{0, 0, 1}
}, new float[][]{
{p.x - c.x},
{p.y - c.y},
{p.z - c.z},
}));
PVector d = new PVector(dxyz[0][0], dxyz[1][0], dxyz[2][0]);
return d;
}
PVector applyPerspectiveTransform(PVector d) {
// e = displays surface pos relative to camera pinhole c
PVector e = new PVector(0, 0, 300);
return new PVector((e.z / d.z) * d.x + e.x, (e.z / d.z) * d.y + e.y);
}
void draw() {
background(255);
translate(width/2, height/2);
scale(1,-1);
noStroke();
fill(0, 100, 0, 50);
PVector[][] points_view = new PVector[vertices.length][];
for(int i=0; i < vertices.length; i++) {
points_view[i] = new PVector[vertices[i].length];
for(int j=0; j < vertices[i].length; j++)
points_view[i][j] = applyViewTransform(Rotate3d_y(vertices[i][j], angle));
}
// The following snippet I got from: https://stackoverflow.com/questions/74443149/3d-projection-axis-inversion-problem-java-processing?noredirect=1#comment131433616_74443149
float nearPlane = 1.0;
for (int c = 0; c < points_view.length; c++) {
beginShape();
for (int r = 0; r < points_view[c].length-1; r++) {
// Connect all points
//if (i == a) continue;
PVector p0 = points_view[c][r];
PVector p1 = points_view[c][r+1];
if(p0.z < nearPlane && p1.z < nearPlane){ continue; };
if(p0.z >= nearPlane && p1.z < nearPlane)
p1 = PVector.lerp(p0, p1, (p0.z - nearPlane) / (p0.z - p1.z));
if(p0.z < nearPlane && p1.z >= nearPlane)
p0 = PVector.lerp(p1, p0, (p1.z - nearPlane) / (p1.z - p0.z));
// project
p0 = applyPerspectiveTransform(p0);
p1 = applyPerspectiveTransform(p1);
vertex(p0.x, p0.y);
vertex(p1.x, p1.y);
}
endShape();
}
}
Ray tracing doesn't determine whether or not a polygon is visible. It determines what point (if any) on what polygon is visible in a given direction.
As a simplification: rasterisation works by taking a set of geometry and, for each piece, determining which pixels it affects. Ray tracing works by taking a set of pixels and, for each one, determining which geometry is visible along that direction.
With rasterisation, there are many ways of making sure that polygons don't draw in the wrong order. One approach is to sort them by distance to the camera, but that doesn't work with polygons that overlap. The usual approach is to use a z-buffer: when a polygon is rasterised, calculate the distance to the camera in each pixel, and only update the buffer if the new value is nearer to the camera than the old value.
With ray tracing, each ray returns the nearest hit location along a direction, along with what it hit. Since each pixel will only be visited once, you don't need to worry about triangles drawing on top of each other.
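To make the "nearest hit" idea concrete, here is a rough sketch (Hit, Ray, Triangle and intersect() are all hypothetical placeholders, not code from the question or from any library):
// Sketch: return the closest intersection of a ray with a set of triangles.
Hit trace(Ray ray, Triangle[] triangles) {
    Hit nearest = null;
    for (Triangle t : triangles) {
        Hit h = intersect(ray, t);               // null if the ray misses this triangle
        if (h != null && (nearest == null || h.distance < nearest.distance)) {
            nearest = h;                         // keep only the closest intersection so far
        }
    }
    return nearest;                              // what this pixel sees, or null for background
}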
If you just want to project a piece of 3D geometry onto a plane, rasterisation will likely be much, much faster. At a very high level, do this:
create an RGBA buffer of size X*Y
create a z buffer of size X*Y and fill it with 'inf'
for each triangle:
project the triangle onto the projection plane
for each pixel the triangle might affect:
calculate distance from camera to the corresponding position on the triangle
if the distance is lower than the current value in the z buffer:
replace the value in the RGBA and z buffers with the new values
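If you wanted to try that inside the Processing sketch from the question, a minimal depth-test helper could look like this. This is only a sketch: it assumes you rasterise triangles yourself, wrap the drawing in loadPixels()/updatePixels(), and can compute a camera distance for every covered pixel; none of these helpers exist in the posted code.
// Sketch: one depth value per pixel; allocate after size() so width/height are set.
float[] zBuffer;                         // e.g. in setup(): zBuffer = new float[width * height];

void clearZBuffer() {                    // call at the start of draw()
  java.util.Arrays.fill(zBuffer, Float.POSITIVE_INFINITY);
}

// Call for every pixel a projected triangle covers (between loadPixels()/updatePixels()).
// depth = camera distance of the surface point at (x, y); shade = its colour.
void plotIfNearer(int x, int y, float depth, color shade) {
  int idx = y * width + x;
  if (depth < zBuffer[idx]) {            // nearer than anything drawn here so far
    zBuffer[idx] = depth;                // remember the new nearest depth
    pixels[idx] = shade;                 // and overwrite the colour buffer
  }
}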

How to rotate a polygon around its center?

I've found other questions asking how to do this but I haven't gotten any of them to work. I'm trying to write a method that rotates a polygon around its center by a number of degrees. My current code makes the polygon disappear from the screen as soon as it rotates by one degree.
public void rotate(double angle) {
AffineTransform at = new AffineTransform();
int xCoords [] = {(int)line1.getX1(), (int)line2.getX1(), (int)line3.getX1()};
int yCoords [] = {(int)line1.getY1(), (int)line2.getY1(), (int)line3.getY1()};
Polygon polygon = new Polygon(xCoords, yCoords, 3);
at.rotate(Math.toRadians(angle), getX(), getY());
for (int i = 0; i < polygon.npoints; i++){
Point p = new Point(polygon.xpoints[i], polygon.ypoints[i]);
at.transform(p, p);
//System.out.println(p.x);
//System.out.println(p.y);
poly.addPoint(p.x, p.y);
}
setA(poly.xpoints[1], poly.ypoints[1]);
setB(poly.xpoints[2], poly.ypoints[2]);
setC(poly.xpoints[3], poly.ypoints[3]);
}
edit: include attempt
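A hedged sketch of the usual approach (not from the original post): compute the polygon's centroid, build the rotation about that point with AffineTransform.getRotateInstance, and transform the whole shape in one call. Likely culprits in the posted code: poly is never reset between calls, and setA/setB/setC read xpoints[1..3] even though a three-point Polygon only has valid indices 0..2, so the triangle gets wrong vertices.
// Sketch, assuming a java.awt.Polygon named polygon already holds the triangle.
import java.awt.Polygon;
import java.awt.geom.AffineTransform;
import java.awt.geom.Path2D;

Path2D rotateAboutCentroid(Polygon polygon, double angleDegrees) {
    // Centroid of the vertices (good enough as a rotation pivot for a triangle)
    double cx = 0, cy = 0;
    for (int i = 0; i < polygon.npoints; i++) {
        cx += polygon.xpoints[i];
        cy += polygon.ypoints[i];
    }
    cx /= polygon.npoints;
    cy /= polygon.npoints;

    AffineTransform at = AffineTransform.getRotateInstance(Math.toRadians(angleDegrees), cx, cy);
    return new Path2D.Double(polygon, at);   // rotated copy; the original polygon is untouched
}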

Opencv - detecting whether the eye is closed or open

I am working on a project where we are trying to detect whether the eye is closed or open in a picture. What we have done so far is detect the face, then the eyes. Then we applied a Hough transform, hoping that the iris would be the only circle when the eye is open. The problem is that when the eye is closed, it produces a circle as well.
Here is the code:
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.highgui.Highgui;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.imgproc.Imgproc;
public class FaceDetector {
public static void main(String[] args) {
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
System.out.println("\nRunning FaceDetector");
CascadeClassifier faceDetector = new CascadeClassifier("D:\\CS\\opencv\\sources\\data\\haarcascades\\haarcascade_frontalface_alt.xml");
CascadeClassifier eyeDetector = new CascadeClassifier("D:\\CS\\opencv\\sources\\data\\haarcascades\\haarcascade_eye.xml");
Mat image = Highgui.imread("C:\\Users\\Yousra\\Desktop\\images.jpg");
Mat gray = Highgui.imread("C:\\Users\\Yousra\\Desktop\\eyes\\E7.png");
String faces;
String eyes;
MatOfRect faceDetections = new MatOfRect();
MatOfRect eyeDetections = new MatOfRect();
Mat face;
Mat crop = null;
Mat circles = new Mat();
faceDetector.detectMultiScale(image, faceDetections);
for (int i = 0; i< faceDetections.toArray().length; i++){
faces = "Face"+i+".png";
face = image.submat(faceDetections.toArray()[i]);
crop = face.submat(4, (2*face.width())/3, 0, face.height());
Highgui.imwrite(faces, face);
eyeDetector.detectMultiScale(crop, eyeDetections, 1.1, 2, 0,new Size(30,30), new Size());
if(eyeDetections.toArray().length ==0){
System.out.println(" Not a face" + i);
}else{
System.out.println("Face with " + eyeDetections.toArray().length + "eyes" );
for (int j = 0; j< eyeDetections.toArray().length ; j++){
System.out.println("Eye" );
Mat eye = crop.submat(eyeDetections.toArray()[j]);
eyes = "Eye"+j+".png";
Highgui.imwrite(eyes, eye);
}
}
}
Imgproc.cvtColor(gray, gray, Imgproc.COLOR_BGR2GRAY);
System.out.println("1 Hough :" +circles.size());
float circle[] = new float[3];
for (int i = 0; i < circles.cols(); i++)
{
circles.get(0, i, circle);
org.opencv.core.Point center = new org.opencv.core.Point();
center.x = circle[0];
center.y = circle[1];
Core.circle(gray, center, (int) circle[2], new Scalar(255,255,100,1), 4);
}
Imgproc.Canny( gray, gray, 200, 10, 3,false);
Imgproc.HoughCircles( gray, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 100, 80, 10, 10, 50 );
System.out.println("2 Hough:" +circles.size());
for (int i = 0; i < circles.cols(); i++)
{
circles.get(0, i, circle);
org.opencv.core.Point center = new org.opencv.core.Point();
center.x = circle[0];
center.y = circle[1];
Core.circle(gray, center, (int) circle[2], new Scalar(255,255,100,1), 4);
}
Imgproc.Canny( gray, gray, 200, 10, 3,false);
Imgproc.HoughCircles( gray, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 100, 80, 10, 10, 50 );
System.out.println("3 Hough" +circles.size());
//float circle[] = new float[3];
for (int i = 0; i < circles.cols(); i++)
{
circles.get(0, i, circle);
org.opencv.core.Point center = new org.opencv.core.Point();
center.x = circle[0];
center.y = circle[1];
Core.circle(gray, center, (int) circle[2], new Scalar(255,255,100,1), 4);
}
String hough = "afterhough.png";
Highgui.imwrite(hough, gray);
}
}
How to make it more accurate?
Circular Hough transform is unlikely to work well in the majority of cases i.e. where the eye is partially open or closed. You'd be better off isolating rectangular regions (bounding boxes) around the eyes and computing a measure based on pixel intensities (grey levels). For example the variance of pixels within the region would be a good discriminator between open and closed eyes. Obtaining a bounding box around the eyes can be done quite reliably using relative position from the bounding box detected around the face using OpenCV Haar cascades. Figure 3 in this paper gives some idea of the location process.
http://personal.ee.surrey.ac.uk/Personal/J.Collomosse/pubs/Malleson-IJCV-2012.pdf
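A hedged sketch of that variance measure using the OpenCV Java bindings already used in the question (the eyeRegion submat and the threshold value are assumptions, not part of the original answer):
// eyeRegion: a BGR submat cropped around one eye, e.g. from the Haar cascade detection.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.imgproc.Imgproc;

static boolean looksOpen(Mat eyeRegion) {
    Mat gray = new Mat();
    Imgproc.cvtColor(eyeRegion, gray, Imgproc.COLOR_BGR2GRAY);
    MatOfDouble mean = new MatOfDouble();
    MatOfDouble stddev = new MatOfDouble();
    Core.meanStdDev(gray, mean, stddev);
    double variance = stddev.get(0, 0)[0] * stddev.get(0, 0)[0];
    // An open eye shows a dark iris/pupil against bright sclera, so its grey-level
    // variance is higher than a closed lid's; the threshold must be tuned on your data.
    return variance > 400.0;
}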
You can check the circles.cols() value: if it is 2 the eyes are open, and if it is 0 the eyes are closed. You can also detect an eye blink when the value of circles.cols() changes from 2 to 0. The Hough transform will not detect a circle if the eyes are closed.
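A minimal sketch of that blink check, assuming detectCircles() wraps the HoughCircles call from the question and is run once per frame (both names are hypothetical):
int previousCount = -1;

void checkBlink(Mat frame) {
    Mat circles = detectCircles(frame);       // HoughCircles on the eye region
    int count = circles.cols();
    if (previousCount == 2 && count == 0) {   // open -> closed transition
        System.out.println("Blink detected");
    }
    previousCount = count;
}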

How to plot a dual Y-Axis (secondary Y-axis) using AChartEngine - Android

I have two data series with two different Y-axes and a single X-axis. I am trying to plot a dual Y-axis (known in Excel as a secondary Y-axis) so that the two series are scaled independently, but I only get a single Y-axis for both data series. Note: I am using AChartEngine 1.1.0.
Can anyone please advise?
My code is given below with a screenshot.
public class LineChart {
public Intent getIntent(Context context){
int[] x = {1,2,3,4,5,6,7,8,9,10};
int[] y = {22,45,34,45,55,65,74,85,93,100};
TimeSeries series = new TimeSeries("Data1");
for(int i = 0; i<x.length; i++){
series.add(x[i], y[i]);
}
int[] x2 = {1,2,3,4,5,6,7,8,9,10};
int[] y2 = {223,454,334,454,554,655,745,855,935,510};
TimeSeries series2 = new TimeSeries("Data2");
for(int i = 0; i<x.length; i++){
series2.add(x2[i], y2[i]);
}
//Multiple Series Data Set
XYMultipleSeriesDataset dataset = new XYMultipleSeriesDataset();
dataset.addSeries(series); //First Data Series
dataset.addSeries(series2); //Second Data Series
//Multiple Series Renderer
XYMultipleSeriesRenderer mRenderer = new XYMultipleSeriesRenderer(2);
//Background
mRenderer.setApplyBackgroundColor(true);
mRenderer.setBackgroundColor(Color.BLACK);
//mRenderer.setMarginsColor(Color.parseColor("#F5F5F5"));
//Grid
mRenderer.setShowGridY(true);
mRenderer.setShowGridX(true);
mRenderer.setGridColor(Color.WHITE);
//Label
mRenderer.setLabelsTextSize(14);
mRenderer.setXLabelsColor(Color.GREEN);
//Min and Max
mRenderer.setXAxisMax(series.getMaxX());
mRenderer.setXAxisMin(series.getMinX());
//Dual yaxis
mRenderer.setYLabelsColor(0, Color.GREEN);
mRenderer.setYLabelsColor(1, Color.RED);
mRenderer.setYTitle("Y-AXIS1", 0);
mRenderer.setYTitle("Y-AXIS2", 1);
mRenderer.setYAxisAlign(Align.LEFT, 0);
mRenderer.setYAxisAlign(Align.RIGHT, 1);
mRenderer.setYLabelsAlign(Align.LEFT, 0);
mRenderer.setYLabelsAlign(Align.RIGHT, 1);
//First Series - Single Series Renderer
XYSeriesRenderer renderer = new XYSeriesRenderer();
renderer.setColor(Color.RED);
renderer.setPointStyle(PointStyle.CIRCLE);
renderer.setFillPoints(true);
//Second Series - Single Series Renderer
XYSeriesRenderer renderer2 = new XYSeriesRenderer();
renderer2.setColor(Color.GREEN);
renderer2.setPointStyle(PointStyle.CIRCLE);
renderer2.setFillPoints(true);
//Add renderers to multiple series Renderer
mRenderer.addSeriesRenderer(renderer);
mRenderer.addSeriesRenderer(renderer2);
Intent intent = ChartFactory.getLineChartIntent(context, dataset, mRenderer, "Line Graph Title");
return intent;
}
}
Finally, I have done it using this example - here
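For anyone who lands here without the link: in AChartEngine the second series has to be attached to the second scale explicitly. To the best of my recollection of the 1.x API (treat this as a sketch, and note it swaps XYSeries in for the question's TimeSeries), that is done through the scale-number constructor and the scale-indexed renderer setters:
// Sketch for AChartEngine 1.x: XYSeries(title, scaleNumber) binds a series to a scale.
XYSeries series2 = new XYSeries("Data2", 1);   // 1 = second Y axis
for (int i = 0; i < x2.length; i++) {
    series2.add(x2[i], y2[i]);
}
dataset.addSeries(series2);

// The renderer was already created with two scales: new XYMultipleSeriesRenderer(2).
// Give the second scale its own range so the two series no longer share one axis.
mRenderer.setYAxisMin(0, 1);
mRenderer.setYAxisMax(1000, 1);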

Convex Hull on Java Android Opencv 2.3

Please help me,
I have a problem with convex hull on Android. I use Java and OpenCV 2.3.
Before porting it to Java, I wrote it in C++ with Visual Studio 2008, where it runs successfully.
Now I want to convert it from C++ to Java on Android, but I get a "force close" error when I run it in the Android SDK emulator.
This is my code on C++:
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
drawing = Mat::zeros( canny_output.size(), CV_64F );
/// Find the convex hull object for each contour
vector<vector<Point> > hull ( contours.size() );
for( int i = 0; i < contours.size(); i++ )
{ convexHull( Mat(contours[i]), hull[i], false );
}
for(size_t i = 0; i < contours.size(); i++){
drawContours( drawing, hull, i, Scalar(255, 255, 255), CV_FILLED ); // FILL WHITE COLOR
}
And this is my code on Android:
Mat hierarchy = new Mat(img_canny.rows(),img_canny.cols(),CvType.CV_8UC1,new Scalar(0));
List<Mat> contours =new ArrayList<Mat>();
List<Mat> hull = new ArrayList<Mat>(contours.size());
drawing = Mat.zeros(img_canny.size(), im_gray);
Imgproc.findContours(img_dilasi, contours, hierarchy,Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));
for(int i=0; i<contours.size(); i++){
Imgproc.convexHull(contours.get(i), hull.get(i), false);
}
for(int i=0; i<contours.size(); i++){
Imgproc.drawContours(drawing, hull, i, new Scalar(255.0, 255.0, 255.0), 5);
}
For your info, I made a small modification to the convex hull in my code: I fill the contour with a color.
Can anyone help me solve my problem?
I'd be very grateful for your help.
Don't have the rep to add a comment; just wanted to say the two answers above helped me get Imgproc.convexHull() working for my use case with something like this (2.4.8):
MatOfPoint mopIn = ...
MatOfInt hull = new MatOfInt();
Imgproc.convexHull(mopIn, hull, false);
MatOfPoint mopOut = new MatOfPoint();
mopOut.create((int)hull.size().height,1,CvType.CV_32SC2);
for(int i = 0; i < hull.size().height ; i++)
{
int index = (int)hull.get(i, 0)[0];
double[] point = new double[] {
mopIn.get(index, 0)[0], mopIn.get(index, 0)[1]
};
mopOut.put(i, 0, point);
}
// do something interesting with mopOut
This code works well in my application. In my case, I had multiple contours to work with, so you will notice a lot of Lists, but if you only have one contour, just adjust it to work without the .get(i) iterations.
This thread explains the process more simply: "android java opencv 2.4 convexhull convexdefect".
// Find the convex hull
List<MatOfInt> hull = new ArrayList<MatOfInt>();
for(int i=0; i < contours.size(); i++){
hull.add(new MatOfInt());
}
for(int i=0; i < contours.size(); i++){
Imgproc.convexHull(contours.get(i), hull.get(i));
}
// Convert MatOfInt to MatOfPoint for drawing convex hull
// Loop over all contours
List<Point[]> hullpoints = new ArrayList<Point[]>();
for(int i=0; i < hull.size(); i++){
Point[] points = new Point[hull.get(i).rows()];
// Loop over all points that need to be hulled in current contour
for(int j=0; j < hull.get(i).rows(); j++){
int index = (int)hull.get(i).get(j, 0)[0];
points[j] = new Point(contours.get(i).get(index, 0)[0], contours.get(i).get(index, 0)[1]);
}
hullpoints.add(points);
}
// Convert Point arrays into MatOfPoint
List<MatOfPoint> hullmop = new ArrayList<MatOfPoint>();
for(int i=0; i < hullpoints.size(); i++){
MatOfPoint mop = new MatOfPoint();
mop.fromArray(hullpoints.get(i));
hullmop.add(mop);
}
// Draw contours + hull results
Mat overlay = new Mat(binaryImage.size(), CvType.CV_8UC3);
Scalar color = new Scalar(0, 255, 0); // Green
for(int i=0; i < contours.size(); i++){
Imgproc.drawContours(overlay, contours, i, color);
Imgproc.drawContours(overlay, hullmop, i, color);
}
Example in Java (OpenCV 2.4.11)
hullMat contains the sub mat of gray, as identified by the convexHull method. You may want to filter the contours you really need, for example based on their area.
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
MatOfInt4 hierarchy = new MatOfInt4();
MatOfInt hull = new MatOfInt();
void foo(Mat gray) {
Imgproc.findContours(gray, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
for (int i = 0; i < contours.size(); i++) {
Imgproc.convexHull(contours.get(i), hull);
MatOfPoint hullContour = hull2Points(hull, contours.get(i));
Rect box = Imgproc.boundingRect(hullContour);
Mat hullMat = new Mat(gray, box);
...
}
}
MatOfPoint hull2Points(MatOfInt hull, MatOfPoint contour) {
List<Integer> indexes = hull.toList();
List<Point> points = new ArrayList<>();
MatOfPoint point= new MatOfPoint();
for(Integer index:indexes) {
points.add(contour.toList().get(index));
}
point.fromList(points);
return point;
}
Looking at the documentation of findContours() and convexHull(), it appears that you have declared the variables contours and hull incorrectly.
Try changing the declarations to:
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
List<MatOfInt> hull = new ArrayList<MatOfInt>();
Then, after you call convexHull(), hull contains the indices of the points in contours which comprise the convex hull. In order to draw the points with drawContours(), you will need to populate a new MatOfPoint containing only the points on the convex hull, and pass that to drawContours(). I leave this as an exercise for you.
To add on to what Aurelius said, in your C++ implementation you used a vector of points, therefore the hull matrix contains the actual convex Points:
"In the first case [integer vector of indices], the hull elements are 0-based indices of the convex hull points in the original array (since the set of convex hull points is a subset of the original point set). In the second case [vector of points], hull elements are the convex hull points themselves." - convexHull
This is why you were able to call
drawContours( drawing, hull, i, Scalar(255, 255, 255), CV_FILLED );
In your android version, the hull output is simply an array of indices which correspond to the points in the original contours.get(i) Matrix. Therefore you need to look up the convex points in the original matrix. Here is a very rough idea:
MatOfInt hull = new MatOfInt();
MatOfPoint tempContour = contours.get(i);
Imgproc.convexHull(tempContour, hull, false); // O(N*Log(N))
//System.out.println("hull size: " + hull.size() + " x" + hull.get(0,0).length);
//System.out.println("Contour matrix size: " + tempContour.size() + " x" + tempContour.get(0,0).length);
int index = (int) hull.get(((int) hull.size().height)-1, 0)[0];
Point pt, pt0 = new Point(tempContour.get(index, 0)[0], tempContour.get(index, 0)[1]);
for(int j = 0; j < hull.size().height -1 ; j++){
index = (int) hull.get(j, 0)[0];
pt = new Point(tempContour.get(index, 0)[0], tempContour.get(index, 0)[1]);
Core.line(frame, pt0, pt, new Scalar(255, 0, 100), 8);
pt0 = pt;
}
Alternatively, use Imgproc.fillConvexPoly (here point must be a MatOfPoint holding the hull points for contour i):
for( int i = 0; i < contours.size(); i++ ){
Imgproc.fillConvexPoly(image_2, point,new Scalar(255, 255, 255));
}
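As a sketch of what that point argument would need to be (building on the hull2Points helper shown earlier in this thread; the image_2 destination Mat is taken from the snippet above):
// Fill each convex hull with white, using hull2Points to turn the
// index-based MatOfInt hull into the actual hull points.
for (int i = 0; i < contours.size(); i++) {
    MatOfInt hullIndices = new MatOfInt();
    Imgproc.convexHull(contours.get(i), hullIndices);
    MatOfPoint hullPoints = hull2Points(hullIndices, contours.get(i));
    Imgproc.fillConvexPoly(image_2, hullPoints, new Scalar(255, 255, 255));
}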
