We tried to convert the C++ code from Detection of coins (and fit ellipses) on an image into Java. When we start the program with the parameters
2 PathToThePicture
it crashes with this error:
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in cvGetSize, file
..\..\..\..\opencv\modules\core\src\array.cpp, line 1238
Exception in thread "main" java.lang.RuntimeException: ..\..\..\..\opencv\modules
\core\src\array.cpp:1238: error: (-5) Array should be CvMat or IplImage in function
cvGetSize
at com.googlecode.javacv.cpp.opencv_core.cvGetSize(Native Method)
at DetectEllipse.main(DetectEllipse.java:65)
Here is the converted Java-Code:
import static com.googlecode.javacv.cpp.opencv_core.CV_FILLED;
import static com.googlecode.javacv.cpp.opencv_core.CV_RGB;
import static com.googlecode.javacv.cpp.opencv_core.CV_WHOLE_SEQ;
import static com.googlecode.javacv.cpp.opencv_core.cvCreateImage;
import static com.googlecode.javacv.cpp.opencv_core.cvCreateMemStorage;
import static com.googlecode.javacv.cpp.opencv_core.cvDrawContours;
import static com.googlecode.javacv.cpp.opencv_core.cvGetSize;
import static com.googlecode.javacv.cpp.opencv_core.cvPoint;
import static com.googlecode.javacv.cpp.opencv_core.cvScalar;
import static com.googlecode.javacv.cpp.opencv_core.cvXorS;
import static com.googlecode.javacv.cpp.opencv_core.cvZero;
import static com.googlecode.javacv.cpp.opencv_highgui.cvLoadImage;
import static com.googlecode.javacv.cpp.opencv_highgui.cvSaveImage;
import static com.googlecode.javacv.cpp.opencv_imgproc.CV_CHAIN_APPROX_SIMPLE;
import static com.googlecode.javacv.cpp.opencv_imgproc.CV_RETR_CCOMP;
import static com.googlecode.javacv.cpp.opencv_imgproc.CV_THRESH_BINARY;
import static com.googlecode.javacv.cpp.opencv_imgproc.cvContourArea;
import static com.googlecode.javacv.cpp.opencv_imgproc.cvDilate;
import static com.googlecode.javacv.cpp.opencv_imgproc.cvFindContours;
import static com.googlecode.javacv.cpp.opencv_imgproc.cvThreshold;
import com.googlecode.javacpp.Loader;
import com.googlecode.javacv.cpp.opencv_core.CvContour;
import com.googlecode.javacv.cpp.opencv_core.CvMemStorage;
import com.googlecode.javacv.cpp.opencv_core.CvRect;
import com.googlecode.javacv.cpp.opencv_core.CvScalar;
import com.googlecode.javacv.cpp.opencv_core.CvSeq;
import com.googlecode.javacv.cpp.opencv_core.IplImage;
public class DetectEllipse{
public static final double M_PI = 3.14159265358979323846;
public static final double MIN_AREA = 100.00;
public static final double MAX_TOL = 100.00;
private static int[] array = { 0 };
//
// MIN_AREA needs to be high enough to get rid of things that are too small
// to have a definite shape. Otherwise, they will end up as ellipse false
// positives.
//
// One way to tell if an object is an ellipse is to look at the relationship
// of its area to its dimensions. If its actual occupied area can be estimated
// using the well-known area formula Area = PI*A*B, then it has a good chance
// of being an ellipse.
//
// MAX_TOL is the maximum permissible error between actual and estimated
// area.
//
public static void main(String[] args) {
IplImage src = cvLoadImage(args[1], 0);
// the second command line parameter must be the file name of a binary
// (black-and-white) image
if (Integer.parseInt(args[0]) == 2) {
IplImage dst = cvCreateImage(cvGetSize(src), 8, 3);
CvMemStorage storage = cvCreateMemStorage(0);
CvSeq contour = new CvContour();
// maybe: = new CvSeq(0)
cvThreshold(src, src, 1, 255, CV_THRESH_BINARY);
//
// Invert the image such that white is foreground, black is
// background.
// Dilate to get rid of noise.
//
cvXorS(src, cvScalar(255, 0, 0, 0), src, null);
cvDilate(src, src, null, 2);
cvFindContours(src, storage, contour,
Loader.sizeof(CvContour.class), CV_RETR_CCOMP,
CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
cvZero(dst);
for (; contour.flags() != 0; contour = contour.h_next()) {
// if not working: use contour.isNull()
double actual_area = Math.abs(cvContourArea(contour,
CV_WHOLE_SEQ, 0));
if (actual_area < MIN_AREA)
continue;
//
// FIXME:
// Assuming the axes of the ellipse are vertical/perpendicular.
//
CvRect rect = ((CvContour) contour).rect();
int A = rect.width() / 2;
int B = rect.height() / 2;
double estimated_area = Math.PI * A * B;
double error = Math.abs(actual_area - estimated_area);
if (error > MAX_TOL)
continue;
System.out.printf("center x: %d y: %d A: %d B: %d\n", rect.x()
+ A, rect.y() + B, A, B);
CvScalar color = CV_RGB(
tangible.RandomNumbers.nextNumber() % 255,
tangible.RandomNumbers.nextNumber() % 255,
tangible.RandomNumbers.nextNumber() % 255);
cvDrawContours(dst, contour, color, color, -1, CV_FILLED, 8,
cvPoint(0, 0));
}
cvSaveImage("coins.png", dst, array);
}
}
}
Can anyone help us? Thanks in advance!
cvGetSize(src) is probably causing the crash. That happens when src is null.
In other words, the image was not loaded/found (maybe the path is wrong?).
In the future, you can avoid such problems by testing whether the image was loaded successfully:
IplImage src = cvLoadImage(args[1], 0);
if (src == null)
{
System.out.println("!!! Unable to load image: " + args[1]);
return;
}
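If that check still fires even though the file seems to exist, it can help to print the path the JVM actually resolves; a relative path is resolved against the working directory, which in an IDE is usually the project root. A small diagnostic sketch (the java.io.File part is only for the message, not something the original code needs):
IplImage src = cvLoadImage(args[1], 0);
if (src == null)
{
    java.io.File f = new java.io.File(args[1]);
    System.out.println("!!! Unable to load image: " + f.getAbsolutePath()
            + (f.exists() ? " (found, but could not be decoded)" : " (file not found)"));
    return;
}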
Related
I am looking for a class in Java that gives me the possibility to check, e.g., whether pixel (x,y) is red or whether the average color of a selected screen area is X. Is there a class that supports this? If not, I'd try to create that kind of class on my own. Does Java have tools that support such operations?
You can use the Robot class with a simple calculation to check if the color is close to the color you are looking for:
package test;
import java.awt.Color;
import java.awt.Robot;
import java.awt.AWTException;
public class main {
static final Color RED_COLOR = new Color(255, 0, 0);
private static boolean colorsAreClose(Color color1, Color color2, int threshold) {
int r = (int) color1.getRed() - color2.getRed(), g = (int) color1.getGreen() - color2.getGreen(),
b = (int) color1.getBlue() - color2.getBlue();
return (r * r + g * g + b * b) <= threshold * threshold;
}
public static void main(String[] args) {
Color pixelColor = null;
try {
Robot robot = new Robot();
pixelColor = robot.getPixelColor(500, 500);
System.out.println(String.format("Red %d, Green %d, Blue %d", pixelColor.getRed(), pixelColor.getGreen(),
pixelColor.getBlue()));
} catch (AWTException e) {
e.printStackTrace();
return;
}
boolean isRedPixel = colorsAreClose(pixelColor, RED_COLOR, 50);
System.out.println(isRedPixel ? "Pixel is red" : "Pixel is not red");
}
}
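For the other half of the question (the average color of a selected screen area), Robot.createScreenCapture can grab the region as a BufferedImage whose channels can then be averaged. A rough sketch that would fit into the class above; the method name and the example region are made up for illustration (it also needs import java.awt.Rectangle; and import java.awt.image.BufferedImage;):
private static Color averageColor(Robot robot, Rectangle region) {
    BufferedImage capture = robot.createScreenCapture(region);
    long sumR = 0, sumG = 0, sumB = 0;
    for (int y = 0; y < capture.getHeight(); y++) {
        for (int x = 0; x < capture.getWidth(); x++) {
            Color c = new Color(capture.getRGB(x, y));
            sumR += c.getRed();
            sumG += c.getGreen();
            sumB += c.getBlue();
        }
    }
    int pixels = capture.getWidth() * capture.getHeight();
    return new Color((int) (sumR / pixels), (int) (sumG / pixels), (int) (sumB / pixels));
}
// usage, e.g. a 100x100 area starting at (400, 300):
// Color avg = averageColor(robot, new Rectangle(400, 300, 100, 100));
// boolean areaIsRed = colorsAreClose(avg, RED_COLOR, 50);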
I am working on an application that matches a photo taken with the mobile phone camera against a series of images saved in a database. The following Java code works fine for matching one image against one template. Please help me extend this program to match against several templates and return the best match. I am using Android Studio to develop the application.
Thank you.
import org.bytedeco.javacv.*;
import org.bytedeco.javacpp.*;
import org.bytedeco.javacpp.indexer.FloatIndexer;
import static org.bytedeco.javacpp.opencv_core.*;
import static org.bytedeco.javacpp.opencv_imgproc.*;
import static org.bytedeco.javacpp.opencv_highgui.*;
import static org.bytedeco.javacpp.opencv_imgcodecs.*;
//import static org.bytedeco.javacpp.opencv_calib3d.*;
//import static org.bytedeco.javacpp.opencv_objdetect.*;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
public class TemplateMatching {
public static void main(String[] args) throws Exception {
footPrint(args);
} //main method
public static void footPrint(String[] args){
//read in image default colors
Mat sourceColor=imread("F:\\image_processing\\Foot_Print_Temp_Match\\img.jpg", CV_LOAD_IMAGE_COLOR);//this image should captured by the camera
Mat sourceGrey = new Mat(sourceColor.size(), CV_8UC1);
cvtColor(sourceColor, sourceGrey, COLOR_BGR2GRAY);
//load in template in grey
Mat template1 = imread("F:\\image_processing\\templ.jpg",CV_LOAD_IMAGE_GRAYSCALE);//Template should load from the database
//Size for the result image
Size size = new Size(sourceGrey.cols()-template1.cols()+1, sourceGrey.rows()-template1.rows()+1);
Mat result = new Mat(size, CV_32FC1);//32 bit floating point signed depth in one channel
matchTemplate(sourceGrey, template1, result, TM_CCORR_NORMED) ;//Template matching function
DoublePointer minVal= new DoublePointer();
DoublePointer maxVal= new DoublePointer();
Point min = new Point();
Point max = new Point();
minMaxLoc(result, minVal, maxVal, min, max, null);
rectangle(sourceColor,new Rect(max.x(),max.y(),template1.cols(),template1.rows()), randColor(), 2, 0, 0);
imshow("Original marked", sourceColor);
imshow("Template", template1);
waitKey(0);
destroyAllWindows();
}
public static Scalar randColor(){
int b,g,r;
b= ThreadLocalRandom.current().nextInt(0, 255 + 1);
g= ThreadLocalRandom.current().nextInt(0, 255 + 1);
r= ThreadLocalRandom.current().nextInt(0, 255 + 1);
return new Scalar (b,g,r,0);
}
public static List<Point> getPointsFromMatAboveThreshold(Mat m, float t){
List<Point> matches = new ArrayList<Point>();
FloatIndexer indexer = m.createIndexer();
for (int y = 0; y < m.rows(); y++) {
for (int x = 0; x < m.cols(); x++) {
if (indexer.get(y,x)>t) {
System.out.println("(" + x + "," + y +") = "+ indexer.get(y,x));
matches.add(new Point(x, y));
}
}
}
return matches;
}
}
I have an application that basically does this.
Look at MatchingService and OpenCVUtils.
Basically, it should match a template, record the score, and move on to the next template; keep a list of scores associated with the template that produced each score, then just take the maximum score.
public static float matchScore(Mat src,Mat tmp){
Size size = new Size(src.cols()-tmp.cols()+1, src.rows()-tmp.rows()+1);
Mat result = new Mat(size, CV_32FC1);
matchTemplate(src, tmp, result, TM_CCORR_NORMED);
DoublePointer minVal= new DoublePointer();
DoublePointer maxVal= new DoublePointer();
Point min = new Point();
Point max = new Point();
FloatIndexer fi = result.createIndexer();
minMaxLoc(result, minVal, maxVal, min, max, null);
return fi.get(max.y(),max.x());
}
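To extend this to several templates, you can call matchScore for each candidate and keep the one with the highest score (TM_CCORR_NORMED scores lie in [0, 1], higher is better). A minimal sketch, assuming the templates have already been loaded into a java.util.List<Mat>; how they come out of the database is up to you:
public static Mat bestMatch(Mat src, java.util.List<Mat> templates) {
    Mat best = null;
    float bestScore = -1f;
    for (Mat tmp : templates) {
        float score = matchScore(src, tmp);   // reuse the method above
        if (score > bestScore) {
            bestScore = score;
            best = tmp;
        }
    }
    System.out.println("best score: " + bestScore);
    return best;                              // the template that matched best
}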
I'm making a project for my university, and I have a problem with one thing: everything works well, but there is a bug in comparing the colors of two pixels.
I have to compute the area of some figure using the Monte Carlo method (generate random points, count the points inside and outside the figure, and calculate the figure's area from the ratio).
Some points are counted correctly, some are not. I have no idea what's wrong; I've been trying to solve this for a few hours...
import java.awt.*;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Random;
import javax.imageio.ImageIO;
import javax.swing.*;
public class Runner extends JApplet{
private BufferedImage img;
public ArrayList<Point> w;
public ArrayList<Point> poza;
public BufferedImage output;
public void init(){
try{
img = ImageIO.read(new File("figura.gif"));
} catch (IOException e){
e.printStackTrace();
System.err.println("Image not found");
}
}
public void paint(Graphics g){
w = new ArrayList<Point>();
poza = new ArrayList<>();
super.paint(g);
Random random = new Random();
int wys = img.getHeight();
int szer = img.getWidth();
g.drawImage(img, 0, 0, wys, szer, null);
for (int i = 0; i < 1000; i++) {
int x = random.nextInt(wys);
int y = random.nextInt(szer);
Point p = new Point(x,y);
g.setColor(Color.GREEN);
g.drawOval(y, x, 1, 1);
Color c = new Color(img.getRGB(y, x));
if(c.equals(Color.BLACK)){
w.add(p);
g.setColor(Color.RED);
g.drawOval(y, x, 1, 1);
}else{
poza.add(p);
}
}
float a = w.size();
float b = poza.size()+w.size();
float poleProstokata = wys*szer;
float pole = a/b*poleProstokata;
}
I would suggest you switch the x and y coordinates, because, as described in the Oracle documentation:
https://docs.oracle.com/javase/7/docs/api/java/awt/image/BufferedImage.html#getRGB(int,%20int)
The method getRGB takes the X first and the Y second, so you'll have to use
Color c = new Color(img.getRGB(x, y));
instead of
Color c = new Color(img.getRGB(y, x));
And why not use an int? I mean, you always convert the int returned from getRGB to a Color and compare it. Why not create an int from Color.BLACK and compare it to the int returned from getRGB(x, y)?
Here's what I suggest:
int black=Color.BLACK.getRGB();
at the beginning of your paint method
and in your loop:
int c=img.getRGB(y, x);
and compare them:
if (black==c) {
//Do your stuff...
}
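Putting both suggestions together, the loop body could look roughly like this. This is only a sketch using the variable names from the question; note that x is used consistently as the horizontal coordinate (bounded by the width szer) and y as the vertical one (bounded by the height wys), which is what getRGB(x, y) expects:
int black = Color.BLACK.getRGB();      // packed int for black, computed once
for (int i = 0; i < 1000; i++) {
    int x = random.nextInt(szer);      // horizontal coordinate, limited by the width
    int y = random.nextInt(wys);       // vertical coordinate, limited by the height
    g.setColor(Color.GREEN);
    g.drawOval(x, y, 1, 1);
    if (black == img.getRGB(x, y)) {   // compare the packed ints directly
        w.add(new Point(x, y));
        g.setColor(Color.RED);
        g.drawOval(x, y, 1, 1);
    } else {
        poza.add(new Point(x, y));
    }
}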
I'm working on a program that detects the pupil area of the eye using OpenCV. Below is the code snippet. It throws a CvException at runtime, and I don't know what to do. How can I make it work? (OpenCV 2.4)
import java.util.ArrayList;
import java.util.List;
import java.lang.Math;
import org.opencv.core.Scalar;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Core;
import org.opencv.core.CvException;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;
public class Detect {
public static void main(String[] args) throws CvException{
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
// Load image
Mat src = Highgui.imread("tt.jpg");
// Convert the source image to grayscale
Mat gray = new Mat();
Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);
Highgui.imwrite("gray.jpg", gray);
// Convert to binary image by thresholding it
Imgproc.threshold(gray, gray, 30, 255, Imgproc.THRESH_BINARY_INV);
Highgui.imwrite("binary.jpg", gray);
// Find all contours
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(gray.clone(), contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE);
// Fill holes in each contour
Imgproc.drawContours(gray, contours, -1, new Scalar(255,255,255), -1);
for (int i = 0; i < contours.size(); i++)
{
double area = Imgproc.contourArea(contours.get(i));
Rect rect = Imgproc.boundingRect(contours.get(i));
int radius = rect.width/2;
System.out.println("Area: "+area);
// If contour is big enough and has round shape
// Then it is the pupil
if (area >= 30 &&
Math.abs(1 - ((double)rect.width / (double)rect.height)) <= 0.2 &&
Math.abs(1 - (area / (Math.PI * Math.pow(radius, 2)))) <= 0.2)
{
Core.circle(src, new Point(rect.x + radius, rect.y + radius), radius, new Scalar(255,0,0), 2);
System.out.println("pupil");
}
}
Highgui.imwrite("processed.jpg", src);
}
}
It shows the following error:
OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cv::cvtColor, file ..\..\..\..\opencv\modules\imgproc\src\color.cpp, line 3739
Exception in thread "main" CvException [org.opencv.core.CvException: cv::Exception: ..\..\..\..\opencv\modules\imgproc\src\color.cpp:3739: error: (-215) scn == 3 || scn == 4 in function cv::cvtColor
]
at org.opencv.imgproc.Imgproc.cvtColor_1(Native Method)
at org.opencv.imgproc.Imgproc.cvtColor(Imgproc.java:4598)
at Detect.main(Detect.java:24)
I think that OpenCV thinks that "tt.jpg" is already single-channel.
According to the documentation:
The function determines the type of an image by the content, not by the file extension.
To ensure the format, you can use a flag:
Mat src = Highgui.imread("tt.jpg"); // OpenCV decides the type based on the content
Mat src = Highgui.imread("tt.jpg", Highgui.IMREAD_GRAYSCALE); // single-channel image will be loaded, even if it is a 3-channel image
Mat src = Highgui.imread("tt.jpg", Highgui.IMREAD_COLOR); // 3-channel image will be loaded, even if it is a single-channel image
If you need only the grayscale image:
Mat src = Highgui.imread("tt.jpg", Highgui.IMREAD_GRAYSCALE);
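Alternatively, if you want to keep the plain imread call, you can guard the conversion by checking how many channels were actually loaded; a small sketch:
Mat src = Highgui.imread("tt.jpg");
Mat gray;
if (src.channels() == 1) {
    gray = src.clone();    // already grayscale, no conversion needed
} else {
    gray = new Mat();
    Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);
}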
I'm working with Areas in Java.
My test program draws three random triangles and combines them to form one or more polygons. After the Areas are .add()ed together, I use PathIterator to trace the edges.
Sometimes, however, the Area objects will not combine as they should... and as you can see in the last image I posted, extra edges will be drawn.
I think the problem is caused by rounding inaccuracies in Java's Area class (when I debug the test program, the Area shows the gaps before the PathIterator is used), but I don't think Java provides any other way to combine shapes.
Any solutions?
Example code and images:
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.geom.Area;
import java.awt.geom.Line2D;
import java.awt.geom.Path2D;
import java.awt.geom.PathIterator;
import java.util.ArrayList;
import java.util.Random;
import javax.swing.JFrame;
public class AreaTest extends JFrame{
private static final long serialVersionUID = -2221432546854106311L;
Area area = new Area();
ArrayList<Line2D.Double> areaSegments = new ArrayList<Line2D.Double>();
AreaTest() {
Path2D.Double triangle = new Path2D.Double();
Random random = new Random();
// Draw three random triangles
for (int i = 0; i < 3; i++) {
triangle.moveTo(random.nextInt(400) + 50, random.nextInt(400) + 50);
triangle.lineTo(random.nextInt(400) + 50, random.nextInt(400) + 50);
triangle.lineTo(random.nextInt(400) + 50, random.nextInt(400) + 50);
triangle.closePath();
area.add(new Area(triangle));
}
// Note: we're storing double[] and not Point2D.Double
ArrayList<double[]> areaPoints = new ArrayList<double[]>();
double[] coords = new double[6];
for (PathIterator pi = area.getPathIterator(null); !pi.isDone(); pi.next()) {
// Because the Area is composed of straight lines
int type = pi.currentSegment(coords);
// We record a double array of {segment type, x coord, y coord}
double[] pathIteratorCoords = {type, coords[0], coords[1]};
areaPoints.add(pathIteratorCoords);
}
double[] start = new double[3]; // To record where each polygon starts
for (int i = 0; i < areaPoints.size(); i++) {
// If we're not on the last point, return a line from this point to the next
double[] currentElement = areaPoints.get(i);
// We need a default value in case we've reached the end of the ArrayList
double[] nextElement = {-1, -1, -1};
if (i < areaPoints.size() - 1) {
nextElement = areaPoints.get(i + 1);
}
// Make the lines
if (currentElement[0] == PathIterator.SEG_MOVETO) {
start = currentElement; // Record where the polygon started to close it later
}
if (nextElement[0] == PathIterator.SEG_LINETO) {
areaSegments.add(
new Line2D.Double(
currentElement[1], currentElement[2],
nextElement[1], nextElement[2]
)
);
} else if (nextElement[0] == PathIterator.SEG_CLOSE) {
areaSegments.add(
new Line2D.Double(
currentElement[1], currentElement[2],
start[1], start[2]
)
);
}
}
setSize(new Dimension(500, 500));
setLocationRelativeTo(null); // To center the JFrame on screen
setDefaultCloseOperation(EXIT_ON_CLOSE);
setResizable(false);
setVisible(true);
}
public void paint(Graphics g) {
// Fill the area
Graphics2D g2d = (Graphics2D) g;
g.setColor(Color.lightGray);
g2d.fill(area);
// Draw the border line by line
g.setColor(Color.black);
for (Line2D.Double line : areaSegments) {
g2d.draw(line);
}
}
public static void main(String[] args) {
new AreaTest();
}
}
A successful case:
A failing case:
Here:
for (int i = 0; i < 3; i++) {
triangle.moveTo(random.nextInt(400) + 50, random.nextInt(400) + 50);
triangle.lineTo(random.nextInt(400) + 50, random.nextInt(400) + 50);
triangle.lineTo(random.nextInt(400) + 50, random.nextInt(400) + 50);
triangle.closePath();
area.add(new Area(triangle));
}
you are in fact adding
1 triangle in the first loop iteration,
2 triangles in the second iteration,
3 triangles in the third iteration,
because the path is never cleared, so every iteration appends another triangle to the same Path2D before wrapping it in an Area. This is where your inaccuracies come from. Try this and see if your problem still persists.
for (int i = 0; i < 3; i++) {
triangle.moveTo(random.nextInt(400) + 50, random.nextInt(400) + 50);
triangle.lineTo(random.nextInt(400) + 50, random.nextInt(400) + 50);
triangle.lineTo(random.nextInt(400) + 50, random.nextInt(400) + 50);
triangle.closePath();
area.add(new Area(triangle));
triangle.reset();
}
Note the path reset after each loop.
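An equivalent alternative to calling reset() is to create a fresh Path2D.Double inside the loop, so every triangle starts from an empty path:
for (int i = 0; i < 3; i++) {
    Path2D.Double triangle = new Path2D.Double();   // new, empty path each iteration
    triangle.moveTo(random.nextInt(400) + 50, random.nextInt(400) + 50);
    triangle.lineTo(random.nextInt(400) + 50, random.nextInt(400) + 50);
    triangle.lineTo(random.nextInt(400) + 50, random.nextInt(400) + 50);
    triangle.closePath();
    area.add(new Area(triangle));
}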
EDIT: to explain further where the inaccuracies come from, here are the three paths you try to combine, which makes it obvious where errors might arise.
I've re-factored your example to make testing easier, adding features of both answers. Restoring triangle.reset() seemed to eliminate the artifacts for me. In addition,
Build the GUI on the event dispatch thread.
For rendering, extend a JComponent, e.g. JPanel, and override paintComponent().
Absent subcomponents having a preferred size, override getPreferredSize().
Use RenderingHints.
SSCCE:
import java.awt.BorderLayout;
import java.awt.Color;
import java.awt.Dimension;
import java.awt.EventQueue;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.event.ActionEvent;
import java.awt.geom.AffineTransform;
import java.awt.geom.Area;
import java.awt.geom.Line2D;
import java.awt.geom.Path2D;
import java.awt.geom.PathIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import javax.swing.AbstractAction;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.JSpinner;
import javax.swing.SpinnerNumberModel;
import javax.swing.event.ChangeEvent;
import javax.swing.event.ChangeListener;
/** @see http://stackoverflow.com/q/9526835/230513 */
public class AreaTest extends JPanel {
private static final int SIZE = 500;
private static final int INSET = SIZE / 10;
private static final int BOUND = SIZE - 2 * INSET;
private static final int N = 5;
private static final AffineTransform I = new AffineTransform();
private static final double FLATNESS = 1;
private static final Random random = new Random();
private Area area = new Area();
private List<Line2D.Double> areaSegments = new ArrayList<Line2D.Double>();
private int count = N;
AreaTest() {
setLayout(new BorderLayout());
create();
add(new JPanel() {
@Override
public void paintComponent(Graphics g) {
Graphics2D g2d = (Graphics2D) g;
g2d.setRenderingHint(
RenderingHints.KEY_ANTIALIASING,
RenderingHints.VALUE_ANTIALIAS_ON);
g.setColor(Color.lightGray);
g2d.fill(area);
g.setColor(Color.black);
for (Line2D.Double line : areaSegments) {
g2d.draw(line);
}
}
@Override
public Dimension getPreferredSize() {
return new Dimension(SIZE, SIZE);
}
});
JPanel control = new JPanel();
control.add(new JButton(new AbstractAction("Update") {
@Override
public void actionPerformed(ActionEvent e) {
create();
repaint();
}
}));
JSpinner countSpinner = new JSpinner();
countSpinner.setModel(new SpinnerNumberModel(N, 3, 42, 1));
countSpinner.addChangeListener(new ChangeListener() {
@Override
public void stateChanged(ChangeEvent e) {
JSpinner s = (JSpinner) e.getSource();
count = ((Integer) s.getValue()).intValue();
}
});
control.add(countSpinner);
add(control, BorderLayout.SOUTH);
}
private int randomPoint() {
return random.nextInt(BOUND) + INSET;
}
private void create() {
area.reset();
areaSegments.clear();
Path2D.Double triangle = new Path2D.Double();
// Draw three random triangles
for (int i = 0; i < count; i++) {
triangle.moveTo(randomPoint(), randomPoint());
triangle.lineTo(randomPoint(), randomPoint());
triangle.lineTo(randomPoint(), randomPoint());
triangle.closePath();
area.add(new Area(triangle));
triangle.reset();
}
// Note: we're storing double[] and not Point2D.Double
List<double[]> areaPoints = new ArrayList<double[]>();
double[] coords = new double[6];
for (PathIterator pi = area.getPathIterator(I, FLATNESS);
!pi.isDone(); pi.next()) {
// Because the Area is composed of straight lines
int type = pi.currentSegment(coords);
// We record a double array of {segment type, x coord, y coord}
double[] pathIteratorCoords = {type, coords[0], coords[1]};
areaPoints.add(pathIteratorCoords);
}
// To record where each polygon starts
double[] start = new double[3];
for (int i = 0; i < areaPoints.size(); i++) {
// If we're not on the last point, return a line from this point to the next
double[] currentElement = areaPoints.get(i);
// We need a default value in case we've reached the end of the List
double[] nextElement = {-1, -1, -1};
if (i < areaPoints.size() - 1) {
nextElement = areaPoints.get(i + 1);
}
// Make the lines
if (currentElement[0] == PathIterator.SEG_MOVETO) {
// Record where the polygon started to close it later
start = currentElement;
}
if (nextElement[0] == PathIterator.SEG_LINETO) {
areaSegments.add(
new Line2D.Double(
currentElement[1], currentElement[2],
nextElement[1], nextElement[2]));
} else if (nextElement[0] == PathIterator.SEG_CLOSE) {
areaSegments.add(
new Line2D.Double(
currentElement[1], currentElement[2],
start[1], start[2]));
}
}
}
public static void main(String[] args) {
EventQueue.invokeLater(new Runnable() {
@Override
public void run() {
JFrame f = new JFrame();
f.add(new AreaTest());
f.pack();
f.setLocationRelativeTo(null);
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
f.setResizable(false);
f.setVisible(true);
}
});
}
}
I played around with this, and found a hacky way of getting rid of these. I'm not 100% sure that this will work in all cases, but it might.
After reading that the Javadoc for Area.transform mentions
Transforms the geometry of this Area using the specified
AffineTransform. The geometry is transformed in place, which
permanently changes the enclosed area defined by this object.
I had a hunch and added the possibility of rotating the Area by holding down a key. As the Area rotated, the "inward" edges slowly disappeared until only the outline was left. I suspect that the "inward" edges are actually two edges very close to each other (so they look like a single edge), and that rotating the Area causes very small rounding inaccuracies, so the rotation sort of "melts" them together.
I then added a code to rotate the Area in very small steps for a full circle on keypress, and it looks like the artifacts disappear:
The image on the left is the Area built from 10 different random triangles (I upped the amount of triangles to get "failing" Areas more often), and the one on the right is the same Area, after being rotated full 360 degrees in very small increments (10000 steps).
Here's the piece of code for rotating the area in small steps (smaller amounts than 10000 steps would probably work just fine for most cases):
final int STEPS = 10000; //Number of steps in a full 360 degree rotation
double theta = (2*Math.PI) / STEPS; //Single step "size" in radians
Rectangle bounds = area.getBounds(); //Getting the bounds to find the center of the Area
AffineTransform trans = AffineTransform.getRotateInstance(theta, bounds.getCenterX(), bounds.getCenterY()); //Transformation matrix for theta radians around the center
//Rotate a full 360 degrees in small steps
for(int i = 0; i < STEPS; i++)
{
area.transform(trans);
}
As I said before, I'm not sure if this works in all cases, and the number of steps needed might be much smaller or larger depending on the scenario. YMMV.