I have the following picture, and what I actually want to detect are the circles above the boxes, at the top left of each lettered box. But the result is that it also detects some other circles, and I have no idea why.
Image that I want to detect on:
http://imgur.com/8oKmhGp
This is what the result looks like:
http://imgur.com/qBw6YhK
As you can see, it sometimes finds letters as circles, and also the circles on the Lego. Here is my code:
Mat source = Highgui.imread("testar.jpg", Highgui.CV_LOAD_IMAGE_COLOR);
Mat destination = new Mat(source.rows(), source.cols(), source.type());
Imgproc.cvtColor(source, destination, Imgproc.COLOR_RGB2GRAY);
Imgproc.GaussianBlur(destination, destination, new Size(3, 3), 0, 0);

Mat circles = new Mat();
Imgproc.HoughCircles(destination, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 20, 10, 20, 7, 13);

int radius;
Point pt;
for (int x = 0; x < circles.cols(); x++) {
    double[] vCircle = circles.get(0, x);
    if (vCircle == null)
        break;

    pt = new Point(Math.round(vCircle[0]), Math.round(vCircle[1]));
    radius = (int) Math.round(vCircle[2]);

    // draw the found circle
    Core.circle(destination, pt, radius, new Scalar(0, 255, 255), 3);
    Core.circle(destination, pt, 3, new Scalar(255, 255, 255), 3);
}
Highgui.imwrite("foundCircles.jpg", destination);
Well, IMHO, the Hough circle detection algorithm is working exactly the way it is supposed to: it IS detecting circles.
However, it seems like you do not want to detect the circles lying outside the area of the mobile phone's screen.
A simple solution can be implemented if you somehow manage to lay your hands on the exact coordinates of the four corners of the phone (or the mobile screen).
You can use the Rect class to define a rectangular block:
Rect cropRect = new Rect(topLeft_X, topLeft_Y, widthOfRectangle, heightOfRectangle);
and then use this rectangle object to reproduce a new image matrix (from the original one) that contains only the desired area:
Mat croppedImage = new Mat(inputImg, cropRect);
Now, with the freshly cropped image by your side, you can have all the fun you want with the algorithm of Mr. Paul Hough.
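For instance, a minimal sketch of the crop-then-detect flow, reusing your variables (the corner values here are made-up placeholders; the Hough parameters are copied from your code):

// Hypothetical corner coordinates -- replace with the real screen corners.
Rect cropRect = new Rect(80, 60, 400, 300);
Mat croppedImage = new Mat(destination, cropRect); // submatrix, shares data

Mat circles = new Mat();
Imgproc.HoughCircles(croppedImage, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 20, 10, 20, 7, 13);

// Centers are relative to the crop, so shift them back by (cropRect.x, cropRect.y)
// before drawing on the full image.
for (int i = 0; i < circles.cols(); i++) {
    double[] vCircle = circles.get(0, i);
    if (vCircle == null)
        break;
    Point pt = new Point(Math.round(vCircle[0]) + cropRect.x, Math.round(vCircle[1]) + cropRect.y);
    Core.circle(source, pt, (int) Math.round(vCircle[2]), new Scalar(0, 255, 255), 3);
}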
Now, if for some reason it turns out that you have no clue how to get the coordinates of the four corners of the phone (i.e., the phone moves around whimsically), OR you're damn irritated with the Hough circle detection reporting the O's and S's as circles, then you may try seeking the help of a good OCR implementation to ease your pain.
Since you're on Java, you may use Tess4J. Or you may try tweaking this project to extract the positions of the characters on the mobile screen. (There are many other OCRs that might help; please refer to this website for an exhaustive list.)
Once you have the exact position of the characters, you may try running the Hough Circle detection block in the vicinity of the top left corner of the characters only.
One word of caution, though: OCRs tend to be a wee bit nasty and unwieldy in Java.
If you're still unhappy with the results (or if OCRs seem to interfere with your metabolism), there's one last approach you may try: Hough Line detection.
Detect the lines; from their polar coordinates, estimate the grid that forms the keypad of the phone, and then detect circles only around the top left corner of each grid cell.
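The detection call itself is straightforward in the same Java API; interpreting the (rho, theta) pairs into a grid is the real work. A rough sketch (the Canny and accumulator thresholds are guesses you would tune):

// HoughLines expects a binary edge map.
Mat edges = new Mat();
Imgproc.Canny(destination, edges, 50, 150);

Mat lines = new Mat();
// rho resolution 1 px, theta resolution 1 degree, accumulator threshold 100 (tune)
Imgproc.HoughLines(edges, lines, 1, Math.PI / 180, 100);

for (int i = 0; i < lines.cols(); i++) {
    double[] polar = lines.get(0, i); // { rho, theta }
    double rho = polar[0], theta = polar[1];
    // theta near 0 -> vertical line, theta near PI/2 -> horizontal line;
    // cluster the rho values of each group to estimate the keypad grid.
}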
Please note, I am a complete beginner in computer vision and OpenCV (Java).
My objective is to identify parking signs, and to draw bounding boxes around them. My problem is that the four signs from the top (with red borders) were not identified (see last image). I am also noticing that the Canny edge detection does not capture the edges of these four signs (see second image). I have tried with other images, and got the same results. My approach is as follows:
Load the image and convert it to gray scale
Pre-process the image by applying bilateralFilter and Gaussian blur
Execute Canny edge detection
Find all contours
Calculate the perimeter with arcLength and approximate the contour with approxPolyDP
If the approximated figure has 4 points, assume it is a rectangle and add the contour
Finally, draw the contours that have exactly 4 points.
Mat filtered = new Mat();
Mat edges = new Mat(src.size(), CvType.CV_8UC1);
Imgproc.cvtColor(src, edges, Imgproc.COLOR_RGB2GRAY);
Imgproc.bilateralFilter(edges, filtered, 11, 17, 17);

org.opencv.core.Size s = new Size(5, 5);
Imgproc.GaussianBlur(filtered, filtered, s, 0);
Imgproc.Canny(filtered, filtered, 170, 200);

List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(filtered, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

List<MatOfPoint> rectangleContours = new ArrayList<MatOfPoint>();
for (MatOfPoint contour : contours) {
    MatOfPoint2f dst = new MatOfPoint2f();
    contour.convertTo(dst, CvType.CV_32F);

    double perimeter = Imgproc.arcLength(dst, true);
    double approximationAccuracy = 0.02 * perimeter;

    MatOfPoint2f approx = new MatOfPoint2f();
    Imgproc.approxPolyDP(dst, approx, approximationAccuracy, true);

    if (approx.total() == 4) {
        rectangleContours.add(contour);
        Toast.makeText(reactContext.getApplicationContext(), "Rectangle detected" + approx.total(), Toast.LENGTH_SHORT).show();
    }
}
Imgproc.drawContours(src, rectangleContours, -1, new Scalar(0, 255, 0), 5);
Very happy to get advice on how I could resolve this issue, even if it implies changing my strategy.
What about starting with OCR (Tesseract) in order to recognize the big "P" and other parking-related text patterns?
(Toast suggests you're on Android: How can I use Tesseract in Android?
General Tesseract for Java: https://www.geeksforgeeks.org/tesseract-ocr-with-java-with-examples/ )
Another example, in Python, but see the preprocessing and other tricks and ideas for making the letters recognizable when the image has gradients, lower contrast, small fonts etc.: How to obtain the best result from pytesseract?
Also, there could be filtering by color, since the colors of the signs are known. The conversion to grayscale removes that valuable information; finding the edges is OK, but the colors can still be used. E.g. split the image into B, G, R channels, use each channel as a grayscale image, and possibly boost it. The red and blue borders would stand out.
It seems the contrast around the red borders is too low; the blue signs are brighter compared to their black contours. If not splitting, some of the color channels, such as the red one, could be amplified anyway before converting to grayscale.
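A rough sketch of the channel idea in the same OpenCV Java API, reusing src from your code (the gain of 1.5 is just a guess to experiment with):

// Split into B, G, R channels and treat each as its own grayscale image.
List<Mat> channels = new ArrayList<Mat>();
Core.split(src, channels); // assumes BGR channel order as loaded by OpenCV
Mat blue = channels.get(0);
Mat red = channels.get(2);

// Boost a channel before edge detection, e.g. red to make red borders stand out.
Mat boostedRed = new Mat();
red.convertTo(boostedRed, -1, 1.5, 0); // gain 1.5, offset 0 -- tune as needed

Imgproc.Canny(boostedRed, boostedRed, 170, 200);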
Search for big yellow/blue regions with low contrast in which text ("P" etc.) is found. Tesseract has a function returning the boxes of the text that was found.
Also, once you find one sign somewhere, or a bar of signs and their direction, you could continue searching from there, vertically/horizontally.
You may try HoughLines as well; that may find the black borders around the signs.
Calculate the perimeter with arcLength and approximate the contour with approxPolyDP
If the approximated figure has 4 points, assume it is a rectangle and add the contour
IMO finding exactly 4 points (even after simplification of the polygon) is hard and may not be enough evidence; there are also rounded corners etc. if contours are compared directly.
The angles between the vertices and the distances matter: are the lines parallel (within some tolerance), etc.
The process could be iterative: gradually reduce the polygon detail, checking the area and perimeter, until the number of vertices reaches 4 (or about that). If the area and perimeter don't change much after the polygon approximation (the acceptable ratio has to be found) while the number of points in the contour gets reduced, then simplifying away the rounded corners etc. is probably safe. I'd also try a comparison to the bounding box and the convex hull measurements, etc.
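One possible shape of that loop, as a sketch (contour is one MatOfPoint from findContours; the 0.95 area ratio and the epsilon steps are assumptions to tune):

MatOfPoint2f curve = new MatOfPoint2f();
contour.convertTo(curve, CvType.CV_32F);
double perimeter = Imgproc.arcLength(curve, true);
double originalArea = Math.abs(Imgproc.contourArea(contour));

MatOfPoint2f approx = new MatOfPoint2f();
// Gradually coarsen the approximation until we are down to ~4 vertices.
for (int k = 1; k <= 10; k++) {
    double eps = 0.01 * k; // fraction of the perimeter
    Imgproc.approxPolyDP(curve, approx, eps * perimeter, true);
    if (approx.total() <= 4) {
        double area = Math.abs(Imgproc.contourArea(approx));
        boolean areaStable = area > 0.95 * originalArea; // tune this ratio
        if (approx.total() == 4 && areaStable) {
            // likely a rectangle-ish sign; check angles/parallel sides next
        }
        break;
    }
}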
If you need to only detect the parking signs, then treat this problem as a classic object detection problem (just like face detection). For the best results, you will need to use deep learning based convolutional neural network models.
To start with, you can train the YOLO model, which will give you much better results than anything you tried with OpenCV. You need at least 500 images, and then you need to annotate them. This tutorial is a kick-start tutorial on YOLO. Give it a try.
Like YOLO, there are many other models, and all of them can be trained using a similar process. If you want to deploy your model on Android, I recommend choosing a TensorFlow-based model. Train it on your PC and integrate the trained, serialized model into your app.
I'm trying to find a way to identify an archery target and all of its rings in a photo that might be taken from different perspectives:
My goal is to identify the target and later on also where the arrows hit the target to automatically count their score. Presumptions are as follows:
The camera's position is not fixed and might change
The archery target might also move or rotate slightly
The target might be of different sizes and have a different number of circles
There might be many holes (sometimes big scratches) in the target
I have already tried OpenCV to find contours, but even with preprocessing (grayscale -> blur (-> threshold) -> edge detection) I still find a few hundred contours, all disturbed by the arrows or other obstacles (holes) on the target, so it is impossible to find a clean circular line. Using Hough to find circles doesn't work either, as it gives me weird results: Hough will only find perfect circles, not ellipses.
After preprocessing the image, this is my best result so far:
I was thinking about ellipse and circle fitting, but as I don't know the radius, position and pose of the target, this might be a very CPU-consuming task. Another thought was recognition from a template, but the position and rotation of the target change often.
Now I have the idea to follow every line in the image to check whether it is a curve, and then guess which curves belong together to form a circle/ellipse (an ellipse because of the perspective). The problem is that the lines might be intersected by arrows or holes at a short distance, so the line would be too short to check whether it is a curve. With the smaller circles on the target, the chance is high that they aren't recognised at all. Also, as you can see, circles 8, 7 and 6 have no clear line on the left side.
I think it is not necessary to do perspective correction to achieve this task, as long as I can clearly identify all the rings on the target.
I googled for a long time and found some theses, which are all not exactly focused on this specific task and are also too mathematical for me to understand.
Is it by any chance possible to achieve this task? Could you share with me an idea how to solve this problem? Anything is very appreciated.
I'm doing this in Java, but the programming language is secondary. Please let me know if you need more details.
for starters see
Detecting circles and shots from paper target.
If you are using a standardized target as in the image (btw, I use these same ones for my bow :) ) then do not throw away the color. You can select the regions of blue, red and yellow pixels to ease up the detection. See:
footprint fitting
From that you need to fit the circles. But as you have perspective, the objects are not circles, nor exactly ellipses. You have 2 options:
Perspective correction
Use the bottom-right table rectangle area as a marker (or the whole target). It is a rectangle with a known aspect ratio, so measure it on the image and construct a transformation that warps the image so it becomes a rectangle again. There is a ton of material about this (3D scene reconstruction), so google/read/implement. The basics are based on just de-skew + scaling.
Approximate circles by ellipses (not axis aligned!)
so fit ellipses to the found edges instead of circles. This will not be as precise, but still close enough. See:
ellipse fitting
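In OpenCV's Java bindings (since the question is in Java) that fit is a one-liner per point set. A sketch, where contour would hold the edge points of one ring:

// fitEllipse needs a float point set with at least 5 points.
MatOfPoint2f points = new MatOfPoint2f(contour.toArray());
if (points.total() >= 5) {
    RotatedRect ellipse = Imgproc.fitEllipse(points);
    // ellipse.center, ellipse.size and ellipse.angle describe a rotated
    // (not axis-aligned) ellipse.
    Core.ellipse(image, ellipse, new Scalar(0, 255, 0), 2);
}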
[Edit1] Sorry, I did not have the time/mood for this for a while.
As you were unable to adapt my approach yourself, here it is:
remove noise
You need to recolor your image to remove noise and ease up the rest. I convert it to HSV and detect your 4 colors (circles + paper) by simple thresholding, then recolor the image back into RGB space with those 4 colors (anything else becomes background).
fill the gaps
In some temp image I fill the gaps in the circles created by arrows and such. It is simple: just scan pixels from opposite sides of the image (in each line/row) and stop on hitting the selected circle color (you need to go from the outer circles to the inner ones so as not to overwrite the previous ones). Now just fill the space between those two points with the selected circle color. (I start with paper, then blue, red, and yellow last):
now you can use the linked approach
So find the average point of each color; that is the approximate circle center. Then build a histogram of radii and choose the most frequent one. From here, just cast lines out of the circle, find where the circle really stops, compute the ellipse semi-axes from that, and also update the center (which handles the perspective distortion). As a visual check, I render a cross and a circle for each ring into the image from #1:
As you can see, it is pretty close. If you need an even better match, then cast more lines (not just the 90-degree H,V lines) to obtain more points, and compute the ellipse algebraically or fit it by approximation (second link).
C++ code (for explanations look into first link):
picture pic0,pic1,pic2;
// pic0 - source
// pic1 - output
// pic2 - temp
DWORD c0;
int x,y,i,j,n,m,r,*hist;
int x0,y0,rx,ry; // ellipse
const int colors[4]=// color sequence from center
{
0x00FFFF00, // RGB yellow
0x00FF0000, // RGB red
0x000080FF, // RGB blue
0x00FFFFFF, // RGB white
};
// init output as source image and resize temp to same size
pic1=pic0;
pic2=pic0; pic2.clear(0);
// recolor image (in HSV space -> RGB) to avoid noise and select target pixels
pic1.rgb2hsv();
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;x++)
{
color c;
int h,s,v;
c=pic1.p[y][x];
h=c.db[picture::_h];
s=c.db[picture::_s];
v=c.db[picture::_v];
if (v>100) // bright enough pixels?
{
i=25; // threshold
if (abs(h- 40)+abs(s-225)<i) c.dd=colors[0]; // RGB yellow
else if (abs(h-250)+abs(s-165)<i) c.dd=colors[1]; // RGB red
else if (abs(h-145)+abs(s-215)<i) c.dd=colors[2]; // RGB blue
else if (abs(h-145)+abs(s- 10)<i) c.dd=colors[3]; // RGB white
else c.dd=0x00000000; // RGB black means unselected pixels
} else c.dd=0x00000000; // RGB black
pic1.p[y][x]=c;
}
pic1.save("out0.png");
// fit ellipses:
pic1.bmp->Canvas->Pen->Width=3;
pic1.bmp->Canvas->Pen->Color=0x0000FF00;
pic1.bmp->Canvas->Brush->Style=bsClear;
m=(pic1.xs+pic1.ys)*2;
hist=new int[m]; if (hist==NULL) return;
for (j=3;j>=0;j--)
{
// select color per pass
c0=colors[j];
// fill the gaps with H,V lines into temp pic2
for (y=0;y<pic1.ys;y++)
{
for (x= 0;(x<pic1.xs)&&(pic1.p[y][x].dd!=c0);x++); x0=x;
for (x=pic1.xs-1;(x> x0)&&(pic1.p[y][x].dd!=c0);x--);
for (;x0<x;x0++) pic2.p[y][x0].dd=c0;
}
for (x=0;x<pic1.xs;x++)
{
for (y= 0;(y<pic1.ys)&&(pic1.p[y][x].dd!=c0);y++); y0=y;
for (y=pic1.ys-1;(y> y0)&&(pic1.p[y][x].dd!=c0);y--);
for (;y0<y;y0++) pic2.p[y0][x].dd=c0;
}
if (j==3) continue; // do not continue for border
// avg point (possible center)
x0=0; y0=0; n=0;
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
if (pic2.p[y][x].dd==c0)
{ x0+=x; y0+=y; n++; }
if (!n) continue; // no points found
x0/=n; y0/=n; // center
// histogram of radius
for (i=0;i<m;i++) hist[i]=0;
n=0;
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
if (pic2.p[y][x].dd==c0)
{
r=sqrt(((x-x0)*(x-x0))+((y-y0)*(y-y0))); n++;
hist[r]++;
}
// select the most frequent radius (the biggest histogram peak)
for (r=0,i=0;i<m;i++)
if (hist[r]<hist[i])
r=i;
// cast lines from possible center to find edges (and recompute rx,ry)
for (x=x0-r,y=y0;(x>= 0)&&(pic2.p[y][x].dd==c0);x--); rx=x; // scan left
for (x=x0+r,y=y0;(x<pic2.xs)&&(pic2.p[y][x].dd==c0);x++); // scan right
x0=(rx+x)>>1; rx=(x-rx)>>1;
for (x=x0,y=y0-r;(y>= 0)&&(pic2.p[y][x].dd==c0);y--); ry=y; // scan up
for (x=x0,y=y0+r;(y<pic2.ys)&&(pic2.p[y][x].dd==c0);y++); // scan down
y0=(ry+y)>>1; ry=(y-ry)>>1;
i=10;
pic1.bmp->Canvas->MoveTo(x0-i,y0);
pic1.bmp->Canvas->LineTo(x0+i,y0);
pic1.bmp->Canvas->MoveTo(x0,y0-i);
pic1.bmp->Canvas->LineTo(x0,y0+i);
//rx=r; ry=r;
pic1.bmp->Canvas->Ellipse(x0-rx,y0-ry,x0+rx,y0+ry);
}
pic2.save("out1.png");
pic1.save("out2.png");
pic1.bmp->Canvas->Pen->Width=1;
pic1.bmp->Canvas->Brush->Style=bsSolid;
delete[] hist;
I'm trying to write an AI maze solver program. To do this, I will draw 2-color mazes in GIMP with red being walls and blue being background or floor. Then I will export from GIMP as a png and use ImageIO.read() to get a BufferedImage object of the maze. Finally, I will assign Rectangle hitboxes to walls and store them in an ArrayList so I can use .intersect() to check for sprite contact with walls. I can work with it from here.
However, there is one thing I want to be able to do for my program that I don't know how to do: Once I have stored my image as a BufferedImage, how can I detect the red parts (all the exact same RGB shade of red) and create matching Rectangles?
Notes:
Mazes will always be of fixed size (1000x1000 pixels).
There is a fixed starting point for each maze
The red areas will always form straight rectangles. The Rectangle objects which I create are just used as hitboxes so I can use .intersect(), never drawn or anything like that.
Rectangles that are created will be stored in an ArrayList.
Example Maze: (a simple one)
What I want to be able to do: (green areas being where the java.awt.Rectangles are created and stored into ArrayList)
I will provide a quite naive way of solving the problem (not fully implemented, just so you get the idea).
Have a list of all rectangles, List<Rectangle> mazeRectangles, where all found rectangles will be stored, and of course the image, BufferedImage image;
Now we will iterate over all pixels until we find one with the right colour.
Every time we find a rectangle, we will skip all x values for the width of that rectangle:
//iterate over every pixel..
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
        //check if current pixel has maze colour
        if (isMazeColour(image.getRGB(x, y))) {
            Rectangle rect = findRectangle(x, y);
            x += rect.width;
        }
    }
}
Your method for checking the colour:
public boolean isMazeColour(int colour){
    // here you should actually check for a range of colours, since you can
    // never expect to get a nicely encoded image..
    return colour == Color.RED.getRGB();
}
The interesting part is the findRectangle method..
We see if there is already a Rectangle which contains our coordinates. If so return it, otherwise create a new Rectangle, add it to the list and return it.
If we have to create a new Rectangle, we will first check its width. The annoying part is that you'll still have to check every pixel for the rest of the rectangle, since you might have a configuration like this:
+++++++
+++++++
###
###
where # and + are separate boxes. So we first find the width:
public Rectangle findRectangle(int x, int y){
    // this could be optimized. You could keep a separate collection where
    // you remove rectangles from, once your cursor is below that rectangle
    for (Rectangle rectangle : mazeRectangles) {
        if (rectangle.contains(x, y)) { // return the rectangle that already covers this pixel
            return rectangle;
        }
    }

    //find the width of the `Rectangle` (stay in bounds while scanning right)
    int xD = 0;
    while (x + xD + 1 < image.getWidth() && isMazeColour(image.getRGB(x + xD + 1, y))) {
        xD++;
    }

    int yD = 0; //todo: find height of rect..

    Rectangle toReturn = new Rectangle(x, y, xD, yD);
    mazeRectangles.add(toReturn);
    return toReturn;
}
I didn't implement the yD part, since it's a bit messy and I am a little lazy, but you'd need to iterate over y and check each row (so two nested loops)
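For completeness, a hedged sketch of what that height scan could look like (unoptimized; it just walks down row by row while the whole span [x, x+xD] stays maze-coloured):

// Grow the rectangle downwards while every pixel in the next row,
// across the full width found so far, still has the maze colour.
int yD = 0;
boolean rowOk = true;
while (rowOk && y + yD + 1 < image.getHeight()) {
    for (int xi = x; xi <= x + xD; xi++) {
        if (!isMazeColour(image.getRGB(xi, y + yD + 1))) {
            rowOk = false;
            break;
        }
    }
    if (rowOk) yD++;
}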
Note that this algorithm might result in overlapping Rectangles. If you don't want that, when finding xD, check for each pixel whether it is already contained in a Rectangle, and only expand xD as long as you are not inside another Rectangle.
Another thing: you might end up with strange artefacts at the borders of your rectangles, due to the interpolation of colours between red and blue. Maybe you want to check for Rectangles that are too small (like only 1 pixel wide) and get rid of them.
Last year, someone asked about a more general case for solving a maze. They had one additional complexity in that there were multiple paths, but the "correct" path through an intersection was straight.
Python: solve "n-to-n" maze
The solution provided solves the maze by ray-casting. Starting at the beginning of a path, it projects lines down the path in all directions. Then it sorts the list, chooses the longest line, and uses that to calculate the next starting point. It then repeats projecting lines in all directions except the direction it came from; the backtrack could be longer than the forward progress, which would just bounce the solution around in the longest leg of the maze.
If you are certain your angles are always 90 degrees, you could modify the code accordingly.
I'm trying to develop a Zelda-like game. So far I am using bitmaps and everything runs smoothly. At this point the camera is fixed, meaning that the hero can be anywhere on the screen.
The problem with that is scaling. Supporting every device and keeping everything in perfectly sized rects doesn't seem to be that easy :D
To avoid that I need a moving camera. Then I can scale everything to be equally sized on every device. The hero would then be in the middle of the screen as a first step.
The working solution for that is:
xCam += hero.moveX;
yCam += hero.moveY;
canvas.save(); // save() paired with the restore() below
canvas.translate(xCam, yCam);
drawRoom();
canvas.restore();
drawHero();
I do it like this because I don't want to rearrange every tile in the game; I guess that could be too much processing on some devices. As I said, this works just fine: the hero is in the middle of the screen, and the whole room is moving.
But the problem is collision detection.
Here a quick example:
wall.rect.intersects(hero.rect);
Assuming the wall was originally at (0/0) and the hero is at (screenWidth/2, screenHeight/2), they should collide at some point.
The problem is that the x and y of wall.rect never change. They stay (0/0) regardless of the canvas translation, so the two can never collide.
I know that I can work with canvas.getClipBounds() and then use the coordinates of the returned rect to offset every tile, but as mentioned above I am trying to avoid that; plus, the returned rect only works with int values, not float.
Do you guys know any solution for that problem, or has anyone ever fixed something like this?
Looking forward to your answers!
You can separate your model logic and view logic. Suppose your development dimension for the window is WxH. In this case, if a sprite in the model is 100x100 and placed at (0,0), it covers the area from (0,0) to (100,100). Let's add the next sprite (same 100x100 dimension) at (105,0), slightly to the right of the first one; it covers the area from (105,0) to (205,100). Obviously, in the model they are not colliding. Now for the view: if your target device happens to be WxH, you just draw the model as it is. If your device has a screen with w = 2*W, h = 2*H, so twice as big in each direction, you just multiply the x and y by w/W and h/H respectively. We therefore get 2x for x and y, which on screen becomes: 1st object from (0,0) to (200,200), 2nd object from (210,0) to (410,200). As can be seen, they are still not colliding. To sum up: separate your game logic from your drawing (rendering) logic.
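A compact sketch of that separation on Android (W, H, screenWidth and screenHeight are made-up names for the design-time and device dimensions):

// Model space: fixed design resolution, collision happens here.
final int W = 800, H = 480; // design-time dimensions (example values)
Rect hero = new Rect(0, 0, 100, 100); // android.graphics.Rect, model coords
Rect wall = new Rect(105, 0, 205, 100);

boolean hit = Rect.intersects(hero, wall); // model-space check, independent of screen size

// View space: scale only when drawing.
float sx = (float) screenWidth / W, sy = (float) screenHeight / H;
canvas.save();
canvas.scale(sx, sy);
drawRoom(); // draws using model coordinates
drawHero();
canvas.restore();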
I think you should have variables holding the player's position on the map, so you can use those to determine the collision with the non-moving wall. It should look something like this (depending on the rest of your code):
canvas.save();
canvas.translate(-hero.rect.centerX(), -hero.rect.centerY());
drawRoom();
canvas.restore();
drawHero();
Generally you should do the calculations in map coordinates, not on screen. For rendering just use the (negative) player position for translation.
I have a completely graphed-out blueprint of the X, Y coordinates of 8 different multi-pointed shapes on paper. I put these coordinates into arrays such as:
Polygon shape1;
int[] shapeOneX = {1,2,3,4,5,6,7,8,9};
int[] shapeOneY = {1,2,3,4,5,6,7,8,9};
shape1 = new Polygon(shapeOneX, shapeOneY, shapeOneX.length);
These coordinates are fake, not my actual ones, but on paper my coordinates follow exactly how you would expect vector graphing to look. When I load this into a Java applet, the shape does not follow these exact coordinates. It's sometimes close, but not exact, and I need precision for my project.
Does anyone know why, or whether there is a different formula you need to apply to the coordinates to make the shape look the same in a Java applet? If you need more info, let me know.
I understand that coordinates in a Java applet start at the top left (0,0) and expand from there. I guess my question is: I have the understanding that "vector" coords use (0,0) as a midpoint, and I don't know much about graphing. So my shapes are being created in a vector style, but placed into an applet whose (0,0) origin is the top left. Which is fine; I have the tools to adjust them and put them where I need to. I just can't get them to form the shape I actually graphed on paper. Do I need to graph on paper from a top-left (0,0) origin and only use positive X, Y values?
Another edit -- I've noticed that when it draws onto the applet, it draws it almost mirrored as well. In other words, (x) goes right and (-x) goes left; that's normal. But (y) goes DOWN and (-y) goes UP. That doesn't seem normal. Hmm, confused.
Final edit (probably) -- Well, I was right about the Y axis being mirrored. Why? I don't know, but it has allowed me to redesign some coordinates. I am currently under the impression that the line borders connecting each vertex were so thick that they re-formed the shapes into a blob of junk; because of the overlapping borders, it was hard to see where each vertex truly was. I also had to increase the values of my (x,y) coordinates to compensate for the size difference. I probably have near 100 different (x,y) combinations that I will need to redo because of this... I really wish there were an easier answer. I am open to any and all suggestions; meanwhile I will plug away at remapping this. Thanks to everyone who has contributed or continues to contribute.
For example, these were the original coordinates:
int[] wallX = { -2,-2,-1,-1, 2, 2 };
int[] wallY = { -1, 3, 3, 0, 0,-1 };
And then the new WORKING coordinates I found are:
int[] wallOneX = { -2,-2, 1, 1, 10, 10 };
int[] wallOneY = { 4,-8,-8, 1, 1, 4 };
So that's the difference in numbers needed to recreate the same shape from paper in the Java applet. I don't really see a pattern or anything that would let me recreate it for all my other shapes. So I don't know.
You need to scale your coordinates based on the height and width of the JPanel or canvas object on which you are painting the polygon; use getHeight() and getWidth() to get the dimensions. Also, the origin is in the upper left corner of the JPanel or canvas, with the Y axis pointing down, so you either need to use addition/subtraction (and a sign flip) to shift the scaled coordinates, or you need to use an AffineTransform to get the polygon where you want it to go.
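For instance, a minimal sketch of the transform approach (the 20-pixels-per-unit scale is an arbitrary example; shape1 is the Polygon from the question):

public void paint(Graphics g) {
    super.paint(g);
    Graphics2D g2 = (Graphics2D) g;

    // Move the origin to the centre and flip the Y axis so that
    // positive y goes up, as on graph paper.
    g2.translate(getWidth() / 2.0, getHeight() / 2.0);
    g2.scale(1, -1);

    // Scale paper units up to pixels; keep the stroke thin so the
    // scaled borders don't blob together.
    g2.scale(20, 20);
    g2.setStroke(new BasicStroke(1f / 20f));

    g2.draw(shape1); // the Polygon built from the paper coordinates
}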
Sometimes it helps to start with working examples. You might try this approach or this approach. Here is a third approach already in an applet.