Line detection | Angle detection with Java

I'm processing images that my UGV (Unmanned Ground Vehicle) captures so it can follow a line.
I want to get the angle of that line relative to the horizon. I'll try to explain with a few examples:
The image above would make my UGV keep going straight ahead, as the angle is about 90 degrees.
But the following would make it turn left, as the angle compared to the horizon is about 120 degrees.
I could successfully transform those images into the image below using Otsu thresholding:
And I also used an edge detection algorithm to get this:
But I'm stuck trying to find an algorithm that detects those edges/lines and outputs, or helps me output, the angle of such a line.

Here's my attempt using ImageJ:
import ij.IJ;
import ij.ImagePlus;
import ij.measure.ResultsTable;
import ij.plugin.filter.ParticleAnalyzer;
import ij.process.ByteProcessor;

// Open the image
ImagePlus image = new ImagePlus(filename);
// Convert the image to 8-bit
IJ.run(image, "8-bit", "");
// Apply a threshold (0 - 50)
ByteProcessor tempBP = (ByteProcessor) image.getProcessor();
tempBP.setThreshold(0, 50, 0);
IJ.run(image, "Convert to Mask", "");
// Analyze the particles
ResultsTable rt = new ResultsTable();
ParticleAnalyzer pa = new ParticleAnalyzer(
        ParticleAnalyzer.SHOW_MASKS + ParticleAnalyzer.IN_SITU_SHOW,
        1023 + ParticleAnalyzer.ELLIPSE,
        rt, 0.0, 999999999, 0, 0.5);
IJ.run(image, "Set Measurements...", "bounding fit redirect=None decimal=3");
pa.analyze(image);
int k = 0;
double bestRatio = -1;
for (int i = 0; i < rt.getCounter(); i++) {
    // Criteria for the best ellipse: the major axis should be much
    // longer than the minor axis, i.e. a long, thin shape.
    double ratio = rt.getValue("Major", i) / rt.getValue("Minor", i);
    if (ratio > bestRatio) {
        bestRatio = ratio;
        k = i; // let k = best ellipse
    }
}
double bx = rt.getValue("BX", k);
double by = rt.getValue("BY", k);
double width = rt.getValue("Width", k);
double height = rt.getValue("Height", k);
// Your angle:
double angle = rt.getValue("Angle", k);
double majorAxis = rt.getValue("Major", k);
double minorAxis = rt.getValue("Minor", k);
How the code works:
Convert the image to grayscale.
Apply a threshold to keep only the dark areas. This assumes the lines will always be near black.
Run a ParticleAnalyzer to fit ellipses to the shapes in the image.
Loop through the particles to find the one that fits our criteria.
Read the angle from that particle's fitted ellipse.
Here's an example of what the image looks like when I analyze it:
NOTE: The code is untested. I just converted what I did in the Visual ImageJ into Java.
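For completeness, here is a minimal sketch of how the measured angle might drive the steering; the 10-degree tolerance and the keepStraight/turnLeft/turnRight calls are hypothetical placeholders, not part of the code above:
double tolerance = 10.0; // assumed dead zone around 90 degrees
if (Math.abs(angle - 90.0) <= tolerance) {
    keepStraight(); // line is roughly perpendicular to the horizon
} else if (angle > 90.0) {
    turnLeft(); // e.g. the ~120 degree example above
} else {
    turnRight();
}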

Related

Colored Canny edge detection calculation problems

I'm working on a school project for a graphics class. My task is detecting edges in a color image, and we received the suggestion to use the Canny edge detection algorithm.
I've decided to write the entire program myself in Java, because it looks easy with the given formulas. I've created a window with Java Swing, and I'm reading in the input image as sRGB and converting it to CIELa*b* (because that's part of the task). I have managed to apply the Sobel kernels (Cx, Cy), which give the partial derivatives. However, I'm stuck with the direction formula and how to code it.
My first problem is that I don't know whether I should calculate the direction in each color channel separately, or do it in one piece.
Here are the formulas for the calculations (the first is the direction I'm stuck with; on the right side is the magnitude, which requires the direction theta):
Here is the source code for calculating the direction:
// Returns the gradient direction from Cx, Cy
public LabImg direction(LabImg Cx, LabImg Cy) {
    LabImg result = new LabImg(Cx.getWidth(), Cx.getHeight());
    for (int x = 0; x < result.getWidth(); x++) {
        for (int y = 0; y < result.getHeight(); y++) {
            float CxL = Cx.getPixel(x, y).getL();
            float Cxa = Cx.getPixel(x, y).getA();
            float Cxb = Cx.getPixel(x, y).getB();
            float CyL = Cy.getPixel(x, y).getL();
            float Cya = Cy.getPixel(x, y).getA();
            float Cyb = Cy.getPixel(x, y).getB();
            float dirL = (2 * CxL * CyL) / ((CxL * CxL) - (CyL * CyL));
            float dira = (2 * Cxa * Cya) / ((Cxa * Cxa) - (Cya * Cya));
            float dirb = (2 * Cxb * Cyb) / ((Cxb * Cxb) - (Cyb * Cyb));
            //float dir = (2*CxL*CyL+Cxa*Cya+Cxb*Cyb)/((CxL*CxL+Cxa*Cxa+Cxb*Cxb)-(CyL*CyL+Cya*Cya+Cyb*Cyb));
            result.setLab(x, y, dirL, dira, dirb);
        }
    }
    return result;
}
LabImg is a data type, which contains the size of the image, a 2D array of pixel values, and a buffered image.
If you want to do color edge detection, you will need to process each color channel separately, so you will have to find the gradient direction for each of the three channels on its own.
Secondly, you can compute the magnitude and direction as:
magnitude = Math.sqrt(Xgrad*Xgrad + Ygrad*Ygrad)
theta = Math.atan2(Ygrad, Xgrad)
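Applied inside your loop, that would look something like the sketch below for the L channel (a and b work the same way); note that Math.atan2 also handles the case Cx == 0, which makes a plain division formula blow up:
float magL = (float) Math.sqrt(CxL * CxL + CyL * CyL);
float thetaL = (float) Math.atan2(CyL, CxL); // gradient direction in radians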

Why does my 1D gravity simulation not act like a pendulum?

My gravity simulation acts more like a gravity slingshot. Once the two bodies pass over each other, they accelerate far more than they decelerate on the other side. It's not balanced, and it won't oscillate around an attractor.
How do other gravity simulators get around this? Example: http://www.testtubegames.com/gravity.html. If you create 2 bodies there, they just oscillate back and forth, never drifting further apart than their original distance, even though they move through each other as in my example.
That's how it should be. But in my case, as soon as they get close they shoot away from each other to the edges of the imaginary galaxy, never to come back for a gazillion years.
Edit: Here is a video of the bug: https://imgur.com/PhhRhP7
Here is a minimal test case to run in Processing.
//Globals:
float a;
float v;
int unit = 1; // 1 pixel = 1 meter
float x;
float y;
float alx;
float aly;
float g = 6.67408 * pow(10, -11) * sq(unit); // gravitational constant
float m1 = 1 * pow(10, 15); // attractor mass
float m2 = 1;               // object mass

void setup() {
  size(200, 200);
  a = 0;
  v = 0;
  x = width/2;    // object x
  y = 0;          // object y
  alx = width/2;  // attractor x
  aly = height/2; // attractor y
}

void draw() {
  background(0);
  applyAcc();
  fill(0, 255, 0);
  ellipse(x, y, 10, 10); // object
  fill(255, 0, 0);
  ellipse(alx, aly, 10, 10); // attractor
}

void applyAcc() {
  a = getAcc();
  v += a * (1/frameRate); // add acceleration to velocity
  y += v * (1/frameRate); // add velocity to y
  a = 0;
}

float getAcc() {
  float a = 0;
  float d = dist(x, y, alx, aly); // distance to attractor
  float gravity = (g * m1 * m2) / sq(d); // gravitational force
  a += gravity / m2;
  if (y > aly) {
    a *= -1;
  }
  return a;
}
Your distance doesn't include the width of the objects, so the objects effectively occupy the same space at the same time.
The way to "cap gravity", as suggested above, is to add a normal force when the outer edges touch, if it's a physical simulation.
You should get into the habit of debugging your code. Which line of code is behaving differently from what you expected?
For example, if I were you I would start by printing out the value of gravity every time you calculate it:
float gravity = (g * m1 * m2)/sq(d); //gforce
println(gravity);
You'll notice that your gravity value skyrockets as your circles get closer to each other. And this makes sense, because you're dividing by sq(d). As d gets smaller, your gravity increases.
You could simply cap your gravity value so it doesn't go off the charts anymore:
float gravity = (g * m1 * m2) / sq(d);
if (gravity > 100) {
    gravity = 100;
}
Alternatively you could cap d so it never goes below a certain value, but the result is the same.
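For instance, a minimal sketch of capping d (the 10-pixel floor is an arbitrary assumption):
float d = max(dist(x, y, alx, aly), 10); // never let the distance drop below 10 px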
In the end you'll find that this is not going to be as easy as you expected. You're going to have to tune the parameters quite a bit so your simulation works how you want.
Working demo here: https://beta.observablehq.com/#shaunlebron/1d-gravity
I followed the solution posted by the author of the sim that inspired this question here:
- First off, shrinking the timestep is always helpful. My simulation runs, as a baseline, about 40 steps per frame, at 30 frames per second.
- To deal with the exact issue you describe, I think it will be vital to model the bodies not as pure point masses but as spherical masses with a certain radius. That prevents the force of gravity from diverging to infinity. So, for instance, if you drop an asteroid into a star in my simulation (with collisions turned off), the force of gravity increases as the asteroid gets closer, up until it reaches the surface of the star, at which point the force begins to decrease. The moment it's at the center of the star (or nearby), the force is zero (or nearly zero) instead of near-infinite.
In my demo, I just completely turned off gravity when two objects are close enough together. That seems to work well enough.
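Here is a sketch of that spherical-mass idea grafted onto the getAcc() function above; the radius R is an assumed value, not something from the original sim:
float getAcc() {
  float R = 10; // assumed attractor radius in pixels
  float d = dist(x, y, alx, aly);
  float gravity;
  if (d >= R) {
    gravity = (g * m1 * m2) / sq(d); // inverse-square law outside the body
  } else {
    gravity = (g * m1 * m2) * d / (R * R * R); // inside a uniform sphere the force falls off linearly to zero at the center
  }
  float a = gravity / m2;
  if (y > aly) {
    a *= -1;
  }
  return a;
}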

Java rotation of pixel array

I have tried to write an algorithm in Java to rotate a 2D pixel array (not restricted to 90 degrees). The only problem I have is that the end result leaves me with dots/holes within the image.
Here is the code:
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        int xp = (int) (nx + Math.cos(rotation) * (x - width / 2)
                + Math.cos(rotation + Math.PI / 2) * (y - height / 2));
        int yp = (int) (ny + Math.sin(rotation) * (x - width / 2)
                + Math.sin(rotation + Math.PI / 2) * (y - height / 2));
        int pixel = pixels[x + y * width];
        Main.pixels[xp + yp * Main.WIDTH] = pixel;
    }
}
'Main.pixels' is an array connected to a canvas display; this is what is displayed on the monitor.
'pixels', and the function itself, live in a sprite class. The sprite class grabs the pixels from a '.png' image when the program initializes.
I've tried looking at the rotation-matrix solutions, but they are too complicated for me. I have noticed that as the image gets closer to 45 degrees, it becomes somewhat stretched. What is going wrong? And what is the correct code that adds the pixels to a larger-scale array (e.g. Main.pixels)?
It needs to be Java, and relative to the code format above. I am not looking for complex examples, simply because I will not understand them (as said above). Simple and straight to the point is what I am looking for.
How I'd like the question to be answered:
Your formula is wrong because ...
Do this and the effect will be ...
Simplify this ...
I'd recommend ...
I'm sorry if I'm asking too much, but I have looked for an answer to this question that I can understand and use, only to always be given a rotation of 90 degrees, or an example in another programming language.
You are pushing the pixels forward, and not every pixel is hit by the discretized rotation map. You can get rid of the gaps by calculating the source of each pixel instead.
Instead of
for each pixel p in the source
    pixel q = rotate(p, theta)
    q.setColor(p.getColor())
try
for each pixel q in the image
    pixel p = rotate(q, -theta)
    q.setColor(p.getColor())
This will still have visual artifacts. You can improve on this by interpolating instead of rounding the coordinates of the source pixel p to integer values.
Edit: Your rotation formulas looked odd, but they appear ok after using trig identities like cos(r+pi/2) = -sin(r) and sin(r+pi/2)=cos(r). They should not be the cause of any stretching.
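Here is a minimal sketch of that interpolation idea for a packed ARGB pixel array; the helper name and the edge clamping are my own assumptions, and the caller passes the non-integer source position computed by the inverse rotation:
static int sampleBilinear(int[] src, int w, int h, float sx, float sy) {
    // the caller should keep (sx, sy) inside the image
    int x0 = Math.max(0, Math.min((int) sx, w - 1));
    int y0 = Math.max(0, Math.min((int) sy, h - 1));
    int x1 = Math.min(x0 + 1, w - 1);
    int y1 = Math.min(y0 + 1, h - 1);
    float fx = sx - x0, fy = sy - y0;
    int p00 = src[x0 + y0 * w], p10 = src[x1 + y0 * w];
    int p01 = src[x0 + y1 * w], p11 = src[x1 + y1 * w];
    int result = 0;
    for (int shift = 0; shift <= 24; shift += 8) { // blend A, R, G, B channels separately
        float top = ((p00 >> shift) & 0xFF) * (1 - fx) + ((p10 >> shift) & 0xFF) * fx;
        float bot = ((p01 >> shift) & 0xFF) * (1 - fx) + ((p11 >> shift) & 0xFF) * fx;
        result |= ((int) (top * (1 - fy) + bot * fy)) << shift;
    }
    return result;
}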
To avoid holes you can:
- compute the source coordinate from the destination (just reverse the computation from your current state); this is the same as Douglas Zare's answer
- use bilinear or better filtering
- use less than a single-pixel step (see the sketch right after this list); usually a 0.75-pixel step is enough to cover the holes, but you need to use floats instead of ints, which is sometimes not possible (due to performance, a missing implementation, or other reasons)
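A minimal sketch of that sub-pixel stepping idea, reusing the variables from the question's forward-mapping loop (the 0.75 step and Main.HEIGHT are assumptions):
for (float sy = 0; sy < height; sy += 0.75f) {
    for (float sx = 0; sx < width; sx += 0.75f) {
        int xp = (int) (nx + Math.cos(rotation) * (sx - width / 2.0)
                - Math.sin(rotation) * (sy - height / 2.0));
        int yp = (int) (ny + Math.sin(rotation) * (sx - width / 2.0)
                + Math.cos(rotation) * (sy - height / 2.0));
        if (xp >= 0 && xp < Main.WIDTH && yp >= 0 && yp < Main.HEIGHT)
            Main.pixels[xp + yp * Main.WIDTH] = pixels[(int) sx + (int) sy * width];
    }
}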
Distortion
If your image gets distorted, then the aspect ratio is not applied correctly, so the x-pixel size differs from the y-pixel size. You need to add a scale to one axis so it matches the device/transforms applied. A few hints:
Are the source image and destination image separate (not in place)? That is, Main.pixels and pixels must not be the same buffer; otherwise you overwrite some pixels before they are used, which could be another cause of distortion.
I just realized you have cos,cos and sin,sin in your rotation formula, which is non-standard; maybe you got the angle delta wrongly signed somewhere.
Just to be sure, here is an example of bullet #1 (reverse mapping) with the standard rotation formula, converted to Java to match the question:
float c = (float) Math.cos(-rotation);
float s = (float) Math.sin(-rotation);
int x0 = Main.WIDTH / 2;  // Main.HEIGHT below is assumed analogous to Main.WIDTH
int y0 = Main.HEIGHT / 2;
int x1 = width / 2;
int y1 = height / 2;
for (int a = 0, y = 0; y < Main.HEIGHT; y++) {
    for (int x = 0; x < Main.WIDTH; x++, a++) {
        // coordinate inside dst image, biased to the rotation center
        int xp = x - x0;
        int yp = y - y0;
        // rotate inversely
        int xx = (int) (xp * c - yp * s);
        int yy = (int) (xp * s + yp * c);
        // coordinate inside src image
        xp = xx + x1;
        yp = yy + y1;
        if ((xp >= 0) && (xp < width) && (yp >= 0) && (yp < height)) {
            Main.pixels[a] = pixels[xp + yp * width]; // copy pixel
        } else {
            Main.pixels[a] = 0; // out-of-range source pixel is black
        }
    }
}

PDFbox to iText coordinate conversions using AffineTransform

Question:
I can't seem to get one coordinate format to work with the other. I think I'm just not using the right matrix, but I don't know enough about them to be certain. I was hoping for some help figuring out whether I'm making a wrong assumption about what my transform should be.
iText uses the bottom left as the origin, per the ISO standard, but both the PDFBox code and the program that gives me the coordinates to scrape from the PDF use the upper left as the origin.
What transform should I apply to adapt the coordinates so that iText can consume them?
Background
I've got some code that uses PDFBox to manipulate a PDF and strip out some data, and now I need to inject the modified data back onto the page. PDFBox's writer keeps corrupting the PDF, so we have decided to use iText for the injection.
The trick is that the coordinates I used with PDFBox (and the ones we get from the system generating the PDF) don't seem to match up with iText's.
What I've done so far
I checked, and both the iText page and cropbox seem to be accurate:
PdfReader splitPDFDocumentReader = new PdfReader(splitPDFdocumentName);
com.lowagie.text.Rectangle theCropBox = splitPDFDocumentReader.getCropBox(1);
com.lowagie.text.Rectangle thePageSize = splitPDFDocumentReader.getPageSize(1);
consolePrintln("Cropbox: " + theCropBox.toString());
consolePrintln("\tBottom " + theCropBox.getBottom());
consolePrintln("\tLeft " + theCropBox.getLeft());
consolePrintln("\tTop " + theCropBox.getTop());
consolePrintln("\tRight " + theCropBox.getRight());
consolePrintln("PageSize: " + thePageSize.toString());
consolePrintln("\tBottom " + thePageSize.getBottom());
consolePrintln("\tLeft " + thePageSize.getLeft());
consolePrintln("\tTop " + thePageSize.getTop());
consolePrintln("\tRight " + thePageSize.getRight());
Outputs:
Cropbox: Rectangle: 612.0x792.0 (rot: 0 degrees)
Bottom 0.0
Left 0.0
Top 792.0
Right 612.0
PageSize: Rectangle: 612.0x792.0 (rot: 0 degrees)
Bottom 0.0
Left 0.0
Top 792.0
Right 612.0
This would lead me to believe it's just a matter of flipping the y coordinate, since PDFBox's origin is in the top left whereas iText's is in the bottom left.
Where I run into trouble
When I apply the transform:
// matrix data example:
// [m00, m01, m02,
// m10, m11, m12,
// 0 , 0 , 1 ] // this bit is implied as part of affineTransform docs
content.saveState();
int m00 = 1;
int m01 = 0;
int m02 = 0;
int m10 = 0;
int m11 = -1;
int m12 = 0;
content.concatCTM(m00, m10, m01, m11, m02, m12);
content.setColorStroke(Color.RED);
content.setColorFill(Color.white);
content.rectangle(x, y, x + height, y + width);
content.fillStroke();
content.restoreState();
It doesn't seem to do what I would expect. It seems that the data is completely outside the page.
Misc notes
To be honest, I'm not very good with matrices; perhaps I need to do some translation work and not just flip the y as I've tried to do?
The concatCTM function seems to take the same format as java.awt.geom.AffineTransform, and I am going by this example and tutorial for using the transforms.
I figured it out. When I was flipping the y coordinate, I assumed it would flip over the middle of the document and just invert everything. However, it actually flips over the line y = 0.
After it flips over y = 0, you need to shift the entire page back up.
I ended up using AffineTransform directly to get it done, then just fed the resulting matrix into concatCTM.
content.saveState();
AffineTransform transform = new AffineTransform();
transform.scale(1, -1); // flip along the line y=0
transform.translate(0, -pageHeight); // move the page content back up
/* the version of iText used in Jasper iReport doesn't seem to use affineTransform directly */
double[] transformMatrix = new double[6];
transform.getMatrix(transformMatrix);
content.concatCTM((float) transformMatrix[0], (float) transformMatrix[1], (float) transformMatrix[2], (float) transformMatrix[3], (float) transformMatrix[4], (float) transformMatrix[5]);
// drawing and printing code here (stamping?)
content.restoreState();
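The net effect of that matrix on a single point is the flip you would otherwise do by hand; as a one-off conversion sketch (variable names are illustrative):
// y measured from the top left (PDFBox-style), converted to iText's bottom-left origin
float yFromBottom = pageHeight - yFromTop;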

An exact method of area calculation using UTM coordinates

I have a list of lat/long coordinates that I would like to use to calculate the area of a polygon. I can get exact results in many cases, but the larger the polygon gets, the higher the chance of error.
I am first converting the coordinates to UTM using http://www.ibm.com/developerworks/java/library/j-coordconvert/
From there, I am using http://www.mathopenref.com/coordpolygonarea2.html to calculate the area of the UTM coordinates.
private Double polygonArea(int[] x, int[] y) {
    double area = 0.0;
    int j = x.length - 1;
    for (int i = 0; i < x.length; i++) {
        area += (x[j] + x[i]) * (y[j] - y[i]);
        j = i;
    }
    area = area / 2;
    return Math.abs(area);
}
I compare these areas to the same coordinates I put into Microsoft SQL server and ArcGIS, but I cannot seem to match them exactly all the time. Does anyone know of a more exact method than this?
Thanks in advance.
EDIT 1
Thank you for the comments.
Here is my code for getting the area (the CoordinateConversion code comes from the IBM link above):
private Map<Integer, GeoPoint> vertices;

private Double getArea() {
    List<Integer> xpoints = new ArrayList<Integer>();
    List<Integer> ypoints = new ArrayList<Integer>();
    CoordinateConversion cc = new CoordinateConversion();
    for (Entry<Integer, GeoPoint> itm : vertices.entrySet()) {
        GeoPoint pnt = itm.getValue();
        String temp = cc.latLon2MGRUTM(pnt.getLatitudeE6() / 1E6, pnt.getLongitudeE6() / 1E6);
        // Example return from CC: 02CNR0634657742
        String easting = temp.substring(5, 10);
        String northing = temp.substring(10, 15);
        xpoints.add(Integer.parseInt(easting));
        ypoints.add(Integer.parseInt(northing));
    }
    int[] x = toIntArray(xpoints);
    int[] y = toIntArray(ypoints);
    return polygonArea(x, y);
}
Here is an example list of points:
44.80016800 -106.40808100
44.80016800 -106.72123800
44.75016800 -106.72123800
44.75016800 -106.80123800
44.56699100 -106.80123800
In ArcGIS and MS SQL server I get 90847.0 Acres.
Using the code above I get 90817.4 Acres.
Another example list of points:
45.78412600 -108.51506700
45.78402600 -108.67972100
45.75512200 -108.67949400
45.75512200 -108.69962300
45.69795400 -108.69929400
In ArcGIS and MS SQL server I get 15732.9 Acres.
Using the code above I get 15731.9 Acres.
The area formula you are using is valid only on a flat plane. As the polygon gets larger, the Earth's curvature starts to have an effect, making the area larger than what you calculate with this formula. You need to find a formula that works on the surface of a sphere.
A simple Google search for "area of polygon on spherical surface" turns up a bunch of hits, of which the most interesting is Wolfram MathWorld's Spherical Polygon page.
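If you want to try that route, here is a minimal sketch of one well-known spherical approximation (Chamberlain & Duquette); the mean Earth radius is an assumed constant and the result is in square meters:
static double sphericalPolygonArea(double[] latDeg, double[] lonDeg) {
    final double R = 6371000.0; // assumed mean Earth radius in meters
    double sum = 0.0;
    int n = latDeg.length;
    for (int i = 0; i < n; i++) {
        int j = (i + 1) % n; // next vertex, wrapping around
        double lat1 = Math.toRadians(latDeg[i]);
        double lat2 = Math.toRadians(latDeg[j]);
        double lon1 = Math.toRadians(lonDeg[i]);
        double lon2 = Math.toRadians(lonDeg[j]);
        sum += (lon2 - lon1) * (2 + Math.sin(lat1) + Math.sin(lat2));
    }
    return Math.abs(sum) * R * R / 2.0;
}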
It turns out that UTM just isn't able to deliver the extreme accuracy I was looking for. Switching to a different projection system, such as Albers or State Plane, gave a much more accurate calculation.
