Image recognition, separate lines - java

I need to separate the lines from each other and build an array of coordinates for each graphic. The problem is (see the red circled section in the picture) that some graphics overlap, and I don't know how to write a program that will find these overlaps and separate them. (For now we assume the lines only touch each other and do not cross.)
@Rethunk I applied thinning and got this result:
after thinning

If you know that the lines are initially separated, you could keep track of them going from left to right. For each line, the top and bottom y-coordinates change gradually as the x-coordinate increases. For each pixel you move to the right, you could start at the average y-coordinate and move up and down to find the new top and bottom y-coordinates for each line.
When two lines touch, their top and bottom y-coordinates will be the same. This can be detected by comparing the coordinates of lines that are next to each other. So let's say, for example, lines 4 and 5 overlap at a certain point. For these lines, you know which is the higher line (4) and which is the lower line (5). Let's say yTopOverlap = 130 and yBottomOverlap = 160. We could divide the pixels between the two lines: in this case, make yTop 130 and yBottom 145 for line 4, and make yTop 146 and yBottom 160 for line 5. When the lines separate again, it is no longer necessary to modify their y-coordinates.
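A minimal Java sketch of this column-by-column tracking, under some assumptions of my own: the image is available as a boolean foreground mask indexed as mask[x][y], the lines were found separately in the first column, and each line still covers its previous centre pixel in the next column. The class and method names (LineTrack, advanceColumn) are made up for illustration.

import java.util.List;

class LineTrack {
    int yTop;     // current top y-coordinate of this line
    int yBottom;  // current bottom y-coordinate of this line

    LineTrack(int yTop, int yBottom) {
        this.yTop = yTop;
        this.yBottom = yBottom;
    }
}

class LineTracker {
    // Advance every tracked line by one column and split any pair that merged.
    static void advanceColumn(boolean[][] mask, int x, List<LineTrack> lines) {
        for (LineTrack line : lines) {
            int yCenter = (line.yTop + line.yBottom) / 2;
            // Grow up and down from the previous centre to find the new extent.
            int top = yCenter;
            while (top > 0 && mask[x][top - 1]) top--;
            int bottom = yCenter;
            while (bottom < mask[x].length - 1 && mask[x][bottom + 1]) bottom++;
            line.yTop = top;
            line.yBottom = bottom;
        }
        // If two neighbouring lines report the same extent, they touched:
        // divide the shared band between them at the midpoint, as described above.
        for (int i = 0; i + 1 < lines.size(); i++) {
            LineTrack upper = lines.get(i);
            LineTrack lower = lines.get(i + 1);
            if (upper.yTop == lower.yTop && upper.yBottom == lower.yBottom) {
                int mid = (upper.yTop + upper.yBottom) / 2;
                upper.yBottom = mid;      // e.g. 130..145 for line 4
                lower.yTop = mid + 1;     // e.g. 146..160 for line 5
            }
        }
    }
}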

Related

Image processing: detect height with camera

I have a camera that is pointing at a road. Right at the border of this road there is a big grey floor lamp. On this floor lamp I will paint some horizontal red lines.
I want to make a Java application that can count the number of red lines.
I could simply grab an image, loop over every pixel, and check whether it is red... but that would be very inefficient. I can add that the lines are perpendicular to the image. To optimize, could I simply move along the y-axis at the center of the image and count each time the color changes from grey to red?
Do you have any idea of a library I could use, or an image processing technique that would handle this better?
Thanks for the help.
EDIT:
From this image: image1
I can get this result: result
How could I count the number of lines?
At the moment I am running some tests in my office.
Consider this wall as the floor lamp.
In this picture you can see the calibrated pattern.
And here you have the pattern used as an overlay.
The lines are separated by 5 cm. When it snows, the bottom lines will gradually be hidden, so I can count the number of lines still visible from the top to determine the height of the snow.
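A rough sketch of the transition-counting idea from the question, under assumptions of my own: the frame is a BufferedImage, the lamp post sits roughly at the horizontal centre of the frame, and the isRed() thresholds are placeholders that would need calibration.

import java.awt.image.BufferedImage;

class RedLineCounter {

    static boolean isRed(int rgb) {
        int r = (rgb >> 16) & 0xFF;
        int g = (rgb >> 8) & 0xFF;
        int b = rgb & 0xFF;
        // Placeholder threshold: strongly red, weakly green/blue.
        return r > 150 && g < 100 && b < 100;
    }

    // Count grey-to-red transitions along a single vertical scanline.
    static int countRedLines(BufferedImage frame) {
        int x = frame.getWidth() / 2;   // assume the lamp post is centred
        int count = 0;
        boolean previousWasRed = false;
        for (int y = 0; y < frame.getHeight(); y++) {
            boolean red = isRed(frame.getRGB(x, y));
            if (red && !previousWasRed) {
                count++;                // entered a new red stripe
            }
            previousWasRed = red;
        }
        return count;
    }
}

A real implementation would probably sample a few neighbouring columns and take the median count to be robust against noise.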

Get rectangle bounds for each letter in a image

So I'm trying to fill an ArrayList<Rectangle> with the bounds of each letter of an image file.
For example, given this .png image:
I want to fill an ArrayList<Rectangle> with 14 Rectangles (one rectangle for each letter).
We can assume that the image will contain only 2 colors, one for the background and one for the letters, in this case, pixels will be either white or red.
At first, I thought I could search for white columns in between the letters: if I found a completely white column, I could, for example, get the width by taking the lowest and the highest red pixel x-values, so that width = maxX - minX, and so on:
x = minX;
y = minY;
w = maxX-minX;
h = maxY-minY;
letterBounds.add(new Rectangle(x,y,w,h));
The problem is that there's no space in between the letters, not even 1 pixel:
My next idea was: for each red pixel I find, look for a neighbor that hasn't been seen yet; when no unseen neighbor is left, I have all the pixels needed to get the bounds of that letter. But with this approach I will get two rectangles for letters like "i". I could then write some algorithm to merge those rectangles, but I don't know how that would turn out for other multi-part letters, so before trying that I wanted to ask here for more ideas.
So do you guys have any ideas?
You can use the OpenCV cv2.findContours() function. Instead of using cv2.drawContours() to draw the contours, which would only highlight the outline of each letter, you can draw a rectangle on the image with cv2.rectangle(), using the coordinates extracted from the contours returned by cv2.findContours().
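The answer above refers to OpenCV's Python bindings; since the question is about Java, here is a rough equivalent sketch with OpenCV's Java API (my translation, not the answerer's code), assuming the image has already been thresholded into a binary, single-channel Mat with the letters in white:

import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

class LetterBounds {
    static List<Rect> findLetterBounds(Mat binary) {
        List<MatOfPoint> contours = new ArrayList<>();
        Mat hierarchy = new Mat();
        // External contours only; each connected blob becomes one contour.
        Imgproc.findContours(binary, contours, hierarchy,
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        List<Rect> bounds = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            bounds.add(Imgproc.boundingRect(contour));  // one rectangle per blob
        }
        return bounds;
    }
}

Note that this inherits the problems discussed in the question: touching letters can merge into one contour, and a dotted letter like "i" still produces two rectangles.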
I think a two-step algorithm is enough to solve the problem if you are not using a library like OpenCV:
histogram
seam calculation
1. histogram
C.....C..C...
.C.C.C...C...
. C.C....CCCC
1111111003111
A dot (.) means the background color (white).
C means any color except the background color (in your case, red).
Accumulating the number of non-background pixels in each vertical column generates the histogram:
*
*
******..****
0123456789AB
It is clear that the boundary exists at columns 6 and 7.
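A small Java sketch of this histogram step, assuming the input is a BufferedImage and that "background" simply means a pure white pixel:

import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

class ColumnHistogram {

    // Count the non-background pixels in every vertical column.
    static int[] histogram(BufferedImage img) {
        int[] counts = new int[img.getWidth()];
        for (int x = 0; x < img.getWidth(); x++) {
            for (int y = 0; y < img.getHeight(); y++) {
                if ((img.getRGB(x, y) & 0xFFFFFF) != 0xFFFFFF) {
                    counts[x]++;
                }
            }
        }
        return counts;
    }

    // Columns whose count is zero are candidate boundaries between letters.
    static List<Integer> boundaryColumns(int[] counts) {
        List<Integer> boundaries = new ArrayList<>();
        for (int x = 0; x < counts.length; x++) {
            if (counts[x] == 0) {
                boundaries.add(x);
            }
        }
        return boundaries;
    }
}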
2. seam calculation
Some cases, like "We", cannot be solved by the histogram alone because there is no empty vertical column at all.
The seam carving algorithm gives us some hints:
https://en.wikipedia.org/wiki/Seam_carving
A more detailed implementation can be found at
princeton.edu - seamCarving.html
Energy calculation for a pixel
The red numbers are not the color values of the pixels, but energy values calculated from the adjacent pixels.
The vertical paths with minimal energy give us the boundaries between characters.
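As a sketch (my paraphrase of the dual-gradient energy used in the Princeton assignment, not code from this answer), the energy of a pixel can be computed from its four neighbours; border pixels are ignored here:

import java.awt.image.BufferedImage;

class SeamEnergy {

    // Dual-gradient energy of pixel (x, y); assumes 0 < x < width-1 and 0 < y < height-1.
    static double energy(BufferedImage img, int x, int y) {
        return gradient(img.getRGB(x - 1, y), img.getRGB(x + 1, y))
             + gradient(img.getRGB(x, y - 1), img.getRGB(x, y + 1));
    }

    // Sum of squared differences of the R, G and B channels of two pixels.
    private static double gradient(int rgb1, int rgb2) {
        int dr = ((rgb1 >> 16) & 0xFF) - ((rgb2 >> 16) & 0xFF);
        int dg = ((rgb1 >> 8) & 0xFF) - ((rgb2 >> 8) & 0xFF);
        int db = (rgb1 & 0xFF) - (rgb2 & 0xFF);
        return dr * dr + dg * dg + db * db;
    }
}

A vertical seam with minimal total energy, found by dynamic programming over these values, is then a good candidate for the boundary between two touching characters.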
3. One more thing...
Statistical data is required to decide whether to apply seam carving or not:
maximum and minimum character width
Even if the histogram gives us vertical boundaries, it is not always clear whether a group contains one character or several.

How do I stop one line segment from intersecting another?

Geometry code gets tiresome after a while, but I want to finish this library, so here goes.
Basically, what's the most efficient way to move one line segment, A, so that it no longer intersects with another, B?
Both line segments are defined with a start point (x, y) and a vector describing how the segment extends from that point (eX, eY). An example of how the line segment is described is below:
The solution I'm looking for is where the line segment is moved (its extent is not modified in any way) to the nearest location where it does not intersect. An example:
What is the most efficient way to get this result?
EDIT: People have asked what I mean by "move" - I mean change the (x, y) coordinate of the line segment start point. This will translate the entire segment.
And the line segments exist on the Cartesian plane, and any x/y movement is allowed.
How about this: find four vectors, two from the red line's endpoints going perpendicularly to the black line, and two going perpendicularly from the red line to the black line's endpoints. Take the shortest of these vectors and move the red line along it.
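A sketch of this four-candidate heuristic, using the question's representation of a segment as a start point plus an extent vector. The helper names are mine, and note that the chosen translation moves the segments until they just touch:

class SegmentSeparation {

    // Vector from point (px, py) to the closest point on the segment starting
    // at (sx, sy) with extent (eX, eY).
    static double[] toSegment(double px, double py,
                              double sx, double sy, double eX, double eY) {
        double lenSq = eX * eX + eY * eY;
        // Parameter of the projection of the point onto the segment, clamped to [0, 1].
        double t = lenSq == 0 ? 0 : ((px - sx) * eX + (py - sy) * eY) / lenSq;
        t = Math.max(0, Math.min(1, t));
        double cx = sx + t * eX;
        double cy = sy + t * eY;
        return new double[] { cx - px, cy - py };
    }

    // Shortest of the four candidate translations that move segment A off segment B.
    static double[] separationVector(double ax, double ay, double aeX, double aeY,
                                     double bx, double by, double beX, double beY) {
        double[][] candidates = {
            // from A's endpoints to B: translate A by this vector
            toSegment(ax, ay, bx, by, beX, beY),
            toSegment(ax + aeX, ay + aeY, bx, by, beX, beY),
            // from A to B's endpoints: translate A by the reversed vector
            negate(toSegment(bx, by, ax, ay, aeX, aeY)),
            negate(toSegment(bx + beX, by + beY, ax, ay, aeX, aeY))
        };
        double[] best = candidates[0];
        for (double[] c : candidates) {
            if (c[0] * c[0] + c[1] * c[1] < best[0] * best[0] + best[1] * best[1]) {
                best = c;
            }
        }
        return best;   // add this to A's (x, y) to translate the whole segment
    }

    private static double[] negate(double[] v) {
        return new double[] { -v[0], -v[1] };
    }
}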
Since you don't specify in which dimension you are free to move, I will assume that any are fine.
I assume your red line is characterized by a starting point (x, y) and a vector from there to the endpoint (eX, eY). Any point on the segment is thus (x, y) + t*(eX, eY) with t in [0, 1].
Let's find the point where the lines cross. That is where a*(eX1, eY1) + (x1, y1) = b*(eX2, eY2) + (x2, y2), with a and b in [0, 1].
If this crossing exists, you can just move the line so that it ends at this crossing point, with a being the fraction of the extent vector you have to move:
(x1',y1') = (x1,y1) - a*(eX1,eY1)
This way, you move the starting point away until the crossing point you found before is the touching point of the two lines.
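A minimal sketch of this parametric intersection test, again using the start-point-plus-extent representation from the question; the cross-product formulation is the standard one and the names are mine:

class SegmentIntersection {

    // Returns the parameter a along segment 1 at which the two segments cross,
    // or -1 if they do not cross (or are parallel).
    static double intersectionParameter(double x1, double y1, double eX1, double eY1,
                                        double x2, double y2, double eX2, double eY2) {
        double denom = eX1 * eY2 - eY1 * eX2;       // 2D cross product of the extents
        if (denom == 0) {
            return -1;                               // parallel or collinear
        }
        double dx = x2 - x1;
        double dy = y2 - y1;
        double a = (dx * eY2 - dy * eX2) / denom;    // parameter along segment 1
        double b = (dx * eY1 - dy * eX1) / denom;    // parameter along segment 2
        if (a < 0 || a > 1 || b < 0 || b > 1) {
            return -1;                               // lines cross outside the segments
        }
        return a;
    }
}

The returned a is the value used in the translation formula above.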

Calculate shape orientation in Java (Image analysis)

I have an image such as this:
and I need to calculate the orientation of it. In this case the shape is pointing towards the top left of the screen. Accuracy isn't hugely important as long as 3 or 4 calculations average out to within 5 degrees or so of the actual orientation (it will be moving slightly).
Can anyone point me towards an algorithm to do this? I don't mind if the orientation is returned as a double or as a vector.
If the image is always T-shaped, you can simply get the furthest pair of pixels, then find the pair of pixels furthest from both of those (the ends of the cross-bar of the T), work out which of the first pair is further from those two, and draw a line from that one to the midpoint of the two.
You can refine this further by then finding the base of the T, comparing the middle line with the edges of the base, and adjusting the angle and offset until the line actually runs through the middle.
A definitive solution is impossible, I guess, since it would require image recognition. I would project the 2D image onto the axes, i.e. obtain the width and height of the image and build a direction vector from these values, taking them as its components.
First, a couple of assumptions:
The center and centroid are "close"
The descending bar of the T is longer than the cross-bar
First, determine the bounding rectangle of the image and find the points of the shape that lie on this rectangle. For points that lie on the rectangle and are within a certain distance of one another (say 5 pixels, to pick a value), take only one point from that cluster. At the end of this you should have 3 points, i.e. a triangle. The shortest side of the triangle should be the cross-bar (from assumption 2), i.e. find the two points closest to each other. The line that is perpendicular to the line through those two points is then your orientation line, i.e. find the angle between it and the horizontal axis.
I would try morphological skeletonization to simplify the image, followed by some straightforward algorithm to determine the orientation of the longer leg of the skeleton.
The solution in the end was to use a convex hull algorithm, which finds the minimum set of points needed to enclose the shape within a boundary.
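For reference, a compact convex hull sketch using Andrew's monotone chain (one common variant; the answer doesn't say which algorithm was actually used), assuming the shape's foreground pixels have already been collected as java.awt.Point objects:

import java.awt.Point;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class ConvexHull {

    // Andrew's monotone chain; returns the hull in counter-clockwise order.
    static List<Point> hull(List<Point> pts) {
        List<Point> p = new ArrayList<>(pts);
        p.sort(Comparator.<Point>comparingInt(a -> a.x).thenComparingInt(a -> a.y));
        int n = p.size();
        if (n < 3) return p;

        Point[] h = new Point[2 * n];
        int k = 0;
        // Lower hull
        for (int i = 0; i < n; i++) {
            while (k >= 2 && cross(h[k - 2], h[k - 1], p.get(i)) <= 0) k--;
            h[k++] = p.get(i);
        }
        // Upper hull
        for (int i = n - 2, lower = k + 1; i >= 0; i--) {
            while (k >= lower && cross(h[k - 2], h[k - 1], p.get(i)) <= 0) k--;
            h[k++] = p.get(i);
        }
        List<Point> result = new ArrayList<>();
        for (int i = 0; i < k - 1; i++) result.add(h[i]);   // last point repeats the first
        return result;
    }

    // Cross product of (b - a) and (c - a); positive for a counter-clockwise turn.
    private static long cross(Point a, Point b, Point c) {
        return (long) (b.x - a.x) * (c.y - a.y) - (long) (b.y - a.y) * (c.x - a.x);
    }
}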

How does a QuadTree work for non-square areas?

I understand how quad trees work on square images (by splitting the image until the section is a single colour, which is stored in the leaf node).
What happens if the image has one dimension longer than the other? You may end up with a 2x1 pixel area as the smallest subunit, making it difficult to use quadtree subdivision to store a single colour. How would you solve this issue?
You could pad the image until it is square with power-of-two dimensions. While this may add some extra memory requirements, the increase shouldn't be that large.
The 2x1 example would be padded to a standard 2x2; you would then store the real size, or use a special value for padded nodes, so you can restore the original size.
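A small sketch of this padding idea, assuming the image is held as a 2D int array of colour values and that a sentinel value marks padded pixels (both assumptions of mine):

class QuadTreePadding {

    static final int PADDING = -1;   // sentinel for padded pixels (assumption)

    static int nextPowerOfTwo(int n) {
        int p = 1;
        while (p < n) p <<= 1;
        return p;
    }

    // Pad to a square, power-of-two image; the caller keeps the real size.
    static int[][] pad(int[][] pixels, int width, int height) {
        int size = nextPowerOfTwo(Math.max(width, height));
        int[][] padded = new int[size][size];
        for (int y = 0; y < size; y++) {
            for (int x = 0; x < size; x++) {
                padded[y][x] = (x < width && y < height) ? pixels[y][x] : PADDING;
            }
        }
        return padded;
    }
}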
Why don't you allow empty leaves in your tree?
Edit:
Maybe I don't understand the question^^. Your problem is that you end up with non-square areas like 2x1 and want to represent them as a quadtree node?
When you have a 2x2 square like
1 2
3 4
you would create a QuadNode with something like "new QuadNode(1,2,3,4)".
I would suggest handling a 2x1 square like
1 2
with something like "new QuadNode(1,2,null,null)"
When you have bigger missing pieces you can use the same system. When you have a 4x2 picture like
1 2 3 4
5 6 7 8
you would get a "new QuadNode(new QuadNode(1,2,3,4),null,new QuadNode(5,6,7,8),null)"
This should also work with equally colored regions instead of single pixels.
Did I understand your problem, and did I make myself clear?
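A minimal sketch of this nullable-children idea; the class shape is my own guess, since the answer only shows constructor calls:

class QuadNode {
    final Object topLeft, topRight, bottomLeft, bottomRight;

    // Each child is either a colour value, a nested QuadNode, or null for a
    // quadrant that lies outside the (non-square) image.
    QuadNode(Object topLeft, Object topRight, Object bottomLeft, Object bottomRight) {
        this.topLeft = topLeft;
        this.topRight = topRight;
        this.bottomLeft = bottomLeft;
        this.bottomRight = bottomRight;
    }
}

class QuadNodeExamples {
    // The 2x1 image "1 2" from the answer: the bottom row is missing.
    static final QuadNode TWO_BY_ONE = new QuadNode(1, 2, null, null);

    // The 4x2 image "1 2 3 4 / 5 6 7 8", grouped exactly as in the answer.
    static final QuadNode FOUR_BY_TWO =
            new QuadNode(new QuadNode(1, 2, 3, 4), null,
                         new QuadNode(5, 6, 7, 8), null);
}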
A square is a special rectangle; quad trees work on rectangles, too.
You just need a split method which gives 4 rectangles for a given one.
If the topmost root quad cell is a rectangle, just divide both the width and the height by 2.
In the case of pixels, this only makes sense if the root cell width and height are both powers of 2.
So if the root cell is 2048 x 1024,
the split just divides both width and height by 2.
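A sketch of such a split method using java.awt.Rectangle; odd widths or heights would need a rounding rule, so this assumes the power-of-two dimensions suggested above:

import java.awt.Rectangle;

class RectQuadTree {

    // Split a cell into its four quadrants by halving width and height.
    static Rectangle[] split(Rectangle cell) {
        int halfW = cell.width / 2;
        int halfH = cell.height / 2;
        return new Rectangle[] {
            new Rectangle(cell.x,         cell.y,         halfW, halfH), // top-left
            new Rectangle(cell.x + halfW, cell.y,         halfW, halfH), // top-right
            new Rectangle(cell.x,         cell.y + halfH, halfW, halfH), // bottom-left
            new Rectangle(cell.x + halfW, cell.y + halfH, halfW, halfH)  // bottom-right
        };
    }
}

Splitting a 2048 x 1024 root cell this way yields four 1024 x 512 cells.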
