OpenCV Imgproc.HoughLines() tuning length parameters - Java

--See update/answer below. User error!--
I'm trying to understand how to set the parameters in Imgproc.HoughLines() to find shorter lines. I've tried something like this that doesn't work at all:
Imgproc.HoughLines(matSrc, matLines, 1, Math.PI / 180, houghThreshCurrent, 25, 10);
I have tried several values for the last two parameters, but none seem to work--it finds no lines. However, using the version of the method without the last two parameters does a decent job of finding the lines I want, just not the shorter lines no matter how low the threshold is.
Here's the doc for the last two params:
srn For the multi-scale Hough transform, it is a divisor for the distance resolution rho. The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn. If both srn=0 and stn=0, the classical Hough transform is used. Otherwise, both these parameters should be positive.
stn For the multi-scale Hough transform, it is a divisor for the distance resolution theta.
Could someone translate or provide example values for that? :)
I've also tried the probabilistic version, HoughLinesP(). It doesn't seem to work very well for my use case. The other option would be to scale my image to a larger size where the default HoughLines() works, if I can't get the line distance parameters working.
Answer: My problem was that I didn't realize the Mat returned by HoughLinesP() is in a different format than the one returned by HoughLines(). I was transforming the results from HoughLinesP() from polar coordinates when they were already XY endpoint coordinates! Turns out HoughLinesP() is far superior for our needs, and its parameters work great for tuning line length. Here's the link that helped me see the error of my ways: https://dsp.stackexchange.com/questions/10467/influence-of-image-size-to-edge-detection-in-opencv
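For anyone hitting the same confusion, here is a rough sketch of how the two result Mats differ in the Java bindings (matSrc and houghThreshCurrent are the variables from my question; 25 and 10 are just the minLineLength/maxLineGap values I was experimenting with):
// HoughLines(): each row of the output Mat is {rho, theta} in polar form.
Mat polarLines = new Mat();
Imgproc.HoughLines(matSrc, polarLines, 1, Math.PI / 180, houghThreshCurrent);
for (int i = 0; i < polarLines.rows(); i++) {
    double[] l = polarLines.get(i, 0);
    double rho = l[0], theta = l[1];        // still needs converting to two endpoints before drawing
}
// HoughLinesP(): each row is already {x1, y1, x2, y2} in image coordinates.
Mat segments = new Mat();
Imgproc.HoughLinesP(matSrc, segments, 1, Math.PI / 180, houghThreshCurrent, 25, 10);
for (int i = 0; i < segments.rows(); i++) {
    double[] s = segments.get(i, 0);
    double x1 = s[0], y1 = s[1], x2 = s[2], y2 = s[3];  // ready to draw directly
}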

A very good example can be found here: http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_lines/hough_lines.html
Try using CV_PI instead of Math.PI? Another reason could be your threshold: try inserting a value like 50 (play around with the numbers). For the last two parameters, you can try leaving them at zero and testing first before inserting values; the default for the last two is usually zero.
There could be many reasons why it's not working, so let's slowly find the cause one by one. Also, you did run Canny (or some other edge detection) before you applied Hough, right?
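For reference, a minimal edge-detect-then-Hough sketch in the Java bindings could look like this (the Canny thresholds of 50/150 and the Hough threshold of 50 are placeholder values to play with):
Mat edges = new Mat();
Imgproc.Canny(matSrc, edges, 50, 150);                   // build an edge map first; tune these thresholds
Mat lines = new Mat();
Imgproc.HoughLines(edges, lines, 1, Math.PI / 180, 50);  // classic transform, srn/stn left at their default 0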
Hope that helps, let me know if my suggestions are useful and helped(: Cheers.

Related

JavaFX get Euler angle from Transform

I'm making some sort of 3D editor, and I'm trying to implement an "orbit" tool, much like the one in Blender.
Now I'd like to know what the angle is. I used the code provided by José Pereda that can be found here, but I need the angles to be in the range from 0 to 2π. I cannot retrieve them from the acquired angles since the output values don't range from -π to π and are different for every output.
Also, the extracted angles need to be relative to each other, much like if you put 3 different Rotate transformations where every following one is relative to the preceding ones, preferably in YXZ order, since that's the order I use everywhere else, as well as the model format the editor is going to export into.
As requested, I have uploaded the part of the code that is relevant to the question on Pastebin. As far as I could tell from the printed values, the angles are relative to each other in XYZ order. For angle calculation, I use the snippet from the linked question:
public static Vec3d getAngle(Node n) {
    // Extract Euler angles from the node's local-to-scene transform matrix.
    Transform T = n.getLocalToSceneTransform();
    double roll = Math.atan2(-T.getMyx(), T.getMxx());
    double pitch = Math.atan2(-T.getMzy(), T.getMzz());
    double yaw = Math.atan2(T.getMzx(), Math.sqrt(T.getMzy() * T.getMzy() + T.getMzz() * T.getMzz()));
    return new Vec3d(roll, pitch, yaw);  // each angle is in (-π, π], as returned by atan2
}
Just a quick note: I have very little knowledge of matrices and their transformations, since that's something we'll be covering in school next year, which makes it harder for me to understand.

Integrality Gap in MAXIMIZATION in CPLEX+Java | Bug?

I am trying to solve a large MIP. If it does not solve to optimality, it should return the integrality gap (that is, the difference between the best integer solution and the best solution of the linear relaxation).
Using getMIPRelativeGap of the Java+CPLEX interface, I sometimes got values in the range of 1.0E11-1.0E13, which does not make sense, as an integrality gap should be a percentage between 0 and 1. I tracked those cases down and found out that I get those results if the best integer solution has a value of 0 (my inner problem is a profitable tour problem; thus, the best route may not visit any vertex). The integrality gap should be (bestobjective-bestinteger)/bestobjective (https://www.ibm.com/support/knowledgecenter/SSSA5P_12.6.0/ilog.odms.cplex.help/refdotnetcplex/html/M_ILOG_CPLEX_Cplex_MIPInfoCallback_GetMIPRelativeGap.htm), yet it seems to be (bestobjective-bestinteger)/bestinteger.
I also tested a couple of other values (where the integer objective is positive) and was able to confirm this in those examples.
Can someone else reproduce this behavior? Does this behavior make sense to you?
Thanks :)
Indeed, the documentation for CPXgetmiprelgap in the Callable Library (C API) says the following:
For a minimization problem, this value is computed by
(bestinteger - bestobjective) / (1e-10 + |bestinteger|)
where bestinteger is the value returned by CPXXgetobjval/CPXgetobjval and bestobjective is the value returned by CPXXgetbestobjval/CPXgetbestobjval. For a maximization problem, the value is computed by:
(bestobjective - bestinteger) / (1e-10 + |bestinteger|)
So, it looks like the documentation for the Java API is buggy. The Java API just calls CPXgetmiprelgap under the hood, so it should be the same. Thanks for reporting this. I'll make sure that this gets passed on to the folks who can fix it.
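If you need a number that stays meaningful when the incumbent is (near) zero, one workaround is to compute the gap yourself from the incumbent and the best bound. A rough sketch against the Java API (the 1e-10 tolerance and the "relative to the bound" convention are my own choices, not CPLEX's):
import ilog.concert.IloException;
import ilog.cplex.IloCplex;

public class GapHelper {
    // Relative gap measured against the best bound, guarding the near-zero incumbent case.
    static double relativeGapToBound(IloCplex cplex) throws IloException {
        double bestInteger = cplex.getObjValue();      // incumbent objective value
        double bestBound   = cplex.getBestObjValue();  // best remaining bound from the search tree
        double denom = Math.abs(bestBound);
        if (denom < 1e-10) {
            return Math.abs(bestBound - bestInteger);  // degenerate case: fall back to the absolute gap
        }
        return Math.abs(bestBound - bestInteger) / denom;
    }
}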

openCV java compareHist() value larger than 1

I am learning OpenCV, and when I tried to compare two histograms and print the result, the values were actually larger than 1. Sorry if it is a stupid question, but I am still learning.
Output showing those values:
As you can see in the image, the values are in the thousands.
Thanks a lot
The third argument of the compareHist method selects the comparison function. In your case, the Chi-Square function is used to compare the two histograms, which does not have an upper limit. You could use the correlation comparison function (CV_COMP_CORREL, 0), which is bound between -1 and 1. See also this question on the OpenCV stack exchange.
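For instance, switching metrics in the Java bindings is just a matter of the third argument (the constant names below are from OpenCV 3.x and later; older releases used Imgproc.CV_COMP_CHISQR / CV_COMP_CORREL):
// hist1 and hist2 are the histograms you already computed with Imgproc.calcHist().
double chiSq  = Imgproc.compareHist(hist1, hist2, Imgproc.HISTCMP_CHISQR);  // unbounded; 0 means identical
double correl = Imgproc.compareHist(hist1, hist2, Imgproc.HISTCMP_CORREL);  // in [-1, 1]; 1 means identical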

java.math.BigDecimal.scale() equivalent for double

I've got a matrix of values like the one below that I need to scale. I've been looking around for a built-in function that could do this for me, but I haven't found one, so I have ended up writing code to do the scaling using the formula below:
scaledMatrix = (Matrix - MeanMatrix)/Standard Deviation
This code is a bit buggy and I'm working on correcting it. While doing that, I happened to bump into java.math.BigDecimal.scale() and looked for an equivalent for double, as the matrix I have holds double values.
If someone could please help me with details on
1) If there is an inbuilt function that accepts matrix of values & returns me the scaled matrix
2) `java.math.BigDecimal.scale()` equivalent for `double` type data
Any help would be much appreciated please.
The BigDecimal.scale() method does not do what you seem to think it does. A BigDecimal is stored as unscaledValue × 10^(-scale), and BigDecimal.scale() simply returns that scale (for a positive scale, the number of digits to the right of the decimal point). It has nothing to do with statistical scaling.
I do not know of a similar method for double values, nor do I know of a method that performs the function you need. Since you put apache-commons in the tags, I suggest you look into Apache Commons Math's extensive statistics library.
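For point 1, I'm not aware of a single built-in call that standardizes a whole matrix, but with Commons Math it is only a few lines per column. A rough sketch of column-wise z-scoring, assuming a rectangular double[][]:
import org.apache.commons.math3.stat.descriptive.DescriptiveStatistics;

public class MatrixScaler {
    // Column-wise z-score: (value - column mean) / column standard deviation.
    static double[][] standardize(double[][] m) {
        int rows = m.length, cols = m[0].length;
        double[][] out = new double[rows][cols];
        for (int c = 0; c < cols; c++) {
            DescriptiveStatistics stats = new DescriptiveStatistics();
            for (int r = 0; r < rows; r++) {
                stats.addValue(m[r][c]);
            }
            double mean = stats.getMean();
            double sd = stats.getStandardDeviation();                 // sample standard deviation
            for (int r = 0; r < rows; r++) {
                out[r][c] = sd == 0 ? 0 : (m[r][c] - mean) / sd;      // guard against constant columns
            }
        }
        return out;
    }
}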

Random geographic coordinates (on land, avoid ocean)

Any clever ideas on how to generate random coordinates (latitude/longitude) of places on Earth? Precision to 5 decimal places, and avoid bodies of water.
double minLat = -90.00;
double maxLat = 90.00;
double latitude = minLat + (double)(Math.random() * ((maxLat - minLat) + 1));
double minLon = 0.00;
double maxLon = 180.00;
double longitude = minLon + (double)(Math.random() * ((maxLon - minLon) + 1));
DecimalFormat df = new DecimalFormat("#.#####");
log.info("latitude:longitude --> " + df.format(latitude) + "," + df.format(longitude));
Maybe I'm living in a dream world and the water issue is unavoidable ... but hopefully there's a nicer, cleaner and more efficient way to do this?
EDIT
Some fantastic answers/ideas -- however, at scale, let's say I need to generate 25,000 coordinates. Going to an external service provider may not be the best option due to latency, cost and a few other factors.
Dealing with the body-of-water problem is going to be largely a data issue: e.g., do you just want to miss the oceans, or do you also need to miss small streams? Either you need to use a service with the quality of data that you need, or you need to obtain the data yourself and run it locally. From your edit, it sounds like you want to go the local-data route, so I'll focus on a way to do that.
One method is to obtain a shapefile for either land areas or water areas. You can then generate a random point and determine if it intersects a land area (or alternatively, does not intersect a water area).
To get started, you might get some low resolution data here and then get higher resolution data here for when you want to get better answers on coast lines or with lakes/rivers/etc. You mentioned that you want precision in your points to 5 decimal places, which is a little over 1m. Do be aware that if you get data to match that precision, you will have one giant data set. And, if you want really good data, be prepared to pay for it.
Once you have your shape data, you need some tools to help you determine the intersection of your random points. GeoTools is a great place to start and will probably work for your needs. You will also end up looking at OpenGIS code (docs under the GeoTools site - not sure if they consumed them or what) and JTS for the geometry handling. Using these you can quickly open the shapefile and start doing some intersection queries.
File f = new File ( "world.shp" );
ShapefileDataStore dataStore = new ShapefileDataStore ( f.toURI ().toURL () );
FeatureSource<SimpleFeatureType, SimpleFeature> featureSource = dataStore.getFeatureSource ();
String geomAttrName = featureSource.getSchema ().getGeometryDescriptor ().getLocalName ();
ResourceInfo resourceInfo = featureSource.getInfo ();
CoordinateReferenceSystem crs = resourceInfo.getCRS ();
Hints hints = GeoTools.getDefaultHints ();
hints.put ( Hints.JTS_SRID, 4326 );
hints.put ( Hints.CRS, crs );
FilterFactory2 ff = CommonFactoryFinder.getFilterFactory2 ( hints );
GeometryFactory gf = JTSFactoryFinder.getGeometryFactory ( hints );
Coordinate land = new Coordinate ( -122.0087, 47.54650 );
Point pointLand = gf.createPoint ( land );
Coordinate water = new Coordinate ( 0, 0 );
Point pointWater = gf.createPoint ( water );
Intersects filter = ff.intersects ( ff.property ( geomAttrName ), ff.literal ( pointLand ) );
FeatureCollection<SimpleFeatureType, SimpleFeature> features = featureSource.getFeatures ( filter );
filter = ff.intersects ( ff.property ( geomAttrName ), ff.literal ( pointWater ) );
features = featureSource.getFeatures ( filter );
Quick explanations:
- This assumes the shapefile you got is polygon data. Intersection on lines or points isn't going to give you what you want.
- The first section opens the shapefile - nothing interesting.
- You have to fetch the geometry property name for the given file.
- Coordinate system stuff: you specified lat/long in your post, but GIS can be quite a bit more complicated. In general, the data I pointed you at is geographic, WGS84, and that is what I set up here. However, if this is not the case for you, then you need to be sure you are dealing with your data in the correct coordinate system. If that all sounds like gibberish, google around for a tutorial on GIS/coordinate systems/datum/ellipsoid.
- Generating the coordinate geometries and the filters is pretty self-explanatory. The resulting set of features will either be empty, meaning the coordinate is in the water (if your data is land cover), or not empty, meaning the opposite.
Note: if you do this with a really random set of points, you are going to hit water pretty often and it could take you a while to get to 25k points. You may want to try to scope your point generation better than truly random (like remove big chunks of the Atlantic/Pacific/Indian oceans).
Also, you may find that your intersection queries are too slow. If so, you may want to look into creating a quadtree index (qix) with a tool like GDAL. I don't recall which index types are supported by geotools, though.
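Tying it together, the generation loop itself is just rejection sampling; a minimal sketch, where isOnLand() is a hypothetical wrapper around the Intersects query above:
// Rejection sampling: keep drawing random points until enough of them fall on land.
List<double[]> points = new ArrayList<>();
Random rnd = new Random();
while (points.size() < 25000) {
    double lat = -90 + rnd.nextDouble() * 180;    // uniform in [-90, 90)
    double lon = -180 + rnd.nextDouble() * 360;   // uniform in [-180, 180)
    if (isOnLand(lon, lat)) {                     // hypothetical helper doing the Intersects filter
        points.add(new double[] { lat, lon });
    }
}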
This was asked a long time ago, and I now have a similar need. There are two possibilities I am looking into:
1. Define the surface ranges for the random generator.
Here it's important to identify the level of precision you are going for. The easiest way would be to have a very relaxed and approximate approach. In this case you can divide the world map into "boxes":
Each box has its own range of lat/lon. You first randomise to get a random box, then you randomise to get a random lat and a random lon within the boundaries of that box.
Precision is of course not the best here... though it depends :) If you do your homework well and define a lot of boxes covering the most complex surface shapes, you might be quite OK with the precision.
2. Check each generated coordinate against an external service.
Use some API that returns a continent name, address, country or district for a pair of coordinates - something that WATER doesn't have. The Google Maps APIs can help here. I didn't research this one deeper, but I think it's possible, though you will have to run the check on each generated pair of coordinates and rerun if it's wrong. So you can get a bit stuck if the random generator keeps throwing you into the ocean.
Also, some water does belong to countries and districts... so yeah, not very precise.
For my needs I am going with "boxes", because I also want to control the exact areas the random coordinates are taken from, and I don't mind if a point lands on a lake or river, just not the open ocean :)
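A rough sketch of the box idea in Java (the two boxes below are invented examples, not real land outlines; weighting by a crude flat-earth area keeps larger boxes from being under-sampled):
// Each box: {minLat, maxLat, minLon, maxLon} - the values below are illustrative only.
double[][] boxes = {
    { 36.0, 48.0,   -9.0,  20.0 },   // very roughly, part of western/central Europe
    { 30.0, 45.0, -110.0, -85.0 },   // very roughly, part of the continental US
};
Random rnd = new Random();

// Weight the choice of box by its (approximate) area.
double[] weights = new double[boxes.length];
double total = 0;
for (int i = 0; i < boxes.length; i++) {
    weights[i] = (boxes[i][1] - boxes[i][0]) * (boxes[i][3] - boxes[i][2]);
    total += weights[i];
}
double pick = rnd.nextDouble() * total;
int chosen = 0;
for (int i = 0; i < boxes.length; i++) {
    pick -= weights[i];
    if (pick <= 0) { chosen = i; break; }
}

// Uniform point inside the chosen box.
double lat = boxes[chosen][0] + rnd.nextDouble() * (boxes[chosen][1] - boxes[chosen][0]);
double lon = boxes[chosen][2] + rnd.nextDouble() * (boxes[chosen][3] - boxes[chosen][2]);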
Download a truckload of KML files containing land-only locations.
Extract all coordinates from them (this might help here).
Pick from them at random.
Definitely you should have a map as a resource. You can get one here: http://www.naturalearthdata.com/
Then I would prepare a 1-bit black-and-white bitmap resource, with 1s marking land and 0s marking water.
The size of the bitmap depends on your required precision. If you need 5 degrees, then your bitmap will be 360/5 x 180/5 = 72x36 pixels = 2592 bits.
Then I would load this bitmap in Java, generate a random integer within the range above, read the bit, and regenerate if it is zero.
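A minimal sketch of that lookup (the mask file name and the "treat non-black as land" check are assumptions; the 72x36 size matches the 5-degree example above):
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Random;
import javax.imageio.ImageIO;

public class LandMaskSampler {
    public static void main(String[] args) throws IOException {
        // Hypothetical 72x36 mask image: non-black pixels = land, black pixels = water.
        BufferedImage mask = ImageIO.read(new File("landmask_72x36.png"));
        Random rnd = new Random();
        double lat, lon;
        do {
            lat = -90 + rnd.nextDouble() * 180;
            lon = -180 + rnd.nextDouble() * 360;
        } while (!isLand(mask, lat, lon));                      // regenerate while the cell is water
        System.out.printf("%.5f, %.5f%n", lat, lon);
    }

    static boolean isLand(BufferedImage mask, double lat, double lon) {
        int x = (int) ((lon + 180) / 360 * mask.getWidth());    // map longitude to a column
        int y = (int) ((90 - lat) / 180 * mask.getHeight());    // map latitude to a row (north at top)
        x = Math.min(x, mask.getWidth() - 1);
        y = Math.min(y, mask.getHeight() - 1);
        return (mask.getRGB(x, y) & 0xFFFFFF) != 0;             // treat any non-black pixel as land
    }
}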
P.S. You can also dig around http://geotools.org/ for some ready-made solutions.
To get a nice even distribution over the sphere's surface (rather than one that bunches points up near the poles), you should do something like this to get the right angles:
double longitude = Math.random() * Math.PI * 2;      // in radians, [0, 2π)
double latitude = Math.acos(Math.random() * 2 - 1);  // polar angle from the north pole, in radians, [0, π]
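Those values come back in radians, with the second one being the polar angle measured from the north pole; converting to the usual geographic degrees would look something like:
double lonDeg = Math.toDegrees(longitude) - 180.0;   // shift [0, 360) to [-180, 180)
double latDeg = 90.0 - Math.toDegrees(latitude);     // polar angle [0, π] to latitude [-90, 90]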
As for avoiding bodies of water, do you have the data for where water is already? Well, just resample until you get a hit! If you don't have this data already then it seems some other people have some better suggestions than I would for that...
Hope this helps, cheers.
There is another way to approach this using the Google Earth API. I know it is JavaScript, but I thought it was a novel way to solve the problem.
Anyhow, I have put together a full working solution here - notice it works for rivers too: http://www.msa.mmu.ac.uk/~fraser/ge/coord/
The basic idea I have used is to call the hitTest method of the GEView object in the Google Earth API.
Take a look at the following hit-test example from Google:
http://earth-api-samples.googlecode.com/svn/trunk/examples/hittest.html
The hitTest method is supplied a random point on the screen (in pixel coordinates), for which it returns a GEHitTestResult object that contains information about the geographic location corresponding to the point. If one uses the GEPlugin.HIT_TEST_TERRAIN mode with the method, one can limit results to land (terrain) only, as long as we screen the results to points with an altitude > 1m.
This is the function I use that implements the hitTest:
var hitTestTerrain = function()
{
    var x = getRandomInt(0, 200); // same pixel size as the map3d div width
    var y = getRandomInt(0, 200); // ditto for its height
    var result = ge.getView().hitTest(x, ge.UNITS_PIXELS, y, ge.UNITS_PIXELS, ge.HIT_TEST_TERRAIN);
    var success = result && (result.getAltitude() > 1);
    return { success: success, result: result };
};
Obviously you also want to have random results from anywhere on the globe (not just random points visible from a single viewpoint). To do this I move the earth view after each successful hitTestTerrain call. This is achieved using a small helper function.
var flyTo = function(lat, lng, rng)
{
    lookAt.setLatitude(lat);
    lookAt.setLongitude(lng);
    lookAt.setRange(rng);
    ge.getView().setAbstractView(lookAt);
};
Finally here is a stripped down version of the main code block that calls these two methods.
var getRandomLandCoordinates = function()
{
    var test = hitTestTerrain();
    if (test.success)
    {
        coords[coords.length] = { lat: test.result.getLatitude(), lng: test.result.getLongitude() };
    }
    if (coords.length <= number)
    {
        getRandomLandCoordinates();
    }
    else
    {
        displayResults();
    }
};
So the earth view moves randomly to a new position after each successful test.
The other functions in there are just helpers to generate the random x,y and random lat,lng numbers, to output the results and also to toggle the controls etc.
I have tested the code quite a bit and the results are not 100% perfect; tweaking the altitude to something higher, like 50m, solves this, but obviously it diminishes the area of possible selected coordinates.
Obviously you could adapt the idea to suit your needs. Maybe run the code multiple times to populate a database or something.
As a plan B, maybe you can pick a random country and then pick a random coordinate inside that country. To be fair when picking a country, you can use its area as a weight.
There is a library here, and you can use its .random() method to get a random coordinate. Then you can use the GeoNames web services to determine whether it is on land or not. They have a list of web services, and you'll just have to use the right one. GeoNames is free and reliable.
Go to http://wiki.openstreetmap.org/ and try to use the API: http://wiki.openstreetmap.org/wiki/Databases_and_data_access_APIs
I guess you could use a world map, define a few points on it to delimit most of the water bodies as you say, and use a polygon.contains method to validate the coordinates.
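A toy version of that check with java.awt.geom.Path2D (the triangle below is an invented stand-in for a real landmass outline):
// Build a (here, absurdly coarse) land polygon from lon/lat vertices.
Path2D.Double land = new Path2D.Double();
land.moveTo(-10.0, 35.0);                      // lon, lat
land.lineTo(30.0, 35.0);
land.lineTo(10.0, 60.0);
land.closePath();

boolean onLand = land.contains(12.5, 45.0);    // true if the point falls inside the polygon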
A faster algorithm would be to use this map, take some random point and check the color beneath it; if it's blue, then it's water... Once you have the coordinates, you convert them to lat/long.
You might also do the blue/green thing, and then store all the green points for later lookup. This has the benefit of being stepwise refinable. As you figure out a better way to build your list of points, you can just point your random grabber at a more and more accurate group of points.
Maybe a service provider has an answer to your question already: e.g. https://www.google.com/enterprise/marketplace/viewListing?productListingId=3030+17310026046429031496&pli=1
Elevation API? http://code.google.com/apis/maps/documentation/elevation/ - above sea level or below? (No Dutch points for you!)
Generating is easy; the problem is that the points should not be on water. I would import OpenStreetMap data, for example from http://ftp.ecki-netz.de/osm/, into a database (it has a very simple data structure). I would suggest PostgreSQL, since it comes with some geometry functions: http://www.postgresql.org/docs/8.2/static/functions-geometry.html . For that you have to save the shapes in a "polygon" column; then you can check with the "&&" operator whether a point falls within a water polygon. For the attributes of an OpenStreetMap way entry, have a look at http://wiki.openstreetmap.org/wiki/Category:En:Keys
Supplementary to what bsimic said about digging into GeoNames' Webservices, here is a shortcut:
they have a dedicated WebService for requesting an ocean name.
(I am aware of the OP's constraint of not using public web services due to the number of requests. Nevertheless, I stumbled upon this with the same basic question and consider it helpful.)
Go to http://www.geonames.org/export/web-services.html#astergdem and have a look at "Ocean / reverse geocoding". It is available as XML and JSON. Create a free user account to avoid the daily limits on the demo account.
Request example on ocean area (Baltic Sea, JSON-URL):
http://api.geonames.org/oceanJSON?lat=54.049889&lng=10.851388&username=demo
results in
{
    "ocean": {
        "distance": "0",
        "name": "Baltic Sea"
    }
}
while some coordinates on land result in
{
    "status": {
        "message": "we are afraid we could not find an ocean for latitude and longitude :53.0,9.0",
        "value": 15
    }
}
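If you want to call that from Java, a minimal sketch with the JDK 11+ HttpClient could look like this (the "just look for an "ocean" key" check is a naive assumption; use a real JSON parser for anything serious):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OceanCheck {
    public static void main(String[] args) throws Exception {
        String url = "http://api.geonames.org/oceanJSON?lat=54.049889&lng=10.851388&username=demo";
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        // Crude check: the service returns an "ocean" object for water and a "status" object for land.
        boolean isWater = body.contains("\"ocean\"");
        System.out.println(isWater ? "water" : "probably land");
    }
}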
Do the random points have to be uniformly distributed all over the world? If you could settle for a seemingly uniform distribution, you can do this:
Open your favorite map service, draw a rectangle inside the United States, Russia, China, Western Europe and definitely the northern part of Africa - making sure there are no big lakes or Caspian seas inside the rectangles. Take the corner coordinates of each rectangle, and then select coordinates at random inside those rectangles.
You are guaranteed none of these points will be on any sea or lake. You might find an occasional river, but I'm not sure how many geoservices are going to be accurate enough for that anyway.
This is an extremely interesting question, from both a theoretical and a practical perspective. The most suitable solution will largely depend on your exact requirements. Do you need to account for every body of water, or just the major seas and oceans? How critical are accuracy and correctness? Will identifying sea as land, or vice versa, be a catastrophic failure?
I think machine learning techniques would be an excellent solution to this problem, provided that you don't mind the (hopefully small) probability that a point of water is incorrectly classified as land. If that's not an issue, then this approach should have a number of advantages against other techniques.
Using a bitmap is a nice solution, simple and elegant. It can be produced to a specified accuracy and the classification is guaranteed to be correct (Or a least as correct as you made the bitmap). But its practicality is dependent on how accurate you need the solution to be. You mention that you want the coordinate accuracy to 5 decimal places (which would be equivalent to mapping the whole surface of the planet to about the nearest metre). Using 1 bit per element, the bitmap would weigh in at ~73.6 terabytes!
We don't need to store all of this data, though; we only need to know where the coastlines are. Just by knowing where a point is in relation to the coast, we can determine whether it is on land or sea. As a rough estimate, the CIA world factbook reports that there are 22498km of coastline on Earth. If we were to store coordinates for every metre of coastline, using a 32-bit word for each latitude and longitude, this would take less than 1.35GB to store. It's still a lot if this is for a trivial application, but a few orders of magnitude less than using a bitmap. If having such a high degree of accuracy isn't necessary, though, these numbers would drop considerably. Reducing the mapping to only the nearest kilometre would make the bitmap just ~75MB, and the coordinates for the world's coastline could fit on a floppy disk.
What I propose is to use a clustering algorithm to decide whether a point is on land or not. We would first need a suitably large number of coordinates that we already know to be on either land or sea. Existing GIS databases would be suitable for this. Then we can analyse the points to determine clusters of land and sea. The decision boundary between the clusters should fall on the coastlines, and all points not determining the decision boundary can be removed. This process can be iterated to give a progressively more accurate boundary.
Only the points determining the decision boundary/the coastline need to be stored, and by using a simple distance metric we can quickly and easily decide if a set of coordinates are on land or sea. A large amount of resources would be required to train the system, but once complete the classifier would require very little space or time.
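As a toy illustration of that last step, once you have a reduced set of labelled boundary points you can classify a query coordinate by the label of its nearest stored point, e.g. with a haversine distance (the sample points below are invented, not real training data):
public class CoastlineNN {
    // A handful of invented labelled points: true = land, false = sea.
    static final double[][] SAMPLES = { { 48.85, 2.35 }, { 40.71, -74.0 }, { 0.0, -30.0 }, { -40.0, -140.0 } };
    static final boolean[] IS_LAND  = { true, true, false, false };

    static boolean classify(double lat, double lon) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < SAMPLES.length; i++) {
            double d = haversineKm(lat, lon, SAMPLES[i][0], SAMPLES[i][1]);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return IS_LAND[best];   // the label of the nearest stored point wins
    }

    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1), dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * 6371.0 * Math.asin(Math.sqrt(a));   // mean Earth radius in km
    }
}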
Assuming Atlantis isn't in the database, you could randomly select cities. This also provides a more realistic distribution of points if you intend to mimic human activity:
https://simplemaps.com/data/world-cities
There are only 7,300 cities in the free version.
