Due to the lack of easily accessible and readable libraries or methods for two-dimensional contour plotting in a Java 8 environment, I decided to write this functionality myself. My solution is based on the approach described here, and it can be summed up as follows:
I create a rectangular container with specific layoutX and layoutY values describing its position on the contour map, and I draw in this container the appropriate polygon, depending on the ternary index assigned to that container.
Taking into account that each container is created for four data values from the given dataset, it is easy to calculate that a matrix of size [11, 11] produces 10^2 containers, while a matrix of size [1001, 1001] produces 10^6.
It is therefore necessary to choose an appropriate container for the polygons, in order to optimize plotting for larger datasets as much as possible.
Which container should I choose to get the best execution times for creating a contour map under the assumptions described above?
The lightest container is probably a Group, or no container at all if you just apply the appropriate transform to your Polygon. But I don't understand why you are actually looking for this. The algorithm that you linked to is for computing iso-lines, but it is not efficient to use the same setup to display the result of this computation.
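For illustration, a minimal JavaFX 8 sketch of that suggestion; the class name, method names, and cell coordinates below are made up for the example, not taken from the question's code:

```java
import javafx.scene.Group;
import javafx.scene.shape.Polygon;

// Sketch of the "no per-cell container" idea: each Polygon is positioned
// directly via layoutX/layoutY and added to a single Group.
public class ContourCells {

    static Polygon cell(double layoutX, double layoutY, double... points) {
        Polygon p = new Polygon(points); // vertices in cell-local coordinates
        p.setLayoutX(layoutX);           // position the node itself,
        p.setLayoutY(layoutY);           // no intermediate Pane required
        return p;
    }

    static Group buildContourMap() {
        Group map = new Group();
        // e.g. the triangle for one marching-squares case, in the cell at (40, 20)
        map.getChildren().add(cell(40, 20, 0, 10, 10, 10, 10, 0));
        return map;
    }
}
```

Since a Group performs no layout of its own, this avoids both the per-cell node overhead and the layout passes a Pane-per-cell design would trigger.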
Related
Is user inside volume? (OpenGL ES, Java, Android)
I have an OpenGL renderer that shows airspaces.
I need to calculate whether my location, already converted to a float[3], is inside any of many volumes.
I also want to calculate the distance to the nearest volume.
The volumes are random shapes extruded along the z axis.
What is the most efficient algorithm to do that?
I don't want to use an external library.
What you have here is a nearest neighbor search problem. Since your meshes are constant and won't change, you should probably use a space partitioning algorithm. It's a big topic, but in short, you generally need to use a tree structure and sort all the objects into the various tree nodes. You'll need to pre-calculate the tree itself. There are plenty of books and tutorials on the net about space partitioning, and you could also look at the source code of, for example, id Software products like Doom, Quake, etc. to see how these algorithms (BSP, at least) are used. The efficiency of each algorithm depends on what you have and what you need. Using BSP trees, for example, you'll have the objects sorted from nearest to farthest, so you can quickly get the one you need.
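Whatever partitioning you choose, the leaves still need a per-volume containment test. Under the question's assumptions (a 2D footprint extruded along the z axis), that test reduces to a z-interval check plus a standard ray-casting point-in-polygon test in the xy-plane. A minimal sketch, with all names illustrative:

```java
// Hedged sketch of the per-volume test a space-partitioning tree would
// prune down to. An extruded volume is "2D polygon + z range", so
// containment is a z-interval check plus ray casting in the xy-plane.
public class ExtrudedVolume {
    final float[] xs, ys;   // footprint polygon vertices
    final float zMin, zMax; // extrusion range along the z axis

    ExtrudedVolume(float[] xs, float[] ys, float zMin, float zMax) {
        this.xs = xs; this.ys = ys; this.zMin = zMin; this.zMax = zMax;
    }

    boolean contains(float[] p) { // p = {x, y, z}
        if (p[2] < zMin || p[2] > zMax) return false;
        boolean inside = false;   // ray casting: count edge crossings
        for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
            if ((ys[i] > p[1]) != (ys[j] > p[1])
                    && p[0] < (xs[j] - xs[i]) * (p[1] - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
                inside = !inside;
            }
        }
        return inside;
    }
}
```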
I'm trying to figure out what is the best way to calculate the overlap area of two arbitrary polygons in Java.
Here's the research I've done so far:
I've read the documentation of the Area class (from java.awt.geom). It doesn't seem to support this option.
I've tried looking at other classes that may support this and that were suggested in similar cases (classes implementing the Shape interface, for example). None of them seems to have this option.
I am aware that there are third-party modules that support this, but I'm looking for one with a free license for every use (including commercial use).
Some more details about the Polygons:
The only assumption about the polygons is that they are "simple", i.e., they do not contain any holes.
The polygons are given as lists of coordinates. I also have the Area and GeneralPath objects that represent them.
Is there any way to achieve this task in Java without downloading external libraries?
The only solution I have thought of so far is to create for each polygon a set of inner points, by finding the bounding rectangle and using Area's contains method, and then to find the intersection of the two sets. The problem with this solution is that it is very inefficient.
You need a polygon clipping algorithm suitable for non-convex polygons. Some known algorithms:
Vatti's algorithm, which works with arbitrary polygons, including complex ones, and is used in the Clipper library
Weiler-Atherton algorithm
Greiner-Hormann algorithm (its page links to some implementations)
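That said, since the question asks for a solution without external libraries: java.awt.geom.Area can already compute the intersection shape via Area.intersect, and the resulting area can be approximated by flattening the outline and applying the shoelace formula. A minimal sketch, assuming the overlap region contains no holes:

```java
import java.awt.geom.Area;
import java.awt.geom.Path2D;
import java.awt.geom.PathIterator;

public class PolygonOverlap {

    // Builds a closed Path2D from a list of (x, y) vertices.
    static Path2D.Double toPath(double[][] pts) {
        Path2D.Double path = new Path2D.Double();
        path.moveTo(pts[0][0], pts[0][1]);
        for (int i = 1; i < pts.length; i++) {
            path.lineTo(pts[i][0], pts[i][1]);
        }
        path.closePath();
        return path;
    }

    // Measures the enclosed area by flattening the outline and applying the
    // shoelace formula to each closed subpath. Every subpath is counted as
    // additive, which assumes the region has no holes.
    static double area(Area a, double flatness) {
        PathIterator it = a.getPathIterator(null, flatness);
        double total = 0, sum = 0;
        double[] c = new double[6];
        double startX = 0, startY = 0, prevX = 0, prevY = 0;
        while (!it.isDone()) {
            switch (it.currentSegment(c)) {
                case PathIterator.SEG_MOVETO:
                    startX = prevX = c[0];
                    startY = prevY = c[1];
                    break;
                case PathIterator.SEG_LINETO:
                    sum += prevX * c[1] - c[0] * prevY;
                    prevX = c[0];
                    prevY = c[1];
                    break;
                case PathIterator.SEG_CLOSE:
                    sum += prevX * startY - startX * prevY;
                    total += Math.abs(sum) / 2.0;
                    sum = 0;
                    break;
            }
            it.next();
        }
        return total;
    }

    public static void main(String[] args) {
        Area a = new Area(toPath(new double[][] {{0, 0}, {4, 0}, {4, 4}, {0, 4}}));
        Area b = new Area(toPath(new double[][] {{2, 2}, {6, 2}, {6, 6}, {2, 6}}));
        a.intersect(b);                    // a now holds the overlap region
        System.out.println(area(a, 1e-3)); // 4.0 for these two squares
    }
}
```

For purely polygonal input the flattening step changes nothing (the path is already all line segments), so the only approximation is double-precision arithmetic.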
I have to store a set of 2D polygons in memory (fewer than 1000) in a structure that allows me to efficiently find the ones containing a point. The polygons never change and contain about 10 points each.
I have to run the query about 10000 times per second.
I guess a structure using quad trees or similar, together with the bounding boxes of the polygons, would do what I need.
Does anybody know a free Java library offering this service?
I don't think there's such a service, but as a structure you can use java.awt.Polygon (https://docs.oracle.com/javase/8/docs/api/java/awt/Polygon.html). It even has a method to check for point inclusion.
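A quick illustration of that inclusion test (the triangle is just example data):

```java
import java.awt.Polygon;

public class PolygonContainsDemo {
    public static void main(String[] args) {
        // A triangle with vertices (0,0), (10,0), (5,8).
        Polygon triangle = new Polygon(
                new int[] {0, 10, 5},
                new int[] {0, 0, 8},
                3);
        System.out.println(triangle.contains(5, 3)); // true
        System.out.println(triangle.contains(9, 7)); // false
    }
}
```

At under 1000 polygons of about 10 points each, even a plain linear scan over such Polygon objects should comfortably sustain 10000 queries per second, especially with a cached bounding-box check in front of the exact test.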
I have a large list of regions with 2D coordinates. None of the regions overlap. The regions are not immediately adjacent to one another and do not follow a placement pattern.
Is there an efficient lookup algorithm that can be used to tell me which region a specific point falls into? This seems like it would be the exact inverse of a QuadTree.
The data structure you need is called an R-Tree. Most R-Trees permit a "within" or "intersection" query, which will return any geographic area containing or overlapping a given region; see, e.g., Wikipedia.
There is no reason you cannot build your own R-Tree; it's just a variant of a balanced B-Tree which can hold extended structures and allows some overlap. This implementation is lightweight, and you could use it here by wrapping your regions in rectangles, as in the sketch below. Each query might return more than one result, but you can then check the underlying region. It's probably an easier solution than trying to build a polyline-supporting R-Tree variant.
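A minimal sketch of that rectangle-wrapping idea, with all names illustrative; the linear scan stands in for the R-Tree traversal that would normally prune it:

```java
import java.awt.Polygon;
import java.awt.Rectangle;
import java.util.ArrayList;
import java.util.List;

// Phase 1 collects regions whose bounding rectangle contains the point
// (what an R-Tree "within" query would return); phase 2 resolves the
// possible multiple hits with the exact polygon test.
public class RectangleWrappedLookup {
    private final List<Polygon> regions = new ArrayList<>();
    private final List<Rectangle> boxes = new ArrayList<>(); // cached bounds

    public void insert(Polygon region) {
        regions.add(region);
        boxes.add(region.getBounds());
    }

    public Polygon locate(int x, int y) {
        for (int i = 0; i < regions.size(); i++) {
            // Phase 1: cheap rectangle rejection (an R-Tree prunes this scan).
            if (boxes.get(i).contains(x, y)
                    // Phase 2: exact containment on the surviving candidate.
                    && regions.get(i).contains(x, y)) {
                return regions.get(i);
            }
        }
        return null; // the point lies in no region
    }
}
```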
What you need, if I understand correctly, is a point location data structure, that is, as you put it, somehow the opposite of a quad tree or R-tree. In a point location data structure you store a set of regions, and the queries are of the form: given a point p, return the region in which it is contained.
Several point location data structures exist. The most famous, and the one that achieves the best performance, is Kirkpatrick's, also known as triangulation refinement; it achieves O(n) space and O(log n) query time, but it is also famously hard to implement. On the other hand, there are several simpler data structures that achieve O(n) or O(n log n) space but O(log^2 n) query time, which is not that bad and far easier to implement, and for some of them the query time can be reduced to O(log n) using a method called fractional cascading.
I recommend you take a look at chapter 6 of de Berg, Overmars, et al., Computational Geometry: Algorithms and Applications, which explains the subject in a way that is very easy to grasp, though it doesn't include Kirkpatrick's method, which you can find in Preparata's book or read directly from Kirkpatrick's paper.
By the way, several of these structures assume that your regions are non-overlapping but adjacent (regions share edges), that the edges form a connected graph, and sometimes that the regions are triangular. In all these cases you can extend your set of regions by adding new edges; don't worry about that, since the extra space needed is still linear: the final set of regions induces a planar graph. So you can extend your set of regions without worrying about excessive space growth.
I've created two clustering algorithms: k-means and divisive; maybe later I'll add agglomerative as well. I have to analyze how good they are with high-dimensional data, and for that I have to calculate the average/sum of distances to the cluster centers. In the case of k-means it's easy, since I have the centroid, but how do I find the center in the divisive/agglomerative algorithms?
While I'm here: I've currently implemented the Euclidean, Manhattan, and Pearson distances; are there any more distance measures I could use?
Thanks in advance!
You may want to get this book:
Encyclopedia of Distances, Michel Deza and Elena Deza, 590 pages,
which covers many of the alternative distance functions you could use.
Probably a few hundred different distances...
However, you will also need to look into your evaluation method: if it is centroid-based, it will be biased towards k-means, so the comparison is likely to be unfair.
Furthermore, if you use artificial data, make sure you do not unfairly favor one method over another because it correlates with the way you generate your data (e.g., generating Gaussian clusters favors methods such as k-means).
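For concreteness, two further measures that often come up, beyond the three already implemented; this is a sketch of the standard definitions, not tied to any particular library:

```java
// Two common alternatives to Euclidean/Manhattan/Pearson distance.
public class Distances {
    // Chebyshev (L-infinity) distance: the largest per-coordinate difference.
    static double chebyshev(double[] a, double[] b) {
        double max = 0;
        for (int i = 0; i < a.length; i++) {
            max = Math.max(max, Math.abs(a[i] - b[i]));
        }
        return max;
    }

    // Cosine distance: 1 minus the cosine of the angle between the vectors;
    // often better behaved than Euclidean distance in high dimensions.
    // (Undefined for zero vectors, which this sketch does not guard against.)
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return 1.0 - dot / (Math.sqrt(na) * Math.sqrt(nb));
    }
}
```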
The goal of my work is to analyze these algorithms when they have to create clusters from high-dimensional data. It is hard to evaluate them, and it is very unlikely that the result will be completely fair, so I'm going to use the average accumulated distance between records in one cluster and the minimal distance between two records from different clusters.
Regarding how to find the center of a cluster in hierarchical clustering algorithms: you can apply the same formula used in k-means to recalculate the centroid after each iteration, i.e., the component-wise mean of all records in the cluster.
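A minimal sketch of that computation, applicable to a cluster produced by any of the algorithms; the names are illustrative:

```java
// The centroid of a cluster is the component-wise mean of its members,
// the same update k-means applies after each iteration. It works for any
// cluster, regardless of which algorithm produced it.
public class Centroid {
    static double[] centroid(double[][] cluster) {
        int dims = cluster[0].length;
        double[] center = new double[dims];
        for (double[] record : cluster) {
            for (int d = 0; d < dims; d++) {
                center[d] += record[d];
            }
        }
        for (int d = 0; d < dims; d++) {
            center[d] /= cluster.length;
        }
        return center;
    }

    public static void main(String[] args) {
        double[][] cluster = {{1, 2}, {3, 4}, {5, 6}};
        double[] c = centroid(cluster);
        System.out.println(c[0] + ", " + c[1]); // 3.0, 4.0
    }
}
```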