I want to determine whether a bipartite graph is separable when it contains a vertex whose weight is less than or equal to a threshold. For example, 0.2 is chosen as the threshold.
In figure 1, there is a red vertex whose weight is less than 0.2. The bipartite graph can be separated into three subgraphs, and the red vertex is copied into each of the three subgraphs.
In figure 2, there is also a red vertex whose weight is less than 0.2. However, the red edge prevents the bipartite graph from being split into subgraphs.
My idea:
Copy the vertex (named lowVer, shown in red) whose weight is less than or equal to the threshold, and link each duplicate to one of the associated vertices (green edges). The associated vertices are the vertices connected to lowVer.
Disconnect the original edges from lowVer (yellow edges).
Judge whether the bipartite graph is separable using depth-first search.
Is there a better way?
If I understand correctly, what you want to know is whether a given vertex (the one below the threshold) is an articulation point. An articulation point is a vertex that, when removed from the graph, increases the number of connected components.
If I have formulated your problem correctly, there are many algorithms to find articulation points; see for example https://en.wikipedia.org/wiki/Biconnected_component#Other_algorithms or https://www.geeksforgeeks.org/articulation-points-or-cut-vertices-in-a-graph/
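If it helps, here is a rough sketch in Java of the standard linear-time DFS test; the adjacency-list representation, the class name and the assumption that the graph is connected are mine, not from the question:
import java.util.List;

class ArticulationCheck {
    private static int timer = 0;

    // Returns true if vertex "target" (the low-weight one) is an articulation
    // point, assuming the graph is connected and given as an adjacency list.
    static boolean isArticulationPoint(List<List<Integer>> adj, int target) {
        int n = adj.size();
        int[] disc = new int[n];
        int[] low = new int[n];
        boolean[] visited = new boolean[n];
        timer = 0;
        return dfs(adj, 0, -1, disc, low, visited, target);
    }

    private static boolean dfs(List<List<Integer>> adj, int u, int parent,
                               int[] disc, int[] low, boolean[] visited, int target) {
        visited[u] = true;
        disc[u] = low[u] = timer++;
        int children = 0;
        boolean targetIsCut = false;
        for (int v : adj.get(u)) {
            if (!visited[v]) {
                children++;
                targetIsCut |= dfs(adj, v, u, disc, low, visited, target);
                low[u] = Math.min(low[u], low[v]);
                // A non-root vertex is a cut vertex if some DFS child cannot
                // reach anything discovered before it without passing through it.
                if (parent != -1 && low[v] >= disc[u] && u == target) targetIsCut = true;
            } else if (v != parent) {
                low[u] = Math.min(low[u], disc[v]);
            }
        }
        // The DFS root is a cut vertex iff it has more than one DFS child.
        if (parent == -1 && children > 1 && u == target) targetIsCut = true;
        return targetIsCut;
    }
}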
There can be many ways to solve this.
Let's pick the node with weight 0.1 and treat it as the root of the graph.
[image: the graph redrawn as a tree rooted at the 0.1-weight node]
Now, if the leaf node has degree 1, it's separable;
else, it's not separable.
Let me know if I am missing something.
Good morning/afternoon/evening.
So our data structures course gave us an assignment to segment a grayscale image in Java using the following algorithm:
Input: A gray-scale image with P pixels and number R
Output: An image segmented into R regions
1. Map the image onto a primal weighted graph.
2. Find an MST of the graph.
3. Cut the MST at the R – 1 most costly edges.
4. Assign the average tree vertex weight to each vertex in each tree in the forest
5. Map the partition onto a segmentation image
The thing is, they just threw us into the dark. They gave us the jgraph package, which we had absolutely no experience with (we never studied it), practically saying "go teach yourselves". Nothing new there.
The way I'm going about doing this is by making a class for vertex objects that contains the coordinates of the pixel in addition to its value, so that I can add each one both to the graph and to a 2D array. Afterwards, I used the array to add weighted edges between adjacent vertices, because Java can't tell where in the graph a vertex actually is without edges.
Afterwards, I used Kruskal's packaged method for minimum spanning trees and an ArrayList to get around the protected status of edge weights in the tree, like so:
// Collect the MST edges with their weights so they can be sorted.
ArrayList<WeightedEdge> edgeList = new ArrayList<>(height * width * 3);
KruskalMinimumSpanningTree mst4 = new KruskalMinimumSpanningTree(map4);
Set<DefaultWeightedEdge> edges = mst4.getSpanningTree().getEdges();
for (DefaultWeightedEdge edge : edges) {
    edgeList.add(new WeightedEdge(edge, map4.getEdgeWeight(edge)));
}
edgeList.sort(null); // natural ordering: ascending by weight

// Remove the n (= R - 1) most costly edges, i.e. the last n after sorting.
for (int i = 0; i < n; i++) {
    map4.removeEdge(edgeList.get(edgeList.size() - 1 - i).getEdge());
}
So now that I've cut the (R-1) most costly edges in the graph, I should be left with a forest. And that's where I hit another dead end. How do I get the program to traverse each tree? The way I'm understanding this, I need a general traversal algorithm to visit every tree and assign the average value to each vertex. The problem? There isn't a general traversal algorithm in the package. And there isn't a way to identify individual trees either.
The idea is easy to understand and implement on paper, really. The problems only lie in actually coding all of this in java.
Sorry if this was messy or too long, but I'm just at my wit's end and physical limits. Thank you in advance.
I am a big fan of JGraphT, and honestly I think it is pretty good that you were given it for your assignment. It takes a bit of time to get started, but then it proves to be a very good tool. But you also need to understand the CS behind the implemented algorithms; using JGraphT without knowing the theory is somewhat difficult.
From your task assignment I don't really understand step 1 (building the primal weighted graph). The rest should work with JGraphT quite well.
You did step 2 with KruskalMinimumSpanningTree. Now you can sort the edges by weight and remove the R-1 heaviest edges from the graph.
I would, however, suggest that you first build a new graph representing the calculated MST, and then remove the R-1 heaviest edges from that graph, effectively turning it into a forest.
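A rough sketch of that idea, assuming the class names of recent JGraphT versions (org.jgrapht.graph.SimpleWeightedGraph, org.jgrapht.alg.spanning.KruskalMinimumSpanningTree); Pixel stands for whatever your own vertex class is, map4 is your original image graph, and r is the requested number of regions:
SimpleWeightedGraph<Pixel, DefaultWeightedEdge> forest =
        new SimpleWeightedGraph<>(DefaultWeightedEdge.class);
Set<DefaultWeightedEdge> mstEdges =
        new KruskalMinimumSpanningTree<>(map4).getSpanningTree().getEdges();
for (DefaultWeightedEdge e : mstEdges) {
    Pixel u = map4.getEdgeSource(e);
    Pixel v = map4.getEdgeTarget(e);
    forest.addVertex(u);
    forest.addVertex(v);
    DefaultWeightedEdge copy = forest.addEdge(u, v);
    forest.setEdgeWeight(copy, map4.getEdgeWeight(e));
}
// Cut the R - 1 heaviest edges so the MST copy falls apart into R trees.
List<DefaultWeightedEdge> byWeight = new ArrayList<>(forest.edgeSet());
byWeight.sort(Comparator.comparingDouble(forest::getEdgeWeight));
for (int i = 0; i < r - 1; i++) {
    forest.removeEdge(byWeight.get(byWeight.size() - 1 - i));
}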
How do I get the program to traverse each tree?
With the forest from the previous step you can use the ConnectivityInspector to get a list of sets of connected vertices. Each set will contain the vertices of one of the trees of the forest. Sets of vertices are easy to work with; you don't need any traversal, just iterate over each set.
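For example, a sketch of step 4 on top of that forest (again Pixel is a stand-in for your vertex class, and the getValue()/setValue() accessors for the grayscale intensity are my invention):
// Each connected set of the forest is one segment/region.
ConnectivityInspector<Pixel, DefaultWeightedEdge> inspector =
        new ConnectivityInspector<>(forest);
for (Set<Pixel> region : inspector.connectedSets()) {
    double sum = 0;
    for (Pixel p : region) {
        sum += p.getValue();
    }
    double average = sum / region.size();
    for (Pixel p : region) {
        p.setValue(average); // step 4: assign the region average to every vertex
    }
}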
I'm trying to implement Lee's visibility graph.
There may be n polygons, where each side of a polygon is an edge. Let's say there is a point p1 and a half-line r, parallel to the positive x-axis, starting at p1. I need to find the edges that are intersected by r and store them in sorted order.
An edge that is intersected first by r has higher priority; an edge that's closer also has higher priority, but being seen first takes precedence over distance.
E.g. p1 = (0, 1), and a polygon with the following vertices {(2, 4), (3, 6), (5, 20)}. The edges for this polygon should be sorted as [((2, 4), (5, 20)), ((2, 4), (3, 6)), ((3, 6), (5, 20))].
Hence, how can I sort these edges?
(if you go to the link and read that, I think you will have a better idea, sorry for my explanation).
My initial idea: sort them by distance and angle from p1 to the first vertex of the edge encountered by r. However, every vertex belongs to more than one edge (since each vertex/edge is part of a polygon), and I don't know how to order two edges that share the same first vertex.
Any ideas or hints would be much appreciated.
Just some references:
https://taipanrex.github.io/2016/10/19/Distance-Tables-Part-2-Lees-Visibility-Graph-Algorithm.html
And a book: Computational Geometry: Algorithms and Applications.
I found a way for those who are interested:
The sweep line rotates anti-clockwise and orders edges based on measurements taken when it encounters the first vertex of each edge: the angle between the initial position of the half-line (when it is parallel to the positive x-axis) and the encountered vertex, and the distance between p1 and the encountered vertex. The smaller the angle and the distance, the better; the angle has higher priority than the distance.
I also take a third measurement: since the half-line rotates, when it reaches a vertex it intersects it, so I take the angle between the half-line and the edge at that vertex. The larger this angle, the higher the priority, since it means the edge is closer. This angle only differentiates between edges whose first encountered vertex is the same, so this measurement has the lowest priority.
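To make that concrete, here is a rough Java sketch of such a comparator; the Point/Edge records and the helper names are all made up, and the tie-breaking just encodes the three measurements above as I understand them:
import java.util.Comparator;

record Point(double x, double y) {}
record Edge(Point a, Point b) {}

class SweepOrder {
    // Angle of p as seen from o, measured counter-clockwise from the
    // positive x-axis, in [0, 2*pi).
    static double angleFrom(Point o, Point p) {
        double a = Math.atan2(p.y() - o.y(), p.x() - o.x());
        return a < 0 ? a + 2 * Math.PI : a;
    }

    static double dist(Point o, Point p) {
        return Math.hypot(p.x() - o.x(), p.y() - o.y());
    }

    // Endpoint the rotating half-line reaches first (smaller angle, then smaller distance).
    static Point firstVertex(Point o, Edge e) {
        double aa = angleFrom(o, e.a()), ab = angleFrom(o, e.b());
        if (aa != ab) return aa < ab ? e.a() : e.b();
        return dist(o, e.a()) <= dist(o, e.b()) ? e.a() : e.b();
    }

    // Angle at the first vertex between the half-line from o and the edge itself.
    static double incidenceAngle(Point o, Edge e) {
        Point v = firstVertex(o, e);
        Point w = (v == e.a()) ? e.b() : e.a();
        double diff = Math.abs(angleFrom(v, w) - angleFrom(o, v));
        return Math.min(diff, 2 * Math.PI - diff);
    }

    static Comparator<Edge> sweepOrder(Point p1) {
        return Comparator
                .comparingDouble((Edge e) -> angleFrom(p1, firstVertex(p1, e)))
                .thenComparingDouble(e -> dist(p1, firstVertex(p1, e)))
                // Larger incidence angle means the edge is "closer", so reverse it.
                .thenComparing(Comparator.comparingDouble(
                        (Edge e) -> incidenceAngle(p1, e)).reversed());
    }
}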
Hopefully, this will help someone.
I'm developing a simulation in Java where objects move around in a 2D grid. Each cell in the grid can only be occupied by one object, and the objects move from one cell to another.
This is more theoretical than Java specific, but does anyone have ideas as to how I should approach collision handling with this grid? Are there any algorithms that people have used to handle collisions in a grid-like world?
Please note, I am not talking about collision detection as this is trivial since the objects move from one cell to another. I'm talking about collision handling, which can get extremely complex.
Ex: Object A wishes to move into the same location as Object B, and Object C wishes to move into Object B's current location. Since each cell can only contain one object, if Object A is able to move into its desired location, this will cause Object B to remain stationary and thus cause Object C to also remain stationary.
One can imagine that this could create an even longer chain of collisions that need to be handled.
Are there any algorithms or methods that people have used that can help with this? It is almost impossible to search for this problem without the search results being saturated with collision detection algorithms.
Edit: All objects move at the exact same time. I want to limit the number of objects that must remain stationary, so I prefer to handle objects with longer collision chains first (like Object B in this example).
According to the discussion with the OP, the objects should be moved in a way that maximises the total number of moved objects.
Objects together with their references form a forest of trees. If an object A references a cell occupied by an object B, we may say that B is the parent of A in the tree. So objects correspond to nodes and references correspond to edges in the trees. There will be an empty cell at the root of each tree (this is the one case where an empty cell corresponds to a node). Trees do not share nodes.
Before moving forward we must admit that there may be situations with cycles. Like this:
[A] -> [B]
 ^      |
 |      v        or    [A] <=> [B]
[D] <- [C]
Such cycles can be easily identified. Some objects may reference cycle objects directly or indirectly, also forming a tree; a cycle can occur only at the root of a tree.
Let's say we have built all the trees. Now the question is: how can we resolve collisions, keeping in mind that we want to maximise the number of moved nodes? Collisions correspond to a node having more than one child.
In cycle-rooted trees we have no option other than moving only the cycle objects while leaving every other object in the tree stationary. It is clear that we cannot do otherwise.
In an empty-cell-rooted tree we first have to decide which child of the root will be placed on the root's empty cell. Then we will have a new empty cell where we have to make the same decision, and so on until we reach a leaf node. In order to maximise the number of moved nodes we have to take the longest chain from the root to some leaf and move its nodes; all other nodes do not move. This can easily be done by traversing the tree with a recursive function and computing the following function f for each node: f(leaf) = 0 and f(node) = max(f(child_1), f(child_2), ...) + 1. The aforementioned decision is then to choose the child node with the biggest f.
    *
   /|\
  A B C      The optimal move path is * <- C <- E <- H
 /   /|\
D   E F G
   /
  H
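A small Java sketch of that recursion, with a made-up Node class (the root node represents the empty cell):
import java.util.ArrayList;
import java.util.List;

class Node {
    List<Node> children = new ArrayList<>();
    int f; // filled in by computeF: length of the longest chain below this node
}

class LongestChain {
    // Post-order DFS: f(leaf) = 0, f(node) = max(f(children)) + 1.
    static int computeF(Node n) {
        int best = -1;
        for (Node c : n.children) {
            best = Math.max(best, computeF(c));
        }
        n.f = best + 1;
        return n.f;
    }

    // Follow the child with the largest f from the root (the empty cell);
    // every node on this path moves, all other nodes stay put.
    static List<Node> longestPath(Node root) {
        List<Node> path = new ArrayList<>();
        for (Node cur = root; cur != null; ) {
            path.add(cur);
            Node next = null;
            for (Node c : cur.children) {
                if (next == null || c.f > next.f) next = c;
            }
            cur = next;
        }
        return path;
    }
}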
Checking "ahead of time"/"next step" will lead to hard calculations and even explosions.
Instead, you can (in-parallel) use checking "which particle can move to this spot?" on each cell. For this, you need velocity map having a velocity vector in each cell. Then following that vector backwards, you spot a cell. If there is a particle, get it to the cell.
This way no collisions occur because only 1 action per cell is done. Just get closest cell from the negative velocity vector's end point.
This is somewhat tricky to compute since velocity may not land exactly on the center of cell. Needs probability(if it is very close to corners than center) for equality of motion / macro effects too.
So there will be a race condition on only the index of cells, and that will be tolerated by randomness of movement(or particle picking depending on the distance from center of cell)
On quantum physics, particles can leap otherside of a wall, stand still(even if it shouldn't) and do some other things that are not natural for classical physics. So if resolution is high, and velocity is not higher than map size, it should look natural, especially with the randomness when multiple cells compete to get a particle from another cell.
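Here is a minimal Java sketch of that per-cell gather step, under the assumption that a particle moves at most a few cells per step and that a contested source particle simply goes to the first destination that claims it; all names (vx, vy, occupied) are invented for the sketch:
class GridGather {
    double[][] vx, vy;     // per-cell velocity map
    boolean[][] occupied;  // per-cell particle presence

    void step() {
        int h = occupied.length, w = occupied[0].length;
        boolean[][] taken = new boolean[h][w]; // source particle already claimed?
        boolean[][] next = new boolean[h][w];

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (occupied[y][x]) continue;            // destination must be empty
                int sx = x - (int) Math.round(vx[y][x]); // follow this cell's velocity
                int sy = y - (int) Math.round(vy[y][x]); // vector backwards
                if (sy >= 0 && sy < h && sx >= 0 && sx < w
                        && occupied[sy][sx] && !taken[sy][sx]) {
                    taken[sy][sx] = true;                // pull that particle here
                    next[y][x] = true;
                }
            }
        }
        // Particles that nobody pulled stay where they are.
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (occupied[y][x] && !taken[y][x]) next[y][x] = true;

        occupied = next;
    }
}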
Multiple passes:
calculate which cells need to gather from which cells,
calculate probabilities and then the winning cells,
winning cells gather their particles from the source cells in parallel,
(not recommended) if there are still unpicked particles with a velocity, doing a per-particle computation to move them (if their target cell is empty) should reduce the number of stationary particles even further,
diffuse the particle velocities over the cells a few times (using a 2D stencil) to be used in the next iteration (this handles the complex collision calculation easily and in an embarrassingly parallel way, good for multithreading/GPGPU; if particles can only go to their closest neighbours, it also helps to decide which one, even without any priority or probability, since each cell bends its neighbours' velocities through the velocity diffusion).
Check out the Gauss-Seidel method (https://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method) for solving many linear equations.
Computation is done on cells instead of particles, so the map borders are implicitly bullet-proof and the calculation can be distributed equally among all cores.
Example:
particle A going down
particle B going right
they are on a collision course
solve for the velocity map's equilibrium state (Gauss-Seidel)
now particle A's cell has a velocity pointing down+right
same for B
as if they had collided, but they are still in their cells.
Moving them with their cells' velocities will make them look like they collided.
I need to generate a Voronoi diagram around a concave (non-convex) inside polygon. I have looked for methods online, but I haven't been able to figure out how to do this. Basically, I generate the convex hull of the points, calculate the dual points, and build an edge network between these points. However, where the diagram meets the edges of the inside polygon, it has to follow the shape's edges, just as it follows the convex hull. So, by doing this and clipping all the edges at the borders, I should end up with a Voronoi diagram that has clean edges along the borders of the inside polygon and no cells that lie on both sides of it.
Let me give you an example:
The problem with this is that the cells cross the inside polygon edges and there is no visual relation between the cell structure and the polygon shape.
Does anybody know how to approach this problem? Is there some algorithm that already does this or gets close to what I'm trying to achieve?
Thank you so much for any kind of input!
You might be able to build a conforming Delaunay triangulation (i.e. a triangulation that includes the polygon edges as constraints) and then form the Voronoi diagram as its dual. A conforming triangulation ensures that no edge in the triangulation intersects a constraint edge - every constraint edge appears as an edge of the triangulation.
Have a look at the Triangle package here, as a reference for this type of approach. In my experience it's a fast and robust library, although it's written in C, not Java.
I'm not sure I understand at this stage how the points (the Voronoi centres) are generated in your diagram. If you're actually looking to do mesh generation in a polygonal domain, then there may be other approaches to consider, although the Triangle package supports (conforming) Delaunay refinement mesh generation.
EDIT: It looks like you can also directly form the Voronoi diagram for general line segments, check out the VRONI library, here. Addressing your comment - I'm not sure that you can always expect to have a uniform Voronoi diagram that also conforms to a general polygonal boundary. I would expect that the shape of the polygonal boundary would impose a maximum dimension on the boundary Voronoi cells.
Hope this helps.
Clearly you need to generate your Voronoi diagram to the constraints of the greater polygon. Although you refer to it as a polygon, I notice that your example diagram has spline-based edges. Let's forget that for now.
What you want to do is to ensure that you start out with the containing polygon (whether generated by you or from another source) having edges of fairly equal length; a variance factor would make this look more natural. I would probably go for a variance of 10-20%.
Now that you have your containing polygon bounded by line segments of approximately equal length, you have a basis from which to begin generating your Voronoi diagram. For each edge on your container (a rough sketch of these steps follows the list):
Determine the edge normal (the perpendicular line jutting inward from the centre of that segment).
Use the edge normal as a sliding scale on which to place a new Voronoi node centre. The distance away from the edge itself would be determined by what you want your average Voronoi cell "diameter" to be, if they were all taken as circles. In your example that looks like maybe 30 pixels (or whatever the equivalent in your world units would be). Again, you should apply a variance factor to this so that not every cell centre is placed equidistant from its source edge.
Generate the Voronoi cell for your newly placed centre.
Store your Voronoi cell source point in a list.
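Here is the promised sketch of this per-edge seeding, in Java; Point is a made-up record, the polygon is assumed to be listed counter-clockwise (so the inward normal of an edge is its direction rotated 90 degrees counter-clockwise), meanOffset is the average cell "radius" you are aiming for, and variance is the 10-20% factor mentioned earlier:
record Point(double x, double y) {}

static List<Point> boundarySeeds(List<Point> polygon, double meanOffset,
                                 double variance, Random rng) {
    List<Point> seeds = new ArrayList<>();
    for (int i = 0; i < polygon.size(); i++) {
        Point a = polygon.get(i);
        Point b = polygon.get((i + 1) % polygon.size());
        double dx = b.x() - a.x(), dy = b.y() - a.y();
        double len = Math.hypot(dx, dy);
        double nx = -dy / len, ny = dx / len;               // inward unit normal (CCW polygon)
        double midX = (a.x() + b.x()) / 2, midY = (a.y() + b.y()) / 2;
        double d = meanOffset * (1 + variance * (2 * rng.nextDouble() - 1));
        seeds.add(new Point(midX + d * nx, midY + d * ny)); // Voronoi seed for this edge
    }
    return seeds;
}
Each seed in the returned list is then fed to your Voronoi generator, and the list itself is the "boundary cell source point" list used below.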
As you incrementally generate each point, you should begin to see that the algorithm subdivides each convex "constituent area" of your concave container in a radial fashion.
You may be wondering what the list is for. Well, obviously, you're not done yet; you've only generated a fraction of the total Voronoi tessellation you want. Once you have created these "boundary" cells of your concave space, you don't want new cells to be generated closer to the boundary than the boundary cells already are; you only want them inside that area. By maintaining a list of the boundary cell source points, you can ensure that any further points you create are inside that area. It's a little bit like taking an internal Minkowski sum to ensure you have a buffer zone. Now you can randomise the rest of your cells in this derived concave space, to completion.
(Caveat emptor: you will have to be careful with this previous step. If any "passage" areas are too narrow, then the boundaries of this derived space will overlap, you will have a non-simple polygon, and you may find yourself placing points in the wrong places in spite of your efforts. The solution is to ensure that your maximum placement distance from the edges is never more than half of your minimum passage width, or to use some other geometric means, Minkowski summation being one possibility, to ensure you do not wind up with a degenerate derived polygon. It is quite possible that you will end up with a multipolygon, i.e. fragments.)
I've not applied this method myself yet, but although there will certainly be bugs to work out, I think the general idea will get you started in the right direction.
Look for a paper called:
"Efficient computation of continuous skeletons" by David G. Kirkpatrick, written in 1979.
Here's the abstract:
An O(n lg n) algorithm is presented for the construction of skeletons of arbitrary n-line polygonal figures. This algorithm is based on an O(n lg n) algorithm for the construction of generalized Voronoi diagrams (our generalization replaces point sets by sets of line segments constrained to intersect only at end points). The generalized Voronoi diagram algorithm employs a linear time algorithm for the merging of two arbitrary (standard) Voronoi diagrams.
"Sets of line segments is the constrained to intersect only at end points" is the concave polygon you describe.
I have a problem that I am really struggling with. I have a set of points with weighted edges, and I need to create a minimum spanning tree to find the cheapest set of edges needed to connect them. I need to do it in Java. Right now I have it creating an adjacency matrix, and that's the point where I'm stuck. I really have no idea where to go next. Any help would be awesome.
Take a look at Kruskal's and Prim's algorithms.
I really like Prim because it is very simple to implement and understand: http://en.wikipedia.org/wiki/Prim%27s_algorithm
About your question of what to do next (Prim's algorithm, summarised):
Choose one random vertex, take its cheapest incident edge, and insert it into your MST.
While your MST does not yet contain all vertices:
Choose the cheapest edge connecting a vertex in your MST to a vertex outside it, and insert it into your MST.
If you are trying to find a minimum spanning tree, you will always have the same number of edges; the total weight will just be different. The method that I recommend using to solve this problem is Prim's algorithm. It works best when you have distinct edge weights. Even though someone else has discussed it as a possibility, I will explain it in full here to solve the minimum spanning tree problem for an undirected, connected graph.
The first step to Prim's is starting with any vertex V and add it to a (currently) empty set of vertices called "Discovered". From there, you will look at all of the edges that are adjacent to V (using your adjacency matrix) and are connected to vertices that are not in "Discovered" (using your Discovered set), and add them to a minimum heap structure. The heap will allow you to take the minimum edge and add it to a new tree structure. The other end of that edge is your next new starting vertex. Repeat this process until you have your minimum spanning tree.
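Since you already have an adjacency matrix, here is a minimal sketch of Prim's algorithm with a min-heap on top of it; graph[i][j] is assumed to hold the edge weight, or Double.POSITIVE_INFINITY when there is no edge, and the graph is assumed connected:
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

class Prim {
    // Returns the MST edges as {u, v} pairs.
    static List<int[]> minimumSpanningTree(double[][] graph) {
        int n = graph.length;
        boolean[] discovered = new boolean[n];
        List<int[]> mstEdges = new ArrayList<>();
        // Heap entries are {weight, from, to}, ordered by weight.
        PriorityQueue<double[]> heap =
                new PriorityQueue<>((a, b) -> Double.compare(a[0], b[0]));

        discovered[0] = true;                        // start from vertex 0
        for (int v = 1; v < n; v++)
            if (graph[0][v] < Double.POSITIVE_INFINITY)
                heap.add(new double[]{graph[0][v], 0, v});

        while (mstEdges.size() < n - 1 && !heap.isEmpty()) {
            double[] e = heap.poll();
            int to = (int) e[2];
            if (discovered[to]) continue;            // endpoint already in the tree
            discovered[to] = true;
            mstEdges.add(new int[]{(int) e[1], to});
            for (int v = 0; v < n; v++)
                if (!discovered[v] && graph[to][v] < Double.POSITIVE_INFINITY)
                    heap.add(new double[]{graph[to][v], to, v});
        }
        return mstEdges;
    }
}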