We are given the number of vertices of a directed unweighted graph and a list of pairs of vertices like (a, b). We want to build the graph so that there is a directed path between the two vertices of each given pair. The problem is to find the minimum number of edges that satisfies the conditions. Note that for every pair like (a, b) we have the conditions below:
1- (a, b) is different from (b, a)
2- if (a, b) is in the given list, then there should be a directed path from a to b in the graph. This path may be multiple hops.
Here is my attempt: since I found it hard to come up with an algorithm that determines which edges should be in the graph, I decided to add an edge between the two vertices of each given pair (i.e. if the pairs (a, b) and (c, d) are given, I draw two edges: one from a to b and the other from c to d) and then delete the edges whose removal does not affect the required connectivity between vertices. However, I still could not find a way to identify these edges.
We are given the number of vertices of a directed unweighted graph
I will assume that this graph has some pre-existing edges, to which you are going to add some and perhaps remove others. You do not say this, but the question makes no sense unless there are such pre-existing edges.
Here is the algorithm (pseudocode):
ADD edges between given node pairs
FIND maximal cliques (https://en.wikipedia.org/wiki/Clique_problem)
LOOP over cliques
    FIND minimum spanning tree in clique (https://en.wikipedia.org/wiki/Minimum_spanning_tree)
    LOOP over edges in clique
        IF edge is NOT in spanning tree
            REMOVE edge
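To illustrate the pruning idea from the question (deleting edges whose removal does not break the required connectivity), here is a rough Java sketch; it is only a greedy heuristic, it does not guarantee the true minimum, and all the names are my own:

import java.util.*;

public class RedundantEdgePruning {
    // Returns true if b is reachable from a in the adjacency-set graph.
    static boolean reachable(List<Set<Integer>> adj, int a, int b) {
        boolean[] seen = new boolean[adj.size()];
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(a);
        seen[a] = true;
        while (!stack.isEmpty()) {
            int v = stack.pop();
            if (v == b) return true;
            for (int w : adj.get(v)) {
                if (!seen[w]) { seen[w] = true; stack.push(w); }
            }
        }
        return false;
    }

    // pairs[i] = {a, b}: a directed path from a to b must exist.
    static List<Set<Integer>> prune(int n, int[][] pairs) {
        List<Set<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new HashSet<>());
        for (int[] p : pairs) adj.get(p[0]).add(p[1]);      // one edge per pair

        // Greedily try to drop each edge; keep the removal only if
        // every required pair is still connected.
        for (int[] p : pairs) {
            adj.get(p[0]).remove(p[1]);
            boolean allOk = true;
            for (int[] q : pairs) {
                if (!reachable(adj, q[0], q[1])) { allOk = false; break; }
            }
            if (!allOk) adj.get(p[0]).add(p[1]);             // edge was needed, restore it
        }
        return adj;
    }
}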
I'm trying to figure out how to calculate the shortest path for a graph with weighted vertices. Classical algorithms like Dijkstra and Floyd–Warshall normally work with weighted edges, and I'm not seeing how to apply them to my case (weighted vertices):
One of the ideas I had was to convert the graph to the more classical view with weighted edges. This is what I received:
Here we have mono and bi-directional weighted edges, but I'm still not sure which algorithm would handle this in order to find the shortest path.
You can certainly do this by transforming the graph. The simplest way is to transform each edge into a vertex, and connect the new vertices together with edges that have the same cost as the vertex that used to join them.
But you don't really need to bother with any of that...
Dijkstra's algorithm is very easy to adapt to vertex costs without using any such transformation. When you traverse an edge, instead of new_vertex_cost = old_vertex_cost + edge_weight, you just do new_vertex_cost = old_vertex_cost + new_vertex_weight.
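A minimal sketch of that adaptation in Java, assuming non-negative vertex weights (the class and method names are my own):

import java.util.*;

public class VertexWeightedDijkstra {
    // dist[v] = cheapest total vertex weight of a path from source to v,
    // counting the weight of every vertex on the path (including source and v).
    static long[] dijkstra(List<List<Integer>> adj, int[] vertexWeight, int source) {
        int n = adj.size();
        long[] dist = new long[n];
        Arrays.fill(dist, Long.MAX_VALUE);
        dist[source] = vertexWeight[source];                 // pay for the source itself
        PriorityQueue<long[]> pq = new PriorityQueue<>((a, b) -> Long.compare(a[0], b[0]));
        pq.add(new long[]{dist[source], source});
        while (!pq.isEmpty()) {
            long[] top = pq.poll();
            long d = top[0];
            int u = (int) top[1];
            if (d > dist[u]) continue;                       // stale queue entry
            for (int v : adj.get(u)) {
                // Instead of d + edge_weight, add the weight of the vertex we enter.
                long nd = d + vertexWeight[v];
                if (nd < dist[v]) {
                    dist[v] = nd;
                    pq.add(new long[]{nd, v});
                }
            }
        }
        return dist;
    }
}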
You can reduce the problem to the classical shortest path problem and use Dijkstra, Bellman-Ford, or Floyd–Warshall as it suits the purpose. For the sake of simplicity, in what follows, I assume all weights are non-negative. I consider such an assumption reasonable since the question mentions using Dijkstra's algorithm to solve the problem. In the end, this assumption can be removed with care.
Consider the most general form of the problem: assume G = <V, E> is a directed weighted graph with weights on both edges and vertices. Construct a graph H = <V', E'>, with weights only on edges, as follows: for any node v in G, create two nodes v_in and v_out in H, and add an edge (v_in -> v_out) with weight equal to the weight of node v in G. Also, for any edge (u -> w) in G, add an edge (u_out -> w_in) in H (the new edge carries the same weight as the original edge).
To summarize, for any vertex in the original graph, add two vertices in H, one dedicated to the ingoing edges and the other dedicated to the outgoing edges (also, connect the two new correlated nodes in H with an edge weighted by their corresponding vertex in G).
Now, you have a directed weighted graph H with no weight on vertices, but only on edges. It is easy to prove that the shortest path between (s_in, t_out) in H is the same as the shortest path between (s,t) in the original graph G.
The proof is based on the fact that any such path goes through the edge (v_in, v_out) in H if and only if the corresponding path in G goes through node v.
As far as the analysis goes, we have |V'| = 2|V|, and |E'| = |E| + |V|. So the reduction does not affect the asymptotic behavior of the employed algorithm for finding shortest paths.
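A minimal Java sketch of the construction of H (the index convention v_in = 2*v, v_out = 2*v + 1 and all the names are my own):

import java.util.*;

public class SplitVertexTransform {
    static class Edge {
        final int to;
        final long w;
        Edge(int to, long w) { this.to = to; this.w = w; }
    }

    // Node v of G becomes v_in = 2*v and v_out = 2*v + 1 in H.
    // edgesG.get(i) = {u, w} is a directed edge u -> w of G with weight edgeWeightG[i].
    // A shortest path from s to t in G is then a shortest path from 2*s to 2*t + 1 in H.
    static List<List<Edge>> buildH(int n, List<int[]> edgesG, long[] edgeWeightG, long[] vertexWeightG) {
        List<List<Edge>> adjH = new ArrayList<>();
        for (int i = 0; i < 2 * n; i++) adjH.add(new ArrayList<>());
        // Internal edge v_in -> v_out carries the weight of vertex v.
        for (int v = 0; v < n; v++) {
            adjH.get(2 * v).add(new Edge(2 * v + 1, vertexWeightG[v]));
        }
        // Original edge u -> w becomes u_out -> w_in with the same weight.
        for (int i = 0; i < edgesG.size(); i++) {
            int u = edgesG.get(i)[0], w = edgesG.get(i)[1];
            adjH.get(2 * u + 1).add(new Edge(2 * w, edgeWeightG[i]));
        }
        return adjH;
    }
}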
I recently created an unweighted bidirectional graph by using an adjacency list from a HashMap in Java. I have randomly created connections between nodes and now I am unsure of how to check if there's a single path that passes through every node once and exactly once.
What is the best way / algorithm to check if a path exists between all nodes?
//Sample
A -> B
B -> A -> C -> D
C -> B -> E
D -> B
E -> C -> G
F -> G
G -> E -> F
The sort of path you’re asking for is called a Hamiltonian path and unfortunately there are no known algorithms for this problem that run efficiently on all inputs (the problem is NP-complete). You could solve this problem by brute force (list all possible paths and see if any of them go through all the nodes once and exactly once). There’s also a famous O(n^2 · 2^n)-time dynamic programming algorithm for this problem.
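If you do want to try the dynamic programming route on small graphs, here is a rough Java sketch of the subset DP (the names are my own, and it only decides existence):

public class HamiltonianPathCheck {
    // adj[u][v] = true if there is an edge between u and v.
    // Returns true if some path visits every vertex exactly once.
    // Runs in O(n^2 * 2^n) time and O(n * 2^n) space, so it is only usable for small n.
    static boolean hasHamiltonianPath(boolean[][] adj) {
        int n = adj.length;
        if (n == 0) return false;
        // dp[mask][v] = true if the vertices in 'mask' can be ordered into a
        // simple path that ends at v.
        boolean[][] dp = new boolean[1 << n][n];
        for (int v = 0; v < n; v++) dp[1 << v][v] = true;    // single-vertex paths
        for (int mask = 1; mask < (1 << n); mask++) {
            for (int v = 0; v < n; v++) {
                if (!dp[mask][v]) continue;
                for (int u = 0; u < n; u++) {
                    if ((mask & (1 << u)) == 0 && adj[v][u]) {
                        dp[mask | (1 << u)][u] = true;       // extend the path by u
                    }
                }
            }
        }
        int full = (1 << n) - 1;
        for (int v = 0; v < n; v++) if (dp[full][v]) return true;
        return false;
    }
}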
You can use DFS or BFS to traverse the graph and check whether each node is visited at least once.
If you only want to know whether the graph is connected or not then you can do the following:
Use BFS/DFS to map the graph starting from any of the nodes. Whenever you encounter a new node (including the first one) increase a counter by 1. This will give you the number of connected nodes to your starting node.
Compare that counter with the size of the map (number of keys). If the number of keys is greater than the number of nodes traversed then the graph is disconnected, if the number is equal then it is connected.
It's not really clear what your data structure is because you're talking about a HashMap, but showing something else. Generally, BFS/DFS run in O(|V| + |E|) (where |V| is the number of vertices and |E| is the number of edges), and size is O(1) for a HashMap.
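Assuming the structure really is a HashMap<Character, List<Character>> with every node present as a key, a minimal sketch of the counting check could look like this (the names are my own):

import java.util.*;

public class ConnectivityCheck {
    static boolean isConnected(Map<Character, List<Character>> graph) {
        if (graph.isEmpty()) return true;
        Set<Character> visited = new HashSet<>();
        Deque<Character> queue = new ArrayDeque<>();
        Character start = graph.keySet().iterator().next();  // start from any node
        queue.add(start);
        visited.add(start);
        while (!queue.isEmpty()) {
            char node = queue.poll();
            for (char next : graph.getOrDefault(node, Collections.emptyList())) {
                if (visited.add(next)) queue.add(next);       // count each new node once
            }
        }
        // Connected iff the BFS reached as many nodes as there are keys.
        return visited.size() == graph.size();
    }
}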
If you want to know which nodes are not connected to your starting one, then
Store or mark all the nodes you traverse. This will give you all the nodes connected to the node from which you initiated the traversing.
Iterate over all the nodes and find those which are not contained in your previous step.
contains is also O(1) for a HashMap, though you would have to run it |V|-1 times.
There's a similar (more theory oriented) question on Computer Science: How to check whether a graph is connected in polynomial time?
My tree is represented by its edges and the root node. The edge list is undirected.
char[][] edges = new char[][]{
    new char[]{'D', 'B'},
    new char[]{'A', 'C'},
    new char[]{'B', 'A'}
};
char root = 'A';
The tree is
  A
 / \
B   C
|
D
How do I do depth first traversal on this tree? What is the time complexity?
I know time complexity of depth first traversal on linked nodes is O(n). But if the tree is represented by edges, I feel the time complexity is O(n^2). Am I wrong?
Giving code is appreciated, although I know it looks like a homework assignment.
The general template behind DFS looks something like this:
function DFS(node) {
    if (!node.visited) {
        node.visited = true;
        for (each edge {node, v}) {
            DFS(v);
        }
    }
}
If you have your edges represented as a list of all the edges in the graph, then you could implement the for loop by iterating across all the edges in the graph and, every time you find one with the current node as its source, following the edge to its endpoint and running a DFS from there. If you do this, then you'll do O(m) work per node in the graph (here, m is the number of edges), so the runtime will be O(mn), since you'll do this at most once per node in the graph. In a tree, the number of edges is always O(n), so for a tree the runtime is O(n2).
That said, if you have a tree and there are only n edges, you can speed this up in a bunch of ways. First, you could consider doing an O(n log n) preprocessing step to sort the array of edges. Then, you can find all the edges leaving a given node by doing a binary search to find the first edge leaving the node, then iterating across the edges starting there to find just the edges leaving the node. This improves the runtime quite a bit: you do O(log n) work per node for the binary search, and then every edge gets visited only once. This means that the runtime is O(n log n). Since you've mentioned that the edges are undirected, you'll actually need to create two different copies of the edges array - one that's the original one, and one with the edges reversed - and should sort each one independently. The fact that DFS marks visited nodes along the way means that you don't need to do any extra bookkeeping here to figure out which direction you should go at each step, and this doesn't change the overall time complexity, though it does increase the space usage.
Alternatively, you could use a hashing-based solution. Before doing the DFS, iterate across the edges and convert them into a hash table whose keys are the nodes and whose values are lists of the edges leaving that node. This will take expected time O(n). You can then implement the "for each edge" step quite efficiently by just doing a hash table lookup to find the edges in question. This reduces the time to (expected) O(n), though the space usage goes up to O(n) as well. Since your edges are undirected, as you populate the table, just be sure to insert the edge in each direction.
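As a rough illustration of the hashing approach, here is a sketch that builds a HashMap from the edge array in the question and then runs the DFS (the method names are my own):

import java.util.*;

public class EdgeListDfs {
    // Builds an adjacency map from an undirected edge list, then runs DFS from the root.
    // Expected O(n) time overall for a tree with n nodes.
    static List<Character> dfsOrder(char[][] edges, char root) {
        Map<Character, List<Character>> adj = new HashMap<>();
        for (char[] e : edges) {
            // Insert the edge in both directions since the list is undirected.
            adj.computeIfAbsent(e[0], k -> new ArrayList<>()).add(e[1]);
            adj.computeIfAbsent(e[1], k -> new ArrayList<>()).add(e[0]);
        }
        List<Character> order = new ArrayList<>();
        Set<Character> visited = new HashSet<>();
        dfs(root, adj, visited, order);
        return order;
    }

    private static void dfs(char node, Map<Character, List<Character>> adj,
                            Set<Character> visited, List<Character> order) {
        if (!visited.add(node)) return;        // already seen
        order.add(node);
        for (char next : adj.getOrDefault(node, Collections.emptyList())) {
            dfs(next, adj, visited, order);
        }
    }
}

With the edges and root from the question this visits the nodes in the order A, C, B, D.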
I want to be able to generate random, undirected, and connected graphs in Java. In addition, I want to be able to control the maximum number of vertices in the graph. I am not sure what would be the best way to approach this problem, but here are a few I can think of:
(1) Generate a number between 0 and n and let that be the number of vertices. Then, somehow randomly link vertices together (maybe generate a random number per vertex and let that be the number of edges coming out of said vertex). Traverse the graph starting from an arbitrary vertex (say with Breadth-First-Search) and let our random graph G be all the visited nodes (this way, we make sure that G is connected).
(2) Generate a random square matrix (of 0's and 1's) with side length between 0 and n (somehow). This would be the adjacency matrix for our graph (the diagonal of the matrix should then either be all 1's or all 0's). Make a data structure from the graph and traverse the graph from any node to get a connected list of nodes and call that the graph G.
Any other way to generate a sufficiently random graph is welcomed. Note: I do not need a purely random graph, i.e., the graph you generate doesn't have to have any special mathematical properties (like uniformity of some sort). I simply need lots and lots of graphs for testing purposes of something else.
Here is the Java Node class I am using:
public class Node<T> {
    T data;
    ArrayList<Node> children = new ArrayList<Node>();
    ...
}
Here is the Graph class I am using (you can tell why I am only interested in connected graphs at the moment):
public class Graph {
    Node mainNode;
    ArrayList<Node> V = new ArrayList<Node>();

    public Graph(Node node) {
        mainNode = node;
    }
    ...
}
As an example, this is how I make graphs for testing purposes right now:
//The following makes a "kite" graph G (with "a" as the main node).
/* a-b
|/|
c-d
*/
Node<String> a = new Node("a");
Node<String> b = new Node("b");
Node<String> c = new Node("c");
Node<String> d = new Node("d");
a.addChild(b);
a.addChild(c);
b.addChild(a);
b.addChild(c);
b.addChild(d);
c.addChild(a);
c.addChild(b);
c.addChild(d);
d.addChild(c);
d.addChild(b);
Graph G1= new Graph(a);
Whatever you want to do with your graph, I guess its density is also an important parameter. Otherwise, you'd just generate a set of small cliques (complete graphs) using random sizes, and then connect them randomly.
If I'm correct, I'd advise you to use the Erdős-Rényi model: it's simple, not far from what you originally proposed, and allows you to control the graph density (so, basically: the number of links).
Here's a short description of this model:
Define a probability value p (the higher p, the denser the graph: 0 = no link, 1 = fully connected graph);
Create your n nodes (as objects, as an adjacency matrix, or anything that suits you);
Each pair of nodes is connected with an (independent) probability p. So, you have to decide on the existence of a link between them using this probability p. For example, you could randomly draw a value q between 0 and 1 and create the link iff q < p (see the short sketch after this list). Then do the same thing for each possible pair of nodes in the graph.
With this model, if your p is large enough, then it's highly probable your graph is connected (cf. the Wikipedia reference for details). In any case, if you have several components, you can also force its connectedness by creating links between nodes of distinct components. First, you have to identify each component by performing breadth-first searches (one for each component). Then, you select pairs of nodes in two distinct components, create a link between them and consider both components as merged. You repeat this process until you've got a single component remaining.
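As a rough illustration of the edge-creation step only (forcing connectedness afterwards is the part described above), here is a short Java sketch on an adjacency matrix; the class name is my own:

import java.util.*;

public class ErdosRenyiGenerator {
    // Generates a G(n, p) adjacency matrix: each undirected pair {i, j}
    // gets an edge independently with probability p.
    static boolean[][] generate(int n, double p, Random rng) {
        boolean[][] adj = new boolean[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                if (rng.nextDouble() < p) {       // draw q in [0, 1) and link iff q < p
                    adj[i][j] = true;
                    adj[j][i] = true;
                }
            }
        }
        return adj;
    }
}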
The only tricky part is ensuring that the final graph is connected. To do that, you can use a disjoint set data structure. Keep track of the number of components, initially n. Repeatedly pick pairs of random vertices u and v, adding the edge (u, v) to the graph and to the disjoint set structure, and decrementing the component count when that structure tells you u and v belonged to different components. Stop when the component count reaches 1. (Note that using an adjacency matrix simplifies managing the case where the edge (u, v) is already present in the graph: in this case, adj[u][v] will be set to 1 a second time, which, as desired, has no effect.)
If you find this creates graphs that are too dense (or too sparse), then you can use another random number to add edges only k% of the time when the endpoints are already part of the same component (or when they are part of different components), for some k.
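A compact sketch of that loop, using a path-compressed disjoint set (union by rank omitted for brevity; all names are my own):

import java.util.*;

public class RandomConnectedGraph {
    static int[] parent;

    static int find(int x) {                       // path-compressed find
        return parent[x] == x ? x : (parent[x] = find(parent[x]));
    }

    // Adds random edges until the graph has a single connected component.
    static boolean[][] generate(int n, Random rng) {
        boolean[][] adj = new boolean[n][n];
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        int components = n;
        while (components > 1) {
            int u = rng.nextInt(n), v = rng.nextInt(n);
            if (u == v) continue;
            adj[u][v] = true;                      // re-setting an existing edge is harmless
            adj[v][u] = true;
            int ru = find(u), rv = find(v);
            if (ru != rv) {                        // u and v were in different components
                parent[ru] = rv;
                components--;
            }
        }
        return adj;
    }
}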
The following paper proposes an algorithm that uniformly samples connected random graphs with prescribed degree sequence, with an efficient implementation. It is available in several libraries, like Networkit or igraph.
Fast generation of random connected graphs with prescribed degrees.
Fabien Viger, Matthieu Latapy
Be careful when you make simulations on random graphs: if they are not sampled uniformly, then they may have hidden properties that impact simulations; alternatively, uniformly sampled graphs may be very different from the ones your code will meet in practice...
I need to make an algorithm that verifies whether there is a road from node x to node y in a graph. The edges in the graph have a series of rights attached to them (like r, w, e, etc.). My algorithm needs to have O(|E| + |V|) complexity. I can only go through nodes whose edge with the node before them has a certain set of rights, given as a parameter.
For example, if I have a set of rights r, w, e, g and I distribute these rights randomly on the edges, and I give the set of rights e, g as a parameter to my search method, I can only go through nodes whose edges have the rights e, g.
How can I do this in O(|E| + |V|) time complexity if DFS, if I recall correctly, already has O(|E| + |V|) time complexity, and I also need to check whether the edges have the desired set of rights, which I think adds to the complexity?
You need to apply breadth-first search (unlike DFS, it will find the shortest path) modifying it slightly to take into account only nodes which have the required rights.
Here is the pseudo-code, I'm sure you can translate it to Java:
procedure BFS(G, v):
    create a queue Q
    create a set V
    enqueue v onto Q
    add v to V
    while Q is not empty:
        t ← Q.dequeue()
        if t is what we are looking for:
            return t
        for all edges e in G.adjacentEdges(t) do
            u ← G.adjacentVertex(t, e)
            if u is not in V and t.hasRights(allowedRights):
                add u to V
                enqueue u onto Q
    return none
It differs from the one on Wikipedia only by checking the t.hasRights(allowedRights) condition.
Using a Java HashSet, checking a set of rights can easily be done in O(1) time, adding nothing to the O(|E| + |V|) complexity of the BFS algorithm (assuming the number of available rights is constant).
In each node you store a set of rights, and then check if all required rights are in the set (HashSet.contains(Object) is O(1)).
Also, you can represent your rights as enum and use EnumSet to store the right sets. EnumSet is implemented as bit vectors and so is as fast as you can get with sets.
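As a rough illustration, here is a sketch of the BFS check using EnumSet; I have attached the rights to the edges, as in the question (the answer above stores them per node, but the containsAll test is the same either way), and all the names are my own:

import java.util.*;

public class RightsRestrictedBfs {
    enum Right { R, W, E, G }

    // One outgoing edge: target node and the rights attached to the edge.
    static class Edge {
        final int to;
        final EnumSet<Right> rights;
        Edge(int to, EnumSet<Right> rights) { this.to = to; this.rights = rights; }
    }

    // Returns true if y is reachable from x using only edges that carry
    // all of the required rights. Runs in O(|V| + |E|); the containsAll
    // check on an EnumSet is effectively constant time here.
    static boolean reachable(List<List<Edge>> adj, int x, int y, EnumSet<Right> required) {
        boolean[] visited = new boolean[adj.size()];
        Deque<Integer> queue = new ArrayDeque<>();
        queue.add(x);
        visited[x] = true;
        while (!queue.isEmpty()) {
            int t = queue.poll();
            if (t == y) return true;
            for (Edge e : adj.get(t)) {
                if (!visited[e.to] && e.rights.containsAll(required)) {
                    visited[e.to] = true;
                    queue.add(e.to);
                }
            }
        }
        return false;
    }
}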