How to compare all elements between two arrays? - java

I'm attempting to write a program that can identify groups of nodes in a graph that don't share any common neighbors and whose subsystems together contain all vertices in the graph. Assume all nodes are numerically labeled for simplicity.
For example, in a graph of a cube, the furthest corners share no common nodes and are part of subsystems that together contain all vertices.
I'm looking to write a program that compares each potential subsystem against all other potential subsystems, regardless of the graph, number of nodes or sides, and finds groups of subsystems whose central nodes don't share common neighbors. For simplicity's sake, assume the graphs aren't usually symmetrical, unlike the cube example, as this introduces functionally equivalent systems. However, the number of nodes in a subsystem, or elements in an array, can vary.
The goal for the program is to find a set of central nodes whose neighbors are each unique to their subsystem, that is, no neighbor appears in another subsystem. This would also mean that the total number of nodes in all subsystems, central nodes and neighbors together, would equal the total number of vertices in the graph.
My original plan was to use a 2D array, where each row stands in for a subsystem. The program would compare individual elements in one row against all elements in all other rows. If two rows contain no common elements, the compared row's index and its central node are recorded; otherwise it is discarded for this iteration. After the program has finished iterating the 2D array against the first row, it adds up the number of elements in all recorded rows to see whether all nodes in the graph are represented. So if a graph contains x nodes, and the number of elements in the recorded rows is less than x, the program moves down one row to the next subsystem and compares all values in that row against all other values, as before.
Eventually, this system should print out which nodes can make up a group of subsystems that encompass all vertices and whose central nodes share no common neighbors.
My limited exposure to CS makes a task like this daunting, as it's just my way of solving puzzles presented by my professor. I'd find the systems by hand through guess-and-check methods, but with a 60+ node array...
Thanks for any help; even simple pointers in the right direction would be very much appreciated.

I don't have a good solution (and maybe none exists; this sounds very close to vertex cover), so you may need to resort to backtracking. The idea is the following:
Keep a list of uncovered vertices and a list of potential central node candidates. Both lists initially contain all vertices. Then start placing a random central node from the candidate list. This will erase the node and its one-ring from the uncovered list and the node plus its one-ring and two-ring from the candidate list. Do this until the uncovered list is empty or you run out of candidates. If you make a mistake, revert the last step (and possibly more).
In pseudo-code, this looks as follows:
findSolution(uncoveredVertices : list, centralNodeCandidates : list, centralNodes : list)
    if uncoveredVertices is empty
        return centralNodes    //we have found a valid partitioning
    if centralNodeCandidates is empty
        return [failure]       //we cannot place more central nodes
    for every n in centralNodeCandidates
        newUncoveredVertices <- uncoveredVertices \ { n } \ one-ring of n
        newCentralNodeCandidates <- centralNodeCandidates \ { n } \ one-ring of n \ two-ring of n
        newCentralNodes <- centralNodes u { n }
        subProblemSolution <- findSolution(newUncoveredVertices, newCentralNodeCandidates, newCentralNodes)
        if subProblemSolution is not [failure]
            return subProblemSolution
    next
    return [failure]           //none of the possible routes to go yielded a valid solution
Here, \ is the set minus operator and u is set union.
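In Java, a minimal sketch of this backtracking could look as follows; the adjacency map, the method names, and the use of Integer node labels are all assumptions about how the graph is stored:

import java.util.*;

class CentralNodeSearch {
    // node -> its one-ring of neighbors; assumed to be filled in elsewhere
    static Map<Integer, Set<Integer>> adjacency = new HashMap<>();

    // Returns the set of central nodes, or null to signal [failure].
    static Set<Integer> findSolution(Set<Integer> uncovered,
                                     Set<Integer> candidates,
                                     Set<Integer> centralNodes) {
        if (uncovered.isEmpty()) return centralNodes;   // valid partitioning found
        if (candidates.isEmpty()) return null;          // cannot place more central nodes

        for (int n : candidates) {
            Set<Integer> oneRing = adjacency.get(n);

            Set<Integer> newUncovered = new HashSet<>(uncovered);
            newUncovered.remove(n);
            newUncovered.removeAll(oneRing);

            Set<Integer> newCandidates = new HashSet<>(candidates);
            newCandidates.remove(n);
            newCandidates.removeAll(oneRing);
            for (int v : oneRing) newCandidates.removeAll(adjacency.get(v)); // two-ring

            Set<Integer> newCentral = new HashSet<>(centralNodes);
            newCentral.add(n);

            Set<Integer> solution = findSolution(newUncovered, newCandidates, newCentral);
            if (solution != null) return solution;
        }
        return null;   // none of the candidates led to a valid solution
    }
}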
There are several possible optimizations:
If you know the maximum number of nodes, you can represent the lists as bitmaps; for a maximum of 64 nodes, each list even fits into a single 64-bit integer (see the sketch after this list).
You may end up checking the same state multiple times. To avoid this, you may want to cache the states that resulted in failures. This is in the spirit of dynamic programming.
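To illustrate the bitmap idea, here is a sketch of one backtracking step over long bitmasks. The adjacencyMask array (bit v of adjacencyMask[n] set iff v neighbors n) and the helper names are assumptions, and the node count must stay below 64:

// Bitmask of the two-ring of n: the union of the neighbor masks of n's neighbors.
static long twoRingMask(long[] adjacencyMask, int n) {
    long mask = 0;
    long ring = adjacencyMask[n];
    for (int v = 0; v < adjacencyMask.length; v++)
        if ((ring & (1L << v)) != 0) mask |= adjacencyMask[v];
    return mask;
}

// One step of the backtracking: returns {newUncovered, newCandidates, newCentral}.
static long[] placeCentralNode(long[] adjacencyMask, long uncovered,
                               long candidates, long central, int n) {
    long self = 1L << n, oneRing = adjacencyMask[n];
    long newUncovered  = uncovered  & ~self & ~oneRing;               // set minus
    long newCandidates = candidates & ~self & ~oneRing
                                    & ~twoRingMask(adjacencyMask, n); // minus the two-ring
    long newCentral    = central | self;                              // set union
    // The "is empty" tests become plain comparisons, e.g. newUncovered == 0.
    return new long[]{newUncovered, newCandidates, newCentral};
}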

Related

Algorithm for minimal cost + maximum matching in a general graph

I've got a dataset consisting of nodes and edges.
The nodes represent people and the edges represent their relations, each of which has a cost that's been calculated using Euclidean distance.
Now I wish to match these nodes together through their respective edges, where there is only one constraint:
Any node can only be matched with a single other node.
From this we know that I'm working with a general graph, where every node could theoretically be matched with any other node in the dataset, as long as there is an edge between them.
What I wish to do, is find the solution with the maximum matches and the overall minimum cost.
Nodes: A, B, C, D

Edges (start, end, cost):
- Edge 1: A, B, 0.5
- Edge 2: B, C, 1
- Edge 3: C, D, 0.5
- Edge 4: D, A, 1
The solution to this problem would be the following:
Assign Edge 1 and Edge 3, as that is the maximum number of matches (in this case there are obviously only 2 solutions, but there could be tons of branching edges to other nodes).
Edge 1 and Edge 3 are assigned because that is the solution with the maximum number of matches and the minimum overall cost (1).
I've looked into quite a few algorithms, including Hungarian, Blossom, and minimum-cost flow, but I'm uncertain which is the best for this case. Also, there seems to be an awful lot of material on solving these kinds of problems in bipartite graphs, which isn't really the case in this matter.
So I ask you:
Which algorithm would be the best in this scenario to return (a) the maximum number of matches and (b) the lowest overall cost?
Do you know of any good material (maybe some easy-to-understand pseudocode) for your recommended algorithm? I'm not the strongest in mathematical notation.
For (a), the most suitable algorithm (there are theoretically faster ones, but they're more difficult to understand) would be Edmonds' Blossom algorithm. Unfortunately it is quite complicated, but I'll try to explain the basis as best I can.
The basic idea is to take a matching, and continually improve it (increase the number of matched nodes) by making some local changes. The key concept is an alternating path: a path from an unmatched node to another unmatched node, with the property that the edges alternate between being in the matching, and being outside it.
If you have an alternating path, then you can increase the size of the matching by one by flipping the state (whether or not they are in the matching) of the edges in the alternating path.
If there exists an alternating path, then the matching is not maximum (since the path gives you a way to increase the size of the matching) and conversely, you can show that if there is no alternating path, then the matching is maximum. So, to find a maximum matching, all you need to be able to do is find an alternating path.
In bipartite graphs, this is very easy to do (it can be done with DFS). In general graphs this is more complicated, and this is where Edmonds' Blossom algorithm comes in. Roughly speaking:
Build a new graph, where there is an edge between two vertices u and v if you can get from u to v by first traversing an edge that is in the matching, and then traversing an edge that isn't.
In this graph, try to find a path from an unmatched vertex to a matched vertex that has an unmatched neighbor (that is, a neighbor in the original graph).
Each edge in the path you find corresponds to two edges of the original graph (namely an edge in the matching and one not in the matching), so the path translates to an alternating walk in the original graph, but this is not necessarily an alternating path (the distinction between a path and a walk is that a path uses each vertex only once, but a walk can use each vertex multiple times).
If the walk is a path, you have an alternating path and are done.
If not, then the walk uses some vertex more than once. You can remove the part of the walk between the two visits to this vertex, and you obtain a new graph (with some of the vertices removed). In this new graph you have to do the whole search again, and if you find an alternating path in the new graph, you can "lift" it to an alternating path for the original graph.
Going into the details of this (crucial) last step would be a bit too much for a stackoverflow answer, but you can find more details on Wikipedia and perhaps having this high-level overview helps you understand the more mathematical articles.
Implementing this from scratch will be quite challenging.
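For contrast, here is a minimal Java sketch of the bipartite special case mentioned above, where a plain DFS for an augmenting path suffices (this is Kuhn's algorithm). The adjacency representation is an assumption, and it deliberately does not handle the odd cycles that the blossom step exists for:

import java.util.*;

class BipartiteMatching {
    static List<List<Integer>> adj;   // adj.get(u) = right vertices adjacent to left vertex u
    static int[] matchOfRight;        // matchOfRight[v] = left vertex matched to v, or -1

    // DFS for an alternating path starting at the currently unmatched left vertex u.
    static boolean tryAugment(int u, boolean[] visited) {
        for (int v : adj.get(u)) {
            if (visited[v]) continue;
            visited[v] = true;
            // v is free, or the left vertex matched to v can be re-matched elsewhere:
            if (matchOfRight[v] == -1 || tryAugment(matchOfRight[v], visited)) {
                matchOfRight[v] = u;  // "flip" the edges along the alternating path
                return true;
            }
        }
        return false;
    }

    static int maximumMatching(int leftCount, int rightCount) {
        matchOfRight = new int[rightCount];
        Arrays.fill(matchOfRight, -1);
        int matches = 0;
        for (int u = 0; u < leftCount; u++)
            if (tryAugment(u, new boolean[rightCount])) matches++;
        return matches;
    }
}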
For the weighted version (with the Euclidean distance), there is an even more complicated variant of Edmonds' algorithm that can handle weights. Kolmogorov offers a C++ implementation and an accompanying paper. This can also be used for the unweighted case, so using this implementation might be a good idea (even though it is not in Java, there should be some way to interface with it).
Since your weights are based on Euclidean distances, there might be a specialized algorithm for that case, but the more general version I mentioned above would also work, and an implementation is available for it.

Traverse a tree represented by its edges

My tree is represented by its edges and the root node. The edge list is undirected.
char[][] edges = new char[][]{
    new char[]{'D', 'B'},
    new char[]{'A', 'C'},
    new char[]{'B', 'A'}
};
char root = 'A';
The tree is

  A
 / \
B   C
|
D
How do I do depth first traversal on this tree? What is the time complexity?
I know the time complexity of depth-first traversal on linked nodes is O(n). But if the tree is represented by edges, I feel the time complexity is O(n^2). Am I wrong?
Giving code is appreciated, although I know it looks like a homework assignment...
The general template behind DFS looks something like this:
function DFS(node) {
    if (!node.visited) {
        node.visited = true;
        for (each edge {node, v}) {
            DFS(v);
        }
    }
}
If you have your edges represented as a list of all the edges in the graph, then you could implement the for loop by iterating across all the edges in the graph and, every time you find one with the current node as its source, following the edge to its endpoint and running a DFS from there. If you do this, then you'll do O(m) work per node in the graph (here, m is the number of edges), so the runtime will be O(mn), since you'll do this at most once per node in the graph. In a tree, the number of edges is always O(n), so for a tree the runtime is O(n^2).
That said, if you have a tree and there are only n edges, you can speed this up in a bunch of ways. First, you could consider doing an O(n log n) preprocessing step to sort the array of edges. Then, you can find all the edges leaving a given node by doing a binary search to find the first edge leaving the node, then iterating across the edges starting there to find just the edges leaving the node. This improves the runtime quite a bit: you do O(log n) work per node for the binary search, and then every edge gets visited only once. This means that the runtime is O(n log n). Since you've mentioned that the edges are undirected, you'll actually need to create two different copies of the edges array - one that's the original one, and one with the edges reversed - and should sort each one independently. The fact that DFS marks visited nodes along the way means that you don't need to do any extra bookkeeping here to figure out which direction you should go at each step, and this doesn't change the overall time complexity, though it does increase the space usage.
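As a sketch of the sorting approach, reusing the question's edge array: instead of two separately sorted copies, this merges both directions of every edge into one sorted array of directed edges, which serves the same purpose:

import java.util.*;

class SortedEdgeDFS {
    static char[][] directed;                        // both directions of each edge, sorted by source
    static Set<Character> visited = new HashSet<>();

    static void dfs(char node) {
        if (!visited.add(node)) return;              // already visited
        for (int i = firstEdgeFrom(node); i < directed.length && directed[i][0] == node; i++)
            dfs(directed[i][1]);
    }

    // Binary search for the index of the first directed edge leaving `node`.
    static int firstEdgeFrom(char node) {
        int lo = 0, hi = directed.length;
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (directed[mid][0] < node) lo = mid + 1; else hi = mid;
        }
        return lo;
    }

    public static void main(String[] args) {
        char[][] edges = {{'D', 'B'}, {'A', 'C'}, {'B', 'A'}};
        directed = new char[edges.length * 2][];
        for (int i = 0; i < edges.length; i++) {
            directed[2 * i]     = new char[]{edges[i][0], edges[i][1]};
            directed[2 * i + 1] = new char[]{edges[i][1], edges[i][0]};
        }
        Arrays.sort(directed, (a, b) -> Character.compare(a[0], b[0]));
        dfs('A');                                    // reaches all of A, B, C, D
    }
}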
Alternatively, you could use a hashing-based solution. Before doing the DFS, iterate across the edges and convert them into a hash table whose keys are the nodes and whose values are lists of the edges leaving that node. This will take expected time O(n). You can then implement the "for each edge" step quite efficiently by just doing a hash table lookup to find the edges in question. This reduces the time to (expected) O(n), though the space usage goes up to O(n) as well. Since your edges are undirected, as you populate the table, just be sure to insert the edge in each direction.
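And a sketch of the hash-table approach with the question's data; the HashMap/HashSet choices are just one option:

import java.util.*;

class HashEdgeDFS {
    public static void main(String[] args) {
        char[][] edges = {{'D', 'B'}, {'A', 'C'}, {'B', 'A'}};

        // Build node -> neighbors, inserting each undirected edge in both directions.
        Map<Character, List<Character>> adj = new HashMap<>();
        for (char[] e : edges) {
            adj.computeIfAbsent(e[0], k -> new ArrayList<>()).add(e[1]);
            adj.computeIfAbsent(e[1], k -> new ArrayList<>()).add(e[0]);
        }

        dfs('A', adj, new HashSet<>());
    }

    static void dfs(char node, Map<Character, List<Character>> adj, Set<Character> visited) {
        if (!visited.add(node)) return;   // marking visited prevents walking back up the tree
        System.out.println(node);
        for (char next : adj.getOrDefault(node, Collections.emptyList()))
            dfs(next, adj, visited);
    }
}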

How to generate random graphs?

I want to be able to generate random, undirected, and connected graphs in Java. In addition, I want to be able to control the maximum number of vertices in the graph. I am not sure what would be the best way to approach this problem, but here are a few I can think of:
(1) Generate a number between 0 and n and let that be the number of vertices. Then, somehow randomly link vertices together (maybe generate a random number per vertex and let that be the number of edges coming out of said vertex). Traverse the graph starting from an arbitrary vertex (say with Breadth-First-Search) and let our random graph G be all the visited nodes (this way, we make sure that G is connected).
(2) Generate a random square matrix (of 0's and 1's) with side length between 0 and n (somehow). This would be the adjacency matrix for our graph (the diagonal of the matrix should then either be all 1's or all 0's). Make a data structure from the graph and traverse the graph from any node to get a connected list of nodes and call that the graph G.
Any other way to generate a sufficiently random graph is welcomed. Note: I do not need a purely random graph, i.e., the graph you generate doesn't have to have any special mathematical properties (like uniformity of some sort). I simply need lots and lots of graphs for testing purposes of something else.
Here is the Java Node class I am using:
public class Node<T> {
    T data;
    ArrayList<Node> children = new ArrayList<Node>();
    ...
}
Here is the Graph class I am using (you can tell why I am only interested in connected graphs at the moment):
public class Graph {
    Node mainNode;
    ArrayList<Node> V = new ArrayList<Node>();
    public Graph(Node node){
        mainNode = node;
    }
    ...
}
As an example, this is how I make graphs for testing purposes right now:
//The following makes a "kite" graph G (with "a" as the main node).
/* a-b
   |/|
   c-d
*/
Node<String> a= new Node("a");
Node<String> b= new Node("b");
Node<String> c= new Node("c");
Node<String> d= new Node("d");
a.addChild(b);
a.addChild(c);
b.addChild(a);
b.addChild(c);
b.addChild(d);
c.addChild(a);
c.addChild(b);
c.addChild(d);
d.addChild(c);
d.addChild(b);
Graph G1= new Graph(a);
Whatever you want to do with your graph, I guess its density is also an important parameter. Otherwise, you'd just generate a set of small cliques (complete graphs) using random sizes, and then connect them randomly.
If I'm correct, I'd advise you to use the Erdős-Rényi model: it's simple, not far from what you originally proposed, and allows you to control the graph density (so, basically: the number of links).
Here's a short description of this model:
Define a probability value p (the higher p, the denser the graph: 0 = no link, 1 = fully connected graph);
Create your n nodes (as objects, as an adjacency matrix, or anything that suits you);
Each pair of nodes is connected with (independent) probability p. So you have to decide on the existence of a link between them using this probability p. For example, I guess you could randomly draw a value q between 0 and 1 and create the link iff q < p. Then do the same thing for each possible pair of nodes in the graph.
With this model, if your p is large enough, then it's highly probable your graph is connected (cf. the Wikipedia reference for details). In any case, if you have several components, you can also force its connectedness by creating links between nodes of distinct components. First, you have to identify each component by performing breadth-first searches (one for each component). Then, you select pairs of nodes in two distinct components, create a link between them and consider both components as merged. You repeat this process until you've got a single component remaining.
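A minimal Java sketch of the Erdős-Rényi step, using an adjacency matrix (the class and method names are my own):

import java.util.Random;

class ErdosRenyi {
    // Adjacency matrix of a G(n, p) random graph: undirected, no self-loops.
    static boolean[][] generate(int n, double p, Random rng) {
        boolean[][] adj = new boolean[n][n];
        for (int u = 0; u < n; u++) {
            for (int v = u + 1; v < n; v++) {
                if (rng.nextDouble() < p) {   // draw q in [0,1); create the link iff q < p
                    adj[u][v] = true;
                    adj[v][u] = true;
                }
            }
        }
        return adj;
    }
}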
You can also ensure that the final graph ends up connected by using a disjoint-set data structure. Keep track of the number of components, initially n. Repeatedly pick pairs of random vertices u and v, adding the edge (u, v) to the graph and to the disjoint-set structure, and decrementing the component count whenever that structure tells you u and v belonged to different components. Stop when the component count reaches 1. (Note that using an adjacency matrix simplifies managing the case where the edge (u, v) is already present in the graph: in this case, adj[u][v] is simply set to 1 a second time, which, as desired, has no effect.)
If you find this creates graphs that are too dense (or too sparse), then you can use another random number to add edges only k% of the time when the endpoints are already part of the same component (or when they are part of different components), for some k.
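Here is a sketch of that disjoint-set loop, with a minimal union-find (path halving only; union by rank is omitted for brevity):

import java.util.Random;

class ConnectedRandomGraph {
    static int[] parent;

    static int find(int x) {   // find with path halving
        while (parent[x] != x) {
            parent[x] = parent[parent[x]];
            x = parent[x];
        }
        return x;
    }

    static boolean[][] generate(int n, Random rng) {
        boolean[][] adj = new boolean[n][n];
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;

        int components = n;
        while (components > 1) {
            int u = rng.nextInt(n), v = rng.nextInt(n);
            if (u == v) continue;
            adj[u][v] = adj[v][u] = true;   // re-setting an existing edge is harmless
            int ru = find(u), rv = find(v);
            if (ru != rv) {                 // u and v were in different components
                parent[ru] = rv;
                components--;
            }
        }
        return adj;
    }
}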
The following paper proposes an algorithm that uniformly samples connected random graphs with prescribed degree sequence, with an efficient implementation. It is available in several libraries, like Networkit or igraph.
Fast generation of random connected graphs with prescribed degrees.
Fabien Viger, Matthieu Latapy
Be careful when you make simulations on random graphs: if they are not sampled uniformly, then they may have hidden properties that impact simulations; alternatively, uniformly sampled graphs may be very different from the ones your code will meet in practice...

Space differences in implementations of a trie

The more I read about tries the more confused I get for some reason.
What confuses me now is the following:
I have read about 2 types of implementation.
(1) Using arrays to represent the characters (not storing the characters themselves), and in each node also storing the index of the actual word (if we have reached a word).
(2) Using a collection of nodes that store characters, and at the end of each node a boolean to determine whether we have reached a word going down this path.
In the first case it is not mentioned, but it seems that we must actually keep all the dictionary words (since we indirectly reference them). So we have array_size * numberOfNodes * lengthOfWord plus the size of the dictionary being processed.
In the latter case we don't need the dictionary, since the chars are stored directly in the tree. So it seems to me that the second implementation is more space efficient. But I am not sure by how much.
Is my understanding correct about the implementations, and are there specific reasons to choose one over the other? Also, how could we calculate the space requirements for the second case?
Tries do not store the original words anywhere and instead store them implicitly. The basic structure of a trie is the following: each node in the trie stores
A single bit determining whether or not the path that arrives at the node forms a word, and
A collection of pointers to child nodes labeled by characters.
To determine whether a word is in the trie, you start at the root, then follow the appropriately-labeled pointers one at a time. If you arrive at a node marked as a word, then the word exists in the trie. If you arrive at a node that isn't marked, or you fall off the trie, the word is not present.
The difference between the two structures you have listed above is how the child pointers are stored. In the first version, the child pointers are stored as an array of one pointer per symbol in the alphabet, which makes following child pointers extremely fast but can be extremely space-inefficient. In the second version, you explicitly store some type of collection holding just the labeled pointers you need. This is slower, but is more space efficient for sparse tries.
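In Java, the two layouts might look roughly like this; the lowercase a-z alphabet in the first version is an assumption:

import java.util.*;

// First version: fixed-size child array, one slot per symbol (fast, can waste space).
class ArrayTrieNode {
    boolean isWord;                                    // does the path to this node form a word?
    ArrayTrieNode[] children = new ArrayTrieNode[26];  // assumes a lowercase a-z alphabet

    ArrayTrieNode child(char c) { return children[c - 'a']; }
}

// Second version: a collection holding only the labeled pointers that exist (compact, slower).
class MapTrieNode {
    boolean isWord;
    Map<Character, MapTrieNode> children = new HashMap<>();

    MapTrieNode child(char c) { return children.get(c); }   // null if no such child
}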
The space usage of a trie depends on the number of nodes (call it n), size of the alphabet (call it k), and the way in which child pointers are represented. If you store a fixed-sized array of pointers, then the space usage is about kn pointers (n nodes with k pointers each), plus n bits for the markers at each node. If you have, say, a dynamic array of pointers stored in sorted order, the overhead will be n total child pointers, plus n bits, plus n times the amount of space necessary to store a single collection.
The advantage of the first approach is speed and simplicity, with very good performance on dense tries. The second is slower but more memory efficient for sparse tries.
These are not the only space optimizations possible. Patricia tries compress nodes with just one child together and are very space-efficient. DAWGs try to merge as many nodes as possible together, but do not support efficient insertions.
Hope this helps!

k nearest neighbors graph implementation in Java

I am implementing the Flame clustering algorithm as a way of learning a bit more about graphs and graph traversal. One of the first steps is constructing a k-nearest-neighbors graph, and I'm wondering what the fastest way would be of running through a list of nodes and connecting each one only to, say, its nearest five neighbors. My thought was that I would start at a node, iterate through the list of other nodes, and keep the closest ones in an array, making sure that everything past the top n is discarded. Now, I could do this by just sorting a list and keeping the top n entries, but I would much rather keep fewer things in memory, so I was wondering if there is a way to just maintain the final array and update it as I iterate through, or if there is a more efficient way of generating a k-nearest-neighbors graph.
Also, please note, this is NOT a duplicate of K-Nearest Neighbour Implementation in Java. KNNG is distinct from KNN.
Place the first n nodes, sorted by distance, in a list. Then iterate through the rest of the nodes: if one fits in the current list (i.e. it is a top-n node), place it at the corresponding position in the list and discard the previous last element. If it doesn't fit in the top-n list, discard it.
for (Node neighborNode : otherNodes) {
    double dist = distanceMetric(neighborNode, currentNode);
    for (int i = 0; i < topNList.size(); i++) {
        if (dist < topNList.get(i).distance) {       // strictly closer than the i-th entry
            topNList.remove(topNList.size() - 1);    // drop the current worst
            neighborNode.setDistance(dist);
            topNList.add(i, neighborNode);           // insert at its sorted position
            break;                                   // done with this neighbor
        }
    }
}
I think the most efficient way would be to use a bounded priority queue, like
https://github.com/tdebatty/java-graphs#bounded-priority-queue
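If you'd rather not add a dependency, the same idea can be sketched with java.util.PriorityQueue used as a max-heap on distance, so the farthest of the current k is always the one evicted (the names here are made up for illustration):

import java.util.*;

class BoundedNearest {
    // Keep the k nearest points to `target`, evicting the farthest when the heap is full.
    static List<double[]> kNearest(List<double[]> points, double[] target, int k) {
        PriorityQueue<double[]> heap = new PriorityQueue<>(
                Comparator.comparingDouble((double[] p) -> dist(p, target)).reversed());
        for (double[] p : points) {
            if (heap.size() < k) {
                heap.add(p);
            } else if (dist(p, target) < dist(heap.peek(), target)) {
                heap.poll();   // evict the current farthest
                heap.add(p);
            }
        }
        return new ArrayList<>(heap);   // unordered; sort if needed
    }

    static double dist(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return Math.sqrt(dx * dx + dy * dy);
    }
}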
