class Solution {
    // Function to return the Breadth First Traversal of the given graph.
    public ArrayList<Integer> bfsOfGraph(int V, ArrayList<ArrayList<Integer>> adj) {
        ArrayList<Integer> result = new ArrayList<>();
        Queue<Integer> q = new LinkedList<>();
        q.add(0);
        boolean[] visited = new boolean[V];
        visited[0] = true;
        while (!q.isEmpty()) {
            int v = q.poll();
            result.add(v);
            for (int i : adj.get(v)) {
                if (!visited[i]) {
                    visited[i] = true;
                    q.add(i);
                }
            }
        }
        return result;
    }
}
I am attempting a BFS algorithm on an undirected graph, and it is showing a segmentation fault error. If anyone has any knowledge regarding this concept, please reply.
Segmentation faults happen in programs (such as the JVM) due to memory errors: either the JVM has a bug that makes it try to use the wrong section of memory when it's cranked up to use that much buffer space, or it tries to allocate 256 MB of memory and in the process uses more space than the computer gave it.
Make sure you update your JVM. Please let me know if you still face the issue.
I have implemented the DBSCAN algorithm to cluster 3-D point-cloud data. It works very well indeed, but the only problem is that it takes too long: almost 15 seconds for 6000 points. I want to implement multithreading to reduce the processing time. I would highly appreciate it if someone could help with adding multithreading to the following complete piece of code. Thanks!
public ArrayList<List<Vector>> Run() {
    int index = 0; // index for each point cloud (cloud --> input data)
    List<Vector> neighbors;
    ArrayList<List<Vector>> resultList = new ArrayList<List<Vector>>(); // group of clusters --> ArrayList<List<Vector>>
    while (cloud.size() > index) {
        Vector p = cloud.get(index);
        if (!visited.contains(p)) {
            visited.add(p);
            neighbors = get_neighbors(p);
            if (neighbors.size() >= minPts) { // minPts = 5
                int ind = 0;
                while (neighbors.size() > ind) {
                    Vector r = neighbors.get(ind);
                    if (!visited.contains(r)) {
                        visited.add(r);
                        List<Vector> individualNeighbors = get_neighbors(r);
                        if (individualNeighbors.size() >= minPts) {
                            neighbors = merge_neighbors(neighbors, individualNeighbors);
                        }
                    }
                    ind++;
                }
                resultList.add(neighbors);
            }
        }
        index++;
    }
    return resultList;
}

private List<Vector> merge_neighbors(List<Vector> neighborPts1, List<Vector> neighborPts2) {
    for (Vector n2 : neighborPts2) {
        if (!neighborPts1.contains(n2)) {
            neighborPts1.add(n2);
        }
    }
    return neighborPts1;
}

private List<Vector> get_neighbors(Vector pt) {
    CopyOnWriteArrayList<Vector> pts = new CopyOnWriteArrayList<>();
    for (Vector p : cloud) {
        if (computeDistance(pt, p) <= eps * eps) {
            pts.add(p);
        }
    }
    return pts;
}

private double computeDistance(Vector core, Vector target) {
    return Math.pow(core.getX() - target.getX(), 2)
         + Math.pow(core.getY() - target.getY(), 2)
         + Math.pow(core.getZ() - target.getZ(), 2);
}
}
A) There is a lot of optimization potential in your implementation that is easier to realize than multithreading, so optimize your code first.
In particular, if you load your data into tools such as ELKI (make sure to add a spatial index, which is not the default), you'll notice that they run much faster even with just a single thread.
B) There are publications on multicore DBSCAN that discuss the difficulties and challenges of parallelizing DBSCAN. Read them first, as the whole story is too long for this Q&A format:
Patwary, M. A., Palsetia, D., Agrawal, A., Liao, W. K., Manne, F., & Choudhary, A. (2012, November). A new scalable parallel DBSCAN algorithm using the disjoint-set data structure. In Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis (p. 62). IEEE Computer Society Press.
Götz, M., Bodenstein, C., & Riedel, M. (2015, November). HPDBSCAN: highly parallel DBSCAN. In Proceedings of the workshop on machine learning in high-performance computing environments (p. 2). ACM.
Welton, B., Samanas, E., & Miller, B. P. (2013, November). Mr. scan: Extreme scale density-based clustering using a tree-based network of gpgpu nodes. In SC'13: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis (pp. 1-11). IEEE.
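Not a substitute for the papers above, but as a small illustration of point A): the dominant cost in the posted code is the linear-scan range query, and that part alone is embarrassingly parallel. Here is a hedged sketch using Java parallel streams; the Point record is a hypothetical stand-in for the OP's Vector class, and all names are illustrative.

```java
import java.util.List;
import java.util.stream.Collectors;

public class ParallelRangeQuery {
    // Hypothetical 3-D point; the OP's Vector class is assumed to look similar.
    record Point(double x, double y, double z) {}

    // Squared Euclidean distance, matching the OP's computeDistance.
    static double dist2(Point a, Point b) {
        double dx = a.x() - b.x(), dy = a.y() - b.y(), dz = a.z() - b.z();
        return dx * dx + dy * dy + dz * dz;
    }

    // Neighbors of pt within radius eps; the filter runs on the common
    // fork-join pool, so each distance test can execute on any free thread.
    static List<Point> neighbors(List<Point> cloud, Point pt, double eps) {
        return cloud.parallelStream()
                    .filter(p -> dist2(pt, p) <= eps * eps)
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Point> cloud = List.of(
                new Point(0, 0, 0), new Point(1, 0, 0),
                new Point(0, 2, 0), new Point(10, 10, 10));
        // Points within distance 2.5 of the origin: the first three.
        System.out.println(neighbors(cloud, new Point(0, 0, 0), 2.5).size()); // prints 3
    }
}
```

Even then, a spatial index (k-d tree or grid) usually beats brute force, parallel or not, so measure before reaching for threads.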
I wrote a simple DFS over a data file with 720 thousand vertex pairs and hit a stack overflow. I am not quite sure whether it is caused by the large data set or by a problem in my code. Any ideas are appreciated. The code is shown below:
private void dfs(Graph G, int v) {
    dfsMarked[v] = true;
    for (Edge e : G.adj(v)) {
        int w = e.other(v);
        if (!dfsMarked[w]) {
            dfsEdgeTo[w] = v;
            dfs(G, w);
        }
    }
}
720 thousand vertex pairs, with a path spanning a few hundred thousand of them, will easily overflow the stack on most systems.
You need to switch to an implementation of DFS that uses your own stack, allocated independently of the Java call stack:
Stack<Integer> stack = new Stack<Integer>();
stack.push(start);
while (!stack.empty()) {
    int v = stack.pop();
    dfsMarked[v] = true;
    for (Edge e : G.adj(v)) {
        int w = e.other(v);
        if (!dfsMarked[w]) {
            dfsEdgeTo[w] = v;
            stack.push(w);
        }
    }
}
Note: The above assumes that adjacency lists are unordered. If you need to preserve the specific ordering to match the recursive version, change the nested loop to enumerate adjacency lists in reverse.
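To make the reverse-enumeration remark concrete, here is a self-contained sketch. It uses plain int adjacency lists instead of the OP's Edge objects (those work the same way via e.other(v)); pushing each list in reverse makes the pop order match the recursive preorder.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class IterativeDfs {

    // Iterative DFS that matches the recursive visit order by pushing each
    // adjacency list in reverse. Appends each vertex to `order` when visited.
    static void dfs(List<List<Integer>> adj, int start, boolean[] marked, List<Integer> order) {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int v = stack.pop();
            if (marked[v]) continue;   // a vertex can be pushed more than once
            marked[v] = true;
            order.add(v);
            List<Integer> neighbors = adj.get(v);
            for (int i = neighbors.size() - 1; i >= 0; i--) {
                if (!marked[neighbors.get(i)]) stack.push(neighbors.get(i));
            }
        }
    }

    public static void main(String[] args) {
        // 0 -> {1, 2}, 1 -> {3}: recursive preorder is 0, 1, 3, 2
        List<List<Integer>> adj = List.of(
                List.of(1, 2), List.of(3), List.<Integer>of(), List.<Integer>of());
        List<Integer> order = new ArrayList<>();
        dfs(adj, 0, new boolean[4], order);
        System.out.println(order); // [0, 1, 3, 2]
    }
}
```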
Here is the revised code; the other classes don't matter, I hope. If you need the other classes, tell me and I'll add them. When I run this, I get the naming error when I try to retrieve the spaces that are possible to move in.
public static void main(String[] args) {
    // TODO Auto-generated method stub
    List<Space> openlist = new ArrayList<Space>();
    int g = 0;
    Bot Dave = new Bot("Dave");
    Goal goal = new Goal();
    Obstacle first = new Obstacle("First");
    int numofobst = 1;
    Space right = new Space(Dave.getX() + 1, Dave.getY());
    Space left = new Space(Dave.getX() - 1, Dave.getY());
    Space up = new Space(Dave.getX(), Dave.getY() + 1);
    Space down = new Space(Dave.getX(), Dave.getY() - 1);
    int openpossible = 0;
    // now it's creating an array of each space and getting the fs of each one.
    /* time to check which spaces are possible for the bot to move to. if they are possible, add the space to a possible array list.
     * then we check to see which f is smaller by adding a min value.
     * we then sort it and get the first space.
     * we move to that space.
     */
    if (Dave.checkob(first, right, numofobst) == right) {
        openlist.add(right);
    }
    if (Dave.checkob(first, left, numofobst) == left) {
        openlist.add(left);
    }
    if (Dave.checkob(first, up, numofobst) == up) {
        openlist.add(up);
    }
    if (Dave.checkob(first, down, numofobst) == down) {
        openlist.add(down);
    }
    for (int i = 0; i < openlist.size(); i++) {
        System.out.println("Space available is " + openlist.get(i));
    }
    System.out.println("Space available is " + openlist);
}
Your code is missing a lot of things (mostly everything).
First try to implement a simple Dijkstra. Do the O(V^2) version, then upgrade it to O(E log V). It's a bit slower than A* but a lot simpler to understand. Once you get it, you can upgrade it to A* by changing a few lines of code.
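For reference, the O(E log V) version is a small amount of code once you know the lazy-deletion trick: push an entry every time a distance improves, and skip stale entries on poll. A sketch with made-up names (not a drop-in for the OP's classes):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

public class DijkstraSketch {
    record Edge(int to, int weight) {}

    // Lazy-deletion Dijkstra: O(E log V) with a binary heap.
    static int[] shortestPaths(List<List<Edge>> adj, int source) {
        int n = adj.size();
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;
        // Heap entries are {distance, vertex}; stale entries are skipped on poll.
        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> a[0] - b[0]);
        pq.add(new int[]{0, source});
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int d = top[0], v = top[1];
            if (d > dist[v]) continue;          // stale entry, already improved
            for (Edge e : adj.get(v)) {
                if (dist[v] + e.weight() < dist[e.to()]) {
                    dist[e.to()] = dist[v] + e.weight();
                    pq.add(new int[]{dist[e.to()], e.to()});
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // Graph: 0->1 (1), 0->2 (10), 1->2 (1), 2->3 (1)
        List<List<Edge>> adj = new ArrayList<>();
        for (int i = 0; i < 4; i++) adj.add(new ArrayList<>());
        adj.get(0).add(new Edge(1, 1));
        adj.get(0).add(new Edge(2, 10));
        adj.get(1).add(new Edge(2, 1));
        adj.get(2).add(new Edge(3, 1));
        System.out.println(Arrays.toString(shortestPaths(adj, 0))); // [0, 1, 2, 3]
    }
}
```

Swapping the priority key from dist[v] to dist[v] + heuristic(v) is essentially the "few lines" that turn this into A*.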
I'm trying to improve the Bellman-Ford algorithm's performance, and I would like to know if the improvement is correct.
I run the relaxing part not V-1 times but V times, and I introduced a boolean variable that is set true if any relaxation happened during an iteration of the outer loop. If no relaxation happened at the n-th iteration, where n <= V, it returns from the loop with the shortest path; but if it still relaxes at the n = V iteration, that means we have a negative cycle.
I thought this might improve runtime, since sometimes we don't have to iterate V-1 times to find the shortest path and can return earlier, and it's also more elegant than checking for the cycle with a separate block of code.
AdjacencyListALD graph;
int[] distTo;
int[] edgeTo;

public BellmanFord(AdjacencyListALD g) {
    graph = g;
}

public int findSP(int source, int dest) {
    // initialization
    distTo = new int[graph.SIZE];
    edgeTo = new int[graph.SIZE];
    for (int i = 0; i < graph.SIZE; i++) {
        distTo[i] = Integer.MAX_VALUE;
    }
    distTo[source] = 0;
    // relaxing V-1 times + 1 for checking negative cycle = V times
    for (int i = 0; i < graph.SIZE; i++) {
        boolean hasRelaxed = false;
        for (int j = 0; j < graph.SIZE; j++) {
            for (int x = 0; x < graph.sources[j].length; x++) {
                int s = j;
                int d = graph.sources[j].get(x).label;
                int w = graph.sources[j].get(x).weight;
                if (distTo[d] > distTo[s] + w) {
                    distTo[d] = distTo[s] + w;
                    hasRelaxed = true;
                }
            }
        }
        if (!hasRelaxed)
            return distTo[dest];
    }
    System.out.println("Negative cycle detected");
    return -1;
}
Good comments on the need for testing. That's a given. But it doesn't address the underlying question, whether the OP's modifications to Bellman-Ford constitute an improvement to the algorithm. And the answer is, yes, this is actually a well-known improvement, as G. Bach pointed out in comments.
The OP's observation is that if, in any relaxation iteration, nothing relaxes, then there will be no changes in subsequent iterations and we can therefore just stop. Absolutely correct. There are no outside influences on the values assigned to the vertices. The only thing updating those values is the relaxation step itself. If it finds nothing to do on any iteration there is no way that something to do will materialize out of the aether. Ergo we can terminate.
This doesn't affect the complexity of the algorithm, nor does it help with worst case graphs, but it can reduce actual running time in practice.
As for running the relaxation one more time (|V| times rather than the usual |V|-1), this is just another way of stating the check for negative cycles that follows the relaxation step. It's just another way of saying that, when we terminate by running |V|-1 relaxation iterations, we need to see if any improvement can still be calculated, which reveals a negative cycle.
Bottom line: OP's approach is sound. Now, yes, test the code.
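The whole idea fits in a few lines. This sketch uses a plain edge list rather than the OP's AdjacencyListALD, so all names here are illustrative: V passes over the edges, returning early at the first pass that relaxes nothing, and reporting a negative cycle if the V-th pass still relaxes.

```java
import java.util.Arrays;

public class BellmanFordEarlyExit {
    // Edge list as {from, to, weight} triples.
    // Returns null if a negative cycle is reachable, otherwise the distances.
    static long[] shortestPaths(int n, int[][] edges, int source) {
        long[] dist = new long[n];
        Arrays.fill(dist, Long.MAX_VALUE / 2);   // "infinity" that won't overflow on dist + w
        dist[source] = 0;
        for (int i = 0; i < n; i++) {            // n passes: the n-th pass detects a cycle
            boolean relaxed = false;
            for (int[] e : edges) {
                if (dist[e[0]] + e[2] < dist[e[1]]) {
                    dist[e[1]] = dist[e[0]] + e[2];
                    relaxed = true;
                }
            }
            if (!relaxed) return dist;           // early exit: fixed point reached
        }
        return null;                             // still relaxing on pass n => negative cycle
    }

    public static void main(String[] args) {
        int[][] edges = {{0, 1, 4}, {0, 2, 1}, {2, 1, 2}, {1, 3, 1}};
        System.out.println(Arrays.toString(shortestPaths(4, edges, 0))); // [0, 3, 1, 4]
    }
}
```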
The Dijkstra algorithm has a step that says "choose the node with the shortest path". I realized that this step is unnecessary if we don't throw a node out of the graph/queue. This works great to my knowledge, with no known disadvantage. Here is the code. Please instruct me if it fails, and if so, how. [EDIT => THIS CODE IS TESTED AND WORKS WELL, BUT THERE IS A CHANCE MY TEST CASES WERE NOT EXHAUSTIVE, THUS POSTING IT ON STACKOVERFLOW]
public Map<Integer, Integer> findShortest(int source) {
    final Map<Integer, Integer> vertexMinDistance = new HashMap<Integer, Integer>();
    final Queue<Integer> queue = new LinkedList<Integer>();
    queue.add(source);
    vertexMinDistance.put(source, 0);
    while (!queue.isEmpty()) {
        source = queue.poll();
        List<Edge> adjlist = graph.getAdj(source);
        int sourceDistance = vertexMinDistance.get(source);
        for (Edge edge : adjlist) {
            int adjVertex = edge.getVertex();
            if (vertexMinDistance.containsKey(adjVertex)) {
                int vertexDistance = vertexMinDistance.get(adjVertex);
                if (vertexDistance > (sourceDistance + edge.getDistance())) {
                    // previous bug
                    // vertexMinDistance.put(adjVertex, vertexDistance);
                    vertexMinDistance.put(adjVertex, sourceDistance + edge.getDistance());
                }
            } else {
                queue.add(adjVertex);
                vertexMinDistance.put(adjVertex, edge.getDistance());
            }
        }
    }
    return vertexMinDistance;
}
Problem 1
I think there is a bug in the code where it says:
int vertexDistance = vertexMinDistance.get(adjVertex);
if (vertexDistance > (sourceDistance + edge.getDistance())) {
    vertexMinDistance.put(adjVertex, vertexDistance);
}
because this has no effect (vertexMinDistance for adjVertex is set back to its original value).
Better would be something like:
int vertexDistance = vertexMinDistance.get(adjVertex);
int newDistance = sourceDistance + edge.getDistance();
if (vertexDistance > newDistance) {
    vertexMinDistance.put(adjVertex, newDistance);
}
Problem 2
You also need to add the adjVertex into the queue using something like:
int vertexDistance = vertexMinDistance.get(adjVertex);
int newDistance = sourceDistance + edge.getDistance();
if (vertexDistance > newDistance) {
    vertexMinDistance.put(adjVertex, newDistance);
    queue.add(adjVertex);
}
If you don't do this then you will get an incorrect answer for graphs such as:
A->B (1)
A->C (10)
B->C (1)
B->D (10)
C->D (1)
The correct path is A->B->C->D of weight 3, but without the modification then I believe your algorithm will choose a longer path (as it doesn't reexamine C once it has found a shorter path to it).
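As a sanity check of the two fixes, here is a self-contained sketch of the queue-based approach with both corrections applied, run on the example graph above (vertices A-D mapped to 0-3; all class and method names here are illustrative, not the OP's):

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

public class QueueRelaxation {
    record Edge(int vertex, int distance) {}

    // OP's algorithm with both fixes: store the improved distance (Problem 1)
    // and re-enqueue a vertex whenever its distance drops (Problem 2), so its
    // neighbors get re-examined. Terminates for non-negative edge weights.
    static Map<Integer, Integer> findShortest(Map<Integer, List<Edge>> graph, int source) {
        Map<Integer, Integer> dist = new HashMap<>();
        Queue<Integer> queue = new ArrayDeque<>();
        dist.put(source, 0);
        queue.add(source);
        while (!queue.isEmpty()) {
            int v = queue.poll();
            int dv = dist.get(v);
            for (Edge e : graph.getOrDefault(v, List.of())) {
                int nd = dv + e.distance();
                if (nd < dist.getOrDefault(e.vertex(), Integer.MAX_VALUE)) {
                    dist.put(e.vertex(), nd);
                    queue.add(e.vertex());   // re-examine: its neighbors may improve too
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // A->B (1), A->C (10), B->C (1), B->D (10), C->D (1)
        Map<Integer, List<Edge>> g = Map.of(
                0, List.of(new Edge(1, 1), new Edge(2, 10)),
                1, List.of(new Edge(2, 1), new Edge(3, 10)),
                2, List.of(new Edge(3, 1)));
        System.out.println(findShortest(g, 0).get(3)); // prints 3
    }
}
```

Without the re-enqueue, D would keep the weight-11 estimate found via B; with it, the improved path to C propagates on to D, giving 3.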
High level response
With these modifications I think this approach is basically sound, but you should be careful about the computational complexity.
Dijkstra will only need to go round the main loop V times (where V is the number of vertices in the graph), while your algorithm may need many more loops for certain graphs.
You will still get the correct answer, but it may take longer.
Although the worst-case complexity will be much worse than Dijkstra, I would be interested in how well it performs in practice. My guess is that it will work well for sparse almost tree-like graphs, but less well for dense graphs.