Modification to Dijkstra, verification needed - java

Dijkstra's algorithm has a step that says "choose the node with the shortest path". I realized that this step is unnecessary if we don't throw a node out of the graph/queue. As far as I know this works, with no known disadvantage. Here is the code. Please tell me if it fails, and if it does, how? [EDIT: THIS CODE IS TESTED AND WORKS WELL, BUT THERE IS A CHANCE MY TEST CASES WERE NOT EXHAUSTIVE, THUS POSTING IT ON STACK OVERFLOW]
public Map<Integer, Integer> findShortest(int source) {
    final Map<Integer, Integer> vertexMinDistance = new HashMap<Integer, Integer>();
    final Queue<Integer> queue = new LinkedList<Integer>();
    queue.add(source);
    vertexMinDistance.put(source, 0);
    while (!queue.isEmpty()) {
        source = queue.poll();
        List<Edge> adjlist = graph.getAdj(source);
        int sourceDistance = vertexMinDistance.get(source);
        for (Edge edge : adjlist) {
            int adjVertex = edge.getVertex();
            if (vertexMinDistance.containsKey(adjVertex)) {
                int vertexDistance = vertexMinDistance.get(adjVertex);
                if (vertexDistance > (sourceDistance + edge.getDistance())) {
                    // previous bug
                    // vertexMinDistance.put(adjVertex, vertexDistance);
                    vertexMinDistance.put(adjVertex, sourceDistance + edge.getDistance());
                }
            } else {
                queue.add(adjVertex);
                vertexMinDistance.put(adjVertex, edge.getDistance());
            }
        }
    }
    return vertexMinDistance;
}

Problem 1
I think there is a bug in the code where it says:
int vertexDistance = vertexMinDistance.get(adjVertex);
if (vertexDistance > (sourceDistance + edge.getDistance())) {
    vertexMinDistance.put(adjVertex, vertexDistance);
}
because this has no effect (vertexMinDistance for adjVertex is set back to its original value).
Better would be something like:
int vertexDistance = vertexMinDistance.get(adjVertex);
int newDistance = sourceDistance + edge.getDistance();
if (vertexDistance > newDistance) {
    vertexMinDistance.put(adjVertex, newDistance);
}
Problem 2
You also need to add the adjVertex into the queue using something like:
int vertexDistance = vertexMinDistance.get(adjVertex);
int newDistance = sourceDistance + edge.getDistance();
if (vertexDistance > newDistance) {
    vertexMinDistance.put(adjVertex, newDistance);
    queue.add(adjVertex);
}
If you don't do this then you will get an incorrect answer for graphs such as:
A->B (1)
A->C (10)
B->C (1)
B->D (10)
C->D (1)
The correct path is A->B->C->D, with weight 3, but without the modification your algorithm will return a longer distance for D: once the shorter path to C is found, C is never re-examined, so the improvement never propagates to D.
High level response
With these modifications I think this approach is basically sound, but you should be careful about the computational complexity.
Dijkstra will only need to go round the main loop V times (where V is the number of vertices in the graph), while your algorithm may need many more loops for certain graphs.
You will still get the correct answer, but it may take longer.
Although the worst-case complexity will be much worse than Dijkstra, I would be interested in how well it performs in practice. My guess is that it will work well for sparse almost tree-like graphs, but less well for dense graphs.
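For reference, here is the whole method with both fixes applied, plus a third tweak: the posted code seeds a newly discovered vertex with edge.getDistance() alone, where it should be sourceDistance + edge.getDistance(). This is only a sketch (Edge and graph.getAdj are taken from the question); the resulting algorithm is essentially the queue-based variant of Bellman-Ford, sometimes called SPFA:

public Map<Integer, Integer> findShortest(int source) {
    final Map<Integer, Integer> vertexMinDistance = new HashMap<Integer, Integer>();
    final Queue<Integer> queue = new LinkedList<Integer>();
    queue.add(source);
    vertexMinDistance.put(source, 0);
    while (!queue.isEmpty()) {
        int current = queue.poll();
        int currentDistance = vertexMinDistance.get(current);
        for (Edge edge : graph.getAdj(current)) {
            int adjVertex = edge.getVertex();
            int newDistance = currentDistance + edge.getDistance();
            Integer known = vertexMinDistance.get(adjVertex);
            if (known == null || known > newDistance) {
                vertexMinDistance.put(adjVertex, newDistance);
                queue.add(adjVertex); // re-queue so the improvement propagates to its neighbours
            }
        }
    }
    return vertexMinDistance;
}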


find the most valuable vertex among all reachable vertices

I have a directed graph G=(V,E) where each vertex v has two properties:
r, indicating its worthiness;
m, indicating the highest r among all vertices v' reachable from v (including v itself).
I need to find m for all vertices in O(|V|+|E|) time.
For example,
Initial G
A(r = 1, m = 1) → B(r = 3, m = 3) ← C(r = 2, m = 2)
↓
D(r = 4, m = 4)
has to be
A(r = 1, m = 4) → B(r = 3, m = 3) ← C(r = 2, m = 3)
↓
D(r = 4, m = 4)
I searched SO and found some answers here, but one of them doesn't stay within the time bound and another is explained very badly. Is there a simpler idea?
In practice, I would use the algorithm from Ehsan's answer, but it's not quite O(V+E). If you really need that complexity, then you can do this (a sketch follows the steps):
Divide the graph into strongly-connected components using, e.g., Tarjan's algorithm. This is O(V+E).
Make a graph of the SCCs. Every node in an SCC is reachable from every other one, so the node for each SCC in the new graph gets the highest r value in the SCC. You can do this in O(V+E) too.
The graph of SCCs is acyclic, so you can do a topological sort. All the popular algorithms for that are O(V+E).
Process the SCC nodes in reverse topological order, calculating each m from neighbors. Because all the edges point from later to earlier nodes, the inputs for each node will be finished by the time you get to it. This is O(V+E) too.
Go through the original graph, setting every node's m to the value for its component in the SCC graph. O(V)
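For illustration, here is a compact sketch of how steps 1-5 can be folded into a single pass, relying on the fact that Tarjan's algorithm closes SCCs in reverse topological order of the condensation. The int[][] adjacency lists and the r array are my assumed input format, not something from the question:

import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

class MaxReach {
    private final int[][] adj;      // adj[v] = successors of v
    private final int[] r;          // worthiness of each vertex
    private final int[] index, lowlink, m;
    private final boolean[] onStack;
    private final Deque<Integer> stack = new ArrayDeque<>();
    private int counter = 0;

    MaxReach(int[][] adj, int[] r) {
        int n = adj.length;
        this.adj = adj;
        this.r = r;
        this.index = new int[n];
        this.lowlink = new int[n];
        this.m = new int[n];
        this.onStack = new boolean[n];
        Arrays.fill(index, -1);
    }

    int[] compute() {
        for (int v = 0; v < adj.length; v++)
            if (index[v] == -1) dfs(v);
        return m;
    }

    private void dfs(int v) {
        index[v] = lowlink[v] = counter++;
        stack.push(v);
        onStack[v] = true;
        m[v] = r[v];
        for (int w : adj[v]) {
            if (index[w] == -1) {            // tree edge: recurse first
                dfs(w);
                lowlink[v] = Math.min(lowlink[v], lowlink[w]);
            } else if (onStack[w]) {         // w is in a still-open SCC, i.e. v's own SCC
                lowlink[v] = Math.min(lowlink[v], index[w]);
            }
            m[v] = Math.max(m[v], m[w]);     // m[w] is either final, or a safe partial value within v's SCC
        }
        if (lowlink[v] == index[v]) {        // v is the root of its SCC:
            int w;                           // stamp the root's m on every member
            do {
                w = stack.pop();
                onStack[w] = false;
                m[w] = m[v];
            } while (w != v);
        }
    }
}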
Use the following O(E + V*log(V)) algorithm:
- Reverse all edge directions
- while |V| > 0 do
    - find the vertex v with maximum r among the remaining nodes in V
    - run a DFS from that node, find all reachable nodes, and update their m to r(v)
    - remove all updated nodes from V
The time complexity of this algorithm is O(V*log(V) + E), as you requested.
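A sketch of that idea (my formulation, assuming vertices numbered 0..n-1 and an adjacency-list array for the already-reversed graph; uses java.util.Arrays, ArrayDeque, Deque). Sorting once up front replaces the repeated "find max", and is where the V*log(V) term comes from:

static int[] maxReachable(int[][] reversedAdj, int[] r) {
    int n = reversedAdj.length;
    Integer[] order = new Integer[n];
    for (int i = 0; i < n; i++) order[i] = i;
    Arrays.sort(order, (a, b) -> Integer.compare(r[b], r[a])); // highest r first: O(V*log(V))
    int[] m = new int[n];
    boolean[] settled = new boolean[n];
    Deque<Integer> stack = new ArrayDeque<>();
    for (int v : order) {
        if (settled[v]) continue;      // v already reaches a vertex with higher r
        settled[v] = true;
        stack.push(v);
        while (!stack.isEmpty()) {     // DFS over reversed edges visits exactly the
            int u = stack.pop();       // vertices that can reach v in the original graph
            m[u] = r[v];
            for (int w : reversedAdj[u]) {
                if (!settled[w]) {
                    settled[w] = true;
                    stack.push(w);
                }
            }
        }
    }
    return m;                          // each vertex is settled once: O(V+E) over all DFS runs
}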
How to solve the problem?
Reachable vertices in a directed graph
Which vertices can a given vertex visit?
Which vertices can visit the given vertex?
We are dealing with directed graphs, so we need to find strongly connected components in order to answer questions like the above efficiently.
Once we know the strongly connected components, we can deal with the highest worthiness part.
In every strongly connected component, what is the highest worthiness value? Update accordingly.
Both steps are possible in O(V + E). With a proper thought process, I believe both steps can be done in a single pass.
How to find strongly connected components?
Kosaraju's algorithm
Tarjan's algorithm
Path-based strong component algorithm
If you are looking for something simple, go for Kosaraju's algorithm. To me, it is the simplest of the above three.
If you are looking for efficiency, Kosaraju's algorithm takes two depth-first traversals but the other two algorithms accomplish the same within 1 depth-first traversal.
A Space-Efficient Algorithm for Finding Strongly Connected Components mentions that Tarjan's algorithm requires at most v(2 + 5w) bits of storage, where w is the machine's word size. The improvement mentioned in the paper reduces the space requirement to v(1 + 3w) bits in the worst case.
Implementation:
Apparently, you are looking for some type of implementation.
For the three ways of finding strongly connected components mentioned above, you can find Java implementations here.
There are multiple path-based strong component algorithms. To my knowledge, Gabow's algorithm is much simpler to understand than Tarjan's and is the latest of the path-based strong component algorithms. You can find a Java implementation of Gabow's algorithm here.
I am adding this answer, although there are correct, upvoted answers before me, only because you tagged java and python. I will add a Java implementation now, and if needed a Python implementation will follow.
The algorithm
This is a tweak on the classic topological sort:
foreach vertex:
    foreach neighbour:
        if m is not yet calculated, calculate it
    take the maximum of yourself and your neighbours;
    mark yourself as visited, and if asked again for m, return the calculated value
It is implemented in calculateMostValuableVertex.
Time complexity
1. foreach vertex: O(|V|)
2. foreach edge: O(|E|) in total, as the loop eventually goes over each edge once.
Note that m is computed exactly once per vertex, never twice, since each vertex is checked before the calculation.
Therefore the time complexity of this algorithm is O(|V| + |E|).
Assumptions
This solution relies heavily on the fact that HashMap in Java performs operations such as add/update in O(1). That is true on average; if that is not enough, the same idea can be implemented with plain arrays, which makes the solution O(|V|+|E|) in the worst case.
Implementation
Let's first define the basic classes:
Vertex:
class Vertex {
    String label;
    public int r; // Worthiness
    public int m; // Highest worthiness reachable from this vertex.

    Vertex(String label, int r, int m) {
        this.label = label;
        this.r = r;
        this.m = m;
    }

    // m is mutated while vertices are used as HashMap keys, so hashCode and
    // equals must be based only on the immutable label; otherwise the map
    // lookups break as soon as m changes.
    @Override
    public int hashCode() {
        return label == null ? 0 : label.hashCode();
    }

    @Override
    public boolean equals(final Object obj) {
        if (this == obj)
            return true;
        if (obj == null || getClass() != obj.getClass())
            return false;
        final Vertex other = (Vertex) obj;
        return label == null ? other.label == null : label.equals(other.label);
    }

    @Override
    public String toString() {
        return "Vertex{" +
                "label='" + label + '\'' +
                ", r=" + r +
                ", m=" + m +
                '}';
    }
}
It is important to define the methods equals and hashCode so that the hash computations work as expected, and to base them only on fields that do not change while the vertex is used as a key (here, the label), since m is updated during the computation.
Graph:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

class Graph {
    private final Map<Vertex, List<Vertex>> adjVertices = new HashMap<>();
    private final Map<String, Vertex> nameToVertex = new HashMap<>();
    private final List<Vertex> vertices = new ArrayList<>();

    void addVertex(String label, int r, int m) {
        Vertex vertex = new Vertex(label, r, m);
        adjVertices.putIfAbsent(vertex, new ArrayList<>());
        nameToVertex.putIfAbsent(label, vertex);
        vertices.add(vertex);
    }

    void addEdge(String label1, String label2) {
        adjVertices.get(nameToVertex.get(label1)).add(nameToVertex.get(label2));
    }

    public void calculateMostValuableVertex() {
        Map<Vertex, Boolean> visitedVertices = new HashMap<>();
        for (Vertex vertex : vertices) {
            visitedVertices.put(vertex, false);
        }
        for (Vertex vertex : vertices) {
            if (visitedVertices.get(vertex)) {
                continue;
            }
            calculateMostValuableVertexInternal(vertex, visitedVertices);
        }
    }

    public void calculateMostValuableVertexInternal(Vertex vertex, Map<Vertex, Boolean> visitedVertices) {
        List<Vertex> neighbours = adjVertices.get(vertex);
        visitedVertices.put(vertex, true);
        int max = vertex.r;
        for (Vertex neighbour : neighbours) {
            if (!visitedVertices.get(neighbour)) {
                calculateMostValuableVertexInternal(neighbour, visitedVertices);
            }
            max = Math.max(max, neighbour.m);
        }
        vertex.m = max;
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder();
        Iterator<Map.Entry<Vertex, List<Vertex>>> iter = adjVertices.entrySet().iterator();
        while (iter.hasNext()) {
            Map.Entry<Vertex, List<Vertex>> entry = iter.next();
            sb.append(entry.getKey());
            sb.append('=').append('"');
            sb.append(entry.getValue());
            sb.append('"');
            if (iter.hasNext()) {
                sb.append(',').append('\n');
            }
        }
        return "Graph{" +
                "adjVertices=\n" + sb +
                '}';
    }
}
Finally, to run the above logic, you can do:
Graph g = new Graph();
g.addVertex("A", 1, 1);
g.addVertex("B", 3, 3);
g.addVertex("C", 2, 2);
g.addVertex("D", 4, 4);
g.addEdge("A", "B");
g.addEdge("C", "B");
g.addEdge("A", "D");
g.calculateMostValuableVertex();
System.out.println(g);
The output of the above is:
Graph{adjVertices=
Vertex{label='A', r=1, m=4}="[Vertex{label='B', r=3, m=3}, Vertex{label='D', r=4, m=4}]",
Vertex{label='D', r=4, m=4}="[]",
Vertex{label='B', r=3, m=3}="[]",
Vertex{label='C', r=2, m=3}="[Vertex{label='B', r=3, m=3}]"}
as expected. It handles this cyclic example as well, though note that with arbitrary cycles a vertex may read a neighbour's m before that neighbour's cycle has been fully processed, which is where the SCC-based answers above are more robust. For example, the output of:
Graph g = new Graph();
g.addVertex("A", 1, 1);
g.addVertex("B", 3, 3);
g.addVertex("C", 2, 2);
g.addVertex("D", 4, 4);
g.addVertex("E", 5, 5);
g.addVertex("F", 6, 6);
g.addVertex("G", 7, 7);
g.addEdge("A", "B");
g.addEdge("C", "B");
g.addEdge("A", "D");
g.addEdge("A", "E");
g.addEdge("E", "F");
g.addEdge("F", "G");
g.addEdge("G", "A");
g.calculateMostValuableVertex();
System.out.println(g);
is:
Graph{adjVertices=
Vertex{label='A', r=1, m=7}="[Vertex{label='B', r=3, m=3}, Vertex{label='D', r=4, m=4}, Vertex{label='E', r=5, m=7}]",
Vertex{label='B', r=3, m=3}="[]",
Vertex{label='C', r=2, m=3}="[Vertex{label='B', r=3, m=3}]",
Vertex{label='D', r=4, m=4}="[]",
Vertex{label='E', r=5, m=7}="[Vertex{label='F', r=6, m=7}]",
Vertex{label='F', r=6, m=7}="[Vertex{label='G', r=7, m=7}]",
Vertex{label='G', r=7, m=7}="[Vertex{label='A', r=1, m=7}]"}
I implemented my answer from the linked question in Python. The lines that don't reference minreach closely follow Wikipedia's description of Tarjan's SCC algorithm.
import random


def random_graph(n):
    return {
        i: {random.randrange(n) for j in range(random.randrange(n))} for i in range(n)
    }


class SCC:
    def __init__(self, graph):
        self.graph = graph
        self.index = {}
        self.lowlink = {}
        self.stack = []
        self.stackset = set()
        self.minreach = {}
        self.components = []

    def dfs(self, v):
        self.lowlink[v] = self.index[v] = len(self.index)
        self.stack.append(v)
        self.stackset.add(v)
        self.minreach[v] = v
        for w in self.graph[v]:
            if w not in self.index:
                self.dfs(w)
                self.lowlink[v] = min(self.lowlink[v], self.lowlink[w])
            elif w in self.stackset:
                self.lowlink[v] = min(self.lowlink[v], self.index[w])
            self.minreach[v] = min(self.minreach[v], self.minreach[w])
        if self.lowlink[v] == self.index[v]:
            component = set()
            while True:
                w = self.stack.pop()
                self.stackset.remove(w)
                self.minreach[w] = self.minreach[v]
                component.add(w)
                if w == v:
                    break
            self.components.append(component)

    def scc(self):
        for v in self.graph:
            if v not in self.index:
                self.dfs(v)
        return self.components, self.minreach


if __name__ == "__main__":
    g = random_graph(6)
    print(g)
    components, minreach = SCC(g).scc()
    print(components)
    print(minreach)

Best way to remove uninteresting/losing lines in chess, in time-based solution?

I'm creating a chess engine as practice in Java. I know it's not recommended due to speed issues, but I'm doing it just for practice.
After implementing minimax with alpha-beta pruning, I thought of implementing a time limit for finding the score of a given move.
Here is the code
private int minimax(MoveNode node, MoveNodeType nodeType, int alpha, int beta, Side side, int depth) throws Exception {
    // isInterestingLine(prevscores, node, side);
    if (depth <= 0) {
        count++;
        return node.evaluateBoard(side);
    }
    // Generate child nodes if we haven't.
    if (node.childNodes == null || node.childNodes.size() == 0) {
        node.createSingleChild();
    }
    if (nodeType == MoveNodeType.MAX) {
        int bestValue = -1000;
        for (int i = 0; i < node.childNodes.size(); i++) {
            if (node.childNodes.get(i) == null) continue;
            int value = minimax(node.childNodes.get(i), MoveNodeType.MIN, alpha, beta, side, depth - 1);
            bestValue = Math.max(bestValue, value);
            alpha = Math.max(alpha, bestValue);
            if (beta <= alpha) {
                break;
            }
            node.createSingleChild();
        }
        // reCalculateScore();
        return bestValue;
    } else {
        int bestValue = 1000;
        for (int i = 0; i < node.childNodes.size(); i++) {
            if (node.childNodes.get(i) == null) continue;
            int value = minimax(node.childNodes.get(i), MoveNodeType.MAX, alpha, beta, side, depth - 1);
            bestValue = Math.min(bestValue, value);
            beta = Math.min(beta, bestValue);
            if (beta <= alpha) {
                break;
            }
            node.createSingleChild();
        }
        // reCalculateScore();
        return bestValue;
    }
}
and the driver code.
void evaluateMove(Move mv, Board brd) throws Exception {
    System.out.println("Started Comparing! " + this.tree.getRootNode().getMove().toString());
    minmaxThread = new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                bestMoveScore = minimax(tree.getRootNode(), MoveNodeType.MIN, -1000, 1000, side, MAX_DEPTH);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    });
    minmaxThread.start();
}
This is how I implemented the time limit.
long time = System.currentTimeMillis();
moveEvaluator.evaluateMove(move, board.clone());
while ((System.currentTimeMillis() - time) < secToCalculate * 1000 && moveEvaluator.minmaxThread.isAlive()) {
    // busy-wait until the time budget expires or the search thread finishes
}
System.out.println("Time completed! score = " + moveEvaluator.bestMoveScore + " move = " + move + " depth = " + moveEvaluator.searchDepth);
callback.callback(move, moveEvaluator.bestMoveScore);
Now, here is the problem: it only calculated Bb7, because with depth-first search the time runs out before any other line is even examined.
So I want a way to calculate something like the following within a time limit.
Here are a few solutions I thought of.
Implementing an isInteresting() function, which takes all the previous scores and checks whether the current line is interesting/winning; if and only if it is, the next child nodes are calculated.
e.g.
[0,0,0,0,0,0] can be interpreted as a drawn line.
[-2,-3,-5,-2,-1] can be interpreted as a losing line.
Searching to a small depth first and then eliminating all losing lines.
for (int i = min_depth; i <= max_depth; i++) {
    scores = [];
    for (Node childnode : NodesToCalculate) {
        scores.push(minimax(childnode, type, alpha, beta, side, i));
    }
    // decide which child node to calculate for next iterations.
}
But neither solution is perfect or efficient: in the first we are just making a guess, and in the second we calculate some nodes more than once.
Is there a better way to do this?
The solution to this problem used by every chess engine is iterative deepening.
Instead of searching to a fixed depth (MAX_DEPTH in your example) you start by searching to a depth of one, then when this search is done you start again with a depth of two and you continue to increase depth like this until you are out of time. When you are out of time you can play the move of the last search that was completed.
It may seem like a lot of time is spent on lower-depth iterations that are later superseded by deeper searches, and that this time is completely lost, but in practice it isn't. Since searching to depth N takes so much longer than searching to depth N-1, the time spent on the lower-depth searches is always much less than the time spent on the last (deepest) search.
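As a back-of-the-envelope illustration (my numbers, not from the answer): with a branching factor of around 30, a depth-N search visits roughly 30 times as many nodes as a depth-(N-1) search, so the searches at depths 1 through 9 together cost only about 1/29 of the depth-10 search by itself (a geometric series: 30^1 + ... + 30^9 ≈ 30^10 / 29).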
If your engine uses a transposition table, the data from previous iterations will help the later ones. The alpha-beta algorithm's performance is very sensitive to the order in which moves are searched; the time saved by alpha-beta over plain minimax is optimal when the best move is searched first. If you did a depth N-1 search before the depth N search, the transposition table will probably contain a good guess of the best move for most positions, which can then be searched first.
In practice, in an engine that uses a transposition table and orders moves at the root based on the previous iteration, it's faster to use iterative deepening than not to. For example, it's faster to do a depth 1 search, then a depth 2 search, then a depth 3 search, and so on up to, say, a depth 10 search, than it is to do a depth 10 search right away. Plus you get the option to stop the search whenever you want and still have a move to play.
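A minimal sketch of such a driver loop on top of the question's minimax; Move and searchToDepth are hypothetical stand-ins (not the poster's API) for a fixed-depth search that returns null when the deadline expires mid-iteration:

Move iterativeDeepening(MoveNode root, long millisBudget) {
    long deadline = System.currentTimeMillis() + millisBudget;
    Move best = null; // result of the deepest fully completed iteration
    for (int depth = 1; System.currentTimeMillis() < deadline; depth++) {
        Move candidate = searchToDepth(root, depth, deadline); // hypothetical fixed-depth wrapper around minimax
        if (candidate == null) {
            break; // this iteration was cut off: discard it and keep the previous result
        }
        best = candidate;
    }
    return best; // always a playable move once depth 1 has completed
}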

Find negative cycles in a directed edge-weighted graph using JGrapht

Is it possible to use JGrapht to find negative cycles in a directed edge-weighted graph? I've looked through the Javadocs and found that I can use a CycleDetector to detect cycles, but not specifically negative cycles. CycleDetector finds cycles, but it can't tell you whether they're negative; you'd have to explore them some other way. Thanks!
You can try to use BellmanFordShortestPath, but it will not find the loop if you look for a path from a vertex to itself, because every vertex is implicitly connected to itself with weight 0.
DefaultDirectedWeightedGraph<String, DefaultWeightedEdge> graph = new DefaultDirectedWeightedGraph<>(DefaultWeightedEdge.class);
...
BellmanFordShortestPath<String, DefaultWeightedEdge> algorithm = new BellmanFordShortestPath<>(graph);
GraphPath<String, DefaultWeightedEdge> path = algorithm.getPath(node1, node1);
int length = path.getLength();    // returns 0
double weight = path.getWeight(); // returns 0.0
The best I can find are the algorithms in org.jgrapht.alg.cycle, which give you all cycles and then you have to calculate the total weight of the path around the cycle.
private boolean hasNegativeLoop(DefaultDirectedWeightedGraph<String, DefaultWeightedEdge> graph) {
    SzwarcfiterLauerSimpleCycles<String, DefaultWeightedEdge> cycleDetector = new SzwarcfiterLauerSimpleCycles<>(graph);
    List<List<String>> cycles = cycleDetector.findSimpleCycles();
    for (List<String> cycle : cycles) {
        double cycleWeight = getCycleWeight(graph, cycle);
        if (cycleWeight < 0) return true;
    }
    return false;
}

private double getCycleWeight(DefaultDirectedWeightedGraph<String, DefaultWeightedEdge> graph, List<String> cycle) {
    double totalWeight = 0;
    for (int i = 1; i < cycle.size(); i++) {
        double weight = graph.getEdgeWeight(graph.getEdge(cycle.get(i - 1), cycle.get(i)));
        totalWeight += weight;
    }
    double weightBackToStart = graph.getEdgeWeight(graph.getEdge(cycle.get(cycle.size() - 1), cycle.get(0)));
    return totalWeight + weightBackToStart;
}
This is far less efficient than Bellman-Ford negative-cycle detection (it enumerates every simple cycle), but it could serve as a reference for your implementation.
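For completeness, a small usage sketch of the helper above (vertex names and weights are made up for illustration; graph construction uses the standard JGraphT API):

DefaultDirectedWeightedGraph<String, DefaultWeightedEdge> g =
        new DefaultDirectedWeightedGraph<>(DefaultWeightedEdge.class);
g.addVertex("a");
g.addVertex("b");
g.addVertex("c");
g.setEdgeWeight(g.addEdge("a", "b"), 1.0);
g.setEdgeWeight(g.addEdge("b", "c"), -2.0);
g.setEdgeWeight(g.addEdge("c", "a"), -1.0); // cycle a->b->c->a has total weight -2
System.out.println(hasNegativeLoop(g));     // expected: true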
In general you could use BellmanFordShortestPath to check for negative cycles in a graph, although the non-existence of a shortest path only tells you that at least one negative cycle exists. I haven't had a proper look at the BellmanFordShortestPath implementation in JGraphT, so I can't provide you with code for that.
Other than that, there is a neat paper linked in https://cs.stackexchange.com/questions/6919/getting-negative-cycle-using-bellman-ford.
A working link to the paper should be:
https://www.semanticscholar.org/paper/Negative-Weight-Cycle-Algorithms-Huang/dc1391024d74f736aa7a9c24191a35e822589516/pdf
So if all else fails, you could at least implement a working algorithm yourself, using a JGraphT graph like DefaultDirectedWeightedGraph.
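A sketch of that do-it-yourself route: textbook Bellman-Ford relaxations run directly over a JGraphT graph, with every distance initialised to 0 (an implicit zero-weight super-source), so a negative cycle anywhere in the graph keeps producing relaxations. The method name is mine; it assumes java.util.Map/HashMap plus the JGraphT types used above:

static boolean hasNegativeCycle(DefaultDirectedWeightedGraph<String, DefaultWeightedEdge> g) {
    Map<String, Double> dist = new HashMap<>();
    for (String v : g.vertexSet()) dist.put(v, 0.0); // implicit super-source reaching every vertex
    int n = g.vertexSet().size();
    for (int i = 0; i < n; i++) {                    // the n-th pass doubles as the cycle check
        boolean relaxed = false;
        for (DefaultWeightedEdge e : g.edgeSet()) {
            String s = g.getEdgeSource(e);
            String t = g.getEdgeTarget(e);
            double candidate = dist.get(s) + g.getEdgeWeight(e);
            if (candidate < dist.get(t)) {
                dist.put(t, candidate);
                relaxed = true;
            }
        }
        if (!relaxed) return false;                  // settled early: no negative cycle
    }
    return true;                                     // still relaxing after |V|-1 passes
}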

Bellman-Ford improvement: does it work?

I'm trying to improve the Bellman-Ford algorithm's performance, and I would like to know if the improvement is correct.
I run the relaxing part not V-1 times but V times, and I introduced a boolean variable that is set to true if any relaxation happened during an iteration of the outer loop. If no relaxation happened at the n-th iteration, where n <= V, the method returns from the loop with the shortest paths; but if it still relaxes at the n = V iteration, that means we have a negative cycle.
I thought this might improve the runtime, since sometimes we don't have to iterate V-1 times to find the shortest paths and can return earlier, and it's also more elegant than checking for the cycle with a separate block of code.
AdjacencyListALD graph;
int[] distTo;
int[] edgeTo;

public BellmanFord(AdjacencyListALD g) {
    graph = g;
}

public int findSP(int source, int dest) {
    // initialization
    distTo = new int[graph.SIZE];
    edgeTo = new int[graph.SIZE];
    for (int i = 0; i < graph.SIZE; i++) {
        distTo[i] = Integer.MAX_VALUE;
    }
    distTo[source] = 0;
    // relaxing V-1 times + 1 for checking negative cycle = V times
    for (int i = 0; i < graph.SIZE; i++) {
        boolean hasRelaxed = false;
        for (int j = 0; j < graph.SIZE; j++) {
            if (distTo[j] == Integer.MAX_VALUE) continue; // skip unreached vertices: MAX_VALUE + w would overflow
            for (int x = 0; x < graph.sources[j].length; x++) {
                int s = j;
                int d = graph.sources[j].get(x).label;
                int w = graph.sources[j].get(x).weight;
                if (distTo[d] > distTo[s] + w) {
                    distTo[d] = distTo[s] + w;
                    hasRelaxed = true;
                }
            }
        }
        if (!hasRelaxed)
            return distTo[dest];
    }
    System.out.println("Negative cycle detected");
    return -1;
}
Good comments on the need for testing. That's a given. But it doesn't address the underlying question, whether the OP's modifications to Bellman-Ford constitute an improvement to the algorithm. And the answer is, yes, this is actually a well-known improvement, as G. Bach pointed out in comments.
The OP's observation is that if, in any relaxation iteration, nothing relaxes, then there will be no changes in subsequent iterations and we can therefore just stop. Absolutely correct. There are no outside influences on the values assigned to the vertices. The only thing updating those values is the relaxation step itself. If it finds nothing to do on any iteration there is no way that something to do will materialize out of the aether. Ergo we can terminate.
This doesn't affect the complexity of the algorithm, nor does it help with worst case graphs, but it can reduce actual running time in practice.
As for running the relaxation one more time (|V| times rather than the usual |V|-1), this is just another way of stating the check for negative cycles that follows the relaxation step. It's just another way of saying that, when we terminate by running |V|-1 relaxation iterations, we need to see if any improvement can still be calculated, which reveals a negative cycle.
Bottom line: OP's approach is sound. Now, yes, test the code.

Optimization: replace for loop with ListIterator

It's my first time working on quite a big project, and I've been asked to obtain the best performance.
So I thought of replacing my for loops with a ListIterator, because I've got around 180 loops which call list.get(i) on lists with about 5000 elements.
So I've got two questions.
1) Are those two snippets equivalent? I mean, do they produce the same output? If not, how can I correct the ListIterator version?
ListIterator<Corsa> ridesIterator = rides.listIterator();
while (ridesIterator.hasNext()) {
    ridesIterator.next();
    Corsa previous = ridesIterator.previous(); // rides.get(i-1)
    Corsa current = ridesIterator.next();      // rides.get(i)
    if (current.getOP() < d.getFP() && previous.getOA() > d.getIP() && current.wait(previous) > DP) {
        doSomething();
        break;
    }
}
__
for (int i = 1; i < rides.size(); i++) {
    if (rides.get(i).getOP() < d.getFP() && rides.get(i - 1).getOA() > d.getIP() && rides.get(i).getOP() - rides.get(i - 1).getOA() > DP) {
        doSomething();
        break;
    }
}
2) What would the first snippet look like if I had something like this? (i and its exit condition changed)
for (int i = 0; i < rides.size() - 1; i++) {
    if (rides.get(i).getOP() < d.getFP() && rides.get(i + 1).getOA() > d.getIP() && rides.get(i).getOP() - rides.get(i + 1).getOA() > DP) {
        doSomething();
        break;
    }
}
I'm asking because it's the first time that I'm using a ListIterator and I can't try it now!
EDIT: I'm not using an ArrayList, it's a custom List based on a LinkedList
EDIT 2: I'm adding some more info.
I can't use a caching system, because my data changes on every iteration and managing the cache would be hard, as I'd have to deal with inconsistent data.
I can't even merge some of these loops into one big loop, as they live in different methods and need to do a lot of different things.
So, sticking to this particular case, what do you think is the best practice?
Is ListIterator the best way to deal with my case? And how can I use a ListIterator if my for loop runs between 0 and size-1?
If you know the maximum size, you will get the best performance if you give up collections such as ArrayList and use plain arrays instead.
So instead of creating an ArrayList<Corsa> with 5000 elements, do Corsa[] rides = new Corsa[5000]. Instead of hard-coding 5000, declare it as, e.g., final static int MAX_RIDES = 5000 to avoid a magic number in the code. Then iterate with a normal for loop, referring to rides[i].
Generally, if you are after performance, you should write Java as if it were C/C++ (where you can, of course). The code is less object-oriented and less beautiful, but it's fast. Remember to optimize only at the end, when you are sure you have found a bottleneck; otherwise your efforts are futile and only make the code less readable and maintainable. Also use a profiler to make sure your changes are in fact upgrades, not downgrades.
Another downside of using a ListIterator is that it allocates memory internally, so GC (the garbage collector) will wake up more often, which can also affect overall performance.
No, they do not do the same thing.
while (ridesIterator.hasNext()) {
    ridesIterator.next();
    Corsa previous = ridesIterator.previous(); // rides.get(i-1)
    Corsa current = ridesIterator.next();      // rides.get(i)
The variables previous and current would contain the same "Corsa" value; see the ListIterator documentation for details (iterators are positioned "in between" elements).
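A tiny demonstration of that cursor model (standard java.util.ListIterator behaviour, not taken from the question's code):

List<String> xs = new ArrayList<>(Arrays.asList("a", "b", "c"));
ListIterator<String> it = xs.listIterator();
System.out.println(it.next());     // "a" - the cursor is now between "a" and "b"
System.out.println(it.previous()); // "a" again - previous() returns the element before the cursor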
The correct code would look as follows:
while (ridesIterator.hasNext()) {
    Corsa previous = ridesIterator.next(); // rides.get(i-1)
    if (!ridesIterator.hasNext())
        break; // we are already at the last element
    Corsa current = ridesIterator.next();  // rides.get(i)
    ridesIterator.previous();              // step back one, to start correctly next time
    // ... comparison and doSomething() as before
}
The code would actually look exactly the same, only the interpretation (as shown in the comments) would be different:
while (ridesIterator.hasNext()) {
    Corsa previous = ridesIterator.next(); // rides.get(i)
    if (!ridesIterator.hasNext())
        break; // we are already at the last element
    Corsa current = ridesIterator.next();  // rides.get(i+1)
    ridesIterator.previous();              // step back one, to start correctly next time
    // ... comparison and doSomething() as before
}
From a (premature?) optimization viewpoint the ListIterator implementation is better.
LinkedList is a doubly-linked list, which means each element links to both its predecessor (previous) and its successor (next). So the iterator does about 3 dereferences per loop iteration. => 3*N
Each get(i) has to walk through all preceding elements to reach index i, so on average N/4 dereferences per call. (You'd think N/2, but LinkedList starts from whichever end of the list is closer.) With two get() calls per iteration => 2 * N * N/4 == N^2/2
Here are some suggestions, hopefully one or two will be applicable to your situation.
Try to do only one rides.get(x) per loop.
Cache method results in local variables as appropriate for your code.
In some cases the compiler can optimize multiple calls to the same thing, doing it just once instead, but not always, for many subtle reasons. As a programmer, if you know for a fact that these should deliver the same values, cache them in local variables.
For example,
int sz = rides.size();
float dFP = d.getFP(); // wasn't sure of the type, so just called it float
float dIP = d.getIP();
Corsa lastRide = rides.get(0);
for (int i = 1; i < sz; i++) {
    Corsa r = rides.get(i);
    float rOP = r.getOP();
    if (rOP < dFP) {
        float lastRideOA = lastRide.getOA(); // only get OA if rOP < dFP
        if (lastRideOA > dIP && rOP - lastRideOA > DP) {
            doSomething();
            // maybe break;
        }
    }
    lastRide = r;
}
These are optimizations that may not work in all cases. For example, if your doSomething expands the list, then you need to recompute sz, or perhaps go back to calling rides.size() each iteration. These optimizations also assume the list is stable, i.e. the elements don't change during the get()s. If doSomething changes the list, you'd need to cache less. Hopefully you get the idea. You can apply some of these techniques to the iterator form of the loop as well.
