# Lecture 099 - Dijkstra, Bellman-Ford, Johnson

## Shortest Path

Algorithms:

• Dijkstra: work-efficient but sequential; requires non-negative edge weights

• Bellman-Ford: a parallel algorithm, but with more work

• Johnson: a parallel algorithm that finds shortest paths between all pairs of vertices, not just from a single source.

Edge weight: a mapping from each edge to a real number. (When an edge does not exist, its weight is infinity.)

We allow negative edge weights, but this is non-trivial: there can be a cycle with negative total weight, making some shortest-path weights $-\infty$. Even if we disallow such cycles, extending algorithms to handle negative weights is challenging.

Weight of the path: sum of weights in the path

Flavors of Shortest Path

• Single-Pair Shortest Path: return a shortest path from $a$ to $b$

• Single-Source Shortest Path (SSSP): return a shortest path from $s$ to every other vertex

• All-Pairs Shortest Paths: find shortest paths between all pairs of vertices

• SSSP+: SSSP but weights are non-negative

Sub-path property: any sub-path of a shortest path is itself a shortest path

See the graph above: suppose we want the shortest path from $s$ to $v$. If an oracle tells us the shortest paths to all vertices except $v$, then by the sub-path property we only need $O(|V|)$ additional work to find the shortest path to $v$ (take the minimum over $v$'s in-neighbors).

Notice that adding a constant to every edge weight can change which path is shortest (paths with more edges pay the constant more times), but multiplying every edge weight by a positive constant does not.
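A tiny numeric check of this claim, using a hypothetical graph with a direct edge of weight 5 and a two-hop route of weight 2 + 2:

```python
# Two routes from s to t: a direct edge of weight 5, or two edges of
# weight 2 each (total 4). The two-hop route is shorter.
direct, two_hop = 5, 2 + 2
assert two_hop < direct                  # shortest path uses two edges

# Add a constant c = 2 to every edge weight: paths with more edges pay
# the constant more times, so the shortest path can change.
c = 2
assert (5 + c) < (2 + c) + (2 + c)       # now the direct edge wins

# Multiplying every weight by a positive constant scales all path
# weights uniformly, so the ordering of paths is preserved.
k = 3
assert k * (2 + 2) < k * 5               # two-hop route still wins
```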

### Dijkstra's Algorithm

Dijkstra's Property: the shortest-path weight from $s$ through the visited set $X$ and then directly across one edge to a neighbor in the frontier $Y$ is as short as any path from $s$ to any vertex in $Y$.

Dijkstra's Algorithm: priority-first search:

1. start at $s$ with $d(s) = 0$
2. use priority = $p(v) = \min_{x\in X}(d(x)+w(x, v))$
3. set $d(v) = p(v)$ when $v$ visited

Note that the $\min$ is computed with a priority queue. A path is known to be shortest only when its vertex is popped from the queue. There can be duplicate entries for a vertex in the queue, but only the one with the shortest distance gets visited; the rest are discarded when popped.
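The duplicates-in-the-queue version can be sketched as follows; a minimal sequential Python sketch, assuming an adjacency-list representation `{u: [(v, w), ...]}` (the dictionary layout is my assumption, not the lecture's):

```python
import heapq

def dijkstra(graph, s):
    """SSSP with non-negative weights. graph: {u: [(v, w), ...]}.
    Duplicate entries are allowed in the heap; stale ones are skipped
    when popped, matching the lazy-deletion variant described above."""
    dist = {s: 0}
    visited = set()                       # the set X of finalized vertices
    heap = [(0, s)]                       # priority p(v) = d(x) + w(x, v)
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:                  # stale duplicate: d(u) was
            continue                      # already finalized earlier
        visited.add(u)                    # d(u) is now final
        for v, w in graph.get(u, []):
            if v not in visited and d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist
```

For example, on `{'s': [('a', 1), ('b', 4)], 'a': [('b', 2)], 'b': []}` the call `dijkstra(graph, 's')` returns `{'s': 0, 'a': 1, 'b': 3}`, taking the two-edge route to `b`.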

Variants:

• One variant checks whether $u$ is already in $X$ inside the relax function, and if so does not insert it into the priority queue. (This does not affect the asymptotic work bounds.)

• Another variant decreases the priority of the neighbors instead of adding duplicates to the priority queue. This requires a more powerful priority queue that supports a decreaseKey function.

Note that with decreaseKey (e.g. a Fibonacci heap), the total priority-queue cost is $O(m + n \log n)$.

For enumerated graphs the cost of the tree tables could be improved by using adjacency sequences for the graph, and ephemeral or single-threaded sequences for the distance table, but priority queue operation still dominates the cost even when using decreaseKey.

$O(m\log n) = O(m \log m)$ since $m \leq n^2$ implies $\log m \leq 2\log n$.

### A* Algorithm

Heuristic must be:

• consistent: $h(u) \leq \delta(u, v) + h(v)$

• admissible: $h(v) \leq \delta(v, t)$ (we do not need to require this separately, since any consistent heuristic with $h(t) = 0$ is also admissible. Note that we do not permit re-visiting vertices; with an admissible-but-inconsistent heuristic, correctness would require re-visits, which sacrifices the asymptotic bound.)

• destination zero: $h(t) = 0$

Worst heuristic: $h(v) = 0$

Best heuristic: $h(v) = \delta(v, t)$ (visits exactly the vertices on the shortest path)
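A* differs from the Dijkstra sketch only in the priority, which becomes $d(v) + h(v)$. A minimal Python sketch under the same assumed `{u: [(v, w), ...]}` adjacency-list representation, assuming a consistent heuristic `h` with `h(t) = 0`:

```python
import heapq

def a_star(graph, s, t, h):
    """Priority-first search with priority d(v) + h(v).
    Assumes h is consistent and h(t) = 0; graph: {u: [(v, w), ...]}.
    Consistency ensures a vertex's distance is final when it is popped,
    so no vertex is ever re-visited."""
    dist = {s: 0}
    visited = set()
    heap = [(h(s), s)]
    while heap:
        _, u = heapq.heappop(heap)
        if u in visited:                  # stale duplicate entry
            continue
        if u == t:                        # destination popped: done
            return dist[t]
        visited.add(u)
        for v, w in graph.get(u, []):
            if v not in visited and dist[u] + w < dist.get(v, float('inf')):
                dist[v] = dist[u] + w
                heapq.heappush(heap, (dist[v] + h(v), v))
    return float('inf')                   # t unreachable from s
```

With the worst heuristic `h = lambda v: 0` this degenerates to Dijkstra's algorithm exactly.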

### Bellman-Ford's Algorithm

Algorithm

1. keep a single-threaded sequence holding the minimum distance found so far from the source $s$ to each vertex
2. initialize the entry at the source $s$ to $0$ and all others to $\infty$
3. for every edge, relax: update the shortest distance to its endpoint
4. repeat until either a round produces no update or you have repeated $|V|$ times
5. if the $|V|$-th round still produces an update, there is a negative cycle
6. if you want to find all vertices affected by negative cycles, repeat $|V|$ more times so the effect propagates throughout the graph (any vertex reachable from a negative cycle also gets weight $-\infty$)
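Steps 1–5 above can be sketched in sequential Python (the parallel version would relax all edges of a round simultaneously); the `(u, v, w)` edge-triple format is my assumption:

```python
def bellman_ford(vertices, edges, s):
    """Relax every edge each round; stop early when a round makes no
    update. An update in the |V|-th round signals a negative cycle.
    edges: list of (u, v, w) triples. Returns the distance map, or
    None if a negative cycle is reachable from s."""
    INF = float('inf')
    dist = {v: INF for v in vertices}
    dist[s] = 0
    for _ in range(len(vertices)):        # at most |V| rounds
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:     # relax edge (u, v)
                dist[v] = dist[u] + w
                updated = True
        if not updated:
            return dist                   # converged: no negative cycle
    return None                           # still updating after |V| rounds
```

For example, with edges `[('s','a',1), ('a','b',-2), ('s','b',4)]` the result is `{'s': 0, 'a': 1, 'b': -1}`; a two-vertex graph with edges of weight $-1$ in both directions returns `None`.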

// QUESTION: Can you tell which node is directly involved in creating negative cycles?

Costs with table:

• finding the in-neighbors $N_G^-(v)$: $O(\log |V|)$

• access map D[u] and w(u, v): $O(\log|V|)$

• reduce: $O(|N_G(v)|)$ work, $O(\log |N_G(v)|)$ span

• Lines 5 and 6 give $O((m+n)\log n)$ work, $O(\log n)$ span

• Line 9 tabulate and reduce requires $O(n \log n)$ work, $O(\log n)$ span

• In total: $O(mn\log n)$ work, $O(n \log n)$ span

Costs with sequence: $O(mn)$ work and $O(n \log n)$ span.

### Johnson's Algorithm

Here is a good video

### Aside

Graph Strategies: