# Python Algorithms - C9 Graphs

Python Algorithm Design, Part 9. Chapter 9: From A to B with Edsger and Friends

The shortest distance between two points is under construction. ——Noelie Altito

[The shortest path problem comes in several varieties. For example, you can find shortest paths (just like any other kinds of paths) in both directed and undirected graphs. The most important distinctions, though, stem from your starting points and destinations. Do you want to find the shortest paths from one node to all others (single source)? From one node to another (single pair, one to one, point to point)? From all nodes to one (single destination)? From all nodes to all others (all pairs)? Two of these—single source and all pairs—are perhaps the most important. Although we have some tricks for the single pair problem (see “Meeting in the middle” and “Knowing where you’re going,” later), there are no guarantees that will let us solve that problem any faster than the general single source problem. The single destination problem is, of course, equivalent (just flip the edges for the directed case). The all pairs problem can be tackled by using each node as a single source (and we’ll look into that), but there are special-purpose algorithms for that problem as well.]

Assume that we start in node s and initialize D[s] to zero, while all other distance estimates are set to infinity. Let d(u,v) be the length of the shortest path from u to v.

• d(s,v) <= d(s,u) + W[u,v]. This is an example of the triangle inequality.

• d(s,v) <= D[v]. For v other than s, D[v] is initially infinite, and we reduce it only when we find actual shortcuts. We never “cheat,” so it remains an upper bound.

• If there is no path to node v, then relaxing will never get D[v] below infinity. That’s because we’ll never find any shortcuts to improve D[v].

• Assume a shortest path to v is formed by a path from s to u and an edge from u to v. Now, if D[u] is correct at any time before relaxing the edge from u to v, then D[v] is correct at all times afterward. The path defined by P[v] will also be correct.

• Let [s, a, b, … , z, v] be a shortest path from s to v. Assume all the edges (s,a), (a,b), … , (z,v) in the path have been relaxed in order. Then D[v] and P[v] will be correct. It doesn’t matter if other relax operations have been performed in between.

[This last one is the path-relaxation property, which is the core of the Bellman-Ford algorithm below.]

```python
# Relaxation
inf = float('inf')

def relax(W, u, v, D, P):
    d = D.get(u, inf) + W[u][v]             # Possible shortcut estimate
    if d < D.get(v, inf):                   # Is it really a shortcut?
        D[v], P[v] = d, u                   # Update estimate and parent
        return True                         # There was a change!
```

```python
# Test code
u = 0; v = 1
D, W, P = {}, {u: {v: 3}}, {}
D[u] = 7
D[v] = 13
print(D[u])            # 7
print(D[v])            # 13
print(W[u][v])         # 3
relax(W, u, v, D, P)   # Returns True: a shortcut was found
print(D[v])            # 10
D[v] = 8
relax(W, u, v, D, P)   # No change: 7 + 3 = 10 is not below 8
print(D[v])            # 8
```

[Explanation of the figure above. Note that a different relaxation order for the edges may produce different intermediate results, but the final result is always the same: The execution of the Bellman-Ford algorithm. The source is vertex s. The d values are shown within the vertices, and shaded edges indicate predecessor values: if edge (u, v) is shaded, then π[v] = u. In this particular example, each pass relaxes the edges in the order (t, x), (t, y), (t, z), (x, t), (y, x), (y, z), (z, x), (z, s), (s, t), (s, y). (a) The situation just before the first pass over the edges. (b)-(e) The situation after each successive pass over the edges. The d and π values in part (e) are the final values. The Bellman-Ford algorithm returns TRUE in this example.]

```python
# The Bellman-Ford algorithm
def bellman_ford(G, s):
    D, P = {s: 0}, {}                       # Zero-dist to s; no parents
    for rnd in G:                           # n = len(G) rounds
        changed = False                     # No changes in round so far
        for u in G:                         # For every from-node...
            for v in G[u]:                  # ... and its to-nodes...
                if relax(G, u, v, D, P):    # Shortcut to v from u?
                    changed = True          # Yes! So something changed
        if not changed: break               # No change in round: Done
    else:                                   # Not done before round n?
        raise ValueError('negative cycle')  # Negative cycle detected
    return D, P                             # Otherwise: D and P correct
```

```python
# Test code
s, t, x, y, z = range(5)
W = {
    s: {t: 6, y: 7},
    t: {x: 5, y: 8, z: -4},
    x: {t: -2},
    y: {x: -3, z: 9},
    z: {s: 2, x: 7}
}
D, P = bellman_ford(W, s)
print([D[v] for v in [s, t, x, y, z]])               # [0, 2, 4, 7, -2]
print(s not in P)                                    # True
print([P[v] for v in [t, x, y, z]] == [x, y, s, t])  # True
W[s][t] = -100
print(bellman_ford(W, s))
# Traceback (most recent call last):
#         ...
# ValueError: negative cycle
```

[Explanation of the figure above: The execution of the algorithm for shortest paths in a directed acyclic graph. The vertices are topologically sorted from left to right. The source vertex is s. The d values are shown within the vertices, and shaded edges indicate the π values. (a) The situation before the first iteration of the for loop of lines 3-5. (b)-(g) The situation after each iteration of the for loop of lines 3-5. The newly blackened vertex in each iteration was used as u in that iteration. The values shown in part (g) are the final values.]

[I thought for a long time about how to explain this part, but the original text is so good that with my limited ability I doubt I could put it any more clearly, so I quote it here. In the preceding part the author explains why the order of edge relaxations in the DAG shortest path algorithm is tied to a topological sort; he then argues that (in Dijkstra's algorithm) the next node to be added (to the set of finished nodes) must already have a correct distance estimate, and finally that this node must be the one with the smallest distance estimate! Everything falls into place, but there is one crucial precondition: no edge may have a negative weight!]

To get things started, we can imagine that we already know the distances from the start node to each of the others. We don’t, of course, but this imaginary situation can help our reasoning. Imagine ordering the nodes, left to right, based on their distance. What happens? For the general case—not much. However, we’re assuming that we have no negative edge weights, and that makes all the difference.

Because all edges are positive, the only nodes that can contribute to a node’s solution will lie to its left in our hypothetical ordering. It will be impossible to locate a node to the right that will help us find a shortcut, because this node is further away, and could only give us a shortcut if it had a negative back edge. The positive back edges are completely useless to us, and aren’t part of the problem structure. What remains, then, is a DAG, and the topological ordering we’d like to use is exactly the hypothetical ordering we started with: nodes sorted by their actual distance. See Figure 9-1 for an illustration of this structure. (I’ll get back to the question marks in a minute.)

Predictably enough, we now hit the major gap in the solution: it’s totally circular. In uncovering the basic problem structure (decomposing into subproblems or finding the hidden DAG), we’ve assumed that we’ve already solved the problem. The reasoning has still been useful, though, because we now have something specific to look for. We want to find the ordering—and we can find it with our trusty workhorse, induction!

Consider, again, Figure 9-1. Assume that the highlighted node is the one we’re trying to identify in our inductive step (meaning that the earlier ones have been identified and already have correct distance estimates). Just like in the ordinary DAG shortest path problem, we’ll be relaxing all out-edges for each node, as soon as we’ve identified it and determined its correct distance. That means that we’ve relaxed the edges out of all earlier nodes. We haven’t relaxed the out-edges of later nodes, but as discussed, they can’t matter: the distance estimates of these later nodes are upper bounds, and the back-edges have positive weights, so there’s no way they can contribute to a shortcut.

This means (by the earlier relaxation properties or the discussion of the DAG shortest path algorithm in Chapter 8) that the next node must have a correct distance estimate. That is, the highlighted node in Figure 9-1 must by now have received its correct distance estimate, because we’ve relaxed all edges out of the first three nodes. This is very good news, and all that remains is to figure out which node it is. We still don’t really know what the ordering is, remember? We’re figuring out the topological sorting as we go along, step by step.

There is only one node that could possibly be the next one, of course: the one with the lowest distance estimate. We know it’s next in the sorted order, and we know it has a correct estimate; because these estimates are upper bounds, none of the later nodes could possibly have lower estimates. Cool, no? And now, by induction, we’ve solved the problem. We just relax all out-edges of each node in distance order—which means always taking the one with the lowest estimate next.

[Explanation of the figure above: The execution of Dijkstra’s algorithm. The source s is the leftmost vertex. The shortest-path estimates are shown within the vertices, and shaded edges indicate predecessor values. Black vertices are in the set S, and white vertices are in the min-priority queue Q = V - S. (a) The situation just before the first iteration of the while loop of lines 4-8. The shaded vertex has the minimum d value and is chosen as vertex u in line 5. (b)-(f) The situation after each successive iteration of the while loop. The shaded vertex in each part is chosen as vertex u in line 5 of the next iteration. The d and π values shown in part (f) are the final values.]

```python
# Dijkstra's algorithm
from heapq import heappush, heappop

def dijkstra(G, s):
    D, P, Q, S = {s: 0}, {}, [(0, s)], set()  # Est., tree, queue, visited
    while Q:                                  # Still unprocessed nodes?
        _, u = heappop(Q)                     # Node with lowest estimate
        if u in S: continue                   # Already visited? Skip it
        S.add(u)                              # We've visited it now
        for v in G[u]:                        # Go through all its neighbors
            relax(G, u, v, D, P)              # Relax the out-edge
            heappush(Q, (D[v], v))            # Add to queue, w/est. as pri
    return D, P                               # Final D and P returned
```

```python
# Test code
s, t, x, y, z = range(5)
W = {
    s: {t: 10, y: 5},
    t: {x: 1, y: 2},
    x: {z: 4},
    y: {t: 3, x: 9, z: 2},
    z: {x: 6, s: 7}
}
D, P = dijkstra(W, s)
print([D[v] for v in [s, t, x, y, z]])               # [0, 8, 9, 5, 7]
print(s not in P)                                    # True
print([P[v] for v in [t, x, y, z]] == [y, t, s, y])  # True
```

The implementation of Dijkstra's algorithm is very similar to Prim's algorithm, and also to BFS. In fact, if you imagine replacing each edge (u,v) of weight w with a chain of (w-1) intermediate nodes joined by unit-weight edges, BFS behaves essentially like Dijkstra's algorithm. The running time of Dijkstra's algorithm depends on the priority queue used; the implementation above uses a binary min-heap, giving $O(m \lg n)$, where m is the number of edges and n the number of nodes.
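To make the analogy concrete, here is a minimal BFS-based sketch for the unit-weight case (the example graph is my own illustration, not from the original text); it mirrors dijkstra above, with a plain FIFO queue standing in for the heap:

```python
from collections import deque

def bfs_shortest(G, s):
    # With unit weights, a FIFO queue already pops nodes in order of
    # distance, so the first time we reach a node its estimate is final.
    D, P, Q = {s: 0}, {}, deque([s])
    while Q:
        u = Q.popleft()
        for v in G[u]:
            if v not in D:                  # First visit: shortest dist
                D[v], P[v] = D[u] + 1, u
                Q.append(v)
    return D, P

G = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
D, P = bfs_shortest(G, 0)
print(D)  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

Because every pop happens in nondecreasing distance order, no later shortcut can ever improve an already-set D[v], which is exactly the invariant the heap buys Dijkstra in the weighted case.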

d(s,v) <= d(s,u) + W[u,v]. This is the triangle inequality again; with h(v) = d(s,v) as the potential function, it guarantees that every adjusted weight W[u,v] + h(u) - h(v) is nonnegative, which is what lets Johnson's algorithm run Dijkstra on a graph that originally had negative edges.

The h terms in the adjusted weights telescope along any path from s to v:

$$\text{sum} = [w(s,v_1)+h(s)-h(v_1)] + [w(v_1,v_2)+h(v_1)-h(v_2)] + \dots + [w(u,v)+h(u)-h(v)] = w(s,v_1)+w(v_1,v_2)+\dots+w(u,v)+h(s)-h(v)$$

so every s-to-v path shifts by the same constant h(s) - h(v), and the shortest paths remain shortest.

$$D'(u,v) = D(u,v) + h(u) - h(v) \quad\Longleftrightarrow\quad D(u,v) = D'(u,v) - h(u) + h(v)$$

```python
# Johnson's algorithm
from copy import deepcopy

def johnson(G):                             # All pairs shortest paths
    G = deepcopy(G)                         # Don't want to break original
    s = object()                            # Guaranteed unique node
    G[s] = {v: 0 for v in G}                # Edges from s have zero wgt
    h, _ = bellman_ford(G, s)               # h[v]: Shortest dist from s
    del G[s]                                # No more need for s
    for u in G:                             # The weight from u...
        for v in G[u]:                      # ... to v...
            G[u][v] += h[u] - h[v]          # ... is adjusted (nonneg.)
    D, P = {}, {}                           # D[u][v] and P[u][v]
    for u in G:                             # From every u...
        D[u], P[u] = dijkstra(G, u)         # ... find the shortest paths
        for v in G:                         # For each destination...
            D[u][v] += h[v] - h[u]          # ... readjust the distance
    return D, P                             # These are two-dimensional
```

```python
a, b, c, d, e = range(5)
W = {
    a: {c: 1, d: 7},
    b: {a: 4},
    c: {b: -5, e: 2},
    d: {c: 6},
    e: {a: 3, b: 8, d: -4}
}
D, P = johnson(W)
print([D[a][v] for v in [a, b, c, d, e]])  # [0, -4, 1, -1, 3]
print([D[b][v] for v in [a, b, c, d, e]])  # [4, 0, 5, 3, 7]
print([D[c][v] for v in [a, b, c, d, e]])  # [-1, -5, 0, -2, 2]
print([D[d][v] for v in [a, b, c, d, e]])  # [5, 1, 6, 0, 8]
print([D[e][v] for v in [a, b, c, d, e]])  # [1, -3, 2, -4, 0]
```

Let d(u, v, k) be the length of the shortest path that exists from node u to node v if you’re only allowed to use the k first nodes as intermediate nodes.

d(u,v,k) = min(d(u,v,k-1), d(u,k,k-1) + d(k,v,k-1))

```python
# Recursive version of the Floyd-Warshall algorithm
from functools import wraps

def memo(func):
    cache = {}                              # Stored subproblem solutions
    @wraps(func)                            # Make wrap look like func
    def wrap(*args):                        # The memoized wrapper
        if args not in cache:               # Not already computed?
            cache[args] = func(*args)       # Compute & cache the solution
        return cache[args]                  # Return the cached solution
    return wrap                             # Return the wrapper

def rec_floyd_warshall(G):                  # All shortest paths
    @memo                                   # Store subsolutions
    def d(u, v, k):                         # u to v via 1..k
        if k == 0: return G[u][v]           # Assumes v in G[u]
        return min(d(u, v, k-1), d(u, k, k-1) + d(k, v, k-1))  # Use k or not?
    return {(u, v): d(u, v, len(G)) for u in G for v in G}     # D[u,v] = d(u,v,n)
```

```python
# Test code
a, b, c, d, e = range(1, 6)  # One-based
W = {
    a: {c: 1, d: 7},
    b: {a: 4},
    c: {b: -5, e: 2},
    d: {c: 6},
    e: {a: 3, b: 8, d: -4}
}
for u in W:
    for v in W:
        if u == v: W[u][v] = 0
        if v not in W[u]: W[u][v] = inf
D = rec_floyd_warshall(W)
print([D[a, v] for v in [a, b, c, d, e]])  # [0, -4, 1, -1, 3]
print([D[b, v] for v in [a, b, c, d, e]])  # [4, 0, 5, 3, 7]
print([D[c, v] for v in [a, b, c, d, e]])  # [-1, -5, 0, -2, 2]
print([D[d, v] for v in [a, b, c, d, e]])  # [5, 1, 6, 0, 8]
print([D[e, v] for v in [a, b, c, d, e]])  # [1, -3, 2, -4, 0]
```

The recurrence for the Floyd-Warshall algorithm:

$$d_{ij}^{(k)} = \begin{cases} \omega_{ij} & \text{if } k = 0 \\ \min\left(d_{ij}^{(k-1)},\; d_{ik}^{(k-1)} + d_{kj}^{(k-1)}\right) & \text{if } k \ge 1 \end{cases}$$

```python
# Space-optimized Floyd-Warshall algorithm
from copy import deepcopy

def floyd_warshall1(G):
    D = deepcopy(G)                         # No intermediates yet
    for k in G:                             # Look for shortcuts with k
        for u in G:
            for v in G:
                D[u][v] = min(D[u][v], D[u][k] + D[k][v])
    return D
```

```python
# Test code
a, b, c, d, e = range(1, 6)  # One-based
W = {
    a: {c: 1, d: 7},
    b: {a: 4},
    c: {b: -5, e: 2},
    d: {c: 6},
    e: {a: 3, b: 8, d: -4}
}
for u in W:
    for v in W:
        if u == v: W[u][v] = 0
        if v not in W[u]: W[u][v] = inf
D = floyd_warshall1(W)
print([D[a][v] for v in [a, b, c, d, e]])  # [0, -4, 1, -1, 3]
print([D[b][v] for v in [a, b, c, d, e]])  # [4, 0, 5, 3, 7]
print([D[c][v] for v in [a, b, c, d, e]])  # [-1, -5, 0, -2, 2]
print([D[d][v] for v in [a, b, c, d, e]])  # [5, 1, 6, 0, 8]
print([D[e][v] for v in [a, b, c, d, e]])  # [1, -3, 2, -4, 0]
```

```python
# Final version of the Floyd-Warshall algorithm
from copy import deepcopy

def floyd_warshall(G):
    D, P = deepcopy(G), {}
    for u in G:
        for v in G:
            if u == v or G[u][v] == inf:
                P[u, v] = None
            else:
                P[u, v] = u
    for k in G:
        for u in G:
            for v in G:
                shortcut = D[u][k] + D[k][v]
                if shortcut < D[u][v]:
                    D[u][v] = shortcut
                    P[u, v] = P[k, v]
    return D, P
```

```python
# Test code
a, b, c, d, e = range(5)
W = {
    a: {c: 1, d: 7},
    b: {a: 4},
    c: {b: -5, e: 2},
    d: {c: 6},
    e: {a: 3, b: 8, d: -4}
}
for u in W:
    for v in W:
        if u == v: W[u][v] = 0
        if v not in W[u]: W[u][v] = inf
D, P = floyd_warshall(W)
print([D[a][v] for v in [a, b, c, d, e]])  # [0, -4, 1, -1, 3]
print([D[b][v] for v in [a, b, c, d, e]])  # [4, 0, 5, 3, 7]
print([D[c][v] for v in [a, b, c, d, e]])  # [-1, -5, 0, -2, 2]
print([D[d][v] for v in [a, b, c, d, e]])  # [5, 1, 6, 0, 8]
print([D[e][v] for v in [a, b, c, d, e]])  # [1, -3, 2, -4, 0]
print([P[a, v] for v in [a, b, c, d, e]])  # [None, 2, 0, 4, 2]
print([P[b, v] for v in [a, b, c, d, e]])  # [1, None, 0, 4, 2]
print([P[c, v] for v in [a, b, c, d, e]])  # [1, 2, None, 4, 2]
print([P[d, v] for v in [a, b, c, d, e]])  # [1, 2, 3, None, 2]
print([P[e, v] for v in [a, b, c, d, e]])  # [1, 2, 3, 4, None]
```

[When introducing the all-pairs shortest path problem, Introduction to Algorithms (CLRS) first presents another dynamic-programming solution. Its running time is high, and even with the repeated-squaring trick it is still relatively poor, so I won't cover it here. Interestingly, though, the book compares that algorithm with matrix multiplication and finds a striking similarity. By the same token, the Bellman-Ford algorithm we started with has a lot in common with matrix-vector multiplication. Explore this yourself if you're interested, or read Introduction to Algorithms.]
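To make that similarity concrete, here is a small sketch (my own illustration, not from the book) of the idea behind that slower approach: one "matrix product" in the (min, +) semiring extends every shortest path by at most one more edge, so n - 1 products (or O(lg n) with repeated squaring) give all pairs:

```python
inf = float('inf')

def min_plus(A, B):
    # Ordinary matrix multiplication with (min, +) in place of (+, *):
    # C[i][j] = min over k of A[i][k] + B[k][j], i.e. the cheapest way
    # to follow an A-path and then a B-path through some midpoint k.
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Weight matrix of a 3-cycle 0 -> 1 -> 2 -> 0; inf marks missing edges
W = [[0,   3, inf],
     [inf, 0,   2],
     [1, inf,   0]]

D = W                    # Shortest paths using at most one edge
for _ in range(2):       # n - 1 = 2 "multiplications" suffice here
    D = min_plus(D, W)
print(D)  # [[0, 3, 5], [3, 0, 2], [1, 4, 0]]
```

Relaxing every edge once in Bellman-Ford plays exactly the role of one such product applied to the distance vector from the source.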

(1) Meeting in the middle
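Only the title survives in this note, but "meeting in the middle" refers to bidirectional search: search from both endpoints at once and stop when the two frontiers touch, so each side only has to cover about half the distance. A minimal sketch for the simplest case, an unweighted undirected graph (the graph and function below are my own illustration; a weighted bidirectional Dijkstra needs a more careful stopping rule):

```python
def bidirectional_bfs(G, s, t):
    # Expand whole BFS levels, always from the smaller frontier; the
    # first edge that joins the two visited sets closes a shortest path.
    if s == t: return 0
    Df, Db = {s: 0}, {t: 0}                   # Distances from s and from t
    Ff, Fb = [s], [t]                         # The two current frontiers
    while Ff and Fb:
        if len(Ff) > len(Fb):                 # Always grow the cheaper side
            Ff, Fb, Df, Db = Fb, Ff, Db, Df
        nxt = []
        for u in Ff:
            for v in G[u]:
                if v in Db:                   # Frontiers meet:
                    return Df[u] + 1 + Db[v]  # ... join the two halves
                if v not in Df:
                    Df[v] = Df[u] + 1
                    nxt.append(v)
        Ff = nxt
    return float('inf')                       # The sides never met: no path

G = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(bidirectional_bfs(G, 0, 4))  # 4
```

The payoff: two balls of radius d/2 usually contain far fewer nodes than one ball of radius d.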

(2) Knowing where you’re going

By now you’ve seen that the basic idea of traversal is pretty versatile, and by simply using different queues, you get several useful algorithms. For example, for FIFO and LIFO queues, you get BFS and DFS, and with the appropriate priorities, you get the core of Prim’s and Dijkstra’s algorithms. The algorithm described in this section, called A*, extends Dijkstra’s, by tweaking the priority once again.

As mentioned earlier, the A* algorithm uses an idea similar to Johnson’s algorithm, although for a different purpose. Johnson’s algorithm transforms all edge weights to ensure they’re positive, while ensuring that the shortest paths are still shortest. In A*, we want to modify the edges in a similar fashion, but this time the goal isn’t to make the edges positive—we’re assuming they already are (as we’re building on Dijkstra’s algorithm). No, what we want is to guide the traversal in the right direction, by using information of where we’re going: we want to make edges moving away from our target node more expensive than those that take us closer to it.
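A hedged sketch of that idea (my own adaptation of the dijkstra function above, not code from the book): the priority becomes D[v] + h(v), where h(v) is a heuristic lower bound on the remaining distance from v to the target t. With the zero heuristic it degenerates to plain Dijkstra with an early exit:

```python
from heapq import heappush, heappop

def a_star(G, s, t, h):
    # Priority = distance estimate + heuristic lower bound toward t.
    # Assumes h is consistent: h(u) <= G[u][v] + h(v) and h(t) == 0,
    # so the adjusted weights stay nonnegative (cf. Johnson's trick).
    inf = float('inf')
    D, P, Q, S = {s: 0}, {}, [(h(s), s)], set()
    while Q:
        _, u = heappop(Q)
        if u == t: return D[t], P           # Target settled: dist final
        if u in S: continue                 # Already visited? Skip it
        S.add(u)
        for v in G[u]:
            d = D[u] + G[u][v]              # Candidate shortcut via u
            if d < D.get(v, inf):
                D[v], P[v] = d, u
                heappush(Q, (d + h(v), v))  # Heuristic skews the priority
    return inf, P                           # t unreachable

# Same graph as the Dijkstra test above; zero heuristic for illustration
s, t, x, y, z = range(5)
W = {s: {t: 10, y: 5}, t: {x: 1, y: 2}, x: {z: 4},
     y: {t: 3, x: 9, z: 2}, z: {x: 6, s: 7}}
print(a_star(W, s, z, lambda v: 0)[0])  # 7
```

With a nontrivial h, say straight-line distance on a road network, nodes that point away from t sink in the queue, so far fewer nodes are settled before t is reached.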
