Algorithm Series 3
Greedy Algorithm
What is a Greedy Algorithm?
A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage with the hope of finding the global optimum.
Greedy algorithms do not always yield optimal solutions, but for many problems they do. For those problems where a greedy algorithm does yield an optimal solution, we need to prove its correctness.
How?
1. Cast the optimization problem as one in which we make a choice and are left with one subproblem to solve.
2. Prove that there is always an optimal solution to the original problem that makes the greedy choice, so that the greedy choice is always safe.
3. Demonstrate that, having made the greedy choice, what remains is a subproblem with the property that if we combine an optimal solution to the subproblem with the greedy choice we have made, we arrive at an optimal solution to the original problem.
4. Prove that the solution produced by the greedy algorithm is a globally optimal solution.
We develop our substructure with an eye toward making a greedy choice that leaves just one subproblem to solve optimally.
When to use a Greedy Algorithm?
Greedy-choice property
A globally optimal solution can be arrived at by making a locally optimal (greedy) choice.
Optimal substructure
A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems.
Difference from Dynamic Programming
Unlike dynamic programming, which solves the subproblems bottom up, a greedy strategy usually progresses in a top-down fashion, making one greedy choice after another, reducing each given problem instance to a smaller one.
Examples
For the traveling salesman problem, the greedy strategy is: "At each stage, visit the unvisited city nearest to the current city."
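A minimal Python sketch of this nearest-neighbor heuristic, assuming the distance-matrix format shown below; it produces a tour, not necessarily an optimal one:
def nearest_neighbor_tour(dist, start=0):
    # dist[i][j] is the distance between city i and city j (assumed format)
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        current = tour[-1]
        # greedy choice: visit the nearest unvisited city
        nxt = min(unvisited, key=lambda city: dist[current][city])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour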
The fractional knapsack problem is solvable by a greedy strategy, whereas the 0–1 problem is not.
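A minimal Python sketch of the fractional-knapsack strategy, assuming items are given as (value, weight) pairs; the greedy rule is to take items in decreasing order of value per unit weight:
def fractional_knapsack(items, capacity):
    # items: list of (value, weight) pairs; capacity: knapsack size (assumed format)
    # greedy choice: highest value-per-weight ratio first
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)  # take as much of this item as fits
        total += value * take / weight
        capacity -= take
    return total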
Huffman code
Always merge the two least-frequent nodes and put the merged node back into the priority queue.
HUFFMAN(C)
  n ← |C|
  Q ← C
  for i ← 1 to n - 1
      do allocate a new node z
         left[z] ← x ← EXTRACT-MIN(Q)
         right[z] ← y ← EXTRACT-MIN(Q)
         f[z] ← f[x] + f[y]
         INSERT(Q, z)
  return EXTRACT-MIN(Q)
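A minimal Python sketch of the same idea, using heapq as the min-priority queue; the symbol-to-frequency dictionary format and the tie-breaking counter are assumptions of this example, not part of the pseudocode above:
import heapq
from itertools import count
def huffman(freqs):
    # freqs: dict mapping symbol -> frequency (assumed format)
    tick = count()  # tie-breaker so the heap never compares subtrees
    heap = [(f, next(tick), sym) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        # greedy choice: merge the two least-frequent nodes
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tick), (left, right)))
    return heap[0][2]  # root of the Huffman tree as nested tuples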
Activity-selection problem
Dynamic-programming solution
C[i, j] = max{ C[i, k] + C[k, j] + 1 : i < k < j and a_k ∈ S_ij }, with C[i, j] = 0 when S_ij is empty.
Add fictitious activities a_0 and a_(n+1) and adopt the conventions that f_0 = 0 and s_(n+1) = ∞.
Converting to a greedy solution
Consider any nonempty subproblem S_ij, and let a_m be the activity in S_ij with the earliest finish time. Then (proof by contradiction):
1. Activity a_m is used in some maximum-size subset of mutually compatible activities of S_ij.
2. The subproblem S_im is empty, so that choosing a_m leaves the subproblem S_mj as the only one that may be nonempty.
GREEDY-ACTIVITY-SELECTOR(s, f)
  n ← length[s]
  A ← {a_1}
  i ← 1
  for m ← 2 to n
      do if s_m ≥ f_i
            then A ← A ∪ {a_m}
                 i ← m
  return A
Greedy solution
Always pick the activity with the earliest finish time that can be legally scheduled; this maximizes the amount of unscheduled time remaining.
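A minimal Python sketch of this rule, assuming activities are given as (start, finish) pairs:
def greedy_activity_selector(activities):
    # activities: list of (start, finish) pairs (assumed format)
    # greedy choice: among compatible activities, earliest finish time first
    activities = sorted(activities, key=lambda a: a[1])  # sort by finish time
    selected = []
    last_finish = float('-inf')
    for start, finish in activities:
        if start >= last_finish:  # compatible with the last chosen activity
            selected.append((start, finish))
            last_finish = finish
    return selected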
Minimum Spanning Tree algorithms for a connected, undirected graph - O(E lg V)
Let (S, V - S) be any cut that respects A, and let (u, v) be a light edge crossing (S, V - S). Then edge (u, v) is safe for A (proved by the cut-and-paste technique). At each step, the algorithm adds to the forest an edge of least possible weight.
Prim's algorithm
The set A forms a single tree. The safe edge added to A is always a least-weight edge connecting the tree to a vertex not in the tree.
The tree starts from an arbitrary root vertex r.
At each step, the tree grows by an edge that contributes the minimum amount possible to the tree's weight.
By using Fibonacci heaps, Prim's algorithm can be sped up to run in time O(E + VlgV).
During execution, A = {(v, π[v]) : v ∈ V - {r} - Q}; at termination Q is empty, so A = {(v, π[v]) : v ∈ V - {r}}.
MST-PRIM(G, w, r)
  for each u ∈ V[G]
      do key[u] ← ∞
         π[u] ← NIL
  key[r] ← 0
  Q ← V[G]
  while Q ≠ Ø
      do u ← EXTRACT-MIN(Q)
         for each v ∈ Adj[u]
             do if v ∈ Q and w(u, v) < key[v]
                   then π[v] ← u
                        key[v] ← w(u, v)
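A minimal Python sketch of Prim's algorithm, using heapq as a lazy min-priority queue (stale entries are skipped) instead of EXTRACT-MIN with DECREASE-KEY; the adjacency-list format is an assumption of this example:
import heapq
def prim_mst(graph, root):
    # graph: dict mapping vertex -> list of (neighbor, weight) pairs (assumed format)
    in_tree = {root}
    mst_edges = []
    heap = [(w, root, v) for v, w in graph[root]]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(heap)  # lightest edge leaving the tree so far
        if v in in_tree:
            continue  # stale entry: v was already added
        in_tree.add(v)
        mst_edges.append((u, v, w))
        for nxt, w2 in graph[v]:
            if nxt not in in_tree:
                heapq.heappush(heap, (w2, v, nxt))
    return mst_edges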
Kruskal's algorithm
The set A is a forest. The safe edge added to A is always a least-weight edge in the graph that connects two distinct components.
MST-KRUSKAL(G, w)
  A ← Ø
  for each vertex v ∈ V[G]
      do MAKE-SET(v)
  sort the edges of E into nondecreasing order by weight w
  for each edge (u, v) ∈ E, taken in nondecreasing order by weight
      do if FIND-SET(u) ≠ FIND-SET(v)
            then A ← A ∪ {(u, v)}
                 UNION(u, v)
  return A
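A minimal Python sketch of Kruskal's algorithm, with a small union-find structure standing in for MAKE-SET, FIND-SET, and UNION; the vertex and edge-list formats are assumptions of this example:
def kruskal_mst(vertices, edges):
    # vertices: iterable of vertex names; edges: list of (weight, u, v) triples (assumed format)
    parent = {v: v for v in vertices}  # MAKE-SET for every vertex
    def find(x):  # FIND-SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):  # nondecreasing order by weight
        ru, rv = find(u), find(v)
        if ru != rv:  # u and v lie in distinct components
            mst.append((u, v, w))
            parent[ru] = rv  # UNION the two components
    return mst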
Single-Source Shortest Path
For the shortest-path problem on an unweighted graph, breadth-first search can be used.
Nonnegative-weighted shortest-path problem in a weighted, directed graph
Dijkstra's algorithm
Dijkstra's algorithm maintains a set S of vertices whose final shortest-path weights from the source s have already been determined. The algorithm repeatedly selects the vertex u ∈ V - S with the minimum shortest-path estimate, adds u to S, and relaxes all edges leaving u. In the following implementation, we use a min-priority queue Q of vertices, keyed by their d values.
DIJKSTRA(G, w, s)
  INITIALIZE-SINGLE-SOURCE(G, s)
  S ← Ø
  Q ← V[G]
  while Q ≠ Ø
      do u ← EXTRACT-MIN(Q)
         S ← S ∪ {u}
         for each vertex v ∈ Adj[u]
             do RELAX(u, v, w)
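A minimal Python sketch of Dijkstra's algorithm, using heapq keyed on the current shortest-path estimates, with lazy deletion instead of DECREASE-KEY; the adjacency-list format is an assumption of this example:
import heapq
def dijkstra(graph, source):
    # graph: dict mapping vertex -> list of (neighbor, weight) pairs, weights >= 0 (assumed format)
    dist = {v: float('inf') for v in graph}  # INITIALIZE-SINGLE-SOURCE
    dist[source] = 0
    done = set()  # the set S of finished vertices
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)  # EXTRACT-MIN
        if u in done:
            continue  # stale entry
        done.add(u)
        for v, w in graph[u]:  # RELAX each edge leaving u
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist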
Shortest-path problem in a weighted, directed graph that may contain negative-weight edges
Bellman-Ford algorithm [TODO]
Topological Sorting
1. Find any vertex that has no incoming edges, process it, and logically remove it by lowering the incoming-edge count of each vertex adjacent to it.
2. Apply the same strategy to the rest of the graph (see the sketch below).
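A minimal Python sketch of this strategy (Kahn's algorithm), assuming the graph is given as an adjacency list with every vertex present as a key:
from collections import deque
def topological_sort(graph):
    # graph: dict mapping vertex -> list of successors (assumed format, acyclic)
    indegree = {v: 0 for v in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] += 1
    queue = deque(v for v in graph if indegree[v] == 0)  # vertices with no incoming edges
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:  # "logically remove" u by lowering indegrees
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return order  # contains fewer than |V| vertices if the graph has a cycle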
Resources