
Dynamic Programming

Dynamic Programming
• Dynamic programming, like the divide-and-conquer method, solves
problems by combining the solutions to sub-problems.
• “Programming” in this context refers to a tabular method, not to
writing computer code.
• We typically apply dynamic programming to optimization problems.
• Such problems can have many possible solutions. Each solution has a value, and we wish to
find a solution with the optimal (minimum or maximum) value.
Dynamic Programming
• When developing a dynamic-programming algorithm, we follow a
sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from computed information.
• If we need only the value of an optimal solution, and not the solution
itself, then we can omit step 4.
Rod cutting
• An enterprise buys long steel rods and cuts them into shorter
rods, which it then sells. Each cut is free. The management of the
enterprise wants to know the best way to cut up the rods.
• We assume that we know the price pi that the enterprise charges for a rod of length i inches.
• The rod-cutting problem is the following. Given a rod of length n
inches and a table of prices pi for i = 1,2, … n, determine the
maximum revenue rn obtainable by cutting up the rod and selling the
pieces.
• Note that if the price pn for a rod of length n is large enough, an
optimal solution may require no cutting at all.
Rod cutting
Rod cutting
• We can cut up a rod of length n in 2^(n-1) different ways, since we have an
independent choice of cutting or not cutting at each of the n - 1 positions along the rod.
• If an optimal solution cuts the rod into k pieces, for some 1 <= k <= n,
then an optimal decomposition
n = i1 + i2 + … + ik
of the rod into pieces of lengths i1, i2, …, ik provides maximum
corresponding revenue
rn = pi1 + pi2 + … + pik
Rod cutting
• For our sample problem, we can determine the optimal revenue
figures ri, for i = 1, 2, …, 10, by inspection (a sample price table is reproduced below for reference).
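
The sample instance these figures refer to appears to be the standard textbook (CLRS) one; assuming so, the prices and the resulting optimal revenues are:

    length i   :  1   2   3   4   5   6   7   8   9  10
    price pi   :  1   5   8   9  10  17  17  20  24  30
    revenue ri :  1   5   8  10  13  17  18  22  25  30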
Rod cutting
• Note that to solve the original problem of size n, we solve
subproblems of exactly the same type, but of smaller sizes.
• The rod-cutting problem exhibits optimal substructure: optimal solutions
to a problem incorporate optimal solutions to related subproblems,
which we may solve independently.
• To obtain a recursive structure for the rod-cutting problem, we view a
decomposition as consisting of a first piece of length i cut off the left-
hand end, and a right-hand remainder of length n - i; only the remainder, and not the first piece, may be cut further.
• This yields a simpler version of (15.1):
rn = max { pi + rn-i : 1 <= i <= n },  with r0 = 0
Recursive top-down implementation
Recursive top-down implementation
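
The slide's pseudocode (CUT-ROD) is an image and is not reproduced here; what follows is a minimal Python sketch of the naive recursive approach just described, assuming prices are given as a list p in which p[i] is the price of a rod of length i and p[0] = 0 (the function name and the sample list are chosen here for illustration):

    def cut_rod(p, n):
        # Naive recursion: the same subproblems are re-solved many times,
        # so the running time grows exponentially in n.
        if n == 0:
            return 0
        q = float("-inf")
        for i in range(1, n + 1):          # length of the first piece
            q = max(q, p[i] + cut_rod(p, n - i))
        return q

    p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]   # p[i] = price of a rod of length i
    print(cut_rod(p, 4))                          # 10 with these prices (cut 4 = 2 + 2)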
Using dynamic programming for optimal rod cutting

• We save the solution to each subproblem so that it is computed only once and can be looked up later.
• Dynamic programming thus uses additional memory to save
computation time; it is an example of a time-memory trade-off.
• We will study two approaches:
• top-down with memoization
• bottom-up method
Top Down
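
A minimal Python sketch of the top-down approach with memoization (along the lines of MEMOIZED-CUT-ROD), assuming the same price list p as above; the helper name aux is for illustration:

    def memoized_cut_rod(p, n):
        # Top-down with memoization: each subproblem value is computed once,
        # then looked up on every later request.
        r = {0: 0}                          # memo table: rod length -> best revenue

        def aux(m):
            if m not in r:
                r[m] = max(p[i] + aux(m - i) for i in range(1, m + 1))
            return r[m]

        return aux(n)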
Bottom Up
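
A corresponding sketch of the bottom-up method, again assuming the price list p: subproblems are solved in order of increasing rod length, so each value needed is already in the table when it is used.

    def bottom_up_cut_rod(p, n):
        # Bottom-up: fill r[0..n] in order of increasing rod length.
        r = [0] * (n + 1)                   # r[j] = best revenue for a rod of length j
        for j in range(1, n + 1):
            r[j] = max(p[i] + r[j - i] for i in range(1, j + 1))
        return r[n]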

Both have the same running time Θ(n²)


Subproblem graphs
• In order to use dynamic programming effectively, we should understand how the
subproblems of a problem depend on one another.
• The subproblem graph makes this structure explicit: it has one vertex for each distinct subproblem and an edge from x to y if an optimal solution for x directly uses an optimal solution for y.
Reconstructing a solution
• Our dynamic-programming solutions to the rod-cutting problem return the value
of an optimal solution, but they do not return an actual solution: a list of piece
sizes.
• We can extend the bottom-up approach so that it also records, for each rod size, the size of the first piece to cut off; a sketch of this extension follows below.
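
A sketch of that extension, assuming the same price list p as before: s[j] records the size of the first piece in an optimal cut of a rod of length j, and a second routine walks through s to print the pieces (both function names are chosen here for illustration, mirroring CLRS's extended bottom-up procedure):

    def extended_bottom_up_cut_rod(p, n):
        # Returns (r, s): optimal revenues and first-piece sizes for lengths 0..n.
        r = [0] * (n + 1)
        s = [0] * (n + 1)
        for j in range(1, n + 1):
            q = float("-inf")
            for i in range(1, j + 1):
                if p[i] + r[j - i] > q:
                    q = p[i] + r[j - i]
                    s[j] = i                # best first cut for a rod of length j
            r[j] = q
        return r, s

    def print_cut_rod_solution(p, n):
        # Print the piece sizes of one optimal way to cut a rod of length n.
        r, s = extended_bottom_up_cut_rod(p, n)
        while n > 0:
            print(s[n])
            n -= s[n]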
Longest common subsequence
• Biological applications often need to compare the DNA of two (or
more) different organisms.
• A strand of DNA consists of a string of molecules called bases, where
the possible bases are adenine, guanine, cytosine, and thymine.
Representing each of these bases by its initial letter, we can express a
strand of DNA as a string over the finite set {A, C, G, T}.
• For example, the DNA of one organism may be
S1 = ACCGGTCGAGTGCGCGGAAGCCGGCCGAA,
and the DNA of another organism may be
S2 = GTCGTTCGGAATGCCGTTGCTCTGTAAA.
The longest strand whose bases appear, in order but not necessarily consecutively, in both S1 and S2 is
S3 = GTCGTCGGAAGCCGGCCGAA.
Longest common subsequence
• Formally, given a sequence X = {x1, x2, …, xm}, another sequence Z =
{z1, z2, …, zk} is a subsequence of X if there exists a strictly increasing
sequence {i1, i2, …, ik} of indices of X such that for all j = 1, 2, …, k, we
have xij = zj.
• For example, Z = {B, C, D, B} is a subsequence of X = {A, B, C, B, D, A, B}
with corresponding index sequence {2, 3, 5, 7}.
• Given two sequences X and Y , we say that a sequence Z is a common
subsequence of X and Y if Z is a subsequence of both X and Y .

For example, if X = {A, B, C, B, D, A, B} as above and Y = {B, D, C, A, B, A}, then a longest common subsequence (LCS) of X and Y is {B, C, B, A}.


The longest-common-subsequence problem
• We are given two sequences X = {x1, x2, …, xm} and Y = {y1, y2, …, yn}
and wish to find a maximum-length common subsequence of X and Y.
Step 1: Characterizing a longest common subsequence

To be precise, given a sequence X = {x1, x2, …, xm}, we define the ith prefix of X, for
i = 0, 1, …, m, as Xi = {x1, x2, …, xi}.
For example, if X = {A, B, C, B, D, A, B}, then X4 = {A, B, C, B} and X0 is the empty
sequence.
Step 2: A recursive solution
• We can readily see the overlapping-subproblems property in the LCS
problem.
• To find an LCS of X and Y , we may need to find the LCSs of X and Yn-1
and of Xm-1 and Y . But each of these subproblems has the
subsubproblem of finding an LCS of Xm-1 and Yn-1. Many other
subproblems share subsubproblems.
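
Written out, the recurrence that this structure yields for c[i, j], the length of an LCS of the prefixes Xi and Yj, is the standard one:

    c[i, j] = 0                              if i = 0 or j = 0
    c[i, j] = c[i-1, j-1] + 1                if i, j > 0 and xi = yj
    c[i, j] = max(c[i, j-1], c[i-1, j])      if i, j > 0 and xi ≠ yj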
Step 3: Computing the length of an LCS
• Procedure LCS-LENGTH takes two sequences X = {x1, x2, …, xm} and Y
= {y1, y2, …, yn} as inputs. It stores the c[i, j] values in a table c[0..m, 0..n].
• It computes the entries in row-major order.
• The procedure also maintains the table b[1..m, 1..n] to help us
construct an optimal solution.
Step 3: Computing the length of an LCS
Step 3: Computing the length of an LCS
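
The slide's pseudocode is an image; here is a minimal Python sketch of the procedure just described, assuming X and Y are Python strings or lists (Python indexing is 0-based, while the tables keep the 1-based convention of the slides):

    def lcs_length(X, Y):
        # c[i][j] = length of an LCS of X[:i] and Y[:j]
        # b[i][j] = which subproblem produced c[i][j]: "diag", "up" or "left"
        m, n = len(X), len(Y)
        c = [[0] * (n + 1) for _ in range(m + 1)]
        b = [[None] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):                   # row-major order
            for j in range(1, n + 1):
                if X[i - 1] == Y[j - 1]:
                    c[i][j] = c[i - 1][j - 1] + 1
                    b[i][j] = "diag"
                elif c[i - 1][j] >= c[i][j - 1]:
                    c[i][j] = c[i - 1][j]
                    b[i][j] = "up"
                else:
                    c[i][j] = c[i][j - 1]
                    b[i][j] = "left"
        return c, b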
Step 4: Constructing an LCS
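
A matching sketch of the reconstruction step: starting from b[m][n], follow the recorded directions back through the table and print the matched characters in order (the direction labels are the ones chosen in the sketch above):

    def print_lcs(b, X, i, j):
        # Recursively print an LCS of X[:i] and Y[:j] using the b table.
        if i == 0 or j == 0:
            return
        if b[i][j] == "diag":
            print_lcs(b, X, i - 1, j - 1)
            print(X[i - 1], end="")
        elif b[i][j] == "up":
            print_lcs(b, X, i - 1, j)
        else:
            print_lcs(b, X, i, j - 1)

For instance, c, b = lcs_length("ABCBDAB", "BDCABA") followed by print_lcs(b, "ABCBDAB", 7, 6) prints BCBA, matching the earlier example.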
Optimal binary search trees
• Given a sequence K = k1, k2, …, kn of n distinct keys, sorted so that k1 < k2 < … < kn.
• We want to build a binary search tree from these keys.
• For each key ki, we have a probability pi that a search is for ki.
• We want the BST with minimum expected search cost; the expected-cost expression is written out below.
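
Under this setup, and assuming every search is for one of the keys (so the pi sum to 1), the quantity to minimize can be written as

    E[search cost in T] = sum over i = 1..n of (depth_T(ki) + 1) * pi

where depth_T(ki) is the depth of ki in the tree T (the root has depth 0), and the +1 counts the comparison at ki itself.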
Optimal binary search trees: Example
Example
Optimal substructure
Optimal substructure (1)
Recursive solution
Recursive solution (1)
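
The slide's recurrence is an image and is not reproduced here. Under the simplified setting above (key probabilities only, no dummy keys), the usual formulation is: let e[i, j] be the expected search cost of an optimal BST on keys ki, …, kj and w(i, j) = pi + … + pj; then

    e[i, j] = 0                                                         if j = i - 1  (empty range)
    e[i, j] = min over i <= r <= j of ( e[i, r-1] + e[r+1, j] ) + w(i, j)   if i <= j

since choosing kr as the root pushes every key in both subtrees one level deeper, which costs exactly w(i, j) in total, including the comparison at the root itself.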
Computing an optimal solution
Computing an optimal solution (1)
Computing an optimal solution (2)
Computing an optimal solution (3)
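
A minimal Python sketch of the bottom-up computation under this formulation, assuming p[1..n] holds the key probabilities (p[0] unused); root[i][j] records the root chosen for keys ki..kj so that an optimal tree can be built afterwards. The three nested loops give Θ(n³) time.

    def optimal_bst(p, n):
        # e[i][j]    = expected search cost of an optimal BST on keys i..j
        # w[i][j]    = p[i] + ... + p[j]
        # root[i][j] = index of the root of that optimal subtree
        e = [[0.0] * (n + 2) for _ in range(n + 2)]
        w = [[0.0] * (n + 2) for _ in range(n + 2)]
        root = [[0] * (n + 1) for _ in range(n + 1)]
        for length in range(1, n + 1):              # subproblem size
            for i in range(1, n - length + 2):
                j = i + length - 1
                w[i][j] = w[i][j - 1] + p[j]
                e[i][j] = float("inf")
                for r in range(i, j + 1):           # try each key as the root
                    cost = e[i][r - 1] + e[r + 1][j] + w[i][j]
                    if cost < e[i][j]:
                        e[i][j] = cost
                        root[i][j] = r
        return e, root

    # Example call (illustrative probabilities summing to 1):
    # e, root = optimal_bst([0, 0.25, 0.2, 0.05, 0.2, 0.3], 5)
    # e[1][5] is the minimum expected cost; root[1][5] is the key index at the root.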
Construct an optimal solution
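
Finally, a sketch of how the root table computed above can be turned into an explicit description of the optimal tree (the function name and printed wording are chosen here for illustration):

    def construct_optimal_bst(root, i, j, parent=None, side="root"):
        # Print the structure of an optimal BST on keys i..j from the root table.
        if i > j:
            return
        r = root[i][j]
        if parent is None:
            print(f"k{r} is the root")
        else:
            print(f"k{r} is the {side} child of k{parent}")
        construct_optimal_bst(root, i, r - 1, r, "left")
        construct_optimal_bst(root, r + 1, j, r, "right")

    # Example: construct_optimal_bst(root, 1, 5) after e, root = optimal_bst(p, 5)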
