
Chapter 5

Dynamic Programming
Writes down "1+1+1+1+1+1+1+1 =" on a sheet of paper.
"What's that equal to?"
Counting: "Eight!"
Writes down another "1+" on the left.
"What about that?"
"Nine!" "How'd you know it was nine so fast?"
"You just added one more!"
"So you didn't need to recount because you remembered there were
eight! Dynamic Programming is just an elaborate way of saying
'remembering things to save time later'!"

CS AA GS 1
Introduction
Dynamic Programming (DP) applies to optimization problems
in which a set of choices must be made in order to arrive at an
optimal solution.
As choices are made, subproblems of the same form arise.
DP is effective when a given subproblem may arise from more
than one partial set of choices.
The key technique is to store the solution to each subproblem
in case it should reappear.

Introduction (cont.)
Divide-and-conquer algorithms partition the problem into independent
subproblems.
Dynamic Programming is applicable when the subproblems are not
independent.
A Dynamic Programming algorithm solves every subproblem just once and
then saves its answer in a table.
Dynamic Programming is an algorithm design technique for optimization
problems: often minimizing or maximizing.
Like divide and conquer, DP solves problems by combining solutions to
subproblems.
• Unlike divide and conquer, the subproblems are not independent.
• Subproblems may share subsubproblems.
• However, the solution to one subproblem does not affect the solutions to other subproblems of the
same problem.
A divide-and-conquer algorithm does redundant work: it repeatedly solves common subproblems.
• A DP algorithm solves each subproblem just once and saves its result in a table.
Essence of DP
Dynamic Programming (DP) is one of the most powerful design techniques for solving optimization
problems.
DP is closely related to the divide-and-conquer technique, in which the problem is divided into
smaller subproblems and each subproblem is solved recursively.
"Programming" here refers to a tabular method with a series of choices, not "coding".
A set of choices must be made to arrive at an optimal solution.
As choices are made, subproblems of the same form arise frequently.
The key is to store the solutions of subproblems to be reused in the future.
Recall the divide-and-conquer approach: partition the problem into independent subproblems,
solve the subproblems recursively, and combine the solutions of the subproblems.
This contrasts with the dynamic programming approach.
DP differs from divide and conquer in that, instead of solving subproblems recursively, it
solves each subproblem only once and stores the solution in a table.
The solution to the main problem is obtained from the solutions of these subproblems.
DP is used for optimization problems:
find a solution with the optimal value,
either a minimization or a maximization.
Algorithmic Paradigm Context
                                     Divide & Conquer    Dynamic Programming
View problem as a collection
of subproblems                              ✓                     ✓
"Recursive" nature                          ✓                     ✓
Independent subproblems                     ✓
Overlapping subproblems                                           ✓
Number of subproblems                typically small      depends on partitioning
                                                          factors
Preprocessing                                                     ✓
Characteristic running time          typically a log      depends on number and
                                     function of n        difficulty of subproblems
Primarily for optimization
problems                                                          ✓
Optimal substructure: optimal
solution to problem contains
within it optimal solutions
to subproblems                                                    ✓
Elements of Dynamic Programming
For dynamic programming to be applicable, an optimization problem must have:
Optimal substructure
An optimal solution to the problem contains within it optimal solutions to subproblems (but this may
also mean a greedy strategy applies).
A problem exhibits optimal substructure if an optimal solution to the problem contains within it
optimal solutions to subproblems.
Whenever a problem exhibits optimal substructure, it is a good clue that dynamic programming might
apply. In dynamic programming, we build an optimal solution to the problem from optimal solutions
to subproblems. Dynamic programming uses optimal substructure in a bottom-up fashion.
Overlapping subproblems
The space of subproblems must be small; i.e., the same subproblems are encountered over and over.
When a recursive algorithm revisits the same subproblem over and over again, we say that the
optimization problem has overlapping subproblems.
In contrast, a divide-and-conquer approach is suitable for problems that generate brand-new
subproblems at each step of the recursion.
Dynamic programming algorithms take advantage of overlapping subproblems by solving each
subproblem once and then storing the solution in a table where it can be looked up when needed.
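The overlapping-subproblems idea is easiest to see on a small example outside these slides: computing Fibonacci numbers. A minimal Python sketch (the function names are ours, not from the slides):

```python
from functools import lru_cache

def fib_naive(n):
    # Revisits the same subproblems exponentially many times.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each subproblem is solved once; repeated calls become table lookups.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(10), fib_memo(30))  # 55 832040
```

The memoized version is exactly the "store the solution in a table" idea: same recurrence, but each subproblem is computed once.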
Applications of DP
1. Matrix-Chain Multiplication
2. Longest Common Subsequence
3. 0-1 Knapsack Problem
4. Traveling Salesman Problem
Dynamic Programming Approach to Optimization
Problems
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from the computed information.
• Solves problems by combining the solutions to subproblems.
• It solves each subproblem just once and then saves its answer in a table.
• A dynamic programming algorithm remembers past results and uses them to find new results.
• Multiple solutions may exist; we need to find the best one. Requires "optimal substructure" and
"overlapping subproblems".
Optimal substructure: the optimal solution contains optimal solutions to subproblems.
Overlapping subproblems: solutions to subproblems can be stored and reused in a
bottom-up fashion.
Matrix-Chain multiplication

• We are given a sequence

A1, A2, ..., An

• and we wish to compute the product

A1A2...An
Matrix-Chain multiplication (cont.)

• Matrix multiplication is associative, and so all parenthesizations yield the same
product.
• For example, if the chain of matrices is A1A2A3A4, then the product A1A2A3A4
can be fully parenthesized in five distinct ways:
(A1(A2(A3A4)))
(A1((A2A3)A4))
((A1A2)(A3A4))
((A1(A2A3))A4)
(((A1A2)A3)A4)

NOTE: The way the chain is parenthesized can have a dramatic impact on the cost
of evaluating the product.
Example:
• A[30][35], B[35][15], C[15][5]
Minimize the cost of computing A*B*C:
A*(B*C) = 35*15*5 + 30*35*5 = 2,625 + 5,250 = 7,875
(A*B)*C = 30*35*15 + 30*15*5 = 15,750 + 2,250 = 18,000
• How to optimize:
• Brute force – look at every possible way to parenthesize: Ω(4^n / n^(3/2))
• Dynamic programming – time complexity of O(n^3) and space complexity of Θ(n^2).
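The two costs above can be checked directly. A quick sketch (variable names are ours):

```python
# Dimensions from the example: A is 30x35, B is 35x15, C is 15x5.
p = [30, 35, 15, 5]  # matrix i has dimensions p[i-1] x p[i]

cost_a_bc = p[1] * p[2] * p[3] + p[0] * p[1] * p[3]  # B*C first, then A*(BC)
cost_ab_c = p[0] * p[1] * p[2] + p[0] * p[2] * p[3]  # A*B first, then (AB)*C
print(cost_a_bc, cost_ab_c)  # 7875 18000
```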
Matrix-Chain multiplication
MATRIX-MULTIPLY (A, B)
 if columns[A] ≠ rows[B]
   then error "incompatible dimensions"
   else for i ← 1 to rows[A]
     do for j ← 1 to columns[B]
       do C[i, j] ← 0
         for k ← 1 to columns[A]
           do C[i, j] ← C[i, j] + A[i, k] * B[k, j]
 return C
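A direct Python transcription of MATRIX-MULTIPLY (0-indexed, matrices as lists of lists):

```python
def matrix_multiply(A, B):
    # A is p x q, B is q x r; returns the p x r product C.
    if len(A[0]) != len(B):
        raise ValueError("incompatible dimensions")
    C = [[0] * len(B[0]) for _ in range(len(A))]
    for i in range(len(A)):
        for j in range(len(B[0])):
            for k in range(len(A[0])):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Multiplying a p x q matrix by a q x r matrix this way costs p·q·r scalar multiplications, which is the cost measure used throughout this section.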
Matrix-Chain multiplication (cont.)
Cost of the matrix multiplication:

An example: A1 A2 A3
A1 : 10 × 100
A2 : 100 × 5
A3 : 5 × 50
Matrix-Chain multiplication (cont.)

If we multiply ((A1A2)A3), we perform 10 · 100 · 5 = 5000
scalar multiplications to compute the 10 × 5 matrix product A1A2,
plus another 10 · 5 · 50 = 2500 scalar multiplications to multiply
this matrix by A3, for a total of 7500 scalar multiplications.

If we multiply (A1(A2A3)), we perform 100 · 5 · 50 = 25,000
scalar multiplications to compute the 100 × 50 matrix product A2A3,
plus another 10 · 100 · 50 = 50,000 scalar multiplications to multiply
A1 by this matrix, for a total of 75,000 scalar multiplications.
Matrix-Chain multiplication (cont.)

• The problem:
Given a chain A1, A2, ..., An of n matrices, where matrix Ai has
dimension p_{i-1} × p_i, fully parenthesize the product A1A2...An
in a way that minimizes the number of scalar multiplications.
Matrix-Chain multiplication (cont.)

• Counting the number of alternative parenthesizations: b(n)

b(n) = 1                                    if n = 1 (there is only one matrix)
b(n) = Σ_{k=1}^{n-1} b(k) · b(n−k)          if n ≥ 2

b(n) = Ω(2^n)
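The recurrence for b(n) can be evaluated bottom-up. A short sketch (function name is ours):

```python
def num_parenthesizations(n):
    # b(1) = 1; b(m) = sum over k of b(k) * b(m - k) for m >= 2.
    b = [0] * (n + 1)
    b[1] = 1
    for m in range(2, n + 1):
        b[m] = sum(b[k] * b[m - k] for k in range(1, m))
    return b[n]

print([num_parenthesizations(n) for n in range(1, 7)])  # [1, 1, 2, 5, 14, 42]
```

Note that b(4) = 5 matches the five parenthesizations of A1A2A3A4 listed earlier; these are the Catalan numbers, which grow exponentially in n.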
Matrix-Chain multiplication (cont.)

Step 1: The structure of an optimal parenthesization

• Find the optimal substructure and then use it to construct
an optimal solution to the problem from optimal solutions
to subproblems.
• Let Ai...j, where i ≤ j, denote the matrix product Ai Ai+1 ... Aj.
• Any parenthesization of Ai Ai+1 ... Aj must split the product
between Ak and Ak+1 for some k with i ≤ k < j.
Matrix-Chain multiplication (cont.)

The optimal substructure of the problem:

• Suppose that an optimal parenthesization of Ai Ai+1 ... Aj splits the product
between Ak and Ak+1. Then the parenthesization of the subchain Ai Ai+1 ... Ak
within this parenthesization of Ai Ai+1 ... Aj must itself be an optimal
parenthesization of Ai Ai+1 ... Ak.
Matrix-Chain multiplication (cont.)

Step 2: A recursive solution:

• Let m[i,j] be the minimum number of scalar multiplications
needed to compute the matrix Ai...j, where 1 ≤ i ≤ j ≤ n.
• Thus, the cost of a cheapest way to compute A1...n would be
m[1,n].
• Assume that the optimal parenthesization splits the product Ai...j between Ak and
Ak+1, where i ≤ k < j.
• Then m[i,j] = the minimum cost for computing Ai...k and
Ak+1...j, plus the cost of multiplying these two matrices.
Matrix-Chain multiplication (cont.)

Recursive definition for the minimum cost of a parenthesization:

m[i,j] = 0                                                        if i = j
m[i,j] = min_{i ≤ k < j} { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j }  if i < j
Matrix-Chain multiplication (cont.)

To help us keep track of how to construct an optimal solution, we define
s[i,j] to be a value of k at which we can split the product Ai...j to obtain
an optimal parenthesization.

That is, s[i,j] equals a value k such that

m[i,j] = m[i,k] + m[k+1,j] + p_{i-1} p_k p_j

s[i,j] = k
Matrix-Chain multiplication (cont.)

Step 3: Computing the optimal costs

It is easy to write a recursive algorithm based on the recurrence for
computing m[i,j].

But the running time will be exponential!
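The exponential recursive version looks like this in Python (a sketch; the function name is ours):

```python
def recursive_matrix_chain(p, i, j):
    # Direct recursion on the recurrence for m[i, j]: exponential time,
    # because the same (i, j) subproblems are recomputed many times.
    if i == j:
        return 0
    return min(recursive_matrix_chain(p, i, k)
               + recursive_matrix_chain(p, k + 1, j)
               + p[i - 1] * p[k] * p[j]
               for k in range(i, j))

# The A1 A2 A3 example from earlier: 10x100, 100x5, 5x50.
print(recursive_matrix_chain([10, 100, 5, 50], 1, 3))  # 7500
```

This recovers the 7500 scalar multiplications computed by hand for ((A1A2)A3).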
Matrix-Chain multiplication (cont.)

Step 3: Computing the optimal costs (cont.)

We compute the optimal cost by using a tabular, bottom-up approach.
Matrix-Chain multiplication (cont.)
MATRIX-CHAIN-ORDER(p)
 n ← length[p] − 1
 for i ← 1 to n
   do m[i,i] ← 0
 for l ← 2 to n
   do for i ← 1 to n − l + 1
     do j ← i + l − 1
       m[i,j] ← ∞
       for k ← i to j − 1
         do q ← m[i,k] + m[k+1,j] + p_{i-1} p_k p_j
           if q < m[i,j]
             then m[i,j] ← q
               s[i,j] ← k
 return m and s
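A Python transcription of MATRIX-CHAIN-ORDER (tables are 1-indexed, padded with an unused row and column 0):

```python
import math

def matrix_chain_order(p):
    # p has length n+1; matrix A_i has dimensions p[i-1] x p[i].
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]  # m[i][j]: min scalar mults
    s = [[0] * (n + 1) for _ in range(n + 1)]  # s[i][j]: optimal split point k
    for l in range(2, n + 1):                  # l is the chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

# The six-matrix example used on the next slides.
m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(m[1][6], m[2][5], s[1][6])  # 15125 7125 3
```

The three nested loops give the O(n^3) running time claimed at the end of this section.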
Matrix-Chain multiplication (cont.)
An example: matrix dimensions
A1 30 × 35
A2 35 × 15
A3 15 × 5
A4 5 × 10
A5 10 × 20
A6 20 × 25

m[2,5] = min {
  m[2,2] + m[3,5] + p_1 p_2 p_5 = 0 + 2500 + 35·15·20 = 13000,
  m[2,3] + m[4,5] + p_1 p_3 p_5 = 2625 + 1000 + 35·5·20 = 7125,
  m[2,4] + m[5,5] + p_1 p_4 p_5 = 4375 + 0 + 35·10·20 = 11375
} = 7125
Matrix-Chain multiplication (cont.)
The m and s tables computed by MATRIX-CHAIN-ORDER for this example (n = 6):

m:        j=1     j=2     j=3     j=4     j=5     j=6
i=1        0    15750    7875    9375   11875   15125
i=2                0     2625    4375    7125   10500
i=3                        0      750    2500    5375
i=4                                0     1000    3500
i=5                                        0     5000
i=6                                                0

s:        j=2   j=3   j=4   j=5   j=6
i=1        1     1     3     3     3
i=2              2     3     3     3
i=3                    3     3     3
i=4                          4     5
i=5                                5
Matrix-Chain multiplication (cont.)

Step 4: Constructing an optimal solution

An optimal solution can be constructed from the computed
information stored in the table s[1...n, 1...n].
We know that the final matrix multiplication is

A1...s[1,n] · A(s[1,n]+1)...n

The earlier matrix multiplications can be computed recursively.
Matrix-Chain multiplication (cont.)

PRINT-OPTIMAL-PARENS (s, i, j)
1 if i = j
2   then print "Ai"
3   else print "("
4     PRINT-OPTIMAL-PARENS (s, i, s[i,j])
5     PRINT-OPTIMAL-PARENS (s, s[i,j]+1, j)
6     print ")"
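A Python version of PRINT-OPTIMAL-PARENS that returns the parenthesization as a string; the split table s is rebuilt here so the sketch is self-contained:

```python
import math

def matrix_chain_order(p):
    # Same bottom-up DP as MATRIX-CHAIN-ORDER: cost table m, split table s.
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

def print_optimal_parens(s, i, j):
    # PRINT-OPTIMAL-PARENS, returning the string rather than printing it.
    if i == j:
        return f"A{i}"
    return ("(" + print_optimal_parens(s, i, s[i][j])
            + print_optimal_parens(s, s[i][j] + 1, j) + ")")

_, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(print_optimal_parens(s, 1, 6))  # ((A1(A2A3))((A4A5)A6))
```

For the six-matrix example this prints the optimal parenthesization ((A1(A2A3))((A4A5)A6)).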
Matrix-Chain multiplication (cont.)

RUNNING TIME:

The recursive solution takes exponential time.

MATRIX-CHAIN-ORDER yields a running time of O(n^3).