## What is dynamic programming?

Dynamic programming is an optimization technique used to solve complex problems by breaking them down into smaller, overlapping subproblems with optimal substructure. It solves each subproblem once and stores its solution in a table or memo, reducing the time complexity of the algorithm. It can be implemented using top-down (memoization) or bottom-up (tabulation) approaches and is widely used in various fields to solve problems such as shortest path, sequence alignment, and knapsack problems.

It is often used to solve problems that exhibit the following properties:

**Overlapping subproblems** is the property that a problem can be broken down into smaller subproblems that share common, repeated sub-subproblems. In dynamic programming, this property allows the algorithm to solve each subproblem once and store the solution in a table or memo, reducing the time complexity of the algorithm. It is a key feature of many problems that can be solved with dynamic programming, such as the Fibonacci sequence, longest common subsequence, coin change, and edit distance problems.

**For example**, in computing the nth Fibonacci number, the subproblem of computing the (n-2)th Fibonacci number appears more than once: it is needed directly for the nth number and again inside the computation of the (n-1)th number. A naive recursive implementation therefore recomputes the same values many times.
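To make the overlap concrete, here is a small sketch that counts how often each subproblem is solved by the naive recursion; the `calls` counter is our own instrumentation, not part of any DP API:

```python
from collections import Counter

calls = Counter()

def naive_fib(n):
    # record that the subproblem F(n) is being solved (again)
    calls[n] += 1
    if n < 2:
        return n
    return naive_fib(n - 1) + naive_fib(n - 2)

naive_fib(6)
print(calls)  # F(2) alone is solved 5 times while computing F(6)
```

The counts grow following the Fibonacci pattern themselves, which is exactly the redundancy that memoization or tabulation eliminates.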

In dynamic programming, this property is important because it allows the algorithm to solve each subproblem once and store the solution in a table or memo, rather than repeatedly solving the same subproblem each time it is encountered. This reduces the time complexity of the algorithm and makes it more efficient.

**Optimal substructure** is the property that an optimal solution to the problem can be built by combining optimal solutions to its subproblems. In other words, the solution exhibits a recursive structure, where the optimal solution to a larger problem can be computed from the optimal solutions to its smaller subproblems.

**For example**, in the shortest path problem, any shortest path from vertex A to vertex B that passes through an intermediate vertex C must itself contain a shortest path from A to C. The optimal solutions for reaching intermediate vertices can therefore be combined to obtain the shortest path from A to B.
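This shortest-path recurrence can be sketched with dynamic programming on a directed acyclic graph whose vertices are already given in topological order; the graph below is a hypothetical example, not from the text above:

```python
import math

def dag_shortest_paths(order, edges, source):
    # order: vertices in topological order
    # edges: dict mapping u -> list of (v, weight)
    dist = {v: math.inf for v in order}
    dist[source] = 0
    for u in order:
        for v, w in edges.get(u, []):
            # optimal substructure: a shortest path to v
            # extends a shortest path to some predecessor u
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# hypothetical example graph
order = ["A", "C", "D", "B"]
edges = {"A": [("C", 1), ("D", 4)], "C": [("D", 2), ("B", 6)], "D": [("B", 1)]}
print(dag_shortest_paths(order, edges, "A"))  # {'A': 0, 'C': 1, 'D': 3, 'B': 4}
```

Each `dist[v]` is computed once from already-final distances to its predecessors, which is the optimal-substructure property in action.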

## Tabulation & Memoization

Tabulation and memoization are two techniques used in dynamic programming to solve problems by breaking them down into smaller subproblems.

**Tabulation** solves the subproblems iteratively in a bottom-up manner and stores solutions in an array or other data structure. It is typically more efficient in time and space, especially for large problems or deep recursion, but it can be more difficult to implement for some recursive algorithms, especially when the order in which subproblems are solved matters.

**Memoization** solves the subproblems recursively in a top-down manner and caches solutions in a cache or dictionary. It is more intuitive and easier to implement, potentially saving time by avoiding redundant computations, but it can be less efficient in time and space, especially for large problems or deep recursion.

The choice between tabulation and memoization depends on the specific problem being solved and the specific requirements of the implementation.

*Here’s a tabular comparison of tabulation and memoization:*

| Items | Tabulation | Memoization |
|---|---|---|
| Definition | Solves subproblems iteratively in a bottom-up manner, storing solutions in an array or data structure | Solves subproblems recursively in a top-down manner, caching solutions in a cache or dictionary |
| Implementation | Typically implemented using loops | Typically implemented using recursion |
| Advantages | More efficient in time and space complexity, especially for large problems or deep recursion | More intuitive and easier to implement, potentially saving time by avoiding redundant computations |
| Disadvantages | Can be more difficult to implement for some recursive algorithms, especially when order matters | Can be less efficient in time and space complexity, especially for large problems or deep recursion |
| Key Considerations | Useful when the order in which subproblems are solved is not important | Useful when the order in which subproblems are solved matters, or when the same subproblems are likely to be encountered multiple times |

### Examples

Let’s consider the problem of finding the nth Fibonacci number, where n is a non-negative integer. The Fibonacci sequence is defined as follows:

F(0) = 0

F(1) = 1

F(n) = F(n-1) + F(n-2), for n > 1

We can use dynamic programming to compute the nth Fibonacci number efficiently. There are two ways to implement dynamic programming for this problem: memoization and tabulation.

**Memoization approach:**

```python
def fibonacci(n):
    if n in fibonacci.memo:
        return fibonacci.memo[n]
    fibonacci.memo[n] = fibonacci(n - 1) + fibonacci(n - 2)
    return fibonacci.memo[n]

fibonacci.memo = {0: 0, 1: 1}
```

In this implementation, we define the `fibonacci` function with a `memo` attribute, a dictionary that stores previously computed Fibonacci numbers. When we call the function with a value of `n`, we first check whether that value is already in `fibonacci.memo`. If it is, we return the stored value; otherwise, we compute the value recursively, store it in the memo, and return it.

**Note** that the `memo` attribute is assigned outside of the function body, since we want it to persist across multiple function calls, and it is initialized with the base cases of the Fibonacci sequence.
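In modern Python, the same memoization can be obtained with the standard-library decorator `functools.lru_cache`, which manages the cache for us; a brief sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # results for each n are cached automatically by the decorator
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # → 55
```

This avoids managing the dictionary by hand, at the cost of some control over how the cache is stored and cleared.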

**Tabulation approach:**

```python
def fibonacci(n):
    # base cases
    if n == 0:
        return 0
    if n == 1:
        return 1
    # initialize the table with base cases
    table = [0] * (n + 1)
    table[1] = 1
    # fill in the table with computed values
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    # return the final value
    return table[n]
```

In this implementation, we first handle the base cases for the Fibonacci sequence (`n = 0` and `n = 1`). We then initialize a list called `table` with `n + 1` elements, setting the first two elements to the base-case values. A loop fills in the remaining entries of the `table`, computing each value as the sum of the two previous values. Finally, we return the last value in the `table`, which is the `n`th Fibonacci number.
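Since each table entry depends only on the two entries before it, the full table is not strictly necessary; a sketch of a constant-space variant of the same bottom-up computation:

```python
def fib_two_vars(n):
    # keep only the last two Fibonacci values instead of the whole table
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev

print(fib_two_vars(10))  # → 55
```

This keeps the O(n) running time of tabulation while reducing space from O(n) to O(1), a common refinement once the recurrence's dependency pattern is known.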