Dynamic Programming
Dynamic programming is a technique that combines the correctness of complete search and the efficiency of greedy algorithms.
Dynamic Programming is “Smart Brute-Forcing”.
There are two uses for dynamic programming:
- Finding an optimal solution: We want to find a solution that is as large as possible or as small as possible.
- Counting the number of solutions: We want to calculate the total number of possible solutions.
Dynamic programming can be applied if the problem can be divided into overlapping subproblems that can be solved independently.
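To make both uses concrete, here is a minimal sketch using the classic coin problem (the coin values and target below are made up): the same recurrence gives both the fewest coins needed to form a sum and the number of ordered ways to form it.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Hypothetical coin system and target, just for illustration.
    vector<int> coins = {1, 3, 4};
    int n = 10;

    const int INF = 1e9;
    vector<int> minCoins(n + 1, INF);   // optimal solution: fewest coins for each sum
    vector<long long> ways(n + 1, 0);   // counting: number of ordered ways to form each sum
    minCoins[0] = 0;
    ways[0] = 1;

    for (int x = 1; x <= n; x++) {
        for (int c : coins) {
            if (x - c >= 0) {
                if (minCoins[x - c] != INF)
                    minCoins[x] = min(minCoins[x], minCoins[x - c] + 1);
                ways[x] += ways[x - c];
            }
        }
    }

    cout << "fewest coins for " << n << ": " << minCoins[n] << "\n"; // 3 (3+3+4)
    cout << "ordered ways to form " << n << ": " << ways[n] << "\n";
}
```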
Most Dynamic Programming problems are solved in one of two ways (see the sketch after this list):
- Tabulation: Bottom Up
- Memoization: Top Down
- I think I’ve gotten relatively good at DP thanks to CS341, which forced me to formulate the subproblems quickly
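For contrast with the bottom-up loop in the sketch above, here is the same minimum-coin recurrence written top-down with memoization (same made-up coin values; it fills the same table, just on demand):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Same made-up coin system as in the sketch above.
const vector<int> coins = {1, 3, 4};
const int INF = 1e9;

int memo[100005];
bool solved[100005];

// Top-down: recurse on the subproblem, cache each answer the first time it is computed.
int minCoins(int x) {
    if (x < 0) return INF;          // cannot form a negative sum
    if (x == 0) return 0;           // base case: sum 0 needs no coins
    if (solved[x]) return memo[x];  // already solved, reuse the cached answer
    int best = INF;
    for (int c : coins) {
        int sub = minCoins(x - c);
        if (sub != INF) best = min(best, sub + 1);
    }
    solved[x] = true;
    return memo[x] = best;
}

int main() {
    cout << minCoins(10) << "\n"; // 3, matching the bottom-up table
}
```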
More advanced concepts are explored in Competitive Programming: https://codeforces.com/blog/entry/8219
- Top-Down DP
- Bottom-Up DP
- Divide and Conquer DP
- Convex Hull DP
- Knuth’s Optimization DP
- Bitmask DP (see the sketch after this list)
- Knapsack DP
- Range DP
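As one illustration from the list above (referenced from the Bitmask DP item), a hedged sketch of the classic Held-Karp / travelling-salesman recurrence, where the set of visited vertices is encoded as a bitmask; the distance matrix is made up:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Hypothetical small distance matrix, just for illustration.
    vector<vector<int>> dist = {
        {0, 2, 9, 10},
        {1, 0, 6, 4},
        {15, 7, 0, 8},
        {6, 3, 12, 0},
    };
    int n = dist.size();
    const int INF = 1e9;

    // dp[mask][v] = cheapest way to start at vertex 0, visit exactly the
    // vertices in `mask`, and end at vertex v.
    vector<vector<int>> dp(1 << n, vector<int>(n, INF));
    dp[1][0] = 0;

    for (int mask = 1; mask < (1 << n); mask++)
        for (int v = 0; v < n; v++) {
            if (!(mask & (1 << v)) || dp[mask][v] == INF) continue;
            for (int u = 0; u < n; u++)
                if (!(mask & (1 << u)))
                    dp[mask | (1 << u)][u] =
                        min(dp[mask | (1 << u)][u], dp[mask][v] + dist[v][u]);
        }

    // Best tour: visit everything, then return to vertex 0.
    int best = INF;
    for (int v = 1; v < n; v++)
        best = min(best, dp[(1 << n) - 1][v] + dist[v][0]);
    cout << best << "\n";
}
```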
DP on trees: https://codeforces.com/blog/entry/20935
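And a minimal tree-DP sketch in the spirit of that blog (an assumed example: computing every subtree size with a single DFS; the tree itself is made up):

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<vector<int>> adj;
vector<int> subtreeSize;

// Classic tree DP: the answer for a node is combined from the answers
// of its children, computed by one DFS over the rooted tree.
void dfs(int v, int parent) {
    subtreeSize[v] = 1;
    for (int u : adj[v])
        if (u != parent) {
            dfs(u, v);
            subtreeSize[v] += subtreeSize[u];
        }
}

int main() {
    // A made-up tree on 5 nodes, rooted at 0:  0-1, 0-2, 1-3, 1-4
    int n = 5;
    adj.assign(n, {});
    vector<pair<int,int>> edges = {{0,1},{0,2},{1,3},{1,4}};
    for (auto [a, b] : edges) {
        adj[a].push_back(b);
        adj[b].push_back(a);
    }
    subtreeSize.assign(n, 0);
    dfs(0, -1);
    for (int v = 0; v < n; v++)
        cout << "subtree size of " << v << " = " << subtreeSize[v] << "\n";
}
```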
Applications
- Coin Problem
- Interval Scheduling
- Knapsack DP
- Longest Increasing Subsequence (see the sketch after this list)
- Longest Common Subsequence
- Longest Palindromic Subsequence
- Optimal BST
Other:
- Reinforcement Learning
- Grid problems
- Backtracking
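As one worked application (referenced from the Longest Increasing Subsequence item above), a sketch of the O(n^2) LIS recurrence; the input array is made up:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Made-up input; the LIS here is {2, 5, 7, 8}, so the answer is 4.
    vector<int> a = {6, 2, 5, 1, 7, 4, 8, 3};
    int n = a.size();

    // dp[i] = length of the longest increasing subsequence ending at index i.
    vector<int> dp(n, 1);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < i; j++)
            if (a[j] < a[i])
                dp[i] = max(dp[i], dp[j] + 1);

    cout << *max_element(dp.begin(), dp.end()) << "\n"; // 4
}
```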
Other
My Weakness and Direction of Updates
I’m weak at reasoning about whether to use 1-indexing or 0-indexing on these problems. For example, https://codeforces.com/contest/2050/problem/E.
- Sometimes, you do updates like
dp[i][j] = dp[i-1][j]
(this is what you are used to): a Backward Update. The Knapsack problem is like this.
- However, other times, you’ll find yourself doing
dp[i+1][j] = dp[i][j]
which is a Forward Update, as in a Range Propagation Problem:
- Imagine you have a task where state i enables state i+1 to have the same value.
- If the DP table represents “reachable states” (e.g., can you reach index j at step i), you might propagate the current state forward.
- This happens when you’re propagating the effect of the current state forward to later states. This is common when the problem requires you to “spread” a decision or value to future states.
Both approaches ultimately solve the same recurrence relation because they are rooted in the same logic: state transitions.
So figure out your state transitions first, and then figure out which direction is easier (both directions are sketched below).
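A hedged sketch contrasting the two directions on toy problems: a 0/1 knapsack with the usual backward (pull) update, and a subset-sum reachability table with a forward (push) update; the item weights and values are made up.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Made-up items, shared by both examples.
    vector<int> w = {3, 4, 5};   // weights
    vector<int> v = {4, 5, 6};   // values
    int n = w.size(), W = 10;

    // Backward update (knapsack style): dp[i][j] pulls from row i-1.
    // knap[i][j] = best value using the first i items with capacity j.
    vector<vector<int>> knap(n + 1, vector<int>(W + 1, 0));
    for (int i = 1; i <= n; i++)
        for (int j = 0; j <= W; j++) {
            knap[i][j] = knap[i - 1][j];                              // skip item i
            if (j >= w[i - 1])
                knap[i][j] = max(knap[i][j],
                                 knap[i - 1][j - w[i - 1]] + v[i - 1]); // take item i
        }
    cout << "knapsack best value: " << knap[n][W] << "\n";            // 11 (weights 4 and 5)

    // Forward update (propagation style): reach[i][j] pushes into row i+1.
    // reach[i][j] = can we make sum j using some subset of the first i items?
    vector<vector<bool>> reach(n + 1, vector<bool>(W + 1, false));
    reach[0][0] = true;
    for (int i = 0; i < n; i++)
        for (int j = 0; j <= W; j++)
            if (reach[i][j]) {
                reach[i + 1][j] = true;                               // skip item i+1
                if (j + w[i] <= W) reach[i + 1][j + w[i]] = true;     // take item i+1
            }
    cout << "reachable sums:";
    for (int j = 0; j <= W; j++) if (reach[n][j]) cout << " " << j;
    cout << "\n";
}
```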
I’m very used to the backward style
Consider this problem https://codeforces.com/contest/2050/problem/E
Forward solve
Backward solve
- To me, the backward solve still makes more sense
- However, the forward solve gets rid of 3 extra ugly checks (base cases when i == 0)
Recursive vs. Iterative DP
- Errichto says the iterative, bottom-up approach is preferred in Competitive Programming
- The Competitive Programmer’s Handbook says the same, because it is shorter and has lower constant factors
- However, it’s often easier to think about DP solutions in terms of recursive algorithms
Older Notes
These were taken from the freecodecamp DP video.
Steps (more applicable to Top-Down DP)
- Recursion
- Define Subproblems (count the # of subproblems)
- Guess part of the solution (count the # of choices)
- Relate Subproblem Solutions (compute the time per subproblem)
- Apply Memoization (top-down DP), OR
- Apply Tabulation (Iterative / Bottom-Up DP)
- Make sure to check Topological Order
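As a sketch of those steps in code, here is a top-down Longest Common Subsequence (also in the applications list above), with each step called out in the comments; the input strings are made up:

```cpp
#include <bits/stdc++.h>
using namespace std;

string a, b;
vector<vector<int>> memo;

// Subproblem: lcs(i, j) = length of the LCS of a[i..] and b[j..].
//   # of subproblems: (|a|+1) * (|b|+1)
// Guess: match a[i] with b[j], or drop one of them.  # of choices: O(1)
// Relate: lcs(i, j) = 1 + lcs(i+1, j+1)              if a[i] == b[j]
//                   = max(lcs(i+1, j), lcs(i, j+1))  otherwise
//   time per subproblem: O(1)
// Memoize: cache each (i, j) so it is solved once.
// Topological order: (i, j) depends only on strictly larger i or j, so the recursion is acyclic.
int lcs(int i, int j) {
    if (i == (int)a.size() || j == (int)b.size()) return 0;
    int &res = memo[i][j];
    if (res != -1) return res;
    if (a[i] == b[j]) return res = 1 + lcs(i + 1, j + 1);
    return res = max(lcs(i + 1, j), lcs(i, j + 1));
}

int main() {
    a = "dynamic";
    b = "dramatic";
    memo.assign(a.size() + 1, vector<int>(b.size() + 1, -1));
    cout << lcs(0, 0) << "\n"; // LCS is "damic", so this prints 5
}
```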
Dynamic programming vs. Divide-and-Conquer?
- dynamic programming usually deals with subproblems of all input sizes (e.g. every prefix), not just balanced splits
- DAC subproblems may not overlap, so solutions are not reused
- DAC algorithms are not easy to rewrite iteratively