Follow these steps to solve any Dynamic Programming interview problem
by Nikola Otasevic
Despite having significant experience building software products, many engineers feel jittery at the thought of going through a coding interview that focuses on algorithms. I’ve interviewed hundreds of engineers at Refdash, Google, and at startups I’ve been a part of, and some of the most common questions that make engineers uneasy are the ones that involve Dynamic Programming (DP).
Many tech companies like to ask DP questions in their interviews. While we can debate whether they’re effective in evaluating someone’s ability to perform in an engineering role, DP continues to be an area that trips engineers up on their way to finding a job that they love.

Dynamic Programming — Predictable and Preparable
One of the reasons why I personally believe that DP questions might not be the best way to test engineering ability is that they’re predictable and easy to pattern match. They allow us to filter much more for preparedness as opposed to engineering ability.
These questions typically seem pretty complex on the outside, and might give you an impression that a person who solves them is very good at algorithms. Similarly, people who may not be able to get over some mind-twisting concepts of DP might seem pretty weak in their knowledge of algorithms.
The reality is different, and the biggest factor in their performance is preparedness. So let’s make sure everyone is prepared for it. Once and for all.
7 Steps to solve a Dynamic Programming problem
In the rest of this post, I will go over a recipe that you can follow to figure out if a problem is a “DP problem”, as well as to figure out a solution to such a problem. Specifically, I will go through the following steps:
- How to recognize a DP problem
- Identify problem variables
- Clearly express the recurrence relation
- Identify the base cases
- Decide if you want to implement it iteratively or recursively
- Add memoization
- Determine time complexity
Sample DP Problem
To have a concrete example for the abstractions I am going to make, let me introduce a sample problem. In each of the sections, I will refer to this problem, but you could also read the sections independently of it.
Problem statement:
In this problem, we’re on a crazy jumping ball, trying to stop, while avoiding spikes along the way.
Here are the rules:
1) You’re given a flat runway with a bunch of spikes in it. The runway is represented by a boolean array which indicates if a particular (discrete) spot is clear of spikes. It is True for clear and False for not clear.
Example array representation (illustrative): [True, True, True, True, False, True, True, False, True], where the False entries at indices 4 and 7 are spikes.
2) You’re given a starting speed S. S is a non-negative integer at any given point, and it indicates how much you will move forward with the next jump.
3) Every time you land on a spot, you can adjust your speed by up to 1 unit before the next jump.
4) You want to safely stop anywhere along the runway (does not need to be at the end of the array). You stop when your speed becomes 0. However, if you land on a spike at any point, your crazy bouncing ball bursts and it’s game over.
The output of your function should be a boolean indicating whether we can safely stop anywhere along the runway.
Step 1: How to recognize a Dynamic Programming problem
First, let’s make it clear that DP is essentially just an optimization technique. DP is a method for solving problems by breaking them down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions. The next time the same subproblem occurs, instead of recomputing its solution, you simply look up the previously computed solution. This saves computation time at the expense of a (hopefully) modest expenditure in storage space.
Recognizing that a problem can be solved using DP is the first and often the most difficult step in solving it. What you want to ask yourself is whether your problem solution can be expressed as a function of solutions to similar smaller problems.
In the case of our example problem, given a point on the runway, a speed, and the runway ahead, we could determine the spots where we could potentially jump next. Furthermore, it seems that whether we can stop from the current point with the current speed depends only on whether we could stop from the point we choose to go to next.
That is a great thing, because by moving forward, we shorten the runway ahead and make our problem smaller. We should be able to repeat this process all the way until we get to a point where it is obvious whether we can stop.
Recognizing a Dynamic Programming problem is often the most difficult step in solving it. Can the problem solution be expressed as a function of solutions to similar smaller problems?
Step 2: Identify problem variables
Now we have established that there is some recursive structure between our subproblems. Next, we need to express the problem in terms of the function parameters and see which of those parameters are changing.
Typically in interviews, you will have one or two changing parameters, but technically this could be any number. A classic example of a one-changing-parameter problem is “determine the n-th Fibonacci number”. A classic two-changing-parameters example is “compute the edit distance between two strings”. If you’re not familiar with these problems, don’t worry about it.
A way to determine the number of changing parameters is to list examples of several subproblems and compare their parameters. If only one parameter changes between recursive calls, you have a 1D DP problem; two or three changing parameters make it a 2D or 3D DP problem, respectively. Counting the number of changing parameters is valuable in determining the number of subproblems we have to solve. It’s also important in its own right in helping us strengthen our understanding of the recurrence relation from step 1.
In our example, the two parameters that could change for every subproblem are:
- Array position (P)
- Speed (S)
One could say that the runway ahead is changing as well, but that would be redundant considering that the entire non-changing runway and the position (P) carry that information already.
Now, with these 2 changing parameters and other static parameters, we have the complete description of our sub-problems.
Identify the changing parameters and determine the number of subproblems.
Step 3: Clearly express the recurrence relation
This is an important step that many rush through in order to get into coding. Expressing the recurrence relation as clearly as possible will strengthen your problem understanding and make everything else significantly easier.
Once you figure out that the recurrence relation exists and you specify the problems in terms of parameters, this should come as a natural step. How do problems relate to each other? In other words, let’s assume that you have computed the subproblems. How would you compute the main problem?
Here is how we think about it in our sample problem:
Because you can adjust your speed by up to 1 before jumping to the next position, there are only 3 possible speeds, and therefore 3 spots in which we could be next.
More formally, if our speed is S, position P, we could go from (S, P) to:
- (S, P + S) # if we do not change the speed
- (S - 1, P + S - 1) # if we change the speed by -1
- (S + 1, P + S + 1) # if we change the speed by +1
If we can find a way to stop in any of the subproblems above, then we can also stop from (S, P). This is because we can transition from (S, P) to any of the above three options.
This is typically a fine level of understanding of the problem (plain English explanation), but you sometimes might want to express the relation mathematically as well. Let’s call a function that we’re trying to compute canStop. Then:
canStop(S, P) = canStop(S, P + S) || canStop(S - 1, P + S - 1) || canStop(S + 1, P + S + 1)
Woohoo, it seems like we have our recurrence relation!
Recurrence relation: Assuming you have computed the subproblems, how would you compute the main problem?
Step 4: Identify the base cases
A base case is a subproblem that doesn’t depend on any other subproblem. In order to find such subproblems, you typically want to try a few examples, see how your problem simplifies into smaller subproblems, and identify at what point it cannot be simplified further.
The reason a problem cannot be simplified further is that one of the parameters would become a value that is not possible given the constraints of the problem.
In our example problem, we have two changing parameters, S and P. Let’s think about what possible values of S and P might not be legal:
- P should be within the bounds of the given runway
- P cannot be such that runway[P] is false because that would mean that we’re standing on a spike
- S cannot be negative, and S == 0 indicates that we’re done
Sometimes it can be a little challenging to convert assertions that we make about parameters into programmable base cases. In addition to listing the assertions, you also need to think about which of these conditions are even possible, if you want to make your code concise and avoid checking unnecessary conditions.
In our example:
- P < 0 || P >= length of runway seems like the right check. An alternative could be to consider making P == end of runway a base case. However, it is possible that a problem splits into a subproblem which goes beyond the end of the runway, so we really need to check for inequality.
- This seems pretty obvious. We can simply check if runway[P] is false.
- Similar to #1, we could simply check for S < 0 and S == 0. However, here we can reason that it is impossible for S to be < 0 because S decreases by at most 1 per jump, so it would have to pass through S == 0 beforehand. Therefore S == 0 is a sufficient base case for the S parameter.
Step 5: Decide if you want to implement it iteratively or recursively
The way we talked about the steps so far might lead you to think that we should implement the problem recursively. However, everything that we’ve talked about so far is completely agnostic to whether you decide to implement the problem recursively or iteratively. In both approaches, you would have to determine the recurrence relation and the base cases.
To decide whether to go iteratively or recursively, you want to carefully think about the trade-offs .
Stack overflow issues are typically a deal breaker and a reason why you would not want to have recursion in a (backend) production system. However, for the purposes of the interview, as long as you mention the trade-offs, you should typically be fine with either of the implementations. You should feel comfortable implementing both.
In our particular problem, I implemented both versions in Python. The original snippets are no longer available, so the listings below are reconstructed sketches. A recursive solution:
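A minimal sketch that implements the recurrence and base cases above; the names (can_stop, runway, speed, position) are illustrative, not the original author’s:

```python
def can_stop(runway, speed, position=0):
    """Return True if the ball can reach speed 0 on a clear spot."""
    # Base cases: off the runway or on a spike means we burst.
    if position < 0 or position >= len(runway) or not runway[position]:
        return False
    # Speed 0 on a clear spot means we have stopped safely.
    if speed == 0:
        return True
    # Recurse on the three possible next states: slow down, keep speed, speed up.
    return any(
        can_stop(runway, s, position + s)
        for s in (speed - 1, speed, speed + 1)
    )
```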
An iterative solution:
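Again a sketch under the same naming assumptions. The table is indexed by position and speed, and is filled from the end of the runway backwards so that every lookup refers to an already-computed state; the speed cap of len(runway) is a loose but safe bound (see Step 7):

```python
def can_stop_iterative(runway, init_speed):
    """Bottom-up version: table[p][s] is True if we can stop from state (s, p)."""
    length = len(runway)
    max_speed = length  # a safe, loose cap on the speed (tightened in Step 7)
    table = [[False] * (max_speed + 2) for _ in range(length)]
    for position in range(length - 1, -1, -1):
        if not runway[position]:
            continue  # on a spike: every speed at this position stays False
        table[position][0] = True  # speed 0 on a clear spot means we stopped
        for speed in range(1, max_speed + 1):
            for s in (speed - 1, speed, speed + 1):
                target = position + s
                if 0 <= target < length and table[target][s]:
                    table[position][speed] = True
                    break
    return init_speed <= max_speed and table[0][init_speed]
```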
Step 6: Add memoization
Memoization is a technique that is closely associated with DP. It is used for storing the results of expensive function calls and returning the cached result when the same inputs occur again.
Why are we adding memoization to our recursion? We encounter the same subproblems which, without memoization, are computed repeatedly. Those repetitions very often lead to exponential time complexities.
In recursive solutions, adding memoization should feel straightforward. Let’s see why. Remember that memoization is just a cache of the function results. There are times when you want to deviate from this definition in order to squeeze out some minor optimizations, but treating memoization as a function result cache is the most intuitive way to implement it.
This means that you should:
- Store your function result into your memory before every return statement
- Look up the memory for the function result before you start doing any other computation
Here is the code from above with added memoization:
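A reconstructed sketch of the memoized version, following the two rules above: look up the cache before any other computation, and store the result before every return. The cache is keyed by the two changing parameters:

```python
def can_stop_memo(runway, speed, position=0, memo=None):
    """Recursive solution with a cache keyed by (position, speed)."""
    if memo is None:
        memo = {}
    key = (position, speed)
    # Look up the memory for the function result before any other computation.
    if key in memo:
        return memo[key]
    if position < 0 or position >= len(runway) or not runway[position]:
        result = False
    elif speed == 0:
        result = True
    else:
        result = any(
            can_stop_memo(runway, s, position + s, memo)
            for s in (speed - 1, speed, speed + 1)
        )
    # Store the function result into memory before returning.
    memo[key] = result
    return result
```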
In order to illustrate the effectiveness of memoization and the different approaches, let’s do some quick tests. I will stress test all three methods that we have seen so far. Here is the setup:
- I created a runway of length 1000 with spikes in random places (each spot had a 20% probability of being a spike)
- initSpeed = 30
- I ran all functions 10 times and measured the average time of execution
Here are the results: the pure recursive approach took about 500x more time than the iterative approach, and about 1300x more time than the recursive approach with memoization. Note that this discrepancy would grow rapidly with the length of the runway. I encourage you to try running it yourself.
Step 7: Determine Time complexity
There are some simple rules that can make computing time complexity of a dynamic programming problem much easier. Here are two steps that you need to do:
- Count the number of states — this will depend on the number of changing parameters in your problem
- Think about the work done per each state. In other words, if everything else but one state has been computed, how much work do you have to do to compute that last state?
In our example problem, the number of states is |P| * |S|, where
- P is the set of all positions (|P| indicates the number of elements in P)
- S is the set of all speeds
The work done per each state is O(1) in this problem because, given all other states, we simply have to look at 3 subproblems to determine the resulting state.
As we noted in the code before, |S| is limited by the length of the runway (|P|), so we could say that the number of states is |P|², and because the work done per state is O(1), the total time complexity is O(|P|²).
However, it seems that |S| can be bounded more tightly. If the speed were really as large as |P|, stopping would clearly be impossible, because the first jump alone would carry you the length of the entire runway.
So let’s see how we can put a tighter bound on |S|. Let’s call maximum speed S. Assume that we’re starting from position 0. How quickly could we stop if we were trying to stop as soon as possible and if we ignore potential spikes?
For the first jump, we would advance at least (S-1) spots, after adjusting our speed by -1. From there we would advance at a minimum (S-2) spots, and so on.
For a runway of length L, the following has to hold:

(S-1) + (S-2) + (S-3) + ... + 1 < L
=> S * (S-1) / 2 < L
=> S² - S - 2L < 0

If you find the roots of the quadratic on the left-hand side, they are:

r1 = 1/2 + sqrt(1/4 + 2L) and r2 = 1/2 - sqrt(1/4 + 2L)

so we can write our inequality as:

(S - r1) * (S - r2) < 0

Considering that S - r2 > 0 for any S > 0 and L > 0, we need the following:

S - 1/2 - sqrt(1/4 + 2L) < 0
=> S < 1/2 + sqrt(1/4 + 2L)
That is the maximum speed that we could possibly have on a runway of a length L. If we had a speed higher than that, we could not stop even theoretically, irrespective of the position of the spikes.
That means that the total time complexity depends only on the length of the runway L in the following form:
O(L * sqrt(L)) which is better than O(L²)
O(L * sqrt(L)) is the upper bound on the time complexity
Awesome, you made it through! :)
The 7 steps that we went through should give you a framework for systematically solving any dynamic programming problem. I highly recommend practicing this approach on a few more problems to perfect your approach.
Here are some next steps that you can take
- Extend the sample problem by trying to find a path to a stopping point. We solved a problem that tells you whether you can stop, but what if you wanted to also know the steps to take in order to stop eventually along the runway? How would you modify the existing implementation to do that?
- If you want to solidify your understanding of memoization, and understand that it is just a function result cache, you should read about decorators in Python or similar concepts in other languages. Think about how they would allow you to implement memoization in general for any function that you want to memoize.
- Work on more DP problems by following the steps we went through. You can always find a bunch of them online (e.g. LeetCode or GeeksForGeeks). As you practice, keep one thing in mind: learn ideas, don’t learn problems. The number of ideas is significantly smaller, and it’s an easier space to conquer which will also serve you much better.
When you feel like you’ve conquered these ideas, check out Refdash, where you are interviewed by a senior engineer and get detailed feedback on your coding, algorithms, and system design.
Originally published at the Refdash blog. Refdash is an interviewing platform that helps engineers interview anonymously with experienced engineers from top companies such as Google, Facebook, or Palantir, and get detailed feedback. Refdash also helps engineers discover amazing job opportunities based on their skills and interests.
How to solve a Dynamic Programming Problem?
Dynamic Programming (DP) is a technique that solves certain types of problems in polynomial time. Dynamic Programming solutions are faster than the exponential brute-force method, and their correctness is easy to prove.
To solve a problem using dynamic programming, we need to check two necessary conditions:
- Overlapping Subproblems: when the solutions to the same subproblems are needed repeatedly in order to solve the actual problem, the problem is said to have the overlapping subproblems property.
- Optimal Substructure Property: if the optimal solution of the given problem can be obtained by using the optimal solutions of its subproblems, the problem is said to have the optimal substructure property.
Steps to solve a Dynamic programming problem:
- Identify if it is a Dynamic programming problem.
- Decide a state expression with the fewest parameters.
- Formulate state and transition relationship.
- Do tabulation (or memoization).
Step 1: How to classify a problem as a Dynamic Programming Problem?
- Typically, problems that require maximizing or minimizing certain quantities, counting problems that ask for the number of arrangements under certain conditions, and certain probability problems can be solved using Dynamic Programming.
- All dynamic programming problems satisfy the overlapping subproblems property, and most of the classic Dynamic Programming problems also satisfy the optimal substructure property. Once we observe these properties in a given problem, we can be sure that it can be solved using Dynamic Programming.
Step 2: Deciding the state
Dynamic Programming problems are all about the state and its transition . This is the most basic step which must be done very carefully because the state transition depends on the choice of state definition you make.
A state can be defined as the set of parameters that can uniquely identify a certain position or standing in the given problem. This set of parameters should be as small as possible to reduce state space.
In our famous Knapsack problem, we define our state by two parameters, index and weight, i.e. DP[index][weight]. Here, DP[index][weight] tells us the maximum profit we can make by taking items from the range 0 to index, given that the capacity of the sack is weight. Therefore, the parameters index and weight together uniquely identify a subproblem for the knapsack problem.
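As a brief illustrative sketch (not part of the original article), the standard 0/1 knapsack tabulation over this state could look like:

```python
def knapsack(values, weights, capacity):
    """dp[index][weight]: max profit using items 0..index-1 with capacity `weight`."""
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for index in range(1, n + 1):
        for weight in range(capacity + 1):
            dp[index][weight] = dp[index - 1][weight]  # option 1: skip the item
            if weights[index - 1] <= weight:           # option 2: take it, if it fits
                dp[index][weight] = max(
                    dp[index][weight],
                    dp[index - 1][weight - weights[index - 1]] + values[index - 1],
                )
    return dp[n][capacity]
```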
The first step to solving a Dynamic Programming problem is deciding on a state for the problem, after identifying that the problem is a Dynamic Programming problem. As we know, Dynamic Programming is all about using previously calculated results to formulate the final result. So, our next step will be to find a relation between previous states and the current state.
Step 3: Formulating a relation among the states
This part is the hardest part of solving a Dynamic Programming problem and requires a lot of intuition, observation, and practice.
Example:
Given 3 numbers {1, 3, 5}, the task is to count the total number of ways we can form a number N using the sum of the given three numbers (allowing repetitions and different arrangements).
The total number of ways to form 6 is 8:
1+1+1+1+1+1
1+1+1+3
1+1+3+1
1+3+1+1
3+1+1+1
3+3
1+5
5+1
The steps to solve the given problem will be:
- We decide a state for the given problem.
- We will take a parameter N to decide the state as it uniquely identifies any subproblem.
- The DP state will look like state(N), where state(N) means the total number of arrangements to form N by using {1, 3, 5} as elements.
- We derive a transition relation between any two states.
- Now, we need to compute state(N).
How to Compute the state?
We can only use 1, 3, or 5 to form a given number N. Let us assume that we know the results for state(n = 1), state(n = 2), state(n = 3) ... state(n = 6), and we wish to know the result for state(n = 7). Since we can only add 1, 3, or 5 as the last element, we can get a sum total of 7 in the following 3 ways:

1) Adding 1 to all possible combinations of state(n = 6)
Eg: [(1+1+1+1+1+1) + 1], [(1+1+1+3) + 1], [(1+1+3+1) + 1], [(1+3+1+1) + 1], [(3+1+1+1) + 1], [(3+3) + 1], [(1+5) + 1], [(5+1) + 1]

2) Adding 3 to all possible combinations of state(n = 4)
Eg: [(1+1+1+1) + 3], [(1+3) + 3], [(3+1) + 3]

3) Adding 5 to all possible combinations of state(n = 2)
Eg: [(1+1) + 5]

(Note how it is sufficient to add only on the right side: all the add-from-the-left cases are covered, either in the same state or in another. For example, [1+(1+1+1+3)] is not needed in state(n = 6) because it is covered by state(n = 4) via [(1+1+1+1) + 3].)

Now, think carefully and satisfy yourself that the above three cases cover all possible ways to form a sum total of 7. Therefore, we can say that:

state(7) = state(6) + state(4) + state(2)
or state(7) = state(7-1) + state(7-3) + state(7-5)
In general, state(n) = state(n-1) + state(n-3) + state(n-5)
Below is the implementation of the above approach:
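The original listing is not reproduced here; a minimal Python version of the naive recursion (with an illustrative function name) would be:

```python
def count_ways(n):
    """Count arrangements summing to n using the elements 1, 3 and 5."""
    if n < 0:
        return 0  # no way to form a negative number
    if n == 0:
        return 1  # one way to form 0: the empty sum
    return count_ways(n - 1) + count_ways(n - 3) + count_ways(n - 5)
```

For example, count_ways(6) returns 8, matching the enumeration above.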
Time Complexity: O(3^N), as at every stage we need to make three decisions, and the height of the recursion tree is of the order of N. Auxiliary Space: O(N), extra space used by the recursion call stack.
The above code is exponential because it calculates the same states again and again. So, we just need to add memoization.
Step 4: Adding memoization or tabulation for the state
This is the easiest part of a dynamic programming solution. We just need to store the state answer so that the next time that state is required, we can directly use it from our memory.
Adding memoization to the above code
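A sketch of the same recursion with a memoization table added (again with illustrative names):

```python
def count_ways_memo(n, memo=None):
    """Count arrangements summing to n using 1, 3 and 5, with memoization."""
    if memo is None:
        memo = {}
    if n < 0:
        return 0
    if n == 0:
        return 1
    if n not in memo:  # compute each state only once
        memo[n] = (count_ways_memo(n - 1, memo)
                   + count_ways_memo(n - 3, memo)
                   + count_ways_memo(n - 5, memo))
    return memo[n]
```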
Time Complexity: O(N), as we make at most 3N function calls and there are no repeated calculations, since previously calculated results are returned from the table. Auxiliary Space: O(N), extra space used by the memoization table and the recursion call stack.
Another way is to add tabulation and make the solution iterative, as sketched below. Please refer to tabulation vs. memoization for more details. Dynamic Programming comes with lots of practice: one must try solving the various classic DP problems.
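A possible tabulated (bottom-up) version of the same solution, again illustrative rather than the original listing:

```python
def count_ways_tab(n):
    """Count arrangements summing to n using 1, 3 and 5, bottom-up."""
    table = [0] * (n + 1)
    table[0] = 1  # base solution: one way to form 0 (the empty sum)
    for i in range(1, n + 1):
        table[i] = sum(table[i - step] for step in (1, 3, 5) if i >= step)
    return table[n]
```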
This article is contributed by Nitish Kumar.


How to Solve Any Dynamic Programming Problem
Updated on April 12th, 2020
If you’re aiming for a top-tier tech job, you have to face the coding interview and come out on top. And one sure-fire way to impress is to have dynamic programming down pat.
In today’s special guest post, Sam Gavis-Hughson guides us through his formula for solving any dynamic programming problem.
Take it away, Sam!
Disclosure: I’m an affiliate for Sam's courses. While the resources mentioned in this post are free, I may get a small commission if you click the links below and later buy one of his products. Thanks!
When it comes to coding interviews, not all topics are created equal.
Some are relatively easy. For example, even the hardest linked list problems don’t tend to be that difficult because the concept is on the simpler side.
But then there are some topics where even the easiest variations strike fear into the hearts of interviewees everywhere.
One such topic is dynamic programming—an optimization technique programmers can use to speed up our code when we are repeatedly solving the same problem.

Dynamic programming has truly become the de facto hard topic that your interviewer can ask you. If they want to really put you through your paces, that’s what they’ll ask about.
This means that if you’re interviewing for any top tech company, dynamic programming should be at the top of your list of topics to prepare.
What is Dynamic Programming?
Essentially, dynamic programming is a way of making a recursive algorithm more efficient by making sure it doesn’t have to solve the same subproblem twice.
When you use dynamic programming, it stores the results of subproblems so you don’t have to re-compute those results next time they’re needed. It’s an alternative to plain recursion, which requires repeating the solution process every time the subproblem is encountered.
This Stack Overflow answer words it well: “Dynamic programming is when you use past knowledge to make solving a future problem easier.”
Some benefits of dynamic programming are that it saves you coding time, reduces lines of code, and speeds up an algorithm’s processing time.
Now, here’s the crazy thing…
Dynamic Programming Doesn’t Have to Be Hard
The real challenge with dynamic programming is that it is counterintuitive. When you’re trying to solve dynamic programming problems, all the obvious steps that you would normally take actually pull you further away from the correct solution:
- Want to find the optimal solution? You actually need to start with the brute force approach.
- Want to find an iterative solution? You have to start with recursion.
- Want to solve the problem as quickly as possible? You need to slow it down and go step by step.
So if dynamic programming is so counterintuitive, how are we ever supposed to solve these problems effectively?
To start, let’s look at how most people prepare for their coding interviews:
They focus on memorizing solutions.
Rather than being strategic and understanding the underlying techniques, most people focus on simply memorizing as many solutions as they can.
Memorizing gives you a quick and easy win. But the problem is that it ultimately handicaps you.
When you focus on memorizing, your interview prep strategy becomes very simple: just go through as many problems as you can. When you go into an interview, you hope that you see a problem that you’ve studied before.
But this approach quickly leads to diminishing returns. There’s only so much that you can actually memorize, and the number of problems that you could be asked is very large.
Not to mention that this approach prevents you from actually being able to connect the dots.
Imagine learning a new language (let’s say French). You decide that you are going to create a massive deck of flashcards and simply memorize individual words. This is your plan to get to fluency.

So you start memorizing. Here’s one sample set of words:
“suis”, “es”, “est”, “sommes”, “êtes”, “sont”
What is the connection between these words (if you already know French, pretend you don’t for a sec)?
On the surface, it’s not obvious. So if you were just memorizing, you would be memorizing 6 discrete words.
However, there actually is a very close connection between these words: They’re all different conjugations of the verb “to be.”
If we look at the translations, we see:
- “Je suis” – “I am”
- “Tu es” – “You are”
- “Elle est” – “She is”
- “Nous sommes” – “We are”
- “Vous êtes” – “You all are”
- “Ils sont” – “They are”
Notice how much easier this is now that we’ve connected them all in some way that is meaningful to us? Yes we still need to memorize the specifics, but now we can see what connects them.
So what if we could do the same thing with dynamic programming?
Well, it’s never going to happen if we just try to memorize solutions to different problems. The issue is that the similarity between these different problems ISN’T in the solution itself.
The similarity between all dynamic programming problems is in the process.

For the rest of this post, I’m going to show you the exact strategy that you can use to solve any dynamic programming problem, even if you’ve never seen the problem before.
The remainder of this post is excerpted from my free ebook, Dynamic Programming for Interviews, which you can download here .
How to Solve Dynamic Programming Problems Using the Fast Method
What is the most important characteristic of any successful interviewee?
They have a repeatable strategy.
That’s exactly what the FAST Method is. It’s a repeatable strategy for solving any dynamic programming problem, whether you’ve seen the problem before or not.
The FAST Method is an acronym for the 4 steps you need to solve any dynamic programming problem:
- Find the First Solution
- Analyze the First Solution
- Identify the Subproblems
- Turn around the solution
By following these 4 steps, you’ll be able to find the optimal dynamic programming solution for any problem.
1. Find the First Solution
The first step for any dynamic programming problem (and the step that most people skip) is to find an initial brute-force solution to the problem.
The goal here is to just get something down on paper without any concern for efficiency. To be honest, this should be the first step for any problem you might solve, but it is particularly important for dynamic programming.

There are several keys to think about while doing this step that you’ll want to keep in mind:
- The solution should be recursive. If not, the problem probably isn’t a good candidate for dynamic programming. For a lot more info on effectively coming up with a recursive solution, look here .
- Each recursive call must be self-contained. This means that you cannot store your results to a global variable or pass a results variable into your function. I often refer to the required approach as “building up as you return” and you can learn more about that here .
- It is important that we minimize the number of variables that we are passing into our recursive function. Dynamic programming solutions rely on there being multiple recursive calls with the same input, and the more variables there are, the less the inputs will overlap.
With these basic rules in mind, you will have no trouble using this brute force solution in the FAST Method.
Check out Dynamic Programming for Interviews for detailed walkthroughs of 5 of the most popular dynamic programming problems.

2. Analyze the First Solution
This is the step where we decide whether we can actually use dynamic programming to solve a problem. To do this, we’re going to look at a couple of specific things.
First off, we should be sure to determine what the actual time complexity of our code is currently. One mistake that I see fairly often is attempting to optimize something that doesn’t need to be optimized. If our solution is already efficient, spending a lot of time optimizing is a real waste of time.
With most of our recursive functions, we can use a pretty simple heuristic to compute the runtime. We simply look at the branching factor of our recursive function raised to the depth.
For example, if we were finding all combinations of an input, that would give us a time complexity of `O(2^n)`. You can read a lot more about this here.

Next up, if our solution is in fact inefficient (we’re most likely looking for something that is exponential time or worse as being inefficient), we want to see if we can optimize it using dynamic programming.
Dynamic programming actually requires us to meet 2 specific criteria. If we don’t, then it is not possible for us to optimize our problem using dynamic programming. However, if we do, we will likely see a major improvement.
These are the criteria that we need to look for:
Optimal Substructure
The first criterion is that our problem must have optimal substructure. Technically speaking, this means that we must be able to find an optimal solution to a problem by solving for its subproblems. An easier way to think about this is simply that we must be able to solve the problem recursively. If we already found a recursive problem in the previous step, then we can safely assume we have optimal substructure.
Overlapping Subproblems
This is where the actual optimization comes in. If our problem has overlapping subproblems, that means that we are calling the same function with the exact same inputs multiple times. So we’re doing repetitive work for no reason. If we were to cache (or “memoize”) the results, we would be able to save a lot of time.
If we meet these two criteria, then we know that we can optimize our solution using dynamic programming.
3. Identify the Subproblems
If we find that we are able to use dynamic programming, the next step is to clearly identify the subproblems. Defining the subproblem in plain English is going to make it much easier for us to understand everything that is going on.

To approach this step, we are going to specifically look at our recursive code and see what recursive calls are being made. We then want to define in plain English what the actual meaning of that recursive call is.
For example, if we are computing the nth Fibonacci number, we have 2 recursive calls, fib(n-1) and fib(n-2). To define these in plain English: fib(n) simply returns the nth Fibonacci number. Pretty simple.
Once we’ve identified what the subproblems are, we can also memoize our recursive solution to make it more efficient. All this means is that we are going to add an array or HashMap that stores all of the values that we’ve previously computed.
Each time we make a function call, we will look in our array to see if a result has already been computed for the current inputs. If so, then we can return it without actually computing anything. If not, we compute it and then save it into our array.
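As an illustrative sketch (not taken from the ebook), here is that pattern applied to the Fibonacci example, with a Python dict playing the role of the HashMap:

```python
def fib(n, memo=None):
    """Return the nth Fibonacci number, caching previously computed values."""
    if memo is None:
        memo = {}
    if n in memo:  # a result was already computed for this input
        return memo[n]
    if n < 2:      # base cases: fib(0) == 0, fib(1) == 1
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)  # save before returning
    return memo[n]
```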
For full code and example problems, download Dynamic Programming for Interviews .
4. Turn Around the Solution
If you’ve ever spent any serious time studying dynamic programming solutions in the past, you may have noticed that the vast majority of them are iterative, not recursive. And yet up to this point in the FAST Method, we have only generated recursive solutions.
In this optional final step of the process, we will take our recursive solution and make it into an iterative solution.
When doing dynamic programming, we really have two different options:
- A top-down (memoized) solution
- A bottom-up (tabular) solution
A top-down solution is the recursive solution that we found in the previous step. We call this top-down because we are starting with the goal result that we’re trying to get (ie. fib(5)) and then repeatedly split it into smaller subproblems until we reach our base case.
Bottom-up is simply the opposite of that. Rather than starting with our target input, we start with the base cases. We iteratively compute larger and larger subproblems until we reach our target result.
Both of these approaches will give us the same worst case complexity. However, many people prefer the bottom-up approach because recursive code tends to execute slower than iterative code that does the same work, given that it requires additional overhead.

The key to turning around the solution and finding a bottom-up solution is to look at what the smallest subproblems are. This is why we needed to carefully identify and define the subproblems in the previous step.
We can start with computing our base case. Then we need to determine how to compute a given subproblem, assuming all the smaller subproblems have already been computed. We’ll save all of these subproblem solutions into an array so that we can easily look them up.
See examples of exactly how to do this in my free ebook, Dynamic Programming for Interviews .
At the end of the day, dynamic programming is a challenging topic. It’s not something that you can magically become a master at overnight.
However, it also isn’t something you have to be afraid of. If you nail down your recursion skills and understand the FAST Method, even the most challenging dynamic programming problems can be easily solved during your interview.
Next Steps : Where to Learn More About Dynamic Programming
Want to learn more about dynamic programming? Check out these online courses:
- Intro To Dynamic Programming – Coding Interview Preparation (Udemy): Specifically designed to help you gain foundational knowledge for your software engineering coding interview
- Dynamic Programming – I (Udemy): Also intro-level and geared toward coding interview basics
- Master the Art of Dynamic Programming (Udemy): In-depth theory and practice of dynamic programming
- Learn Dynamic Programming Using Python (Udemy): Hones in specifically on dynamic programming in the Python programming language
- Greedy Algorithms, Minimum Spanning Trees, and Dynamic Programming (Coursera): A course offered by Stanford about dynamic programming and other algorithm-related topics
- Dynamic Programming: Applications in Machine Learning and Genomics (edX): Learn about two interesting applications of dynamic programming that meld science and technology, from UC San Diego
About the Author

Sam Gavis-Hughson, founder of Byte by Byte, helps software engineers successfully interview for jobs at top tech companies. He is the author of Dynamic Programming for Interviews , the ebook that shows anyone how to succeed at dynamic programming interviews.
Dynamic Programming: An Approach to Solving Computing Problems

Sometimes in computer science, you will run into problems. You can divide these into subproblems, which can, in turn, be divided into smaller subproblems. If the smaller subproblems overlap, then you can save the result in memory for future reference. This way, you don’t need to compute the same result multiple times, thus increasing the efficiency of the program substantially. This way of solving these problems is referred to as dynamic programming.
In this article, you will learn what dynamic programming is. I will also show how to compute Fibonacci numbers, which is a simple problem that dynamic programming can solve. I will compare the dynamic programming solutions to the naive solution that uses recursion. These examples are written in Python syntax. Finally, I’ll also give some general pointers to keep in mind when attempting to solve problems using dynamic programming.
What Types of Problems Can Dynamic Programming Solve?
Dynamic programming is typically a way to optimize solutions to certain problems that use recursion. If a recursive solution to a problem has to compute solutions for subproblems with the same inputs repeatedly, then you can optimize it through dynamic programming. As mentioned earlier, in this case, you would simply save the result of the computation for use later if and when it’s needed. This optimization can reduce the time complexity of an algorithm from exponential time to polynomial time. This means that the number of computations scales like a polynomial expression in the input size n, instead of scaling like an exponential expression, as n increases. In general, polynomial expressions grow much slower than exponential expressions.
There are two conditions that need to be satisfied to use dynamic programming:
- Overlapping subproblems
- Optimal substructure property
What Are Overlapping Subproblems?
I alluded to the overlapping subproblems condition earlier. It simply means that when solving the problem in question, the solutions for the same subproblems are repeatedly necessary. In this case, the solutions to these subproblems can be stored for later use to skip recomputation.
Another way to think about this condition is to turn it upside down. If there are no overlapping subproblems, then there’s no point in saving the solutions for the subproblems, and you can’t use dynamic programming.
There are two different ways of storing the solutions to the overlapping subproblems:
- Memoization (top-down)
- Tabulation (bottom-up)
What Is Memoization?
The memoization approach to dynamic programming is very similar to the naive recursion approach, with only a small change. The difference is that we use a lookup table to store solutions to the subproblems, and then use this table to check whether that solution already exists.
If the solution for a certain subproblem already exists in the lookup table, then that solution is returned from the lookup table. If it does not, then it needs to be computed (through recursion) and added to the lookup table.
For the sake of clarity, let’s define the solution for a subproblem in our dynamic programming problem to be DP[X], with DP[N] being the desired solution and DP[0] being the base solution. In the memoization approach, the program starts from DP[N] and asks for the solutions from which DP[N] can be reached (these should be subproblems of lower order, DP[n<N]). Then, from these states, the same process is repeated recursively, until reaching the base solution DP[0].
If this feels a little too abstract, don’t worry. The examples introduced later in this article should clarify what I mean.
Memoization is known as a top-down approach to dynamic programming since the program will start from the desired, last (top) state and work its way down recursively.
What Is Tabulation?
The tabulation approach to dynamic programming works in a reverse manner compared to the memoization approach. The program will start from the base (or bottom) solution for the subproblem and work its way up, solving the subproblems one by one until reaching the desired solution.
In terms of solutions to subproblems, the tabulation approach starts from the base solution DP[0] and then calculates DP[1], DP[2], ..., DP[N] until reaching the desired solution for the subproblem DP[N]. Since we started from the base solution DP[0] and worked towards the desired solution DP[N], the tabulation approach is also known as a bottom-up approach.
Again, the examples below should make this easier to understand.
What Is Optimal Substructure Property?
If the optimal solution to a problem can be obtained using the optimal solution to its subproblems, then the problem is said to have optimal substructure property .
As an example, let’s consider the problem of finding the shortest path between ‘Start’ and ‘Goal’ nodes in the graph below. The nodes are connected to other nodes via edges, and the distance between two connected nodes is marked with a number next to the edge.

The shortest path from the Start node to the Goal node is through nodes three and four.
This problem clearly has optimal substructure property. Since the shortest path from the Start node to the Goal node goes through Node Four, it clearly follows that this path is a combination of the shortest path from the Start node to Node Four and the shortest path from Node Four to the Goal node.
Many problems do not have optimal substructure property. For example, the problem of finding the longest path (with no cycles) between the Start node and the Goal node in the above graph doesn’t. Here’s how you can see this:
The longest path is: Start - Node Three - Node Two - Node One - Node Four - Goal. However, this does not imply that the longest path from the Start node to Node Two is Start - Node Three - Node Two. The longest path from the Start node to Node Two is actually Start - Node One - Node Four - Node Three - Node Two (and Start - Node One - Node Four - Goal - Node Two, which has equal length to the first one).
Dynamic Programming Example: Calculating Fibonacci Numbers
One of the simplest examples of dynamic programming is the computation of Fibonacci numbers, which are numbers from the Fibonacci sequence. The first Fibonacci number is zero, the second is one, and the subsequent numbers are the sum of the previous two Fibonacci numbers. The first 10 Fibonacci numbers are zero, one, one, two, three, five, eight, 13, 21, and 34.
Let’s first start with the naive, recursive solution. Here’s a Python function to calculate the nth Fibonacci number (indexing starts from zero):
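Such a function might look like the following minimal sketch (the name fib and the exact listing are assumptions based on the description above):

def fib(n):
    # Base cases: fib(0) = 0, fib(1) = 1
    if n < 2:
        return n
    # Each call branches into two further recursive calls
    return fib(n - 1) + fib(n - 2)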
From this example, it is easy to see that this problem satisfies the overlapping subproblems condition, since the function clearly has to calculate the same smaller Fibonacci numbers multiple times (whenever n is larger than two). The smallest Fibonacci numbers are computed most often when this function is called for a large n.
This problem also clearly has optimal substructure since there is only a single solution for the subproblems, which are used to compute the solution to the desired problem.
Because each call spawns two further recursive calls, this function runs in exponential time.
Let’s next look at how this could be solved using dynamic programming. Let’s start with a top-down solution using memoization. Here’s a Python function that calculates the nth Fibonacci number using dynamic programming through memoization:
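A minimal sketch of such a memoized function, using a dictionary as the lookup table (an assumed implementation, not the article’s original listing):

def fib_memo(n, memo=None):
    # The lookup table is created once and shared across recursive calls
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:
        # Compute the value only if it is not in the lookup table yet
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]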
This approach is quite similar to the recursive approach, since it starts from calculating the nth Fibonacci number and, in order to do so, goes on to calculate the (n-1)st and (n-2)nd Fibonacci numbers. The difference is the lookup table, where the smaller Fibonacci numbers are saved as they are calculated, so that they do not need to be calculated again and again.
This makes the function run in linear time instead of exponential time.
For the sake of an example, let’s also look at a Python function that solves the same problem in a bottom-up manner with dynamic programming using tabulation:
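A minimal sketch of such a tabulated function (again an assumed reconstruction, consistent with the description that follows):

def fib_tab(n):
    if n < 2:
        return n
    # table[i] holds the ith Fibonacci number, filled from the bottom up
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]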
In this solution, the nth Fibonacci number is calculated in a bottom-up manner, starting by calculating the first Fibonacci number, then the second, third and so on until reaching the nth Fibonacci number. This function also runs in linear time.
How to Solve Problems Using Dynamic Programming
The first step to solving a problem using dynamic programming is to identify it as a dynamic programming problem. If you can validate that the problem has overlapping subproblems and that it satisfies the optimal substructure property, you can be sure that you can solve it with dynamic programming.
The second step is to decide on a state expression. This state is the set of parameters that uniquely identifies the different subproblems.
In the Fibonacci numbers example, the parameter identifying the state would just be the index of a certain Fibonacci number.
There can be more than one parameter identifying a state. You should always use as few parameters as possible, however.
The third — and probably hardest — step in solving problems using dynamic programming is formulating the relation between the different states.
In the Fibonacci case this is simple: the nth Fibonacci number is the sum of the (n-1)st and (n-2)nd Fibonacci numbers, so F[n] = F[n-1] + F[n-2].
The fourth step is to implement memoization or tabulation for the states that you decided upon, using the relation you discovered between the states. This means simply saving the state (in other words, the solution to a certain subproblem) so it can be recovered from memory without recomputation when it is needed again. This should be fairly simple if you did steps one to three well.
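In Python, for instance, step four can sometimes be a one-line change, since the standard library ships a generic memoization decorator; a sketch (not part of the original article):

from functools import lru_cache

@lru_cache(maxsize=None)  # caches the result for each state n the first time it is computed
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)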
Beginner’s Guide to Dynamic Programming: Optimize Your Code Using Some Simple Techniques
Dynamic programming is an art: the more problems you solve, the easier it gets.
Sometimes when you write code, it might take a long time to execute, or it may never finish running even if your logic is fine. The same problem occurred to me while solving Google Foobar challenge questions: I realized that my solution was not optimized and was using all available RAM (for large values).
Such problems require an entirely different approach: optimizing the code by applying the concept of dynamic programming.
What is Dynamic Programming?
Dynamic programming is a terrific approach that can be applied to a class of problems for obtaining an efficient and optimal solution.
In simple words, the concept behind dynamic programming is to break a problem into sub-problems and save their results for the future, so that we do not have to compute the same sub-problem again. When the optimal solutions to the sub-problems can be combined into the optimal overall solution, the problem is said to have the optimal substructure property.
Two ways in which dynamic programming can be applied:
Memoization (top-down): In this method, the problem is broken down, and if a sub-problem has been solved already, the saved value is returned; otherwise, the value of the function is memoized, i.e., it is calculated the first time, and for every later request the stored value is returned. Memoization is a great fit for computationally expensive programs. Don’t confuse memoization with “memorize”.
Memoize != memorize
Tabulation (bottom-up): This is an effective way of avoiding recursion and the cost that recursion builds up (i.e., the time and memory spent recalculating the same values). Here, the solutions to small problems are calculated first, and these build up to the solution of the overall problem. (You will have more clarity on this with the examples explained later in the article.)
Understanding Where to Use This Technique
As mentioned above, if you notice that the problem can be broken down into sub-problems, that these can be broken into much smaller ones, and that some of them overlap (i.e., they require the computation of previously calculated values), then the problem is a candidate for this technique. The main goal is to optimize the code by reducing the repetition of values, which we do by storing the results of sub-problems.
Dynamic programming can be applied to any such problem that requires the re-calculation of certain values to reach the final solution.
Recursion and Dynamic Programming
Remember, dynamic programming should not be confused with recursion.
Recursion is a way of finding the solution by expressing the value of a function in terms of other values of that function, directly or indirectly; such a function is called a recursive function. It follows a top-down approach.
Dynamic programming is essentially recursion with memoization: calculating and storing values that can be accessed later to solve subproblems that occur again, hence making your code faster and reducing the time complexity (fewer CPU cycles are spent on recomputation).
Here, the basic idea is to save time by the efficient use of space: plain recursion spends time recomputing values, while dynamic programming uses extra space to store the solutions to subproblems for future reference, thus saving time.
Understanding Dynamic Programming With Examples
Let’s start with a basic example of the Fibonacci series.
The Fibonacci series is a sequence of numbers in which each number is the sum of the two preceding ones, starting from 0 and 1.
F(n) = F(n-1) + F(n-2)
- Recursive Method
Here, the program will call itself, again and again, to calculate further values. The time complexity of the recursion-based approach is around O(2^N). The space complexity is O(N), as the recursion depth can reach N.
For example:
F(4) = F(3) + F(2) = (F(2) + F(1)) + F(2) = ((F(1) + F(0)) + F(1)) + (F(1) + F(0))
In this method, values like F(2) are computed twice, and calls for F(1) and F(0) are made multiple times. Imagine the number of repetitions if you had to calculate F(100). This method is ineffective for large values.
- Top-Down Method
Here, the computation time is reduced significantly, as the outputs produced after each recursion are stored in a list and can be reused later. This method is much more efficient than the previous one.
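A sketch of this top-down method, with the outputs stored in a list as described (the listing itself is assumed):

def fib_top_down(n, memo=None):
    # memo[i] stores F(i) once computed; -1 marks "not yet computed"
    if memo is None:
        memo = [-1] * (n + 1)
    if n < 2:
        return n
    if memo[n] == -1:
        memo[n] = fib_top_down(n - 1, memo) + fib_top_down(n - 2, memo)
    return memo[n]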
- Bottom-Up Method
This code doesn’t use recursion at all. Here, we create a list of length (n+1) and set the base cases F(0) and F(1) at index positions 0 and 1. The list is then filled with the calculated values using a for loop for index values 2 up to n.
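A sketch matching that description (an assumed reconstruction of the listing):

def fib_bottom_up(n):
    f = [0] * (n + 1)  # list of length n + 1
    if n > 0:
        f[1] = 1       # base cases: F(0) = 0 at index 0, F(1) = 1 at index 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]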
Unlike the recursive method, the time complexity of this code is linear, and it takes much less time to compute the solution, as the loop runs from 2 to n, i.e., it runs in O(n). This approach is the most efficient way to write the program.
Time complexity: O(n) <<< O(2^N)
Now, let’s see another example (this is an intermediate level problem):
Problem statement: You have to build a staircase in such a way that each staircase consists of 2 or more steps. No two steps are allowed to be at the same height; each step must be lower than the previous one. All steps must contain at least one brick. A step’s height is defined as the total number of bricks that make up that step. For example, when N = 3, you have only 1 choice of how to build the staircase, with the first step having a height of 2 and the second step having a height of 1, i.e. (2, 1). But when N = 5, there are two ways you can build a staircase from the given bricks: the two staircases can have heights (4, 1) or (3, 2).
Write a function called solution(n) that takes a positive integer n and returns the number of different staircases that can be built from exactly n bricks. n will always be at least 3 (so you can have a staircase at all), but no more than 200.
This is a problem I had to solve at level 3 of the Google Foobar Challenge. I would suggest you try this question on your own before reading the solution; it will help you understand the concept better.
An intuitive approach to this problem:
- My first intuitive approach was to create a list l of integers up to n.
- Then append all the possible combinations of integers from list l into a new list sol.
- At the final step, I used a for loop to check, for every element of the list sol, whether its sum is equal to the required value. If the condition is true, that element is appended to another new list, final, and the length of final is returned as the final solution to the problem (a sketch follows below).
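A sketch of this brute-force idea (the names l, sol, and final follow the description above; the function name is an assumption):

from itertools import combinations

def solution_bruteforce(n):
    l = list(range(1, n))      # candidate step heights, all smaller than n
    sol = []
    for r in range(2, n + 1):  # a staircase needs at least 2 steps
        sol.extend(combinations(l, r))
    # Keep only the combinations whose bricks add up to exactly n;
    # each set of distinct heights corresponds to one strictly decreasing staircase
    final = [c for c in sol if sum(c) == n]
    return len(final)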
This code turned out to be very ineffective and didn’t work for large values, for the same reason as before: high time complexity and repeated calculations of certain values. Running this code for large values (like 100) will use all available RAM, and the code will eventually crash.
Bottom-up approach for the same problem:
- In the first step, a list a is initialized to store the values from the loops that follow, as sketched below.
- List a is initialized to [1, 0, 0, ...] because there is exactly one staircase with 0 bricks and 0 height.
- In each iteration of the inner loop, list a is transformed from representing max height i-1 to representing max height i, by incorporating the possibility of adding a step of height i to any shorter staircase that leaves you with at least i bricks.
- After each iteration of the outer loop, a[j] is the number of staircases you can make with height at most i, where j is the number of bricks used.
- In the final step, the function returns the number of different staircases that can be built from exactly n bricks (1 is subtracted at the end to exclude the single stair of height n).
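A sketch matching this description (assuming the Foobar-style entry point solution(n)):

def solution(n):
    a = [1] + [0] * n  # a[j]: staircases using j bricks with all step heights <= i
    for i in range(1, n + 1):          # allow a new step of height i
        for j in range(n, i - 1, -1):  # go downwards so height i is used at most once
            a[j] += a[j - i]
    return a[n] - 1  # subtract 1 to exclude the single stair of height n

For example, solution(3) returns 1 and solution(5) returns 2, matching the problem statement.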
This method is effective for large values as well, since time is saved at the cost of extra space.
This kind of approach can be applied to other problems as well; you just need to identify them and apply the basics of dynamic programming, and you will be able to solve them efficiently.
Dynamic programming is a very effective technique for the optimization of code. The technique itself is simple and easy to learn; however, it requires some practice to master.
“Those who cannot remember the past are condemned to repeat it.” -George Santayana
Introduction to Dynamic Programming
Contents: Cold War Between Systematic Recursion and Dynamic Programming; Minimum Steps to One; Problem: Longest Increasing Subsequence; Problem: Longest Common Subsequence (LCS); Memory Constrained DP; Practice Problems; Finding the nth Fibonacci Number in O(log n).
Dynamic programming (usually referred to as DP) is a very powerful technique to solve a particular class of problems. It demands a very elegant formulation of the approach and simple thinking, and the coding part is very easy. The idea is very simple: if you have solved a problem with the given input, then save the result for future reference, so as to avoid solving the same problem again. In short: ‘Remember your Past’ :). If the given problem can be broken up into smaller sub-problems, and these smaller subproblems are in turn divided into still smaller ones, and in this process you observe some overlapping subproblems, then it’s a big hint for DP. Also, the optimal solutions to the subproblems contribute to the optimal solution of the given problem (referred to as the Optimal Substructure Property).
There are two ways of doing this.
Start solving the given problem by breaking it down. If you see that the problem has been solved already, then just return the saved answer. If it has not been solved, solve it and save the answer. This is usually easy to think of and very intuitive. This is referred to as Memoization .
Analyze the problem and see the order in which the sub-problems are solved and start solving from the trivial subproblem, up towards the given problem. In this process, it is guaranteed that the subproblems are solved before solving the problem. This is referred to as Dynamic Programming .
Note that divide and conquer is a slightly different technique. There, we divide the problem into non-overlapping subproblems and solve them independently, as in merge sort and quick sort.
In case you are interested in seeing visualizations related to Dynamic Programming, try this out: https://www.thelearningpoint.net/computer-science/dynamic-programming
Complementary to Dynamic Programming are Greedy Algorithms (https://www.thelearningpoint.net/computer-science/greedy-algorithms), which make a decision once and for all every time they need to make a choice, in such a way that it leads to a near-optimal solution. A Dynamic Programming solution is based on the principle of Mathematical Induction, whereas greedy algorithms require other kinds of proof.
Recursion uses the top-down approach to solve a problem: it begins with the core (main) problem, breaks it into subproblems, and solves these subproblems similarly. In this approach, the same subproblem can occur multiple times and consume extra CPU cycles, increasing the time complexity. In dynamic programming, by contrast, the same subproblem will not be solved multiple times; the prior result is used to optimize the solution. E.g., in the Fibonacci series:
Fib(4) = Fib(3) + Fib(2)
       = (Fib(2) + Fib(1)) + Fib(2)
       = ((Fib(1) + Fib(0)) + Fib(1)) + Fib(2)
       = ((Fib(1) + Fib(0)) + Fib(1)) + (Fib(1) + Fib(0))
Here, calls to Fib(1) and Fib(0) are made multiple times. In the case of Fib(100), these calls would be made millions of times. Hence there is a lot of wastage of resources (CPU cycles and memory for storing information on the stack).
In (bottom-up) dynamic programming, all the subproblems are solved, even those that are not needed, whereas in recursion only the required subproblems are solved. So a solution by dynamic programming should be properly framed to remove this ill-effect.
For example, in combinatorics, C(n,m) = C(n-1,m) + C(n-1,m-1).
In a simple solution, one would have to construct the whole Pascal’s triangle to calculate C(5,4), but recursion could save a lot of time.
Dynamic programming and recursion work in almost the same way in the case of non-overlapping subproblems. In such problems, other approaches, like “divide and conquer”, can be used.
Even some of the high-rated coders go wrong on tricky DP problems many times. DP gurus suggest that DP is an art and it’s all about practice. The more DP problems you solve, the easier it gets to relate a new problem to one you have solved already, and the faster your thinking gets tuned. It looks like magic when you see someone solving a tricky DP problem so easily. It’s time for you to learn some magic now :). Let’s start with a very simple problem.
Problem Statement:
On a positive integer, you can perform any one of the following 3 steps: 1) subtract 1 from it (n = n - 1); 2) if it is divisible by 2, divide it by 2 (if n % 2 == 0, then n = n / 2); 3) if it is divisible by 3, divide it by 3 (if n % 3 == 0, then n = n / 3). Now the question is: given a positive integer n, find the minimum number of steps that takes n to 1.
E.g.: 1) For n = 1, output: 0. 2) For n = 4, output: 2 (4/2 = 2, 2/2 = 1). 3) For n = 7, output: 3 (7-1 = 6, 6/3 = 2, 2/2 = 1).
Approach / Idea:
One can think of greedily choosing the step which makes n as low as possible and continuing the same way until it reaches 1. If you observe carefully, the greedy strategy doesn’t work here. E.g.: Given n = 10, greedy gives 10/2 = 5, 5-1 = 4, 4/2 = 2, 2/2 = 1 (4 steps), but the optimal way is 10-1 = 9, 9/3 = 3, 3/3 = 1 (3 steps). So we need to try out all possible steps we can make for each value of n we encounter and choose the minimum of these possibilities.
It all starts with recursion :). F(n) = 1 + min{ F(n-1), F(n/2), F(n/3) } if n > 1, else 0 (i.e., F(1) = 0). Now that we have our recurrence equation, we can right away start coding the recursion. Wait... does it have overlapping subproblems? YES. Does the optimal solution to a given input depend on the optimal solutions of its subproblems? Yes... Bingo! It’s DP :). So, we just store the solutions to the subproblems we solve and use them later on, as in memoization, or we start from the bottom and move up till the given n, as in DP. As this is the very first problem we are looking at here, let’s see both the codes.
Memoization
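A Python sketch of the memoized solution (the original listing is not reproduced here, so this simply follows the recurrence above):

def min_steps(n, memo=None):
    # F(n) = 1 + min{ F(n-1), F(n/2), F(n/3) }, with F(1) = 0
    if memo is None:
        memo = {}
    if n <= 1:
        return 0
    if n not in memo:
        best = min_steps(n - 1, memo)
        if n % 2 == 0:
            best = min(best, min_steps(n // 2, memo))
        if n % 3 == 0:
            best = min(best, min_steps(n // 3, memo))
        memo[n] = 1 + best
    return memo[n]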
Bottom-Up DP
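And a sketch of the bottom-up version (again an assumed listing, following the same recurrence):

def min_steps_dp(n):
    dp = [0] * (n + 1)  # dp[i]: minimum steps to reduce i to 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + 1
        if i % 2 == 0:
            dp[i] = min(dp[i], dp[i // 2] + 1)
        if i % 3 == 0:
            dp[i] = min(dp[i], dp[i // 3] + 1)
    return dp[n]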
Both the approaches are fine. But one should also take care of the substantial overhead involved in the function calls in memoization, which may occasionally cause a stack overflow error or TLE.
The Longest Increasing Subsequence problem is to find the longest increasing subsequence of a given sequence. Given a sequence S = {a1, a2, a3, ..., a(n-1), a(n)}, we have to find the longest subsequence such that for all j and i with j < i in the subsequence, a(j) < a(i).
First of all, we have to find the length of the longest increasing subsequence ending at every index i, with a(i) as its last element; call this LS(i). The largest LS(i) is then the length of the longest increasing subsequence of the given sequence. To begin, LS(i) is assigned to be one, since a(i) by itself is a subsequence of length one. Then, for all j such that j < i and a(j) < a(i), we find the largest LS(j) and set LS(i) = LS(j) + 1. The algorithm takes O(n^2) time.
Pseudo-code for finding the length of the longest increasing subsequence:
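A Python rendering of that pseudo-code (assumed, following the O(n^2) description above):

def lis_length(seq):
    n = len(seq)
    ls = [1] * n  # ls[i]: length of the longest increasing subsequence ending at seq[i]
    for i in range(n):
        for j in range(i):
            if seq[j] < seq[i]:
                ls[i] = max(ls[i], ls[j] + 1)
    return max(ls) if ls else 0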
This algorithm’s complexity could be reduced by using a better data structure than an array. Storing a predecessor array and a variable like largest_sequence_so_far along with its index would also save a lot of time.
A similar concept can be applied to finding the longest path in a directed acyclic graph.
---------------------------------------------------------------------------
Longest Common Subsequence - Dynamic Programming - Tutorial and C Program Source code
Given a sequence of elements, a subsequence of it can be obtained by removing zero or more elements from the sequence, preserving the relative order of the elements. Note that for a substring the elements need to be contiguous in the given string; for a subsequence they need not be. E.g.: S1 = "ABCDEFG" is the given string. "ACEG" and "CDF" are subsequences, whereas "AEC" is not. For a string of length n, the total number of subsequences is 2^n (each character can either be taken or not taken). Now the question is: what is the length of the longest subsequence that is common to the given two strings S1 and S2? Let’s denote the length of S1 by N and the length of S2 by M.
The brute-force approach is to consider each of the 2^N subsequences of S1, check whether it is also a subsequence of S2, and take the longest of all such common subsequences. Clearly, very time consuming.
Can we break the problem of finding the LCS of S1[1...N] and S2[1...M] in to smaller subproblems ?
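Yes. If the last characters of the two strings match, they can serve as the last character of the LCS, and the problem reduces to the LCS of S1[1...N-1] and S2[1...M-1]; otherwise the answer is the longer of the LCS of (S1[1...N-1], S2[1...M]) and (S1[1...N], S2[1...M-1]). A minimal Python sketch of the resulting tabulation (a standard implementation, not the original C program referenced in the heading):

def lcs_length(s1, s2):
    n, m = len(s1), len(s2)
    # dp[i][j]: length of the LCS of s1[:i] and s2[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

For example, lcs_length("ABCDEFG", "ACEG") returns 4.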
1. Other classic DP problems: 0-1 Knapsack Problem (tutorial and C program), Matrix Chain Multiplication (tutorial and C program), Subset Sum, Coin Change, All-to-All Shortest Paths in a Graph (tutorial and C program), assembly line scheduling, topological sort.
You can refer to some of these on the Algorithmist site.
2. The Lucky Draw (June 09 contest): https://www.codechef.com/problems/D2/
3. Find the number of increasing subsequences of length 1 or more in the given sequence.
4. To see more problems on DP, visit this link.
5. TopCoder - ZigZag
6. TopCoder - AvoidRoads - a simple and nice problem to practice
7. For more DP problems of different varieties, refer to this very nice collection: https://www.codeforces.com/blog/entry/325
8. TopCoder problem archive
This is not related to Dynamic Programming, but since finding the nth Fibonacci number is discussed here, it is useful to know a very fast technique for solving the same problem.
Note: The method described above for finding the nth Fibonacci number using dynamic programming runs in O(n) time. There is a still better method to find F(n) when n becomes as large as 10^18 (as F(n) can be very huge, all we want is to find F(n) % MOD for a given MOD).
Consider the Fibonacci recurrence F(n+1) = F(n) + F(n-1). We can represent this in matrix form, as shown below.
A = | 1  1 |
    | 1  0 |
Start with [ F(1) F(0) ]; multiplying it by A^n gives us [ F(n+1) F(n) ], so all that is left is finding the nth power of the matrix A. Well, this can be computed in O(log n) time by recursive doubling. The idea is: to find A^n, we can compute R = A^(n/2) x A^(n/2), and if n is odd, we additionally multiply by A at the end. The following pseudocode shows the same.
Matrix findNthPower( Matrix M, power n )
    if ( n == 1 ) return M;
    Matrix R = findNthPower( M, n/2 );
    R = R x R;  // matrix multiplication
    if ( n % 2 == 1 ) R = R x M;  // matrix multiplication
    return R;
This method is in general applicable to solving any homogeneous linear recurrence equation, e.g. G(n) = a.G(n-1) + b.G(n-2) - c.G(n-3); all we need to do is find the corresponding matrix A and apply the same technique.
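For concreteness, here is a compact Python sketch of the whole technique applied to Fibonacci (2x2 matrices as nested lists; the helper names are assumptions):

def mat_mult(x, y):
    # Multiply two 2x2 matrices
    return [[x[0][0]*y[0][0] + x[0][1]*y[1][0], x[0][0]*y[0][1] + x[0][1]*y[1][1]],
            [x[1][0]*y[0][0] + x[1][1]*y[1][0], x[1][0]*y[0][1] + x[1][1]*y[1][1]]]

def mat_pow(m, n):
    # Recursive doubling: A^n = A^(n/2) * A^(n/2), times A once more if n is odd
    if n == 1:
        return m
    r = mat_pow(m, n // 2)
    r = mat_mult(r, r)
    if n % 2 == 1:
        r = mat_mult(r, m)
    return r

def fib_fast(n):
    if n == 0:
        return 0
    a = [[1, 1], [1, 0]]
    return mat_pow(a, n)[0][1]  # A^n = [[F(n+1), F(n)], [F(n), F(n-1)]]

(In the competitive-programming setting described above, every multiplication would also be taken modulo MOD.)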
Tutorials and C Program Source Codes for Common Dynamic Programming problems
Floyd Warshall Algorithm - tutorial and C program source code: https://www.thelearningpoint.net/computer-science/algorithms-all-to-all-shortest-paths-in-graphs---floyd-warshall-algorithm-with-c-program-source-code
Integer Knapsack Problem - tutorial and C program source code: https://www.thelearningpoint.net/computer-science/algorithms-dynamic-programming---the-integer-knapsack-problem
Longest Common Subsequence - tutorial and C program source code: https://www.thelearningpoint.net/computer-science/algorithms-dynamic-programming---longest-common-subsequence
Matrix Chain Multiplication - tutorial and C program source code: https://www.thelearningpoint.net/algorithms-dynamic-programming---matrix-chain-multiplication
Related topics: Operations Research, Optimization problems, Linear Programming, Simplex, LP Geometry
Dynamic Programming techniques are primarily based on the principle of Mathematical Induction, unlike greedy algorithms, which try to make an optimization based on local decisions, without looking at previously computed information or tables. Some classic cases of greedy algorithms are the greedy knapsack problem and task scheduling (https://www.thelearningpoint.net/computer-science/algorithms-greedy-algorithms---fractional-knapsack-problems-task-scheduling-problem) and Huffman compression trees (https://www.thelearningpoint.net/computer-science/algorithms-greedy-algorithms---data-compression-using-huffman-encoding). Insertion sort (https://www.thelearningpoint.net/computer-science/arrays-and-sorting-insertion-sort--with-c-program-source-code) can be viewed as an example of dynamic programming, selection sort (https://www.thelearningpoint.net/computer-science/arrays-and-sorting-selection-sort--with-c-program-source-code) is an example of a greedy algorithm, and Merge Sort (https://www.thelearningpoint.net/computer-science/arrays-and-sorting-merge-sort--with-c-program-source-code) and Quick Sort (https://www.thelearningpoint.net/computer-science/arrays-and-sorting-quick-sort--with-c-program-source-code) are examples of divide and conquer. So, different categories of algorithms may be used for accomplishing the same goal - in this case, sorting.
The Ultimate Guide to Dynamic Programming
Imagine it again with those spooky Goosebumps letters.
What Is Dynamic Programming?
Before we get into all the details of how to solve dynamic programming problems, it’s key that we answer the most fundamental question:
A problem can be optimized using dynamic programming if it has two properties: optimal substructure and overlapping subproblems.
Optimal Substructure
“Using online flight search, we will frequently find that the cheapest flight from airport A to airport B involves a single connection through airport C, but the cheapest flight from airport A to airport C involves a connection through some other airport D.”
Overlapping Subproblems
Notice how we see repeated values in the tree. The number 3 is repeated twice, 2 is repeated three times, and 1 is repeated five times. Each of those repeats is an overlapping subproblem. There is no need for us to compute those subproblems multiple times because the value won’t change. If we cache it, we can save ourselves a lot of work.
When Should I Use Dynamic Programming?
Solve any DP problem using the FAST Method.
It was this mission that gave rise to The FAST Method .
Find the First Solution
While this may seem like a toy example, it is really important to understand the difference here. This second version of the function is reliant on result to compute the result of the function and result is scoped outside of the fibInner() function.
Analyze the First Solution
Now that we have our brute force solution, the next step in The FAST Method is to analyze the solution. In this step, we are looking at the runtime of our solution to see if it is worth trying to use dynamic programming and then considering whether we can use it for this problem at all.
So it would be nice if we could optimize this code, and if we have optimal substructure and overlapping subproblems, we could do just that.
Do we have optimal substructure? Yep. Again, the recursion basically tells us all we need to know on that count.
Identify the Subproblems
So what is our subproblem here? This problem is quite easy to understand because fib(n) is simply the nth Fibonacci number. Since our result is only dependent on a single variable, n , it is easy for us to memoize based on that single variable.
This problem starts to demonstrate the power of truly understanding the subproblems that we are solving. Specifically, not only does knapsack() take in a weight, it also takes in an index as an argument. So if you call knapsack(4, 2) what does that actually mean? What is the result that we expect? Well, if you look at the code, we can formulate a plain English definition of the function:
In terms of the time complexity here, we can turn to the size of our cache. Each value in the cache gets computed at most once, giving us a complexity of O(n*W) .
Turn Around the Solution
So how do we write the code for this? Well, our cache is going to look identical to how it did in the previous step; we’re just going to fill it in from the smallest subproblems to the largest, which we can do iteratively.
With this, we can start to fill in our base cases. We’ll start by initializing our dp array. Whenever the max weight is 0, knapsack(0, index) has to be 0. Referring back to our subproblem definition, that makes sense. If the weight is 0, then we can’t include any items, and so the value must be 0.
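A sketch of how that initialization might extend into the full iterative fill (the dp layout and the weights and values names here are assumptions, since the article’s listing is not shown):

def knapsack(max_weight, weights, values):
    n = len(weights)
    # dp[i][w]: best value achievable using the first i items with capacity w;
    # row 0 and column 0 stay at 0, which are exactly the base cases above
    dp = [[0] * (max_weight + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(max_weight + 1):
            dp[i][w] = dp[i - 1][w]  # option 1: skip item i
            if weights[i - 1] <= w:  # option 2: include item i if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][max_weight]

Filling the table this way touches each of the O(n*W) cells once, matching the runtime discussed above.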
Dynamic Programming Doesn’t Have to Be Hard!
While dynamic programming seems like a scary and counterintuitive topic, it doesn’t have to be. By applying structure to your solutions, such as with The FAST Method, it is possible to solve any of these problems in a systematic way.