The greedy approach and dynamic programming are two different algorithmic techniques for solving optimization problems. Here are the main differences between them:
Greedy approach:
- The greedy approach makes locally optimal choices at each step with the hope of finding a global optimum.
- The greedy approach does not necessarily consider the future consequences of the current choice.
- The greedy approach is useful for solving problems where making locally optimal choices at each step leads to a global optimum.
- The greedy approach is generally faster and simpler than dynamic programming.
Dynamic programming:
- Dynamic programming builds up the solution to a problem from the solutions to its subproblems, either bottom-up (tabulation) or top-down with memoization.
- Dynamic programming stores the solutions to subproblems and reuses them when necessary to avoid solving the same subproblems multiple times.
- Dynamic programming is useful for solving problems where the optimal solution can be obtained by combining optimal solutions to subproblems.
- Dynamic programming is generally slower and more complex than the greedy approach, but it guarantees an optimal solution for problems with optimal substructure.

In summary, the main difference is that the greedy approach makes locally optimal choices at each step without considering future consequences, while dynamic programming solves subproblems and reuses their solutions to avoid repeated calculations. The greedy approach is generally faster and simpler but may not always produce the optimal solution; dynamic programming guarantees the optimal solution but is slower and more complex.
Greedy approach:
A greedy algorithm is an algorithmic paradigm that builds up a solution piece by piece, always choosing the next piece that offers the most obvious and immediate benefit. Problems where locally optimal choices also lead to a globally optimal solution are the best fit for the greedy approach.
Example: In the Fractional Knapsack Problem, the locally optimal strategy is to choose the item with the maximum value-to-weight ratio. This strategy also leads to the globally optimal solution because we are allowed to take fractions of an item.
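The fractional knapsack strategy above can be sketched as follows (a minimal illustration; the function name and the `(value, weight)` item format are assumptions, not from the original article):

```python
# Greedy fractional knapsack: sort items by value-to-weight ratio
# (descending) and take as much of each item as fits. Because fractions
# are allowed, this greedy choice is globally optimal.

def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns the max total value."""
    # Sort by value-to-weight ratio, highest ratio first.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)       # whole item, or a fraction of it
        total += value * (take / weight)   # proportional value for the fraction
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```

With capacity 50, the greedy order takes items of ratio 6 and 5 whole, then two-thirds of the last item, for a total value of 240.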
Characteristics of Greedy approach:
A problem that can be solved using the greedy approach has the following properties:
- Optimal substructure property.
- Minimization or Maximization of quantity is required.
- Ordered data is available such as data on increasing profit, decreasing cost, etc.
- Non-overlapping subproblems.
Standard problems on Greedy Approach:

| S.No. | Article | Practice |
|---|---|---|
| 1. | Fractional Knapsack Problem | link |
| 2. | Activity Selection Problem | link |
| 3. | link | |
| 4. | link | |
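The Activity Selection Problem listed above is another classic greedy fit: sort activities by finish time and always pick the earliest-finishing compatible one. A minimal sketch (the function name and `(start, finish)` tuple format are assumptions):

```python
# Greedy activity selection: sorting by finish time and taking each
# activity that starts no earlier than the previous one finished yields
# the maximum number of non-overlapping activities.

def max_activities(intervals):
    """intervals: list of (start, finish); returns the max count of
    mutually non-overlapping activities."""
    count, last_finish = 0, float("-inf")
    for start, finish in sorted(intervals, key=lambda sf: sf[1]):
        if start >= last_finish:   # compatible with the last chosen activity
            count += 1
            last_finish = finish
    return count

print(max_activities([(1, 2), (3, 4), (0, 6), (5, 7), (8, 9), (5, 9)]))  # 4
```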
Dynamic Programming:
Dynamic programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using Dynamic Programming. The idea is to simply store the results of subproblems so that we do not have to re-compute them when needed later. This simple optimization reduces time complexities from exponential to polynomial.
Example: A simple recursive solution for Fibonacci Numbers has exponential time complexity; by storing the solutions of subproblems, the time complexity reduces to linear. This can be achieved with either the tabulation or the memoization method of dynamic programming.
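Both methods mentioned above can be shown side by side for Fibonacci (a minimal sketch; the function names are illustrative):

```python
from functools import lru_cache

# Top-down (memoization): cache the result of each subproblem so that
# repeated recursive calls with the same input are answered in O(1).
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): fill a table from the smallest subproblems up,
# so each Fibonacci number is computed exactly once.
def fib_tab(n):
    if n < 2:
        return n
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

print(fib_memo(40), fib_tab(40))  # 102334155 102334155
```

Either version runs in linear time, whereas the plain recursive solution takes exponential time because it recomputes the same subproblems over and over.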
Characteristics of Dynamic Programming:
A problem that can be solved using dynamic programming must have the following properties:
- Optimal substructure property.
- Overlapping subproblems.
Standard problems on Dynamic Programming:

| S.No. | Article | Practice |
|---|---|---|
| 1. | Coin Change | link |
| 2. | Edit Distance | link |
| 3. | Longest Common Subsequence | link |
| 4. | Count ways to reach the n-th stair | link |
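As an example of the overlapping-subproblems pattern, the Coin Change problem listed above (counting the number of ways to form an amount from unlimited coins) can be sketched with a bottom-up table (the function name is an assumption):

```python
# Bottom-up DP for Coin Change: ways[a] counts how many ways amount `a`
# can be formed. Iterating coins in the outer loop counts each
# combination of coins once, regardless of order.

def count_ways(coins, amount):
    ways = [0] * (amount + 1)
    ways[0] = 1                            # one way to make 0: take no coins
    for coin in coins:
        for a in range(coin, amount + 1):
            ways[a] += ways[a - coin]      # reuse the smaller subproblem
    return ways[amount]

print(count_ways([1, 2, 3], 4))  # 4  -> {1+1+1+1, 1+1+2, 2+2, 1+3}
```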
Below are some major differences between the greedy method and dynamic programming:

| Feature | Greedy method | Dynamic programming |
|---|---|---|
| Feasibility | In a greedy algorithm, we make whatever choice seems best at the moment, in the hope that it leads to a globally optimal solution. | In dynamic programming, we make a decision at each step by considering the current problem and the solutions to previously solved subproblems. |
| Optimality | There is no general guarantee of obtaining an optimal solution. | Dynamic programming is guaranteed to generate an optimal solution, as it considers all possible cases and then chooses the best. |
| Recursion | A greedy method follows the problem-solving heuristic of making the locally optimal choice at each stage. | Dynamic programming is an algorithmic technique usually based on a recurrence relation that uses previously computed states. |
| Memoization | It is more memory-efficient, as it never looks back or revises previous choices. | It requires a table for memoization, which increases its memory complexity. |
| Time complexity | Greedy methods are generally faster. For example, Dijkstra's shortest path algorithm takes O(E log V + V log V) time. | Dynamic programming is generally slower. For example, the Bellman-Ford algorithm takes O(VE) time. |
| Fashion | The greedy method computes its solution in a serial forward fashion, never looking back or revising previous choices. | Dynamic programming computes its solution bottom-up or top-down by synthesizing it from smaller optimal subsolutions. |
| Example | Fractional knapsack problem | 0/1 knapsack problem |
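The 0/1 knapsack problem cited as the dynamic programming example makes the contrast concrete: items cannot be split, so the greedy ratio-based choice can fail, and DP must weigh every include/exclude decision. A minimal bottom-up sketch (the function name and `(value, weight)` format are assumptions):

```python
# 0/1 knapsack solved bottom-up: dp[c] holds the best value achievable
# with capacity c using the items seen so far.

def knapsack_01(items, capacity):
    """items: list of (value, weight); returns max value within capacity."""
    dp = [0] * (capacity + 1)
    for value, weight in items:
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            dp[c] = max(dp[c], dp[c - weight] + value)
    return dp[capacity]

print(knapsack_01([(60, 10), (100, 20), (120, 30)], 50))  # 220
```

On the same items used for fractional knapsack, the 0/1 answer is 220 (not 240): the best whole-item choice takes the second and third items, while the greedy ratio order would waste capacity.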