
Comparison among Greedy, Divide and Conquer and Dynamic Programming algorithm

Greedy algorithm, divide and conquer algorithm, and dynamic programming algorithm are three common algorithmic paradigms used to solve problems. Here’s a comparison among these algorithms:

Approach:

  1. Greedy algorithm: Makes locally optimal choices at each step with the hope of finding a global optimum.
  2. Divide and conquer algorithm: Breaks a problem down into smaller subproblems, solves each subproblem recursively, and then combines the subproblem solutions to solve the original problem.
  3. Dynamic programming algorithm: Solves subproblems recursively and stores their solutions to avoid repeated calculations.

Goal:

  1. Greedy algorithm: Finds a good solution among a set of possible solutions by committing to one choice at a time.
  2. Divide and conquer algorithm: Solves a problem by dividing it into smaller subproblems, solving each subproblem independently, and then combining the subproblem solutions to solve the original problem.
  3. Dynamic programming algorithm: Solves a problem by breaking it into overlapping subproblems, solving each subproblem once, and reusing the stored results to build the overall solution.

Time complexity:

  1. Greedy algorithm: typically O(n log n) (when a sort dominates) or O(n), depending on the problem.
  2. Divide and conquer algorithm: typically O(n log n) (e.g., merge sort) or O(n^2) (e.g., worst-case quick sort), depending on the problem.
  3. Dynamic programming algorithm: typically O(n^2) or O(n^3), depending on the number of subproblems and the work per subproblem.
Space complexity:

  1. Greedy algorithm: O(1) or O(n), depending on the problem.
  2. Divide and conquer algorithm: O(log n) for the recursion stack up to O(n) auxiliary space (e.g., merge sort), depending on the problem.
  3. Dynamic programming algorithm: typically O(n) or O(n^2) for the memoization table, depending on the problem.

Optimal solution:

  1. Greedy algorithm: May or may not provide the optimal solution.
  2. Divide and conquer algorithm: May or may not provide the optimal solution.
  3. Dynamic programming algorithm: Guarantees the optimal solution for problems with optimal substructure and overlapping subproblems.
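The coin-change problem is a standard illustration of this difference. As a minimal sketch (the coin values, amount, and function names below are illustrative): with coins {1, 3, 4} and amount 6, the greedy "largest coin first" rule uses three coins, while dynamic programming finds the optimal two.

```python
def greedy_coins(coins, amount):
    """Repeatedly take the largest coin that fits (may be suboptimal)."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_coins(coins, amount):
    """Minimum number of coins via dynamic programming (always optimal)."""
    INF = float("inf")
    # best[a] = fewest coins needed to make amount a
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] != INF else None

print(greedy_coins([1, 3, 4], 6))  # → 3 (takes 4 + 1 + 1)
print(dp_coins([1, 3, 4], 6))      # → 2 (takes 3 + 3)
```

Here the greedy choice of the coin 4 looks locally best but forces two extra 1-coins, while the DP table considers every subamount and recovers the optimal 3 + 3.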

Examples:

  1. Greedy algorithm: Huffman coding, Kruskal’s algorithm, Dijkstra’s algorithm, etc.
  2. Divide and conquer algorithm: Merge sort, quick sort, binary search, etc.
  3. Dynamic programming algorithm: Fibonacci numbers, longest common subsequence, knapsack problem, etc.

In summary, the main differences among these algorithms are their approach, goal, time and space complexity, and their ability to provide the optimal solution. Greedy and divide and conquer algorithms are generally faster and simpler, but may not always provide the optimal solution, while dynamic programming guarantees the optimal solution but is slower and more complex.

 

Greedy Algorithm:

A greedy algorithm is a method for solving optimization problems by making, at each step, the decision that yields the most evident and immediate benefit, irrespective of the final outcome. It is a simple, intuitive technique that is widely used in optimization problems.
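As a sketch of the greedy idea, the classic activity-selection problem can be solved by always picking the activity that finishes earliest among those compatible with what has already been chosen (the function name and sample intervals below are illustrative):

```python
def select_activities(activities):
    """activities: list of (start, finish) tuples.
    Greedy rule: sort by finish time, then take each activity
    that starts no earlier than the last chosen finish time."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:  # compatible with the last chosen activity
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (8, 9), (5, 9)]))
# → [(1, 4), (5, 7), (8, 9)]
```

For this particular problem the locally greedy choice happens to be globally optimal, which is why activity selection is a textbook greedy example.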

Divide and conquer Algorithm:

Divide and conquer is an algorithmic paradigm in which the problem is solved using the Divide, Conquer, and Combine strategy. A typical divide and conquer algorithm solves a problem using the following three steps:

Divide: This involves dividing the problem into smaller sub-problems.
Conquer: Solve sub-problems by calling recursively until solved.
Combine: Combine the sub-problems to get the final solution of the whole problem.
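The three steps above can be sketched with merge sort, a standard divide and conquer example (the function name is illustrative):

```python
def merge_sort(arr):
    # Divide: split the array into halves until size <= 1.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    # Conquer: sort each half recursively.
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # → [3, 9, 10, 27, 38, 43, 82]
```

Note that the two halves are independent subproblems: neither recursion reuses the other's work, which is the key contrast with dynamic programming below.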

Dynamic Programming:

Dynamic Programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that makes repeated calls for the same inputs, we can optimize it using dynamic programming. The idea is simply to store the results of subproblems so that we do not have to re-compute them when needed later. This simple optimization can reduce time complexity from exponential to polynomial.
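A minimal sketch using Fibonacci numbers, the standard first example (function names are illustrative): the naive recursion takes exponential time because it recomputes the same values, while the memoized (top-down) and tabulated (bottom-up) versions each run in O(n).

```python
from functools import lru_cache

def fib_naive(n):
    """Plain recursion: recomputes fib(k) exponentially many times."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down DP: each fib(k) is computed once, then cached."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_table(n):
    """Bottom-up DP: build results from small to large, O(1) extra space."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_memo(40), fib_table(40))  # → 102334155 102334155
```

Calling `fib_naive(40)` already takes noticeable time (on the order of a billion calls), while the two DP versions finish instantly, which is exactly the exponential-to-polynomial improvement described above.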

Greedy Algorithm vs Divide and Conquer vs Dynamic Programming:

| Sl. No | Greedy Algorithm | Divide and Conquer | Dynamic Programming |
|--------|------------------|--------------------|---------------------|
| 1 | Follows a top-down approach. | Follows a top-down approach. | Follows a bottom-up approach. |
| 2 | Used to solve optimization problems. | Used to solve decision problems. | Used to solve optimization problems. |
| 3 | The solution is built without revisiting previously made choices, so re-computation is avoided. | Subproblem solutions may be computed recursively more than once. | Each subproblem solution is computed once and stored in a table for later reuse. |
| 4 | May or may not generate an optimal solution. | Obtains a solution to the given problem; it does not aim for the optimal one. | Always generates an optimal solution. |
| 5 | Iterative in nature. | Recursive in nature. | Recursive in nature. |
| 6 | More efficient and faster than divide and conquer. For instance, single-source shortest paths with Dijkstra’s algorithm takes O(E log V) time. | Less efficient and slower. | More efficient than divide and conquer but slower than greedy. For instance, single-source shortest paths with the Bellman–Ford algorithm takes O(VE) time. |
| 7 | No extra memory is required. | Some memory is required (recursion stack). | More memory is required to store subproblem solutions for later use. |
| 8 | Examples: fractional knapsack, activity selection, job sequencing. | Examples: merge sort, quick sort, Strassen’s matrix multiplication. | Examples: 0/1 knapsack, all-pairs shortest paths, matrix-chain multiplication. |
