roadmap(DSA): added missing resource links in problem solving techniques (#7586)

pull/7599/head
Murshal Akhtar Ansari 4 weeks ago committed by GitHub
parent e34619c24b
commit 7e9de94b14
1. src/data/roadmaps/datastructures-and-algorithms/content/112-problem-solving-techniques/100-brute-force.md (+5)
2. src/data/roadmaps/datastructures-and-algorithms/content/112-problem-solving-techniques/101-backtracking.md (+5)
3. src/data/roadmaps/datastructures-and-algorithms/content/112-problem-solving-techniques/102-greedy-algorithms.md (+5)
4. src/data/roadmaps/datastructures-and-algorithms/content/112-problem-solving-techniques/103-randomised-algorithms.md (+5)
5. src/data/roadmaps/datastructures-and-algorithms/content/112-problem-solving-techniques/104-divide-and-conquer.md (+5)
6. src/data/roadmaps/datastructures-and-algorithms/content/112-problem-solving-techniques/105-recursion.md (+5)
7. src/data/roadmaps/datastructures-and-algorithms/content/112-problem-solving-techniques/106-dynamic-programming.md (+7)
8. src/data/roadmaps/datastructures-and-algorithms/content/112-problem-solving-techniques/107-two-pointer-techniques.md (+7)
9. src/data/roadmaps/datastructures-and-algorithms/content/112-problem-solving-techniques/108-sliding-window-technique.md (+6)

@@ -1,3 +1,8 @@
# Brute Force

"Brute Force" is a straightforward method of solving problems: try every possible solution until the right one is found. It requires no specific skills or knowledge, as the approach is applied directly to the problem at hand. However, while it can be effective, it is not always efficient, since it often takes a significant amount of time and resources to go through all potential solutions. In computational terms, a brute force algorithm examines every possibility one by one until a satisfactory solution is found; as problem size grows, the processing time increases dramatically, leading to combinatorial explosion. Brute force is the base for more sophisticated problem-solving algorithms, which improve time and space complexity by adding heuristics or rules of thumb.

Learn more from the following links:
- [@article@Brute Force Technique in Algorithms](https://medium.com/@shraddharao_/brute-force-technique-in-algorithms-34bac04bde8a)
- [@video@Brute Force Algorithm Explained With C++ Examples](https://www.youtube.com/watch?v=BYWf6-tpQ4k)
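The idea can be illustrated with a short Python sketch (`subset_sum_brute_force` is an illustrative helper, not taken from the linked resources): to find a subset of numbers with a given sum, simply try every subset.

```python
from itertools import combinations

def subset_sum_brute_force(nums, target):
    """Try every subset of `nums` until one sums to `target` (O(2^n))."""
    for size in range(len(nums) + 1):
        for subset in combinations(nums, size):  # all subsets of this size
            if sum(subset) == target:
                return list(subset)              # first satisfactory solution
    return None                                  # no subset works
```

Each extra element doubles the number of subsets to check, which is exactly the combinatorial explosion described above.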

@@ -1,3 +1,8 @@
# Backtracking

Backtracking is a powerful algorithmic technique that solves a problem incrementally by trying out various sequences of decisions. If at any point it realizes that its current path cannot lead to a solution, it reverses, or "backtracks", the most recent decision and tries the next available route. Backtracking is often applied to problems where the solution must satisfy certain constraints, like the 8-queens puzzle or the traveling salesperson problem. In essence, it is an exhaustive search and can therefore be computationally expensive. However, with the right constraints to prune the search, it can sometimes explore large and complex solution spaces very efficiently.

Learn more from the following links:
- [@article@Backtracking Algorithm](https://www.geeksforgeeks.org/backtracking-algorithms/)
- [@video@What is backtracking?](https://www.youtube.com/watch?v=Peo7k2osVVs)
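As an illustrative Python sketch (not taken from the linked resources), here is the classic backtracking solution to the n-queens puzzle, including the 8-queens case mentioned above:

```python
def solve_n_queens(n):
    """Count placements of n non-attacking queens on an n x n board."""
    count = 0
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        nonlocal count
        if row == n:                 # every row filled: one full solution
            count += 1
            return
        for col in range(n):
            if col in cols or row + col in diag1 or row - col in diag2:
                continue             # square is attacked: prune this path
            cols.add(col); diag1.add(row + col); diag2.add(row - col)
            place(row + 1)           # extend the partial solution
            cols.discard(col); diag1.discard(row + col); diag2.discard(row - col)  # backtrack

    place(0)
    return count
```

The three sets encode the constraints; any column or diagonal conflict abandons the current path immediately instead of exploring all of its extensions.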

@@ -1,3 +1,8 @@
# Greedy Algorithms

Greedy algorithms follow the problem-solving heuristic of making the locally optimal choice at each stage in the hope of finding a global optimum. They are used for optimization problems, where an optimal solution is one whose value is either maximum or minimum. These algorithms work in a "greedy" manner, choosing the best option at the current stage and disregarding any implications for future steps, which can lead to solutions that are less than optimal. Examples of greedy algorithms are Kruskal's minimum spanning tree algorithm, Dijkstra's shortest path algorithm, and the solution to the fractional knapsack problem.

Learn more from the following links:
- [@article@Greedy Algorithm Tutorial – Examples, Application and Practice Problem](https://www.geeksforgeeks.org/introduction-to-greedy-algorithm-data-structures-and-algorithm-tutorials/)
- [@video@Greedy Algorithms Tutorial](https://www.youtube.com/watch?v=bC7o8P_Ste4)
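A minimal Python sketch of the greedy idea, using interval scheduling as a standard example (`max_non_overlapping` is an illustrative name, not from the linked resources):

```python
def max_non_overlapping(intervals):
    """Greedy interval scheduling: always pick the interval that ends earliest."""
    chosen = []
    last_end = float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):  # sort by end time
        if start >= last_end:        # compatible with everything chosen so far
            chosen.append((start, end))
            last_end = end           # locally optimal choice, never revisited
    return chosen
```

For this particular problem the earliest-finish rule happens to be globally optimal; for many other problems the same greedy pattern gives only an approximation, as noted above.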

@@ -1,3 +1,8 @@
# Randomised Algorithms

Randomised algorithms employ a degree of randomness as part of their logic. They use random numbers to make decisions, so even for the same input they can produce different outcomes on different executions. The correctness of these algorithms is probabilistic, and they are particularly useful when dealing with a large input space. There are two major types of randomised algorithms: Las Vegas algorithms, which always give the correct answer but whose running time is a random variable; and Monte Carlo algorithms, which run in bounded time but have a small probability of producing an incorrect answer.

Learn more from the following links:
- [@article@Randomized Algorithms](https://www.geeksforgeeks.org/randomized-algorithms/)
- [@video@Algorithm Classification Randomized Algorithm](https://www.youtube.com/watch?v=J_EVG6yCOz0)
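Randomised quickselect is a classic Las Vegas example; this Python sketch (not from the linked resources) always returns the correct answer, but its running time depends on the random pivots it happens to pick:

```python
import random

def quickselect(items, k):
    """Return the k-th smallest element (0-indexed) of `items`."""
    pivot = random.choice(items)                 # the randomised decision
    lows   = [x for x in items if x < pivot]
    pivots = [x for x in items if x == pivot]
    highs  = [x for x in items if x > pivot]
    if k < len(lows):
        return quickselect(lows, k)              # answer is below the pivot
    if k < len(lows) + len(pivots):
        return pivot                             # answer is the pivot itself
    return quickselect(highs, k - len(lows) - len(pivots))
```

With good pivots it runs in expected O(n) time; a streak of bad pivots only slows it down, never makes it wrong.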

@@ -1,3 +1,8 @@
# Divide and Conquer

Divide and conquer is a powerful algorithm design technique that solves a problem by breaking it down into smaller, easier-to-manage sub-problems until these become simple enough to be solved directly. This approach is usually carried out recursively. Once all the sub-problems are solved, their solutions are combined to give a solution to the original problem. It is a common strategy that significantly reduces the complexity of many problems.

Learn more from the following links:
- [@article@Introduction to Divide and Conquer Algorithm](https://www.geeksforgeeks.org/introduction-to-divide-and-conquer-algorithm/)
- [@video@Divide & Conquer Algorithm In 3 Minutes](https://www.youtube.com/watch?v=YOh6hBtX5l0)
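Merge sort is the textbook instance of this pattern; the Python sketch below (illustrative, not from the linked resources) shows all three phases in one function:

```python
def merge_sort(nums):
    """Divide: split in half; conquer: sort each half recursively;
    combine: merge the two sorted halves."""
    if len(nums) <= 1:            # simple enough to solve directly
        return nums
    mid = len(nums) // 2
    left, right = merge_sort(nums[:mid]), merge_sort(nums[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # combine step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append whichever half remains
```

Halving the input at every level is what brings the complexity down from the O(n^2) of naive sorting to O(n log n).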

@@ -1,3 +1,8 @@
# Recursion

Recursion is a method where the solution to a problem depends on solutions to smaller instances of the same problem. It involves a function calling itself, with a condition for its termination. The technique is supported by most programming languages, such as C++, Java, and Python. A recursive function has two main components: the base case (termination condition) and the recursive case, in which the function calls itself. All recursive algorithms must have a base case to prevent infinite recursion. Recursion can be direct (a function calls itself) or indirect (function A calls function B, which in turn calls A).

Learn more from the following links:
- [@article@Introduction to Recursion](https://www.geeksforgeeks.org/introduction-to-recursion-2/)
- [@video@Recursion in 100 Seconds](https://www.youtube.com/watch?v=rf60MejMz3E)
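Both forms can be shown in a few lines of Python (illustrative examples, not from the linked resources): `factorial` is direct recursion, while `is_even`/`is_odd` call each other indirectly.

```python
def factorial(n):
    """Direct recursion."""
    if n <= 1:                    # base case: stops the chain of calls
        return 1
    return n * factorial(n - 1)   # recursive case: a smaller instance

def is_even(n):
    """Indirect recursion: A calls B..."""
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):
    """...and B calls A, for non-negative n."""
    return False if n == 0 else is_even(n - 1)
```

Remove either base case and the calls never terminate, which is why every recursive algorithm needs one.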

@@ -1,3 +1,10 @@
# Dynamic Programming

**Dynamic Programming** is a powerful problem-solving method that solves complex problems by breaking them down into simpler subproblems and solving each subproblem only once, storing the results in a memory-based data structure (like an array or a dictionary). It is based on *Bellman's Principle of Optimality*, which provides a method for solving optimization problems. In practical terms, this approach avoids repeated computation by storing the results of expensive function calls. The technique is widely used in optimization problems where the same subproblem may occur multiple times, and it appears in numerous fields including mathematics, economics, and computer science.

Learn more from the following links:
- [@article@Dynamic Programming (DP) Tutorial with Problems](https://www.geeksforgeeks.org/introduction-to-dynamic-programming-data-structures-and-algorithm-tutorials/)
- [@article@Getting Started with Dynamic Programming in Data Structures and Algorithms](https://medium.com/@PythonicPioneer/getting-started-with-dynamic-programming-in-data-structures-and-algorithms-126c7a16775c)
- [@video@What Is Dynamic Programming and How To Use It](https://www.youtube.com/watch?v=vYquumk4nWw&t=4s)
- [@video@5 Simple Steps for Solving Dynamic Programming Problems](https://www.youtube.com/watch?v=aPQY__2H3tE)
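The solve-each-subproblem-once idea can be sketched with memoized Fibonacci in Python (an illustrative example, not from the linked resources); `functools.lru_cache` plays the role of the memory-based data structure:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Top-down dynamic programming: lru_cache stores each subproblem's
    result, so every fib(k) is computed only once."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Without the cache this recursion takes exponential time because the same subproblems recur over and over; with it, the work drops to O(n).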

@@ -1,3 +1,10 @@
# Two Pointer Technique

The **two-pointer technique** is a strategy for solving certain types of problems, particularly those involving arrays or linked lists. It uses two pointers that navigate through the data structure in ways that depend on the nature of the problem: the pointers may traverse the array from opposite ends, or one may move faster than the other, as in the `slow` and `fast` pointer method. The technique can greatly optimize performance by reducing time complexity, often enabling solutions that run in O(n) time.

Learn more from the following links:
- [@article@Two Pointers Technique](https://www.geeksforgeeks.org/two-pointers-technique/)
- [@article@Two Pointers Technique](https://medium.com/@johnnyJK/data-structures-and-algorithms-907a63d691c1)
- [@article@Mastering the Two Pointers Technique: An In-Depth Guide](https://lordkonadu.medium.com/mastering-the-two-pointers-technique-an-in-depth-guide-3c2167584ccc)
- [@video@Visual introduction Two Pointer Algorithm](https://www.youtube.com/watch?v=On03HWe2tZM)
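The opposite-ends variant can be sketched with the classic pair-sum problem on a sorted array (`pair_with_sum` is an illustrative name, not from the linked resources):

```python
def pair_with_sum(sorted_nums, target):
    """Two pointers from opposite ends of a sorted array: O(n) instead of
    the O(n^2) of checking every pair."""
    lo, hi = 0, len(sorted_nums) - 1
    while lo < hi:
        s = sorted_nums[lo] + sorted_nums[hi]
        if s == target:
            return (sorted_nums[lo], sorted_nums[hi])
        if s < target:
            lo += 1      # need a larger sum: advance the left pointer
        else:
            hi -= 1      # need a smaller sum: retreat the right pointer
    return None          # no pair sums to target
```

Sortedness is what lets each comparison safely discard one end, so every element is visited at most once.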

@@ -1,3 +1,9 @@
# Sliding Window Technique

The **Sliding Window Technique** is an algorithmic paradigm that manages a subset of items in a collection, such as an array or list, by maintaining a range of elements under observation, referred to as the 'window'. The window 'slides' over the data to examine different subsets of its contents. The technique is often used in array-related coding problems, particularly those asking for a maximum or minimum over a specific range of the data, and it can greatly reduce time complexity for problems involving sequential or contiguous data. Common applications include the maximum sum subarray and minimum size subarray sum problems.

Learn more from the following links:
- [@article@Sliding Window Technique](https://www.geeksforgeeks.org/window-sliding-technique/)
- [@article@Mastering Sliding Window Techniques](https://medium.com/@rishu__2701/mastering-sliding-window-techniques-48f819194fd7)
- [@video@Sliding window technique](https://www.youtube.com/watch?v=p-ss2JNynmw)
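A fixed-size window version of the maximum sum subarray problem makes the sliding step concrete (`max_window_sum` is an illustrative name, not from the linked resources):

```python
def max_window_sum(nums, k):
    """Maximum sum over any k consecutive elements of `nums` (len >= k)."""
    window = sum(nums[:k])        # sum of the first window
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]  # slide: add entering, drop leaving element
        best = max(best, window)
    return best
```

Instead of re-summing k elements at every position (O(n*k)), each slide adjusts the running sum in O(1), giving a single O(n) pass.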
