The 10 most basic practical algorithms programmers should know, with explanations

Author: The Little Dream, Created: 2016-12-09 11:37:36, Updated: 2016-12-09 11:39:20


  • Algorithm one: quicksort

    Quicksort is a sorting algorithm developed by Tony Hoare. On average, sorting n items requires O(n log n) comparisons. In the worst case it requires O(n²) comparisons, but this is uncommon. In practice, quicksort is often significantly faster than other O(n log n) algorithms because its inner loop can be implemented very efficiently on most architectures.

    Quicksort uses the divide-and-conquer strategy to split a list into two sub-lists.

    Algorithmic steps:

    • 1, pick an element from the sequence; this element is called the pivot.

    • 2, rearrange the sequence so that all elements smaller than the pivot come before it and all elements larger than the pivot come after it (equal elements can go to either side). After this partition, the pivot is in its final position in the sequence. This is called the partition operation.

    • 3, recursively sort the sub-array of elements smaller than the pivot and the sub-array of elements larger than the pivot.

      The base case of the recursion is a sub-array of size zero or one, which is sorted by definition. Although recursive, the algorithm always terminates, because each iteration puts at least one element into its final position.
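
    A minimal sketch of these steps in Python (using the last element as the pivot; all names here are illustrative, not from the original article):

    ```python
    def quick_sort(arr, lo=0, hi=None):
        """Sort arr in place with quicksort."""
        if hi is None:
            hi = len(arr) - 1
        if lo >= hi:
            return  # base case: zero or one element is already sorted
        pivot = arr[hi]          # step 1: pick a pivot
        i = lo                   # step 2: partition around the pivot
        for j in range(lo, hi):
            if arr[j] < pivot:
                arr[i], arr[j] = arr[j], arr[i]
                i += 1
        arr[i], arr[hi] = arr[hi], arr[i]  # pivot is now in its final position
        quick_sort(arr, lo, i - 1)         # step 3: recurse on both sides
        quick_sort(arr, i + 1, hi)

    data = [5, 2, 9, 1, 5, 6]
    quick_sort(data)
    print(data)  # [1, 2, 5, 5, 6, 9]
    ```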

  • Algorithm two: heapsort

    Heapsort is a sorting algorithm that takes advantage of the heap data structure. A heap is a nearly complete binary tree satisfying the heap property: the key of a child node is always less than (or always greater than) that of its parent.

    The average time complexity of heapsort is O(n log n).

    Algorithmic steps:

    • 1, build a max-heap H[0...n-1]

    • 2, swap the head of the heap (the maximum) with its tail

    • 3, reduce the size of the heap by 1 and call shift_down(0), which sifts the new top element down to its correct position in the heap

    • 4, repeat steps 2 and 3 until the size of the heap is 1.
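
    A sketch of these steps in Python, using an array-based max-heap (shift_down matches the helper named in step 3; the other names are illustrative):

    ```python
    def shift_down(heap, root, size):
        """Sift heap[root] down until the max-heap property holds again."""
        while True:
            child = 2 * root + 1
            if child >= size:
                return
            if child + 1 < size and heap[child + 1] > heap[child]:
                child += 1                     # pick the larger child
            if heap[root] >= heap[child]:
                return
            heap[root], heap[child] = heap[child], heap[root]
            root = child

    def heap_sort(arr):
        n = len(arr)
        for i in range(n // 2 - 1, -1, -1):    # step 1: build the heap
            shift_down(arr, i, n)
        for end in range(n - 1, 0, -1):        # steps 2-4: extract the max
            arr[0], arr[end] = arr[end], arr[0]
            shift_down(arr, 0, end)

    data = [4, 10, 3, 5, 1]
    heap_sort(data)
    print(data)  # [1, 3, 4, 5, 10]
    ```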

  • Algorithm three: merge sort

    Merge sort is an efficient sorting algorithm based on merging. It is a very typical application of the divide-and-conquer method.

    Algorithmic steps:

    • 1, allocate space whose size is the sum of the two already-sorted sequences; this space will hold the merged sequence

    • 2, set two pointers, one at the start of each of the two already-sorted sequences

    • 3, compare the elements the two pointers point to, place the smaller one into the merge space, and advance that pointer to the next position

    • 4, repeat step 3 until one pointer reaches the end of its sequence

    • 5, copy all remaining elements of the other sequence directly to the end of the merged sequence
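
    The merge procedure described in steps 1-5, plus the recursive driver, might look like this in Python (a sketch; names are illustrative):

    ```python
    def merge(left, right):
        """Merge two already-sorted lists into one sorted list."""
        merged = []                      # step 1: space for the merged sequence
        i = j = 0                        # step 2: one pointer per sequence
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:      # step 3: take the smaller element
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])          # step 5: copy whatever remains
        merged.extend(right[j:])
        return merged

    def merge_sort(arr):
        if len(arr) <= 1:
            return arr
        mid = len(arr) // 2
        return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))

    print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
    ```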

  • Algorithm four: binary search

    Binary search is an algorithm that looks for a specific element in a sorted array. The search starts at the middle element of the array: if the middle element is the one sought, the search ends; if the sought element is greater or less than the middle element, the search continues in the half of the array that is greater or less than it, again starting from the middle of that half. If at some step the remaining range is empty, the element is not in the array.
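
    An iterative Python sketch of this search (names are illustrative):

    ```python
    def binary_search(arr, target):
        """Return the index of target in sorted arr, or -1 if absent."""
        lo, hi = 0, len(arr) - 1
        while lo <= hi:              # the remaining range is non-empty
            mid = (lo + hi) // 2
            if arr[mid] == target:
                return mid           # found: end the search
            elif arr[mid] < target:
                lo = mid + 1         # continue in the upper half
            else:
                hi = mid - 1         # continue in the lower half
        return -1                    # range became empty: not present

    print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
    print(binary_search([1, 3, 5, 7, 9, 11], 4))  # -1
    ```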

  • Algorithm five: BFPRT (linear-time selection)

    The BFPRT algorithm solves the classic problem of selecting the k-th largest (or k-th smallest) element from a sequence of n elements and, through clever analysis, guarantees linear time complexity even in the worst case. The idea is similar to that of quicksort, with extra care taken so that the algorithm still achieves O(n) time in the worst case.

    Algorithmic steps:

    • 1, divide the n elements into ⌈n/5⌉ groups of at most 5 elements each.

    • 2, find the median of each group, using any sorting method, such as insertion sort.

    • 3, recursively call the selection algorithm to find the median of all the medians from the previous step; call it x. If the number of medians is even, take the smaller of the two middle values.

    • 4, partition the array around x; let k be the number of elements less than or equal to x, so the number of elements greater than x is n − k.

    • 5, if i == k, return x; if i < k, recursively find the i-th smallest element among the elements smaller than x; if i > k, recursively find the (i − k)-th smallest element among the elements larger than x.

      Termination condition: when n = 1, return that element; it is the i-th smallest.
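
    A sketch of the five steps in Python (i is 1-based; list comprehensions stand in for the in-place partition, so this version shows the logic rather than the tightest constant factors):

    ```python
    def select(arr, i):
        """Return the i-th smallest element of arr (1-based)."""
        if len(arr) == 1:
            return arr[0]                        # termination: n == 1
        groups = [arr[j:j + 5] for j in range(0, len(arr), 5)]    # step 1
        medians = [sorted(g)[(len(g) - 1) // 2] for g in groups]  # step 2
        x = select(medians, (len(medians) + 1) // 2)              # step 3
        less = [e for e in arr if e < x]                          # step 4
        greater = [e for e in arr if e > x]
        k = len(arr) - len(greater)              # k elements are <= x
        if i <= len(less):                       # step 5
            return select(less, i)
        elif i > k:
            return select(greater, i - k)
        return x                                 # x itself is the answer

    print(select([9, 1, 8, 2, 7, 3, 6, 4, 5], 4))  # 4
    ```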

  • Algorithm six: DFS (depth-first search)

    Depth-First-Search is a search algorithm that traverses a tree's nodes by going as deep as possible along each branch. When all edges of node v have been explored, the search backtracks to the node from which the edge to v was discovered. This process continues until all nodes reachable from the source node have been discovered.

    Depth-first search is a classic algorithm in graph theory. It can be used to generate a topological ordering of the target graph, and it can be used to solve many related graph problems, such as the longest-path problem.

    Steps of depth-first traversal of a graph:

    • 1, visit vertex v;

    • 2, from the unvisited neighbors of v, perform depth-first traversals of the graph in turn, until every vertex in the graph that has a path to v has been visited;

    • 3, if unvisited vertices still remain in the graph, start from one of them and repeat the depth-first traversal until every vertex in the graph has been visited.

      The above description may seem abstract, so here is an example:

      DFS starts from some vertex v in the graph. After visiting v, it visits any unvisited neighbor w1 of v; from w1 it visits a vertex w2 adjacent to w1 but not yet visited; from w2 it continues in the same fashion, and so on, until it reaches a vertex u all of whose neighbors have already been visited.

      It then backtracks one step to the vertex visited just before and checks whether that vertex has other unvisited neighbors. If so, it visits one of them and continues the same process from there; if not, it backtracks again and keeps searching. This repeats until every vertex in the connected graph has been visited.
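
    A recursive Python sketch of this traversal over an adjacency-list graph (the graph below mirrors the v, w1, w2, u example; all names are illustrative):

    ```python
    def dfs(graph, vertex, visited=None):
        """Depth-first traversal, printing vertices in visit order."""
        if visited is None:
            visited = set()
        visited.add(vertex)
        print(vertex)                       # "visit" the vertex
        for neighbor in graph[vertex]:
            if neighbor not in visited:     # go deeper before going wider
                dfs(graph, neighbor, visited)
        return visited

    graph = {
        'v':  ['w1', 'w2'],
        'w1': ['w2'],
        'w2': ['u'],
        'u':  [],
    }
    dfs(graph, 'v')   # visits v, w1, w2, u, then backtracks
    ```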

  • Algorithm seven: BFS (breadth-first search)

    Breadth-First-Search is a graph search algorithm. Simply put, BFS starts from the root node and traverses the nodes level by level, across the width of the tree. The algorithm terminates once all nodes have been visited. BFS is a form of blind (uninformed) search.

    Algorithmic steps:

    • 1, first place the root node in the queue.

    • 2, remove the first node from the queue and check whether it is the target.

      If the target is found, end the search and return the result. Otherwise, add all of its direct child nodes that have not yet been checked to the queue.

    • 3, if the queue is empty, every node in the graph has been checked, so there is no target in the graph; end the search.

    • 4, otherwise repeat from step 2.
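
    The queue-driven loop of steps 1-4 as a Python sketch (names and the sample graph are illustrative):

    ```python
    from collections import deque

    def bfs(graph, root, target):
        """Return True if target is reachable from root, else False."""
        queue = deque([root])               # step 1: root into the queue
        seen = {root}
        while queue:                        # step 3: empty queue => no target
            node = queue.popleft()          # step 2: take the first node
            if node == target:
                return True                 # target found: end the search
            for child in graph[node]:
                if child not in seen:       # enqueue unchecked children
                    seen.add(child)
                    queue.append(child)     # step 4: loop back to step 2
        return False

    graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
    print(bfs(graph, 'a', 'd'))  # True
    print(bfs(graph, 'b', 'a'))  # False
    ```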

  • Algorithm eight: Dijkstra's algorithm

    Dijkstra's algorithm was proposed by the Dutch computer scientist Edsger Dijkstra. Using a strategy similar to breadth-first search, it solves the single-source shortest path problem on directed graphs with non-negative edge weights, producing a shortest-path tree. The algorithm is often used in routing and as a subroutine of other graph algorithms.

    The input to the algorithm is a weighted directed graph G and a source vertex S in G. Let V be the set of all vertices in G. Each edge of the graph is an ordered pair of vertices: (u, v) denotes an edge from vertex u to vertex v. Let E be the set of all edges in G, with weights given by the weight function w: E → [0, ∞).

    Thus, w(u, v) is the non-negative weight of the edge from vertex u to vertex v. The weight of an edge can be thought of as the distance between its two vertices; the weight of a path between two vertices is the sum of the weights of all edges on the path. Given vertices s and t in V, Dijkstra's algorithm finds the minimum-weight path from s to t (i.e., the shortest path).

    Algorithmic steps:

    • 1, initially, S = {V0} and T = {the remaining vertices}. For each vertex Vi in T, if an edge (V0, Vi) exists, d(V0, Vi) is the weight on that arc; otherwise d(V0, Vi) is ∞

    • 2, select from T the vertex W with the smallest distance value that is not in S, and add it to S

    • 3, update the distance values of the remaining vertices in T: if using W as an intermediate vertex shortens the distance from V0 to Vi, update that distance value

      Repeat steps 2 and 3 until all vertices are in S.
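
    A compact Python sketch of the algorithm, using a binary heap to pick the closest unfinalized vertex in step 2 (a common refinement the steps above don't require; the graph and names are illustrative):

    ```python
    import heapq

    def dijkstra(graph, source):
        """Shortest distances from source; graph maps u -> [(v, weight), ...]."""
        dist = {source: 0}                   # step 1: the source is in S
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)       # step 2: closest vertex not in S
            if d > dist.get(u, float('inf')):
                continue                     # stale entry: u already finalized
            for v, w in graph[u]:
                if d + w < dist.get(v, float('inf')):
                    dist[v] = d + w          # step 3: shorter path through u
                    heapq.heappush(heap, (dist[v], v))
        return dist

    graph = {
        'V0': [('V1', 4), ('V2', 1)],
        'V1': [('V3', 1)],
        'V2': [('V1', 2), ('V3', 5)],
        'V3': [],
    }
    print(dijkstra(graph, 'V0'))  # {'V0': 0, 'V1': 3, 'V2': 1, 'V3': 4}
    ```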

  • Algorithm nine: the dynamic programming algorithm

    Dynamic programming is a method used in mathematics, computer science, and economics to solve complex problems by breaking down the original problem into relatively simple subproblems. Dynamic programming is often applied to problems with overlapping subproblems and optimal structural properties.

    The basic idea behind dynamic programming is very simple. Generally speaking, to solve a given problem we need to solve its different parts (the subproblems) and then combine the subproblems' solutions into a solution of the original problem. Often many of the subproblems are identical, so dynamic programming solves each subproblem only once, reducing the amount of computation: once a subproblem has been solved, its result is stored so that it can be looked up directly the next time it is needed. This is especially useful when the number of repeated subproblems grows exponentially with the size of the input.

    The most classic dynamic programming problem is the knapsack problem.

    Key properties:

    1, optimal substructure. If the solutions to the subproblems contained in an optimal solution to the problem are themselves optimal, we say the problem has the optimal substructure property (i.e., it satisfies the principle of optimality).

    2, overlapping subproblems. Overlapping subproblems means that when solving the problem top-down with a recursive algorithm, the subproblems generated are not always new; some subproblems are computed repeatedly. Dynamic programming exploits this overlap by computing each subproblem only once and storing its result in a table.
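
    As an example of both properties, here is a Python sketch of the 0/1 knapsack problem mentioned above, solved with a one-dimensional DP table (the weights, values, and capacity are made-up illustration data):

    ```python
    def knapsack(weights, values, capacity):
        """0/1 knapsack: maximum total value within the weight capacity."""
        # dp[w] = best value achievable with capacity w using items seen so far;
        # storing these results avoids re-solving overlapping subproblems.
        dp = [0] * (capacity + 1)
        for i in range(len(weights)):
            # Iterate capacity downward so each item is used at most once.
            for w in range(capacity, weights[i] - 1, -1):
                dp[w] = max(dp[w],                           # skip item i
                            dp[w - weights[i]] + values[i])  # take item i
        return dp[capacity]

    print(knapsack(weights=[2, 3, 4], values=[3, 4, 5], capacity=5))  # 7
    ```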

  • Algorithm ten: the naive Bayes classification algorithm

    Naive Bayes is a simple probabilistic classification algorithm based on Bayes' theorem. Bayesian classification rests on probabilistic reasoning: how to complete reasoning and decision-making tasks when the various conditions are uncertain and only their probabilities of occurrence are known. Probabilistic reasoning is the counterpart of deterministic reasoning.

    The naive Bayes classifier relies on an accurate natural probability model and achieves very good classification results on supervised learning sample sets. In many practical applications, naive Bayes model parameters are estimated by maximum likelihood; in other words, a naive Bayes model can be used without invoking Bayesian probability or any Bayesian method.

    Despite its naive design and oversimplified assumptions, the naive Bayes classifier still works quite well in many complex real-world situations.
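
    A toy Python sketch of a naive Bayes classifier with maximum likelihood counts and add-one smoothing (the weather/play data and all names are made up for illustration):

    ```python
    from collections import Counter, defaultdict

    def train(samples):
        """Count class frequencies and per-class feature frequencies."""
        class_counts = Counter(label for _, label in samples)
        feature_counts = defaultdict(Counter)
        for features, label in samples:
            feature_counts[label].update(features)
        return class_counts, feature_counts

    def classify(features, class_counts, feature_counts):
        """Pick the class maximizing P(class) * prod P(feature | class)."""
        total = sum(class_counts.values())
        best, best_score = None, 0.0
        for label, count in class_counts.items():
            score = count / total          # prior P(class)
            for f in features:
                # Add-one smoothing so unseen features don't zero the score.
                score *= (feature_counts[label][f] + 1) / (count + 2)
            if score > best_score:
                best, best_score = label, score
        return best

    samples = [(['sunny', 'warm'], 'play'), (['rainy', 'cold'], 'stay'),
               (['sunny', 'cold'], 'play'), (['rainy', 'warm'], 'stay')]
    cc, fc = train(samples)
    print(classify(['sunny', 'cold'], cc, fc))  # play
    ```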

