MindMap Gallery Design and Analysis of Algorithms
The core of algorithm design and analysis is a small set of basic design strategies. Grasping them gives a clear overall picture of the subject, helps in understanding each algorithm, and is useful for exams. If you are short on review time, this map is a good summary.
Edited at 2021-05-06 20:10:50
Design and Analysis of Algorithms
dynamic programming
Optimization problem
A problem has n inputs, and a solution consists of a subset of these inputs that satisfies certain pre-given conditions, called constraints. A solution satisfying the constraints is called a feasible solution, and there may be more than one. To measure the quality of feasible solutions, a standard is given in advance, usually in the form of a function called the objective function. A feasible solution that makes the objective function reach an extreme value (maximum or minimum) is called an optimal solution, and this type of problem is called an optimization problem.
principle of optimality
For an optimization problem with n inputs, the solution process can often be divided into several stages. The decision at each stage depends only on the state of the previous stage, and the action taken causes a state transition that becomes the basis for the next stage's decision. A sequence of decisions is thus generated over a constantly changing state, and the process driven by this decision sequence is called a multi-stage decision process.
The basic idea of dynamic programming
For problems suited to dynamic programming, the sub-problems obtained after decomposition are often not independent of one another, unlike in divide and conquer.
During the solution process, the solutions of solved subproblems are saved and can be easily retrieved when needed. This avoids a large number of meaningless repeated calculations and thereby reduces the algorithm's time complexity.
How are the solutions of solved sub-problems saved? Usually in a table: once a sub-problem has been computed, its result is filled into the table regardless of whether it will be needed later, and when it is needed the solution is simply looked up. Specific dynamic programming algorithms vary, but they share this table-filling pattern.
Dynamic programming problem solving steps
Analyze the properties of the optimal solution and characterize its structure (check whether the problem is suitable for dynamic programming)
Recursively define optimal values (i.e. establish recursive or dynamic programming equations)
Calculate the optimal value in a bottom-up manner and record relevant information
Construct the optimal solution based on the information obtained when calculating the optimal value
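The four steps above can be sketched on the classic 0/1 knapsack problem (an illustrative choice of ours, not a problem from these notes); a minimal Python sketch:

```python
def knapsack(weights, values, capacity):
    """Bottom-up dynamic programming for the 0/1 knapsack problem.

    Step 1 (structure): an optimal solution over items 1..i either uses
    item i or it does not; the rest must be optimal for what remains.
    Step 2 (recurrence): f[i][c] = max(f[i-1][c], f[i-1][c-w_i] + v_i).
    Step 3: fill the table bottom-up.
    Step 4: walk the table backwards to recover the chosen items.
    """
    n = len(weights)
    f = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            f[i][c] = f[i - 1][c]                     # skip item i
            if weights[i - 1] <= c:                    # or take item i
                f[i][c] = max(f[i][c],
                              f[i - 1][c - weights[i - 1]] + values[i - 1])
    # Step 4: construct the optimal solution from the recorded table.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if f[i][c] != f[i - 1][c]:                     # item i was taken
            chosen.append(i - 1)
            c -= weights[i - 1]
    return f[n][capacity], sorted(chosen)
```

For example, `knapsack([2, 3, 4], [3, 4, 5], 5)` returns the optimal value together with the indices of the chosen items.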
Basic elements of dynamic programming
Optimal substructure properties
Sub-problem overlapping properties
When a recursive algorithm solves a problem, the sub-problems generated each time are not always new problems. Some sub-problems appear multiple times. This property is called the overlapping property of sub-problems.
When applying dynamic programming, recurring sub-problems only need to be solved the first time they are encountered, and the solutions to each solved sub-problem are stored in the table so that they can be directly referenced when encountered in the future. There is no need to solve the problem again, which can greatly improve the efficiency of problem solving.
bottom-up solution
Analyze the structure of the optimal solution
Feature: in an optimal order for computing A[i:j], the orders for the included sub-chains A[i:k] and A[k+1:j] are also optimal.
The optimal solution to the matrix multiplication calculation order problem contains the optimal solution to its sub-problems. This property is called the optimal substructure property. The optimal substructure property of a problem is a significant feature of the problem that can be solved by dynamic programming algorithms.
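A bottom-up sketch of the matrix chain order computation (the standard tabular algorithm; variable names are ours):

```python
def matrix_chain_order(p):
    """Minimum scalar multiplications to compute A1*A2*...*An,
    where matrix Ai has dimensions p[i-1] x p[i].

    m[i][j] holds the optimal cost of the sub-chain A[i:j]; s[i][j]
    records the split point k, so the optimal parenthesization
    (A[i:k])(A[k+1:j]) can be reconstructed afterwards.
    """
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # sub-chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):             # try every split point
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m[1][n], s
```

For dimensions 10x100, 100x5, 5x50 (p = [10, 100, 5, 50]), the optimal order (A1 A2) A3 costs 7500 multiplications.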
divide and conquer
divide and conquer strategy
The design idea of the divide-and-conquer method is to break a large problem that is hard to solve directly into a number of smaller problems of the same kind, so that they can be solved separately: divide and conquer.
Design thinking
Divide a large problem that is hard to solve directly into smaller sub-problems of the same kind and solve them separately.
Then merge the solutions of the sub-problems into the solution of a larger problem, and gradually find the solution to the original problem from the bottom up.
Applicable conditions
The problems that can be solved by the divide and conquer method generally have the following characteristics:
The problem can be easily solved once its size is reduced to a certain point
The problem can be decomposed into several smaller problems of the same kind, i.e. the problem has the optimal substructure property
The solutions of the sub-problems can be merged into a solution of the original problem
The sub-problems decomposed into this problem are independent of each other, that is, there are no common sub-problems between the sub-problems.
This feature concerns the efficiency of the divide-and-conquer method. If the sub-problems are not independent, the method does a lot of unnecessary work, repeatedly solving common sub-problems. Divide and conquer can still be used in that case, but dynamic programming is generally better.
Solving process
divide
Divide the original problem of size n into k smaller sub-problems, trying to make the k sub-problems roughly equal in size.
Solve subproblems
The sub-problems usually have the same form as the original problem, so each can be solved recursively; sometimes the recursion can also be implemented with a loop.
merge
Combine the solutions of the sub-problems. The cost of merging varies greatly from problem to problem, and the effectiveness of a divide-and-conquer algorithm depends largely on how merging is implemented.
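The divide / solve / merge pattern can be sketched with merge sort (a standard example; these notes do not name it, so the choice is ours):

```python
def merge_sort(a):
    """Divide: split the list in half; conquer: sort each half
    recursively; merge: combine the two sorted halves."""
    if len(a) <= 1:                       # small enough to solve directly
        return a
    mid = len(a) // 2                     # divide
    left = merge_sort(a[:mid])            # solve sub-problems
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0               # merge
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

Here the merge step does real work, in contrast to quick sort below, where merging is free.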
Solution ideas
How to divide it, that is, how to reasonably decompose the problem?
split into two
How to cure, that is, how to solve the problem
merge
The key to the problem - discovering the regularity in the process of making the round-robin schedule
Quick sort
The divide and conquer strategy of quick sort is
Division:
Select one record as the pivot value and partition the whole sequence into two subsequences r1 ... ri-1 and ri+1 ... rn: every record in the former is less than or equal to the pivot value, and every record in the latter is greater than or equal to it.
Solve subproblems
Recursively process each divided subsequence separately
merge
Since the subsequences r1 ... ri-1 and ri+1 ... rn are sorted in place, the merge step requires no work.
Divide-and-conquer strategy of quick sort algorithm
break down
Select an element of R[low:high] as the pivot and use it to partition the sequence to be sorted into two subsequences, so that every element of R[low:pivotpos-1] is less than or equal to R[pivotpos] and every element of R[pivotpos+1:high] is greater than or equal to R[pivotpos].
Solve subproblems
Sort the subsequences R[low:pivotpos-1] and R[pivotpos+1:high] by recursively calling the quick sort algorithm.
merge
Sort in place
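A minimal in-place sketch of the partition and recursion described above (Python, 0-based indices; the hole-filling partition style is one common choice, not the only one):

```python
def partition(r, low, high):
    """Use r[low] as the pivot; afterwards every element left of
    pivotpos is <= the pivot and every element right of it is >=."""
    pivot = r[low]
    i, j = low, high
    while i < j:
        while i < j and r[j] >= pivot:    # scan from the right
            j -= 1
        r[i] = r[j]                        # fill the hole on the left
        while i < j and r[i] <= pivot:    # scan from the left
            i += 1
        r[j] = r[i]                        # fill the hole on the right
    r[i] = pivot
    return i                               # pivotpos

def quick_sort(r, low, high):
    if low < high:
        pivotpos = partition(r, low, high)
        quick_sort(r, low, pivotpos - 1)   # left subsequence
        quick_sort(r, pivotpos + 1, high)  # right subsequence
```

No merge step is needed: once both subsequences are sorted in place, the whole array is sorted.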
greedy method
Design thinking
The greedy method is short-sighted in its problem-solving strategy: it always makes the choice that looks best at the moment, and once a choice is made it is never revised, whatever the later consequences. In other words, the greedy method does not aim at a global optimum; each choice it makes is only locally optimal in some sense.
Characteristics of problems solved by the greedy method
Optimal substructure properties
When the optimal solution of a problem contains the optimal solutions of its subproblems, the problem is said to have optimal substructure properties, and it is also said that the problem satisfies the principle of optimality.
greedy choice property
The so-called greedy selection property means that the overall optimal solution to the problem can be obtained through a series of local optimal choices, that is, greedy selection (only make the best choice in the current state)
Dynamic programming usually solves each sub-problem in a bottom-up manner, while the greedy method usually makes a series of greedy choices in a top-down manner.
Solving process
Candidate set C: In order to construct a solution to the problem, there is a candidate set C as a possible solution to the problem, that is, the final solution to the problem is taken from the candidate set C. For example, in a payment problem, currencies of various denominations form the candidate set.
Solution set S: As the greedy selection proceeds, the solution set S continues to expand until it forms a complete solution that satisfies the problem. For example, in the payment problem, the money paid constitutes the solution set.
Solution function: checks whether the solution set S constitutes a complete solution to the problem. For example, in the payment problem, the solution function checks that the amount paid exactly equals the amount due.
Selection function select: that is, the greedy strategy. This is the key to the greedy method. It points out which candidate object is most promising to form a solution to the problem. The selection function is usually related to the objective function. For example, in the payment problem, the greedy strategy is to choose the currency with the largest denomination in the candidate set.
Feasible function: Check whether it is feasible to add a candidate object to the solution set, that is, whether the constraint conditions are satisfied after the solution set is expanded. For example, in the payment problem, the feasible function is that the sum of the currency selected and the currency paid at each step does not exceed the amount due
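The payment example used above can be sketched directly, with each of the four components marked in the code (function and variable names are ours):

```python
def pay(amount, denominations):
    """Greedy payment. Candidate set C = denominations; selection
    function = largest denomination first; feasible function = the
    running total must not exceed the amount; solution function =
    the total exactly equals the amount.

    The greedy choice is optimal for canonical coin systems such as
    1, 5, 10, 25; for arbitrary denominations it may fail.
    """
    solution = []                                       # solution set S
    for coin in sorted(denominations, reverse=True):    # greedy choice
        while sum(solution) + coin <= amount:           # feasible function
            solution.append(coin)
    if sum(solution) == amount:                         # solution function
        return solution
    return None                                         # no complete solution
```

For example, paying 63 with denominations 1, 5, 10, 25 gives two 25s, one 10, and three 1s.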
Algorithm correctness proof
There exists an optimal solution to the problem that begins with the greedy choice
greedy choice property
Adopt inductive approach
Step-by-step greedy selection yields an optimal solution to the problem
Optimal substructure properties
Proof by contradiction
Huffman coding: core structural idea
Leaves with larger weights are closer to the root (characters with high frequency get short codes)
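A compact sketch of the greedy Huffman construction: repeatedly merge the two lowest-weight nodes, so heavy leaves stay near the root. (Carrying partial code tables in the heap, rather than explicit tree nodes, is an implementation shortcut of ours.)

```python
import heapq

def huffman_codes(freq):
    """Build Huffman codes from a {character: weight} mapping.
    Each heap entry is [weight, tiebreaker, partial code table]."""
    heap = [[w, i, {c: ""}] for i, (c, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, codes1 = heapq.heappop(heap)   # two lightest subtrees
        w2, _, codes2 = heapq.heappop(heap)
        merged = {c: "0" + code for c, code in codes1.items()}
        merged.update({c: "1" + code for c, code in codes2.items()})
        heapq.heappush(heap, [w1 + w2, counter, merged])
        counter += 1
    return heap[0][2]
```

With the frequencies a:45, b:13, c:12, d:16, e:9, f:5, the most frequent character 'a' receives a 1-bit code and the total weighted code length is the optimal 224.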
Comparison of Prim and Kruskal algorithms
From the ideas of the two algorithms: if graph G has few edges, use Kruskal, since it picks the shortest remaining edge each time; if it has many edges, use Prim, since it adds one vertex at a time. Thus Kruskal suits sparse graphs and Prim suits dense graphs.
In terms of time, Prim's algorithm is O(n²) and Kruskal's is O(e log e).
In terms of space, Prim's algorithm needs only a small amount, because each scan starts from the current vertex set and examines only the edges incident to it. Kruskal's algorithm, however, must always know where the minimum-weight edge of the current edge set is, so all edges must be sorted; for a very large graph it uses much more space than Prim's algorithm.
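A minimal Kruskal sketch with union-find (the edge format and helper names are our choices), illustrating why all edges must be sorted up front:

```python
def kruskal(n, edges):
    """Minimum spanning tree weight for an undirected graph with
    vertices 0..n-1 and edges given as (weight, u, v) tuples.

    Sort all edges by weight, then accept each edge whose endpoints
    lie in different components (checked with union-find)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    total, used = 0, 0
    for w, u, v in sorted(edges):           # global sort: Kruskal's space cost
        ru, rv = find(u), find(v)
        if ru != rv:                        # no cycle: accept the edge
            parent[ru] = rv
            total += w
            used += 1
            if used == n - 1:               # tree is complete
                break
    return total
```

Prim, by contrast, would keep only the edges incident to the growing vertex set, which is why it needs less space on dense graphs.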
Algorithms and basic knowledge
algorithm
A description of the steps for solving a specific problem: a finite sequence of instructions.
characteristic
input
output
certainty
Finiteness
feasibility
Description
natural language
language for daily communication
advantage
Simple
Easy to use
disadvantages
Not rigorous enough
cumbersome
cannot be executed directly by the computer
graphics
tool
flow chart
N-S diagram
PAD diagram
advantage
Intuitive image
concise
shortcoming
It’s troublesome to draw
Not easy to modify
cannot be executed directly by the computer
programming language
A "symbol system" that can express people's intentions completely, accurately and regularly, and is used to command and control computer work.
advantage
Can be executed directly by the computer
shortcoming
Poor abstraction
Not easy to understand
There are strict format requirements
pseudocode
An algorithm description tool that combines text and symbols between natural language and programming language
General steps for algorithm design
Fully understand the problem to be solved
mathematical model formulation
Algorithm detailed design
Algorithm Description
Verification of correctness of algorithmic ideas
Analysis of Algorithms
Computer implementation and testing of algorithms
Preparation of documentation
Algorithm Analysis
Estimate the two computer resources required by the algorithm - time and space
Time Complexity
Space Complexity
Purpose
Design Algorithms - Design algorithms with the lowest possible complexity
Choosing an Algorithm—Choose the least complex algorithm among multiple algorithms
Algorithm complexity = the amount of computer resources required to run an algorithm
time complexity
Factors affecting time complexity
Problem size n, input sequence I, algorithm itself A
space complexity
Factors affecting space complexity
Algorithm itself, input and output data, auxiliary variables
Algorithmic complexity trade-off
Time complexity and space complexity interact with each other (time for space or space for time)
Complexity in three cases (illustrated with sequential search)
Best case Tmin(N)
1 time
Worst case Tmax(N)
N times
Average case Tavg(N)
(N+1)/2 times
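The three cases can be checked directly by making sequential search report its comparison count (the return format is our choice):

```python
def seq_search(a, key):
    """Sequential search; returns (index, number of comparisons).
    Best case: key is first (1 comparison). Worst case: key is last
    or absent (N comparisons). Average over present keys: (N+1)/2."""
    for i, x in enumerate(a):
        if x == key:
            return i, i + 1          # found after i+1 comparisons
    return -1, len(a)                # not found: all N compared
```

Over a 4-element list, searching for each present key averages (1+2+3+4)/4 = 2.5 = (N+1)/2 comparisons.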
General steps for non-recursive algorithm analysis
1. Decide which parameter(s) to use as a measure of algorithmic problem size
to get from the description of the problem
2. Find the basic statements in the algorithm
Usually the loop body of the innermost loop.
3. Check whether the number of executions of basic statements depends only on the problem size
If the number of executions of a basic statement also depends on some other characteristics, you need to study the best-case, worst-case, and average-case efficiency separately.
4. Establish a summation expression for the number of execution times of basic statements
Count the number of times a basic statement is executed and build a summation expression that represents the running time of the algorithm
5. Represent this summation expression in asymptotic notation
Calculate the order of growth of the number of executions of the basic statement and use big-O notation to describe an upper bound on the algorithm's growth rate
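Steps 2 to 5 on a concrete double loop (our illustrative example): the basic statement is the innermost line, its execution count is the sum of i for i from 1 to n, which equals n(n+1)/2, hence O(n²).

```python
def count_basic(n):
    """Count executions of the basic statement in
        for i in 1..n:
            for j in 1..i:
                <basic statement>
    The summation expression is 1 + 2 + ... + n = n(n+1)/2, so the
    algorithm is O(n^2)."""
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            count += 1               # the basic statement
    return count
```

Checking the closed form: the loop count always equals n*(n+1)//2.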
recursion
A subroutine (or function) that calls itself directly, or indirectly through a chain of calls, is said to be recursive.
recursive algorithm
An algorithm that calls itself directly or indirectly is called a recursive algorithm
General steps for solving problems using recursive algorithms:
Analyze problems and look for recursive relationships
Find the stopping condition
Construct function body
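The three steps above on the factorial function (a standard first example, chosen by us): the recursive relation is n! = n * (n-1)!, the stopping condition is 0! = 1, and the function body follows directly.

```python
def factorial(n):
    """Recursive relation: n! = n * (n-1)!; stopping condition: 0! = 1."""
    if n == 0:                        # stopping condition
        return 1
    return n * factorial(n - 1)       # recursive relation
```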