MindMap Gallery: Data Structure
Data structure mind map. In the sequence list SqList, logically adjacent elements must also be physically adjacent; sequence-list subscripts start from 1.
Edited at 2023-10-05 11:12:01
data structure
Chapter 1 Introduction
time complexity
Chapter 2 Linear Lists
Sequence list SqList
definition
Logically adjacent elements must also be physically adjacent. The subscripts of the sequence table start from 1.
advantage
random access
shortcoming
1. Insertion and deletion require moving a large number of elements to maintain physical contiguity. 2. A continuous storage space must be allocated, which is inflexible
Insert ListInsert(SqList &L, int i, ElemType e): the element at position i and all elements after it move backward one place (n-i+1 moves), length++
Best case: insert at the end of the table, nothing moves, O(1)
Worst case: insert at the head of the table, everything moves back, O(n)
Average time complexity: O(n), from Σ(probability × number of moves) = n/2
Valid range: 1 <= i <= length+1
Delete ListDelete(SqList &L, int i, ElemType &e): all elements after position i move forward one place (n-i moves), length--
Best case: delete the end of the table, nothing moves, O(1)
Worst case: delete the table head, everything moves forward, O(n)
Average time complexity: O(n), from Σ(probability × number of moves) = (n-1)/2
Valid range: 1 <= i <= length
Find by value LocateElem(SqList L, ElemType e): compare one by one from the beginning until the first element equal to e is found
Best case: e is at the head of the table, 1 comparison, O(1)
Worst case: e is at the end of the table, n comparisons, O(n)
Average time complexity: O(n); average comparisons = Σ(probability × comparisons) = (1/n) × n(n+1)/2 = (n+1)/2
Find by position GetElem(SqList L, int i): return L.data[i-1] directly
Time complexity: O(1), thanks to random access
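The sequence-list operations above can be sketched as follows, in the map's own textbook style; MaxSize, ElemType, and the bounds checks are assumed conventions, not taken from the map.

```cpp
#include <cstdio>

#define MaxSize 50
typedef int ElemType;

typedef struct {
    ElemType data[MaxSize];
    int length;
} SqList;

// Insert e at position i (1-based); elements i..length shift backward one place.
bool ListInsert(SqList &L, int i, ElemType e) {
    if (i < 1 || i > L.length + 1 || L.length >= MaxSize) return false;
    for (int j = L.length; j >= i; j--)   // n-i+1 moves in the worst case
        L.data[j] = L.data[j - 1];
    L.data[i - 1] = e;
    L.length++;
    return true;
}

// Delete position i (1-based), returning the removed value through e.
bool ListDelete(SqList &L, int i, ElemType &e) {
    if (i < 1 || i > L.length) return false;
    e = L.data[i - 1];
    for (int j = i; j < L.length; j++)    // n-i moves
        L.data[j - 1] = L.data[j];
    L.length--;
    return true;
}

// Return the 1-based position of the first element equal to e, or 0.
int LocateElem(SqList L, ElemType e) {
    for (int i = 0; i < L.length; i++)
        if (L.data[i] == e) return i + 1;
    return 0;
}
```

Note how insertion shifts from the tail backward, so no element is overwritten before it is copied.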
Counting sort: given a set of integers, count how many times each value appears, then output the values in order.
Single linked list LinkList
definition
Each linked list node contains element information and a pointer to the successor
Head pointer: a singly linked list is identified by its head pointer L; L points to the first node of the list (when a head node is used, L points to the head node, which stores no data)
advantage
No need to occupy contiguous memory
shortcoming
1. No random access: locating an element requires traversal from the head, O(n) 2. Each node needs extra space for its pointer
Head insertion builds a singly linked list List_HeadInsert(LinkList &L): create the head node L with L->next = NULL, and insert each new node s at the front of the list
Core code: s->next = L->next (the node after the head node); L->next = s
Average time complexity: O(n)
The resulting list order is the reverse of the order in which the data is read
Tail insertion builds a singly linked list List_TailInsert(LinkList &L): create a head node L and a tail pointer r, and insert each new node s at the end of the list
Core code: r->next = s; r = s
Average time complexity: O(n)
The resulting list order is the same as the order in which the data is read
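A minimal sketch of both construction methods. The textbook versions read from input; here the data comes from an array, and the array-based signatures are an assumption for the sake of a self-contained example.

```cpp
#include <cstdlib>

typedef int ElemType;
typedef struct LNode {
    ElemType data;
    struct LNode *next;
} LNode, *LinkList;

// Head insertion: new nodes go right after the head node, so the
// list ends up in reverse of the input order.
LinkList List_HeadInsert(LinkList &L, const ElemType a[], int n) {
    L = (LNode *)malloc(sizeof(LNode));   // head node, stores no data
    L->next = NULL;
    for (int k = 0; k < n; k++) {
        LNode *s = (LNode *)malloc(sizeof(LNode));
        s->data = a[k];
        s->next = L->next;                // core: s->next = L->next
        L->next = s;                      //       L->next = s
    }
    return L;
}

// Tail insertion: a tail pointer r always marks the last node, so the
// list keeps the input order.
LinkList List_TailInsert(LinkList &L, const ElemType a[], int n) {
    L = (LNode *)malloc(sizeof(LNode));
    L->next = NULL;
    LNode *r = L;                         // tail pointer
    for (int k = 0; k < n; k++) {
        LNode *s = (LNode *)malloc(sizeof(LNode));
        s->data = a[k];
        r->next = s;                      // core: r->next = s
        r = s;                            //       r = s
    }
    r->next = NULL;
    return L;
}
```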
Search by value LNode *LocateElem(LinkList L, ElemType e): compare one by one from the beginning, find the first value equal to e, and return the pointer p
Best case: e is at the head of the list, 1 comparison, O(1)
Worst case: e is at the end of the list, n comparisons, O(n)
Average time complexity: O(n); average comparisons = Σ(probability × comparisons) = (1/n) × n(n+1)/2 = (n+1)/2
Search by position LNode *GetElem(LinkList L, int i): traverse from the head via p = p->next to the i-th node and return p
Best case: i = 1, one step, O(1)
Worst case: i = n, n steps, O(n)
Average time complexity: O(n)
Insert node: insert a new node at position i of the list; traverse to position i-1 and splice the new node s after it
Core code: p = GetElem(L, i-1); s->next = p->next; p->next = s
Average time complexity: O(n); the cost is finding the (i-1)-th node
Delete node: delete the i-th node (check that the position is legal, i > 0); traverse to position i-1 and modify its next pointer
Core code: p = GetElem(L, i-1); s = p->next; p->next = s->next; free(s)
Average time complexity: O(n); the cost is finding the (i-1)-th node
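The node insert/delete steps above, sketched around a GetElem helper; the error handling is an assumption added so the example stands alone.

```cpp
#include <cstdlib>

typedef int ElemType;
typedef struct LNode {
    ElemType data;
    struct LNode *next;
} LNode, *LinkList;

// Return a pointer to the i-th node (the head node counts as position 0), or NULL.
LNode *GetElem(LinkList L, int i) {
    if (i < 0) return NULL;
    LNode *p = L;
    for (int j = 0; p != NULL && j < i; j++)
        p = p->next;
    return p;
}

// Insert e at position i by splicing after node i-1.
bool ListInsert(LinkList L, int i, ElemType e) {
    LNode *p = GetElem(L, i - 1);         // O(n): the cost is finding node i-1
    if (p == NULL) return false;
    LNode *s = (LNode *)malloc(sizeof(LNode));
    s->data = e;
    s->next = p->next;                    // core: s->next = p->next
    p->next = s;                          //       p->next = s
    return true;
}

// Delete the i-th node by redirecting node i-1's next pointer.
bool ListDelete(LinkList L, int i, ElemType &e) {
    LNode *p = GetElem(L, i - 1);
    if (p == NULL || p->next == NULL) return false;
    LNode *q = p->next;                   // core: q = p->next
    e = q->data;
    p->next = q->next;                    //       p->next = q->next
    free(q);
    return true;
}
```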
Length of a singly linked list: set a counter and traverse, stopping when p->next == NULL. With a head node no separate empty-list check is needed; without one, the empty list must be handled specially.
Chapter 3 Stack and Queue
Stack SqStack
First in, last out
Stack top pointer S.top: S.top = -1 when the stack is empty, S.top = MaxSize-1 when it is full. Element indices range over 0 to MaxSize-1; the elements live in a sequence list. Stack length: S.top + 1
Test points: given a push order, pick the feasible pop orders; given both push and pop orders, determine the minimum stack capacity
Pop: return the top element, x = S.data[S.top], then S.top--. Push: S.top++, then write the new element, S.data[S.top] = x. The slot the top pointer points to holds an element.
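The push/pop rules above as a minimal sketch; MaxSize and the full/empty checks are assumed conventions.

```cpp
#define MaxSize 50
typedef int ElemType;

typedef struct {
    ElemType data[MaxSize];
    int top;            // -1 when empty, MaxSize-1 when full
} SqStack;

void InitStack(SqStack &S) { S.top = -1; }

// Push: advance the top pointer first, then write the element.
bool Push(SqStack &S, ElemType x) {
    if (S.top == MaxSize - 1) return false;   // stack full
    S.data[++S.top] = x;                      // S.top++; S.data[S.top] = x
    return true;
}

// Pop: read the element at the top, then retreat the pointer.
bool Pop(SqStack &S, ElemType &x) {
    if (S.top == -1) return false;            // stack empty
    x = S.data[S.top--];                      // x = S.data[S.top]; S.top--
    return true;
}
```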
Queue SqQueue
First in, first out: elements leave the queue in the order they joined
Queue head pointer: Q.front; queue tail pointer: Q.rear
sequential queue
Queue empty: Q.front = Q.rear. Enqueue: Q.front stays, Q.data[Q.rear] = x, Q.rear++. Dequeue: Q.rear stays, x = Q.data[Q.front], Q.front++
When Q.front > 0, Q.rear = MaxSize does not mean the queue is full; this "false overflow" is what motivates the circular queue
circular queue
Element indices range over 0 to MaxSize-1. Queue empty: Q.front == Q.rear. Queue full: (Q.rear+1) % MaxSize == Q.front. Enqueue: Q.data[Q.rear] = x; Q.rear = (Q.rear+1) % MaxSize. Dequeue: x = Q.data[Q.front]; Q.front = (Q.front+1) % MaxSize. Queue length: (Q.rear + MaxSize - Q.front) % MaxSize. Since one slot is sacrificed to detect fullness, the maximum length is MaxSize-1. The head pointer points at an element; the tail pointer points at an empty slot.
Distinguishing empty from full: 1. Sacrifice one slot that never holds data: empty when Q.front == Q.rear, full when (Q.rear+1) % MaxSize == Q.front 2. Keep a counter size: empty when size == 0, full when size == MaxSize 3. Keep a tag set to 0 by the last deletion and 1 by the last insertion: empty when tag == 0 && Q.front == Q.rear, full when tag == 1 && Q.front == Q.rear
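A sketch of the sacrifice-one-slot circular queue described above; MaxSize is kept tiny here only so the wrap-around is easy to see.

```cpp
#define MaxSize 5
typedef int ElemType;

typedef struct {
    ElemType data[MaxSize];
    int front, rear;    // rear points at the first empty slot
} SqQueue;

void InitQueue(SqQueue &Q) { Q.front = Q.rear = 0; }

bool QueueEmpty(SqQueue Q) { return Q.front == Q.rear; }

// One slot is sacrificed, so the queue holds at most MaxSize-1 elements.
bool EnQueue(SqQueue &Q, ElemType x) {
    if ((Q.rear + 1) % MaxSize == Q.front) return false;  // full
    Q.data[Q.rear] = x;
    Q.rear = (Q.rear + 1) % MaxSize;
    return true;
}

bool DeQueue(SqQueue &Q, ElemType &x) {
    if (Q.front == Q.rear) return false;                  // empty
    x = Q.data[Q.front];
    Q.front = (Q.front + 1) % MaxSize;
    return true;
}

int QueueLength(SqQueue Q) {
    return (Q.rear + MaxSize - Q.front) % MaxSize;
}
```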
deque
A queue that allows insertion and deletion at both ends. An element enqueued at the front ends up ahead of one enqueued at the rear; on dequeue, whichever end is used, the element that leaves first was ahead.
Input-restricted double-ended queue: one end only allows output, and the other end allows input and output
Output-restricted double-ended queue: one end only allows input, the other end allows input and output
Applications of stacks and queues
Queue applications
1. Level-order traversal of a binary tree 2. Buffering, to smooth the speed mismatch between the host and external devices 3. Process ready queue, to arbitrate resource contention among multiple users
stack application
1. Bracket matching, e.g. ([]()) and [([][])] match; [(]) and (()] do not. Push each left bracket onto the stack; on a right bracket, match it against the left bracket on top: pop on a match, report an error on a mismatch. If the stack is empty when the scan ends, matching succeeded.
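The bracket-matching rule above, sketched with a fixed-size character stack (the 100-slot bound is an assumption):

```cpp
#include <cstring>

// Returns true when every bracket in expr matches, using a stack of
// pending left brackets.
bool BracketCheck(const char *expr) {
    char stack[100];
    int top = -1;
    for (size_t k = 0; k < strlen(expr); k++) {
        char c = expr[k];
        if (c == '(' || c == '[' || c == '{') {
            stack[++top] = c;                  // push every left bracket
        } else if (c == ')' || c == ']' || c == '}') {
            if (top == -1) return false;       // right bracket, nothing to match
            char open = stack[top--];          // match against the stack top
            if ((c == ')' && open != '(') ||
                (c == ']' && open != '[') ||
                (c == '}' && open != '{'))
                return false;                  // mismatched pair
        }
    }
    return top == -1;                          // success only if the stack is empty
}
```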
2. Infix to postfix, e.g. (a+b)*c/d → (((a+b)*c)/d) → ab+c*d/ (1) Fully parenthesize the expression according to operator precedence (2) Replace each right bracket with that level's operator and delete the left brackets
Hand evaluation of postfix expressions: scan left to right, pushing operands onto the stack; on an operator, the two most recent operands are combined by that operator and the result is pushed back onto the stack.
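The hand-evaluation procedure above as a sketch, restricted to single-digit operands so no tokenizer is needed (an assumption for brevity):

```cpp
#include <cstring>

// Evaluate a postfix expression over single-digit operands, e.g. "23+4*".
// Operands are pushed; an operator pops the two most recent operands
// (the deeper one is the left operand) and pushes the result.
int EvalPostfix(const char *expr) {
    int stack[100];
    int top = -1;
    for (size_t k = 0; k < strlen(expr); k++) {
        char c = expr[k];
        if (c >= '0' && c <= '9') {
            stack[++top] = c - '0';
        } else {
            int b = stack[top--];   // right operand
            int a = stack[top--];   // left operand
            int r = 0;
            switch (c) {
                case '+': r = a + b; break;
                case '-': r = a - b; break;
                case '*': r = a * b; break;
                case '/': r = a / b; break;
            }
            stack[++top] = r;
        }
    }
    return stack[top];
}
```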
Recursive call, number system conversion
Use an array to store n-order matrices
Row-major storage vs column-major storage: given the storage scheme and an array subscript, work out which row and column of the matrix the element occupies
Special matrix compression storage
Symmetric matrix Only store the elements of the upper (or lower) triangle
Number of array elements: n(n+1)/2
Store the lower triangle row-major, or the upper triangle column-major
Upper (lower) triangular matrix Merge the constant region into a single value stored at the end
Number of array elements: n(n+1)/2 + 1
Store the lower triangle row-major, or the upper triangle column-major
tridiagonal matrix Zero elements are not placed in the array
Number of array elements: 3n-2
row-major storage
By default the first matrix element is a(1,1) and the first array subscript is 0; check whether the question changes these conventions.
Chapter 8 Sorting
insertion sort
direct insertion sort
Insert each element of the sequence in turn into the sorted subsequence before it, comparing from back to front and shifting elements backward; if the element is greater than or equal to the last element of the sorted subsequence, nothing moves.
performance
time complexity
Best case: given the sequence is sorted, each time only compares with the last one of the sorted sequence O(n)
Worst case: Given a sequence in reverse order, an entire sorted list must be traversed every time O(n^2)
Average case: O(n^2)
space complexity
O(1)
stability
Stable
storage structure
Linked list/sequence list
Whether each sorting pass determines the final position of an element
no
When the sequence to be sorted is already nearly in order, direct insertion sort is at its most efficient.
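The back-to-front compare-and-shift described above can be sketched as:

```cpp
// Direct insertion sort on A[0..n-1]. Each element is inserted into the
// sorted prefix before it, comparing from back to front and shifting
// larger elements one place backward.
void InsertSort(int A[], int n) {
    for (int i = 1; i < n; i++) {
        int key = A[i];
        int j = i - 1;
        while (j >= 0 && A[j] > key) {   // strict >, which keeps the sort stable
            A[j + 1] = A[j];             // shift, do not swap
            j--;
        }
        A[j + 1] = key;
    }
}
```

If `key` is already >= the last sorted element, the while loop never runs, which is the best case O(n) noted above.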
Binary insertion sort
Insert each element of the sequence in turn into the sorted subsequence before it, using binary search to locate the insertion position.
performance
time complexity
Best case: even on a sorted sequence the binary searches must still run, so here it is less efficient than direct insertion sort, O(nlog2n) comparisons
Worst case: Given a sequence in reverse order, an entire sorted list must be traversed every time O(n^2)
Average case: O(n^2)
space complexity
O(1)
stability
Stable
storage structure
Sequence list (binary search cannot be done on a linked list)
Whether each sorting pass determines the final position of an element
no
Shell sort
Split the sequence into groups by an increment d, direct-insertion-sort the groups, shrink d by some rule after each pass, and regroup and sort again until d = 1
performance
time complexity
No closed form; it depends on the chosen increment sequence
space complexity
O(1)
stability
Unstable: equal values may be reordered when they land in different groups
storage structure
Sequence list (grouping is hard to do on a linked list)
Whether each sorting pass determines the final position of an element
no
swap sort
Bubble Sort
Compare the values of adjacent elements from front to back (or from back to front). If they are in reverse order, swap positions.
performance
time complexity
Best case: given that the sequence is ordered, only one pairwise comparison is required, O(n)
Worst case: given the reverse order of the sequence, n-1 pairwise comparisons are required, O(n^2)
Average case: O(n^2)
space complexity
O(1)
stability
Stable
storage structure
sequence list/linked list
Whether each sorting pass determines the final position of an element
Yes, determine a largest/smallest element each time
Quick sort
Select a pivot (usually the first or last element). Pointer i starts at low, j at high. With the first element as pivot, j scans forward first: when j reaches an element smaller than the pivot, that value is copied to the position i points to; then i scans backward, and when i reaches an element larger than the pivot, that value is copied to the position j points to. When i and j meet, the pivot value is placed at that position. With this position as the dividing point, repeat the procedure on the left and right subsequences until every subsequence to be sorted has at most one element.
performance
time complexity
Best case: every chosen pivot splits its sequence evenly in two, O(nlog2n)
Worst case: given sequence basically ordered/reverse O(n^2)
Average case: O(nlog2n)
space complexity
O(log2n) to O(n): the recursion stack used to sort the two sides
stability
Unstable: a copy across the pivot can reverse the relative order of equal keys.
storage structure
Sequence table
Whether each sorting pass determines the final position of an element
Yes: each pass fixes the final position of that pass's pivot
Quick sort is a poor choice when the elements are already nearly in order. On average performance it is the best internal sorting algorithm.
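The hole-filling partition described above, sketched with the first element as pivot:

```cpp
// Partition: the first element is the pivot; the scan from high copies
// smaller values left, the scan from low copies larger values right,
// and the pivot drops into the meeting point.
int Partition(int A[], int low, int high) {
    int pivot = A[low];
    while (low < high) {
        while (low < high && A[high] >= pivot) high--;
        A[low] = A[high];                // smaller value goes to the left side
        while (low < high && A[low] <= pivot) low++;
        A[high] = A[low];                // larger value goes to the right side
    }
    A[low] = pivot;                      // pivot lands at its final position
    return low;
}

void QuickSort(int A[], int low, int high) {
    if (low < high) {
        int p = Partition(A, low, high);
        QuickSort(A, low, p - 1);        // recursion stack: O(log2n) to O(n)
        QuickSort(A, p + 1, high);
    }
}
```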
selection sort
Simple selection sort
Each pass traverses the sequence of elements to be sorted and selects the element with the smallest/largest keyword to add to the ordered subsequence (exchange with the element at position L[i])
performance
time complexity
Average case: O(n^2) The number of comparisons is n(n-1)/2 regardless of the initial sequence order.
space complexity
O(1)
stability
Unstable. For example, after finding the smallest element in the i-th pass and exchanging it with the i-th element, the relative position of the i-th element and its equal element may change.
storage structure
sequence list linked list
Heap sort
Max-heap (big-root heap): every node's value is greater than those in its left and right subtrees. Min-heap (small-root heap): every node's value is smaller than those in its left and right subtrees.
The heap top must be the largest/smallest of all elements, so each round outputs the heap top, re-adjusts the heap to obtain a new top, outputs it, and adjusts again, until all elements have been output, yielding the sorted sequence
Build a heap: treat the initial sequence as the level-order traversal of a complete binary tree. Start at the parent of the last leaf (node ⌊n/2⌋): take it as a root, compare it with the keys of its left and right children and swap it with the larger one (checking whether the swap disturbs the heap below). Then adjust, in the same way and from right to left, bottom to top, the subtree of every non-leaf node up to the root; each adjustment only compares a root with its children. Adjusting after outputting the heap top: move the last element to the top, compare it with its children, swap with the larger one, and continue downward until it settles into place; then output the top again
Insertion: place the new element at the last position of the complete binary tree, compare it with its parent, and adjust upward from bottom to top
Heap sort suits large element counts; it makes finding the top k largest values very convenient
performance
Time complexity: building the heap is O(n); each adjustment is O(h) and n-1 adjustments are made; best, worst, and average are all O(nlog2n)
Space complexity O(1)
stability
unstable
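The build-then-output procedure above, sketched with the common 1-based layout where A[0] stages the value being sifted (that workspace slot is an assumption of this layout):

```cpp
// Sift the subtree rooted at k downward in A[1..len]: compare the root
// with its larger child and move children up until the heap property holds.
void HeadAdjust(int A[], int k, int len) {
    A[0] = A[k];                              // A[0] stages the sifted value
    for (int i = 2 * k; i <= len; i *= 2) {
        if (i < len && A[i] < A[i + 1]) i++;  // pick the larger child
        if (A[0] >= A[i]) break;
        A[k] = A[i];                          // child moves up
        k = i;
    }
    A[k] = A[0];
}

void HeapSort(int A[], int len) {
    for (int i = len / 2; i >= 1; i--)        // build the max-heap from node n/2 down
        HeadAdjust(A, i, len);
    for (int i = len; i > 1; i--) {           // output the top, then re-adjust
        int t = A[1]; A[1] = A[i]; A[i] = t;  // current max goes to the end
        HeadAdjust(A, 1, i - 1);
    }
}
```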
merge sort
Two-way merge sort
Given n elements, sort them pairwise into runs; each sorted run is then merged with its neighbor, repeatedly, until one ordered list of length n is produced. Merging two ordered lists into one: i and j point into the two lists, k points into the output list of length n; compare the elements at i and j, put the smaller at position k, and advance k together with whichever of i/j supplied it. When one list is exhausted, append the remaining elements of the other directly.
performance
merge algorithm
Each merge pass is O(n): no matter what, every keyword must be copied through the auxiliary space, and ⌈log2n⌉ passes are needed
time complexity
Best, worst, and average are all O(nlog2n); the initial order of the sequence never affects its efficiency
space complexity
O(n) Open a new sequence table to store the compared values
stability
Stable
storage structure
Sequence table
Merging lists of lengths m and n: maximum number of comparisons m+n-1, minimum min(m, n)
Whether each sorting pass determines the final position of an element
no
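The two-pointer merge and the recursive driver above, sketched with the O(n) auxiliary array noted under space complexity:

```cpp
#include <cstdlib>

// Merge the sorted runs A[low..mid] and A[mid+1..high] via auxiliary array B.
void Merge(int A[], int low, int mid, int high, int B[]) {
    for (int k = low; k <= high; k++) B[k] = A[k];
    int i = low, j = mid + 1, k = low;
    while (i <= mid && j <= high)
        A[k++] = (B[i] <= B[j]) ? B[i++] : B[j++];  // <= keeps the sort stable
    while (i <= mid) A[k++] = B[i++];   // one run exhausted: copy the rest
    while (j <= high) A[k++] = B[j++];
}

void MergeSortRec(int A[], int low, int high, int B[]) {
    if (low < high) {
        int mid = (low + high) / 2;
        MergeSortRec(A, low, mid, B);
        MergeSortRec(A, mid + 1, high, B);
        Merge(A, low, mid, high, B);
    }
}

void MergeSort(int A[], int n) {
    int *B = (int *)malloc(n * sizeof(int));
    MergeSortRec(A, 0, n - 1, B);
    free(B);
}
```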
Radix sort
Given a linked list A of elements, say all three-digit numbers, prepare 10 lists for key digits 0 to 9. Round one: distribute the values into the lists by units digit, then collect from list 0 through list 9 into A', which is ordered by units digit. Distribute A' into the ten lists by tens digit and collect into A''; then do the same for the hundreds digit. The final A''' is the sorted list, and the whole process performs no key comparisons
It can solve sorting students by birth date: lists 1 to 31 for the day, 1 to 12 for the month, and e.g. 2000 to 2015 for the year
Space complexity: O(r), one list per bucket. Total time complexity: O(d(n+r)), where d is the number of passes, n the number of elements, and r the number of buckets
stability
Stable
Any comparison-based sort of n keywords must perform at least ⌈log2(n!)⌉ keyword comparisons
Chapter 6 Search
sequential search
Process: Compare one by one from the beginning until the comparison is successful or the comparison ends at the end of the table. The elements are not required to be in order.
Under equal probability, the average search length is: ASL(success) = (n+1)/2, ASL(fail) = n+1
Note that the failure ASL of sequential search on an ordered list is different
Binary search
Process: compare the key with the middle element; if the key is smaller, high = mid - 1; if larger, low = mid + 1. Only for ordered sequence lists
mid = (low + high) / 2, conventionally rounded down; stay consistent with the decision tree
Average search length: ASL(success) ≈ log2(n+1) - 1. ASL(fail) = Σ(comparisons to reach each failure node) / (number of failure nodes in the decision tree). Time complexity O(log2n)
binary decision tree
Take the middle element as the root; elements to its left form the left subtree and elements to its right the right subtree. Elements of the sequence list are drawn as circles; values not in the table are drawn as square failure nodes labeled with intervals. The level of an element in the decision tree is its comparison count, and the tree height ⌈log2(n+1)⌉ is the maximum number of comparisons.
sequential storage
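The low/mid/high process above as a minimal sketch (mid rounded down, as is conventional):

```cpp
// Binary search on an ascending array; returns the index of key or -1.
int BinarySearch(const int A[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = (low + high) / 2;      // rounded down
        if (A[mid] == key) return mid;
        else if (A[mid] < key) low = mid + 1;   // key is larger: go right
        else high = mid - 1;                    // key is smaller: go left
    }
    return -1;                           // fell off a failure node
}
```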
Block search
Elements are unordered within a block but ordered between blocks: the largest element of the first block is smaller than every element of the second. Each index entry points to the first element of its block.
Searching for elements between blocks and within blocks can use sequential search or binary search. If the given sequence is ordered, use binary search.
Block size: with n elements, the best number of blocks is √n
Binary sort tree (binary search tree)
Keys in the left subtree are smaller than the root; keys in the right subtree are greater. The in-order traversal of a binary sort tree must be an increasing sequence.
Search operation: If the keyword is smaller than the root node, go to the left subtree; if it is larger than the root node, go to the right subtree.
For a tree of N nodes: search efficiency is best, O(log2N), when the tree is shaped like a balanced binary tree (each level as full as possible); it is worst, O(N), when every level holds a single node
Insertion operation: A keyword smaller than the root node is inserted into the left subtree, and a keyword larger than the root node is inserted into the right subtree. The new node inserted must be a leaf node.
Delete operation: 1. If the node is a leaf node, delete it directly. 2. If the node has only a left subtree or a right subtree, let the root node of the subtree replace itself. 3. If the node x has left and right subtrees, find the in-order successor node y of the node to replace x, and then delete y.
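The search and insert rules above, sketched (the duplicate-key rejection is an assumption; the insert shows why a new node is always a leaf):

```cpp
#include <cstdlib>

typedef struct BSTNode {
    int key;
    struct BSTNode *lchild, *rchild;
} BSTNode;

// Search: smaller keys go left, larger keys go right.
BSTNode *BST_Search(BSTNode *T, int key) {
    while (T != NULL && T->key != key)
        T = (key < T->key) ? T->lchild : T->rchild;
    return T;
}

// Insert: walk down by key order until an empty child slot is reached,
// so the new node always becomes a leaf.
bool BST_Insert(BSTNode *&T, int key) {
    if (T == NULL) {
        T = (BSTNode *)malloc(sizeof(BSTNode));
        T->key = key;
        T->lchild = T->rchild = NULL;
        return true;
    }
    if (key == T->key) return false;          // duplicates rejected
    return (key < T->key) ? BST_Insert(T->lchild, key)
                          : BST_Insert(T->rchild, key);
}
```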
Balanced Binary Tree AVL
A binary sort tree in which the height difference between the left and right subtrees of every node does not exceed 1
Insert operation: 1. Insert the new node x into the appropriate position according to the size of the keyword 2. Find the smallest unbalanced subtree, whose root node is y 3. Find the three nodes closest to y (including y) on the path between x and y, and adjust these three nodes according to the keyword size
Test point: relating n nodes to the maximum depth h: a balanced binary tree of height h contains at least N(h) = N(h-1) + N(h-2) + 1 nodes. As long as every node's balance factor stays in range, level h-1 is generally not full in this minimum-node case.
m-order B-tree and B+ tree
m-order B-tree
1. The root node has at least two subtrees 2. Other branch nodes have at most m subtrees and m-1 keywords, and at least ⌈m/2⌉ subtrees and ⌈m/2⌉-1 keywords. Every balance factor is 0, so all leaves lie on the same level. The leaf (failure) nodes of a B-tree hold no keywords and do not count toward the height.
Non-leaf node structure: n | P0 | K1 | P1 | K2 | P2 | … | Kn | Pn, where P is a pointer and K a keyword, with K1 < K2 < … < Kn. Keys in the subtree to the left of a K are all smaller than it; keys in the subtree to its right are all greater.
B+ tree
1. The root node has at least two subtrees 2. Other branch nodes have at most m subtrees and m keywords, and at least ⌈m/2⌉ subtrees and ⌈m/2⌉ keywords 3. Leaf nodes are ordered by keyword and contain all the keywords; non-leaf nodes serve only as an index 4. A non-leaf node stores only the maximum keyword of each child, and the tree is built bottom-up. Hence the B+ tree supports two kinds of search, multiway search from the root (random search) and sequential search from the smallest keyword, while the B-tree supports only the former. The syllabus examines only the concepts and definition of the B+ tree, but the operations of the B-tree.
B-tree search
1. Fetch the node from disk; if a leaf (failure) node is reached, the search fails 2. Search the node's ordered keyword sequence in memory: a. scan the sequence, comparing keywords b. if no keyword matches, follow the corresponding pointer to the next node's disk location. Random search: each comparison against a node's keywords decides which branch to take, so the search path changes with the value sought.
The main factor in search efficiency is the height of the tree, i.e. the number of disk I/Os: the level on which the keyword sits is the number of I/Os needed
Insertion into B-tree
1. Use the search algorithm to find, by keyword order, the bottom-level node where the keyword belongs 2. If the node holds no more than m-1 keywords after insertion, insert directly; if it exceeds m-1, split the node. Split: after placing the new element, move the middle keyword (position ⌈m/2⌉) up to the corresponding slot of the parent and split the node in two around it. The parent's keyword count grows, so it may split in turn; splitting can propagate all the way to the root, raising the height of the tree by 1
Deletion of B-tree
When K is not in a bottom node, replace K with the predecessor/successor keyword K' from K's subtree, then delete K' from its node; the problem always reduces to deleting from a bottom node: 1. If node P holds more than ⌈m/2⌉-1 keywords, delete directly 2. If node P holds exactly ⌈m/2⌉-1 keywords: a. if an adjacent sibling holds more than ⌈m/2⌉-1 keywords, move the sibling's nearest keyword up into the parent and move the parent's separating keyword down into P at the deleted spot b. if the sibling also holds only ⌈m/2⌉-1 keywords, delete the keyword, bring the parent's separating keyword down, and merge the two nodes
Hash table
Hash function: the mapping from key values to array subscripts, written Hash(key). Collision: different keys yield the same hash value; such keys are called synonyms. Collision handling: open addressing; chaining (the zipper method). Clustering: non-synonyms competing for addresses after collisions. Load (filling) factor: number of records n / table length m
Hash function construction, division method: if the table length is m, take a prime p closest to but not larger than m; H(key) = key % p
Open addressing: 1. Linear probing: from the computed subscript, scan forward until a free slot stores the key 2. Quadratic probing: probe offsets 1^2, -1^2, 2^2, -2^2, …, k^2, -k^2 in turn and store at the first free slot; k <= m/2, and m must be a prime expressible as 4k+3
Chaining (zipper method) stores all keys with the same hash value in one linked list; it suits frequent insertion and deletion
Average search length: ASL(success) = Σ(comparisons for each element × probability). ASL(fail) = for each start address 0 to p-1, count comparisons until an empty slot is met (the comparison with the empty slot also counts); sum these and divide by p
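The division method plus linear probing, sketched; the table length 13, the EMPTY sentinel (which restricts keys to non-negative values), and the no-deletion assumption are all conventions of this example.

```cpp
#define EMPTY (-1)
#define M 13        // table length; p = 13 is the prime for H(key) = key % p

int Hash(int key) { return key % 13; }

// Linear probing: scan forward from the home slot until a free slot is found.
bool HashInsert(int table[], int key) {
    int pos = Hash(key);
    for (int k = 0; k < M; k++) {
        int idx = (pos + k) % M;
        if (table[idx] == EMPTY) { table[idx] = key; return true; }
    }
    return false;                          // table full
}

// Returns the slot holding key, or -1; probes counts comparisons made,
// including the final comparison against an empty slot on failure.
int HashSearch(const int table[], int key, int &probes) {
    int pos = Hash(key);
    probes = 0;
    for (int k = 0; k < M; k++) {
        int idx = (pos + k) % M;
        probes++;
        if (table[idx] == key) return idx;
        if (table[idx] == EMPTY) return -1;  // empty slot: search fails
    }
    return -1;
}
```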
Chapter 5 Graphs
basic concept
Directed graph: arc <v, w>, where v is the arc tail and w the arc head
Strongly connected graph: there is a round-trip path between any two nodes
Undirected graph (v,w)
Connected graph: there is a path between any two nodes
Complete graph: the graph with the most edges Undirected n(n-1)/2 directed n(n-1)
Spanning tree (undirected graph): connected graph with the fewest edges n-1
Simple loop: only the head and tail nodes on the path are repeated
Simple path: there are no repeated nodes on the path
Loop: The head and tail nodes of the path are the same node
Graph storage
adjacency matrix method
Stored in a two-dimensional array. Directed graph: reading across a row gives the vertex's out-degree, reading down a column its in-degree. Undirected graph: the matrix is symmetric and can be stored compressed
Disadvantage: counting the edges of the graph requires traversing the whole matrix, O(n^2), no matter how many or few edges there are
Suitable for dense graphs
adjacency list method
Create a linked list for each vertex, chaining the vertices that vertex points to
Features
1. If an undirected graph has n vertices and e edges, its adjacency list only needs n head nodes and 2e table nodes.
2. For undirected graphs, the degree of vertex i is the number of nodes in list i. For directed graphs, finding the in-degree of i requires traversing every list.
3. To determine whether two nodes are connected, the corresponding linked list must be searched, while the adjacency matrix only needs to access two array elements to see if they are 1
4. The representation of the adjacency list is not unique.
Graph traversal
Breadth-first traversal of BFS
with a queue and a one-dimensional array (used to mark visited elements)
Each time a queue-head element is visited, all of its unvisited adjacent vertices are enqueued, until the queue is empty (like level-order traversal of a tree, which also uses a queue)
Space complexity O(V)V is the number of nodes
Time complexity: O(V^2) with an adjacency matrix, O(V+E) with an adjacency list, where V is the number of vertices and E the number of edges
DFS (depth-first traversal)
Uses a stack and a one-dimensional array (to mark visited vertices)
Each time a vertex is visited, one of its unvisited adjacent vertices is visited and pushed onto the stack. If the current vertex has no unvisited adjacent vertices, pop an element off the stack and visit another adjacent vertex of the previous vertex, until the stack is empty (similar to preorder traversal)
Space complexity: O(V), where V is the number of vertices
Time complexity: O(V^2) with an adjacency matrix; O(V+E) with an adjacency list (V = number of vertices, E = number of edges)
Note: for a given graph, the traversal sequence is unique with an adjacency matrix, but not unique with an adjacency list (it depends on the node order within each list).
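An iterative stack-based sketch of DFS, reusing the same hypothetical 4-cycle graph:

```python
def dfs(adj, start):
    visited = [False] * len(adj)
    order = []
    stack = [start]
    while stack:
        u = stack.pop()
        if visited[u]:
            continue
        visited[u] = True
        order.append(u)
        # push neighbours in reverse so the smallest-numbered one is visited first
        for v in reversed(adj[u]):
            if not visited[v]:
                stack.append(v)
    return order

adj = [[1, 2], [0, 3], [0, 3], [1, 2]]  # hypothetical 4-cycle
print(dfs(adj, 0))  # [0, 1, 3, 2]
```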
Minimum spanning tree: The spanning tree with the smallest sum of edge costs among all spanning trees.
Properties
1. The minimum spanning tree is not necessarily unique. 2. When every edge weight in the graph is distinct, the minimum spanning tree is unique.
Algorithm for computing minimum spanning tree given a graph
Prim's algorithm: start from any vertex and find the minimum-cost edge connecting it, adding the vertex at the other end to the tree's vertex set. Then, among the edges connecting the set to outside vertices, again pick the minimum-cost edge and merge its outside endpoint into the set, and so on, until all vertices are in the set
Time complexity: O(V^2). The cost is independent of the number of edges, so it is suitable for dense graphs.
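A minimal O(V^2) sketch of Prim's algorithm on an adjacency matrix; the 4-vertex weighted graph is a made-up example:

```python
def prim(weights):
    # weights: adjacency matrix, None where no edge exists
    n = len(weights)
    INF = float('inf')
    in_tree = [False] * n
    dist = [INF] * n            # cheapest edge from the tree to each vertex
    dist[0] = 0                 # start from vertex 0 (any vertex works)
    total = 0
    for _ in range(n):
        # pick the cheapest vertex not yet in the tree: O(V) scan -> O(V^2) total
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: dist[v])
        in_tree[u] = True
        total += dist[u]
        for v in range(n):      # update cheapest connecting edges
            w = weights[u][v]
            if w is not None and not in_tree[v] and w < dist[v]:
                dist[v] = w
    return total

W = [[None, 1, 4, None],        # hypothetical 4-vertex weighted graph
     [1, None, 2, 6],
     [4, 2, None, 3],
     [None, 6, 3, None]]
print(prim(W))  # 6 (edges 0-1, 1-2, 2-3)
```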
Kruskal's algorithm: sort the edges by weight in ascending order and pick the smallest edge each time; if the two endpoints of the chosen edge are already in the same connected component, discard it and continue down the list, until n-1 edges have been selected
Time complexity: O(E log2 E). The cost is independent of the number of vertices; it is not suitable for dense graphs.
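A sketch of Kruskal's algorithm using a union-find structure for the connected-component check; the edge list below describes the same hypothetical 4-vertex graph used above:

```python
def kruskal(n, edges):
    # union-find detects whether an edge would close a cycle
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, picked = 0, 0
    for w, u, v in sorted(edges):          # sort edges by weight: O(E log E)
        ru, rv = find(u), find(v)
        if ru != rv:                       # endpoints in different components
            parent[ru] = rv
            total += w
            picked += 1
            if picked == n - 1:            # stop after n-1 edges
                break
    return total

# (weight, u, v) edges of a hypothetical 4-vertex graph
edges = [(1, 0, 1), (2, 1, 2), (4, 0, 2), (3, 2, 3), (6, 1, 3)]
print(kruskal(4, edges))  # 6
```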
Algorithm for finding the shortest path
Dijkstra's algorithm: find the shortest path from a certain vertex to all other vertices
Time complexity O(V^2)
Edges with negative weights are not allowed in the graph
Starting from the source, record the cost from the source to each of its adjacent vertices. Select the vertex with the minimum-cost edge and add it to the finished set. Then, from the newly added vertex, record the cost to its adjacent vertices: if passing through this vertex makes the source's cost to some vertex smaller than before, update the corresponding value. Again pick the vertex with the current minimum cost, add it to the set, and so on.
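The select-then-relax loop above can be sketched in O(V^2) form; the directed weighted graph is a made-up example:

```python
def dijkstra(weights, src):
    # weights: adjacency matrix of a directed graph, None where no edge
    n = len(weights)
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    done = [False] * n
    for _ in range(n):
        # pick the unfinished vertex with the current minimum cost
        u = min((v for v in range(n) if not done[v]), key=lambda v: dist[v])
        done[u] = True
        for v in range(n):
            w = weights[u][v]
            if w is not None and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w   # relax: a shorter path via u was found
    return dist

W = [[None, 1, 4, None],            # hypothetical directed weighted graph
     [None, None, 2, 6],
     [None, None, None, 3],
     [None, None, None, None]]
print(dijkstra(W, 0))  # [0, 1, 3, 6]
```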
Floyd's algorithm: algorithm for finding the shortest path between all vertices
Time complexity O(V^3)
Edges with negative weights are allowed in the graph, but cycles whose total weight is negative are not.
Iterate on a matrix: initialize it with the graph's adjacency matrix, then take each vertex in turn as a candidate intermediate (transit) vertex; if routing through it makes the cost between some pair of vertices smaller, update the corresponding entry
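The triple loop described above, in a minimal sketch (the 3-vertex matrix is a made-up example):

```python
def floyd(dist):
    # dist: adjacency matrix with float('inf') for missing edges, 0 on diagonal
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):              # try each vertex as an intermediate node
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]  # going via k is cheaper
    return d

INF = float('inf')
D = [[0, 1, INF],                  # hypothetical 3-vertex directed graph
     [INF, 0, 2],
     [5, INF, 0]]
print(floyd(D))  # [[0, 1, 3], [7, 0, 2], [5, 6, 0]]
```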
Topological sort
Sort the vertices (activities) of an AOV network (an unweighted directed acyclic graph) to obtain a feasible order in which to carry out the activities
AOV network: vertices represent activities, and edge directions represent precedence between activities
Each time, visit a vertex with no predecessor and delete it together with the edges leaving it, until the AOV network is empty or no predecessor-free vertex can be found (which indicates the graph contains a cycle)
Time complexity: O(V+E) with an adjacency list; O(V^2) with an adjacency matrix
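A queue-based sketch of the procedure, tracking in-degrees instead of physically deleting edges; the small DAG is a made-up example:

```python
from collections import deque

def topo_sort(adj):
    n = len(adj)
    indeg = [0] * n
    for u in range(n):
        for v in adj[u]:
            indeg[v] += 1
    q = deque(u for u in range(n) if indeg[u] == 0)  # vertices with no predecessor
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:          # "delete" u's outgoing edges
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    if len(order) < n:            # leftover vertices mean the graph has a cycle
        return None
    return order

adj = [[1, 2], [3], [3], []]     # hypothetical DAG: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3
print(topo_sort(adj))  # [0, 1, 2, 3]
```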
Find the critical path
The critical path is the longest path from the source to the sink in an AOE network (a weighted directed graph)
AOE network: vertices represent events, edges represent activities, and weights represent the cost of completing activities.
1. The length of the critical path is the shortest time required to complete the project. 2. The project duration can be shortened by shortening activities on the critical path, but not indefinitely: past a certain point the path may no longer be critical. 3. When the critical path is not unique, shortening only one critical path will not shorten the project duration.
Starting from the source, for each vertex take the largest cost of reaching it, and record each vertex's longest-path predecessor; the path composed of these vertices is the critical path
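A sketch of the longest-path computation over a topological order (only the critical path length, not the predecessor bookkeeping); the weighted DAG is a made-up example:

```python
def critical_path_length(adj, topo):
    # ve[v]: earliest occurrence time of event v = longest path from the source
    ve = [0] * len(adj)
    for u in topo:                      # process events in topological order
        for v, w in adj[u]:             # edge u -> v with activity cost w
            ve[v] = max(ve[v], ve[u] + w)
    return max(ve)

# hypothetical AOE network: 0 -(3)-> 1, 0 -(2)-> 2, 1 -(2)-> 3, 2 -(4)-> 3
adj = [[(1, 3), (2, 2)], [(3, 2)], [(3, 4)], []]
print(critical_path_length(adj, [0, 1, 2, 3]))  # 6 (path 0 -> 2 -> 3)
```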
Chapter 4 Trees and Binary Trees
full binary tree
The last layer of leaf nodes is completely full; that is, every node except the leaves has degree 2. If the height is h, there are 2^h - 1 nodes, numbered 1 to 2^h - 1
complete binary tree
The numbering is consistent with that of a full binary tree: it is a full binary tree with some of the rightmost consecutive leaf nodes of the last layer missing
Properties
1. Nodes are numbered 1, 2, 3, …, n from top to bottom and from left to right. The number of the last branch node is ⌊n/2⌋, because when the last leaf node is numbered n, its parent is numbered ⌊n/2⌋ (for example, the parent of nodes 2 and 3 is node 1).
2. If the node numbered k is a leaf node or has only a left subtree (meaning it is the last branch node), then all nodes numbered i > k are leaf nodes.
3. If n is odd, every branch node has both left and right children; if n is even, the last branch node has only a left child.
4. The level (depth) of node i is ⌊log2 i⌋ + 1: level 1 holds node 1, level 2 holds nodes 2–3, level 3 holds 4–7, level 4 holds 8–15, …
5. The height of a complete binary tree with n nodes is ⌈log2(n+1)⌉, or equivalently ⌊log2 n⌋ + 1
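The numbering formulas above can be checked with a few one-liners (the helper names `parent`, `level`, `height` are made up for illustration):

```python
import math

def parent(i):        # parent of node i under 1-based numbering: floor(i / 2)
    return i // 2

def level(i):         # property 4: level of node i is floor(log2 i) + 1
    return int(math.log2(i)) + 1

def height(n):        # property 5: height of a complete binary tree with n nodes
    return math.ceil(math.log2(n + 1))

print(parent(6), level(6), height(10))  # 3 3 4
```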
Binary sorting tree
The keys in the left subtree are smaller than the root, and the keys in the right subtree are larger than the root. The in-order traversal of a binary sort tree is necessarily an increasing sequence.
Search operation: If the keyword is smaller than the root node, go to the left subtree; if it is larger than the root node, go to the right subtree.
Searching for a node: when the tree is shaped like a balanced binary tree (each level filled as much as possible), search efficiency is best, O(log2 N); when the tree degenerates to one node per level, search efficiency is worst, O(N)
Insertion operation: A keyword smaller than the root node is inserted into the left subtree, and a keyword larger than the root node is inserted into the right subtree. The new node inserted must be a leaf node.
Delete operation: 1. If the node is a leaf, delete it directly. 2. If the node has only a left subtree or only a right subtree, let the root of that subtree take its place. 3. If node x has both left and right subtrees, find x's in-order successor y to replace x, and then delete y.
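The search and insert rules above can be sketched as follows (deletion is omitted to keep the example short); the inserted keys are a made-up example:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None

def insert(root, key):
    # smaller keys go left, larger go right; new keys always become leaves
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    while root and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

def inorder(root):
    # in-order traversal of a BST yields the keys in increasing order
    return inorder(root.left) + [root.key] + inorder(root.right) if root else []

root = None
for k in [5, 3, 8, 1, 4]:
    root = insert(root, k)
print(inorder(root), search(root, 4), search(root, 7))  # [1, 3, 4, 5, 8] True False
```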
Balanced Binary Tree AVL
A binary sort tree in which, at every node, the height difference between the left and right subtrees does not exceed 1
Insert operation: 1. Insert the new node x in the proper position according to key order. 2. Find the smallest unbalanced subtree; let its root be y. 3. On the path from x to y, take the three nodes closest to y (including y) and rearrange these three nodes according to key order (rotation)
Binary tree traversal
Use recursion for preorder (root, left, right), inorder (left, root, right) and postorder (left, right, root) traversal; use a queue to implement level-order traversal
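The three recursive orders differ only in where the root is emitted; a minimal sketch with trees as `(value, left, right)` tuples (the tree shape is a made-up example):

```python
def preorder(t):
    return [t[0]] + preorder(t[1]) + preorder(t[2]) if t else []

def inorder(t):
    return inorder(t[1]) + [t[0]] + inorder(t[2]) if t else []

def postorder(t):
    return postorder(t[1]) + postorder(t[2]) + [t[0]] if t else []

# hypothetical tree: A has left child B (whose left child is D) and right child C
t = ('A', ('B', ('D', None, None), None), ('C', None, None))
print(preorder(t), inorder(t), postorder(t))
# ['A', 'B', 'D', 'C'] ['D', 'B', 'A', 'C'] ['D', 'B', 'C', 'A']
```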
Constructing a binary tree from traversal sequences: preorder + inorder, or postorder + inorder. The first element of the preorder (or the last element of the postorder) is the root; in the inorder sequence the root divides the left and right subtrees, and the process repeats recursively.
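A sketch of the preorder + inorder reconstruction, reusing the `(value, left, right)` tuple form (the sequences are a made-up example):

```python
def build(pre, ino):
    # first element of preorder is the root; it splits inorder into left / right
    if not pre:
        return None
    root = pre[0]
    i = ino.index(root)               # position of the root in the inorder list
    return (root,
            build(pre[1:1 + i], ino[:i]),       # left subtree
            build(pre[1 + i:], ino[i + 1:]))    # right subtree

t = build(['A', 'B', 'D', 'C'], ['D', 'B', 'A', 'C'])
print(t)  # ('A', ('B', ('D', None, None), None), ('C', None, None))
```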
Huffman tree
The binary tree with the smallest sum of weighted path lengths of its leaf nodes. Weighted path length of a leaf: its weight x multiplied by the length of the path from the leaf to the root
How to construct: given a set of weighted nodes, each time take the two with the smallest weights as the left and right children of a new root; the weight of that root is the sum of its children's weights.
The Huffman tree is used to solve the cost problem. The ones with large weights (highly visited) are closer to the root node, and the ones with small weights (less used) are farther from the root node, thereby reducing the average cost.
Huffman coding: left 1 / right 0, or left 0 / right 1. Build a Huffman tree over the weighted characters; a character's code is the 0/1 sequence along the path from the root to that character's leaf. No code is a prefix of any other code
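A heap-based sketch of building the tree and reading off the codes (left = '0', right = '1'; the symbol weights are a made-up example, and the counter field only breaks ties to keep the run deterministic):

```python
import heapq

def huffman_codes(freq):
    # merge the two smallest weights each time; the new root's weight is their sum
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, counter, (a, b)))  # internal node as a pair
        counter += 1

    codes = {}
    def walk(node, code):
        if isinstance(node, tuple):      # internal node: recurse into both sides
            walk(node[0], code + '0')
            walk(node[1], code + '1')
        else:                            # leaf: record the accumulated code
            codes[node] = code or '0'
    walk(heap[0][2], '')
    return codes

codes = huffman_codes({'a': 5, 'b': 2, 'c': 1, 'd': 9})
print(codes)  # {'c': '000', 'b': '001', 'a': '01', 'd': '1'}
```

As expected, the heaviest symbol ('d') sits closest to the root and gets the shortest code.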
Conversion of trees, forests, and binary trees
Convert tree to binary tree
The left child stays as the left child; the right sibling becomes the right child (left-child / right-sibling representation)
Convert forest to binary tree
The root nodes of each tree are connected, and the root node of the first tree is the root node of the binary tree. Then convert each tree into a binary tree
Threaded binary tree
Each node has two pointers. Where a left or right child exists, the pointer points to that child; where a child is missing, the null pointer is replaced by a thread: the left pointer points to the in-order predecessor and the right pointer points to the in-order successor. A binary tree with n nodes contains n+1 null pointers (available for threads).
node structure
lchild ltag data rtag rchild
ltag
When ltag = 1, lchild points to the node's predecessor; when ltag = 0, lchild points to the left child
rtag
When rtag = 1, rchild points to the node's successor; when rtag = 0, rchild points to the right child
Threaded binary trees come in preorder, inorder and postorder varieties; the inorder-threaded binary tree is the most commonly examined.
Basic properties of trees
Total degree B = 1*n1 + 2*n2
Total number of nodes n = n0 + n1 + n2
Total number of nodes n = total degree B + 1 (the root node is not pointed to by any edge)
The number of leaf nodes equals the number of degree-2 nodes plus one: n0 = n2 + 1
When m<n/2, the number of nodes with degree 1 cannot be 2m (an even number), but can only be an odd number.
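The node-count identities above can be verified numerically on a hypothetical binary tree with n0 leaves, n1 degree-1 nodes and n2 degree-2 nodes (the counts below are a made-up example satisfying n0 = n2 + 1):

```python
# hypothetical binary tree: 4 leaves, 2 degree-1 nodes, 3 degree-2 nodes
n0, n1, n2 = 4, 2, 3

B = 1 * n1 + 2 * n2          # total degree = number of edges
n = n0 + n1 + n2             # total number of nodes

print(n == B + 1, n0 == n2 + 1)  # True True
```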