MindMap Gallery: Sorting Algorithms Comparison Mind Map
Discover the world of sorting algorithms with our comprehensive comparison guide. This mind map outlines key dimensions, including time and space complexity, stability, adaptiveness, and practical use cases. Explore the intricacies of popular algorithms such as Bubble Sort, Selection Sort, Insertion Sort, Shell Sort, and Merge Sort. Each section delves into core concepts, performance metrics, advantages, and typical scenarios for application. From the simplicity of Bubble Sort for small datasets to the efficiency of Merge Sort for larger arrays, understand how to choose the right algorithm based on your specific needs, including memory constraints and input patterns. Join us in unraveling the complexities of sorting!
Edited at 2026-03-20 03:57:21
Sorting Algorithms Comparison Mind Map
Comparison Dimensions
Time Complexity
Best Case
Average Case
Worst Case
Notes (what input patterns trigger cases)
Space Complexity
Auxiliary Space (extra memory)
In-place or Not
Recursion Stack (if applicable)
Stability
Stable (keeps equal keys’ relative order)
Unstable
Adaptiveness
Adaptive (faster on nearly-sorted data)
Not adaptive
Typical Use Considerations
Small vs large datasets
Nearly-sorted data
Memory constraints
Need for stability
External sorting (disk-based)
Choose by runtime vs memory vs stability needs, and by input patterns like nearly-sorted or duplicate-heavy.
Bubble Sort
Core Idea
Repeatedly swap adjacent out-of-order elements; largest “bubbles” to the end each pass
Time Complexity
Best: O(n)
With early-exit optimization (no swaps in a pass)
Trigger: already sorted / nearly sorted
Average: O(n^2)
Worst: O(n^2)
Trigger: reverse-sorted
Space Complexity
Auxiliary: O(1)
In-place: Yes
Stability
Stable (adjacent swaps preserve equal elements’ order)
Adaptiveness
Yes (with early-exit flag)
Pros
Very simple; easy to implement and teach
Acceptable for very small or almost-sorted arrays (with the early-exit optimization)
Cons
Too slow for medium/large n
Typical Scenarios
Educational purposes, tiny arrays, nearly sorted with early exit
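The pass-and-swap idea above can be sketched in Python (function name illustrative); the `swapped` flag is the early-exit optimization that gives the O(n) best case:

```python
def bubble_sort(a):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        # after pass i, the largest i+1 elements sit in final position at the end
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # no swaps in a pass: already sorted (O(n) best case)
            break
    return a
```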
Selection Sort
Core Idea
Repeatedly select the minimum (or maximum) element from the unsorted part and swap into position
Time Complexity
Best: O(n^2)
Average: O(n^2)
Worst: O(n^2)
Trigger: any (always scans remaining elements)
Space Complexity
Auxiliary: O(1)
In-place: Yes
Stability
Unstable (swap can move equal elements past each other)
Stable variant exists but typically requires extra shifts (may affect performance)
Adaptiveness
No (work does not reduce for nearly sorted input)
Pros
Minimal number of swaps: O(n) swaps
Predictable performance
Cons
Still O(n^2) comparisons; slow for large n
Typical Scenarios
When writes/swaps are costly (e.g., flash memory) and n is small
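A minimal sketch of the select-and-swap loop (function name illustrative); note the single swap per outer iteration, which is why total swaps are O(n):

```python
def selection_sort(a):
    """Sort a list in place; at most one swap per position (O(n) swaps total)."""
    n = len(a)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):  # scan the unsorted suffix for the minimum
            if a[j] < a[m]:
                m = j
        if m != i:
            a[i], a[m] = a[m], a[i]  # long-distance swap: the source of instability
    return a
```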
Insertion Sort
Core Idea
Build a sorted prefix; insert each new element into correct position by shifting larger elements right
Time Complexity
Best: O(n)
Trigger: already sorted
Average: O(n^2)
Worst: O(n^2)
Trigger: reverse-sorted (maximum shifts)
Space Complexity
Auxiliary: O(1)
In-place: Yes
Stability
Stable (when inserting by shifting rather than swapping past equals)
Adaptiveness
Yes (excellent on nearly sorted data; running time ~ O(n + inversions))
Pros
Fast for small n and nearly sorted data
Simple and stable
Online (can sort as data arrives)
Cons
Quadratic time on random/reverse data for large n
Typical Scenarios
Small arrays; nearly sorted inputs
Used as “base case” inside hybrid sorts (e.g., TimSort, IntroSort implementations often switch for small partitions)
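The shift-into-place mechanism above can be sketched as follows (function name illustrative); the strict `>` in the shift loop is what preserves the order of equal keys:

```python
def insertion_sort(a):
    """Sort a list in place by growing a sorted prefix; stable because the
    shift loop uses a strict > comparison, so equal keys keep their order."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:  # shift larger elements one slot right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```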
Shell Sort
Core Idea
Generalized insertion sort using gaps; perform gapped insertion sorts with decreasing gaps
Time Complexity (gap-sequence dependent)
Best: gap-sequence dependent (around O(n log n) for many common sequences)
Average: commonly ~ O(n^1.3) to O(n^1.5) (depends on gaps)
Worst: can be O(n^2) for some sequences
Space Complexity
Auxiliary: O(1)
In-place: Yes
Stability
Unstable (far-apart moves can reorder equals)
Adaptiveness
Partially (often benefits from some presortedness)
Pros
Practical improvement over insertion sort without extra memory
Good for medium-size arrays
Cons
Complexity depends on chosen gap sequence; not stable
Typical Scenarios
In-place sorting where extra memory is constrained and n is moderate
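A sketch of gapped insertion sort using Shell's original halving sequence (an assumption for simplicity; sequences such as Ciura's perform better in practice):

```python
def shell_sort(a):
    """Gapped insertion sort; gap halves each round until it reaches 1,
    at which point this is plain insertion sort on nearly-sorted data."""
    n = len(a)
    gap = n // 2
    while gap > 0:
        for i in range(gap, n):  # insertion sort over elements gap apart
            key = a[i]
            j = i
            while j >= gap and a[j - gap] > key:
                a[j] = a[j - gap]
                j -= gap
            a[j] = key
        gap //= 2
    return a
```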
Merge Sort
Core Idea
Divide array, sort halves recursively, merge two sorted halves
Time Complexity
Best: O(n log n)
Average: O(n log n)
Worst: O(n log n)
Trigger: any (consistent)
Space Complexity
Auxiliary: O(n) for merging arrays (typical array implementation)
In-place: No (typical); in-place variants exist but are complex and slower in practice
Recursion stack: O(log n)
Stability
Stable (if merge favors left element when equal)
Adaptiveness
Not inherently adaptive (still splits/merges); can benefit slightly from runs if implemented accordingly
Pros
Predictable O(n log n)
Stable; good for linked lists (can be O(1) extra space with pointer manipulations)
Good for external sorting (disk), because merging is sequential I/O friendly
Cons
Extra memory O(n) for arrays
Typical Scenarios
Need stability with guaranteed O(n log n)
Sorting linked lists
External sorting / very large datasets on disk
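A minimal top-down sketch for arrays (function name illustrative); it returns a new list, making the O(n) auxiliary space explicit, and the `<=` on ties is what makes it stable:

```python
def merge_sort(a):
    """Return a new sorted list; O(n) auxiliary space for the merge buffers."""
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= takes the left element on ties: stable
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])  # at most one of these two has leftovers
    out.extend(right[j:])
    return out
```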
Quick Sort
Core Idea
Partition around a pivot; recursively sort partitions
Time Complexity
Best: O(n log n)
Trigger: balanced partitions (pivot near median)
Average: O(n log n)
With randomized pivot or good pivot strategy
Worst: O(n^2)
Trigger: highly unbalanced partitions (e.g., already sorted with naive first/last pivot; many equal keys with poor partitioning)
Space Complexity
Auxiliary: O(1) (partitioning is in-place)
Recursion stack: O(log n) average, O(n) worst
In-place: Yes (typical in-place partition)
Stability
Unstable (partition swaps can reorder equals)
Stable variants exist but usually require extra memory
Adaptiveness
Not adaptive by default
Practical Notes
Randomization reduces probability of worst-case
3-way partitioning (Dutch National Flag) improves performance with many duplicates
Tail recursion elimination / iterative stack can reduce stack usage
Pros
Excellent cache locality; often fastest in practice for in-memory arrays
Low constant factors
Cons
Worst-case O(n^2) unless mitigated (randomization, introspection)
Unstable
Typical Scenarios
General-purpose in-memory sorting when stability not required; often combined with safeguards (IntroSort)
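An in-place sketch with a randomized pivot and Lomuto partitioning (one of several common partition schemes; function name illustrative). Randomization is the mitigation noted above: it makes the O(n^2) case unlikely on any fixed input:

```python
import random

def quick_sort(a, lo=0, hi=None):
    """In-place quicksort: randomized pivot, Lomuto partition."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    p = random.randint(lo, hi)
    a[p], a[hi] = a[hi], a[p]  # move the chosen pivot to the end
    pivot, i = a[hi], lo
    for j in range(lo, hi):    # Lomuto partition: a[lo:i] holds elements < pivot
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]  # pivot lands in its final position i
    quick_sort(a, lo, i - 1)
    quick_sort(a, i + 1, hi)
    return a
```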
Heap Sort
Core Idea
Build a heap; repeatedly extract max/min and place into final position
Time Complexity
Best: O(n log n)
Average: O(n log n)
Worst: O(n log n)
Trigger: any (guaranteed)
Space Complexity
Auxiliary: O(1)
In-place: Yes
Stability
Unstable
Adaptiveness
No
Pros
Guaranteed O(n log n) worst-case
In-place with O(1) extra memory
Cons
Usually slower than quicksort in practice due to cache behavior
Unstable
Typical Scenarios
When worst-case guarantees and low memory matter
Used inside IntroSort as fallback to avoid quicksort worst-case
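The build-then-extract procedure above, sketched with an explicit sift-down (function names illustrative); bottom-up heap construction is O(n), and each of the n extractions costs O(log n):

```python
def heap_sort(a):
    """In-place heap sort: build a max-heap bottom-up, then repeatedly
    swap the root (maximum) to the end and restore the heap property."""
    n = len(a)

    def sift_down(i, end):
        while True:
            l, r, largest = 2 * i + 1, 2 * i + 2, i
            if l < end and a[l] > a[largest]:
                largest = l
            if r < end and a[r] > a[largest]:
                largest = r
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    for i in range(n // 2 - 1, -1, -1):  # heapify bottom-up: O(n)
        sift_down(i, n)
    for end in range(n - 1, 0, -1):      # move max to final position, shrink heap
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a
```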
Counting Sort (Non-comparison Sort)
Core Idea
Count occurrences of each key in a limited integer range; compute prefix sums to place elements
Time Complexity
Best/Average/Worst: O(n + k)
k = range of keys (maxKey - minKey + 1)
Space Complexity
Auxiliary: O(n + k) (output + counts) or O(k) plus in-place-like output handling depending on implementation
In-place: No (typical stable version uses output array)
Stability
Stable (when placing elements using prefix sums from right to left)
Adaptiveness
Not relevant in same way; depends on k, not presortedness
Pros
Linear-time when k is not much larger than n
Stable; good building block for radix sort
Cons
Requires integer keys / mappable to small range; memory heavy if k is large
Typical Scenarios
Sorting grades, IDs, small-range integers
Preprocessing step for radix sort
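The count/prefix-sum/place steps above, sketched for integers (function name illustrative); the right-to-left placement pass is exactly what makes the result stable:

```python
def counting_sort(a):
    """Stable counting sort for integers; O(n + k) time, k = max - min + 1."""
    if not a:
        return []
    lo, hi = min(a), max(a)
    counts = [0] * (hi - lo + 1)
    for x in a:
        counts[x - lo] += 1
    for i in range(1, len(counts)):      # prefix sums give final positions
        counts[i] += counts[i - 1]
    out = [0] * len(a)
    for x in reversed(a):                # right-to-left keeps equal keys in order
        counts[x - lo] -= 1
        out[counts[x - lo]] = x
    return out
```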
Radix Sort (Non-comparison Sort)
Core Idea
Sort by digits/bytes from least significant to most (LSD) or vice versa (MSD) using a stable sub-sort (often counting sort)
Time Complexity
Best/Average/Worst: O(d * (n + b))
d = number of digits/passes, b = base/radix (e.g., 10, 256)
Space Complexity
Auxiliary: often O(n + b) (for stable counting per pass)
In-place: Typically no (common implementations use extra buffers)
Stability
Stable (if each digit-pass sort is stable)
Adaptiveness
Not primarily; performance depends on d and base
Pros
Can be near-linear for fixed-width integers/strings
Predictable performance independent of comparisons
Cons
Extra memory; requires fixed-length or well-defined digit representation
Typical Scenarios
Sorting 32-bit/64-bit integers
Sorting strings with bounded length (with careful choice of MSD/LSD)
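An LSD sketch for non-negative integers (function name illustrative), using one stable counting-sort pass per digit; base 10 is an assumption for readability, base 256 is common for machine integers:

```python
def radix_sort(a, base=10):
    """LSD radix sort for non-negative integers: one stable counting-sort
    pass per digit, least significant digit first."""
    if not a:
        return []
    out = list(a)
    m, exp = max(a), 1
    while m // exp > 0:
        counts = [0] * base
        for x in out:
            counts[(x // exp) % base] += 1
        for i in range(1, base):         # prefix sums for this digit
            counts[i] += counts[i - 1]
        buf = [0] * len(out)
        for x in reversed(out):          # stability of each pass is essential
            d = (x // exp) % base
            counts[d] -= 1
            buf[counts[d]] = x
        out = buf
        exp *= base
    return out
```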
Bucket Sort (Distribution Sort)
Core Idea
Distribute elements into buckets based on value ranges; sort each bucket (often insertion sort); concatenate
Time Complexity
Best: O(n) (uniform distribution, balanced buckets, simple per-bucket sorting)
Average: O(n + k + sum sort(bucket_i))
Commonly near O(n) under uniform distribution assumptions
Worst: O(n^2)
Trigger: all elements fall into one bucket and inner sort is quadratic
Space Complexity
Auxiliary: O(n + k) (buckets)
In-place: No (typical)
Stability
Depends on bucket insertion and inner sort; can be stable
Adaptiveness
Depends on inner sort and distribution
Pros
Very fast for uniformly distributed floating-point numbers in [0,1) or known ranges
Cons
Sensitive to distribution; extra memory
Typical Scenarios
Data known to be uniformly distributed over a range
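A sketch for the classic case of values uniform in [0, 1) (the mapping `int(x * n)` is specific to that range and would need adjusting otherwise; function name illustrative):

```python
def bucket_sort(a):
    """Bucket sort for values assumed uniform in [0, 1); values outside
    that range would need a different bucket-index mapping."""
    n = len(a)
    if n == 0:
        return []
    buckets = [[] for _ in range(n)]
    for x in a:
        buckets[int(x * n)].append(x)    # uniform data spreads evenly across buckets
    out = []
    for b in buckets:
        out.extend(sorted(b))            # inner sort; insertion sort is also typical
    return out
```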
TimSort (Hybrid, Practical)
Core Idea
Identify naturally occurring runs; use insertion sort on small runs; merge runs with invariants
Time Complexity
Best: O(n)
Trigger: already sorted or composed of long runs
Average: O(n log n)
Worst: O(n log n)
Space Complexity
Auxiliary: O(n) worst-case (often less in practice depending on run structure)
Stability
Stable
Adaptiveness
Highly adaptive (designed for real-world partially ordered data)
Pros
Excellent real-world performance; stable; exploits existing order
Cons
More complex; extra memory for merges
Typical Scenarios
Default sort in Python and Java (object sorting); when stability + real-world performance matter
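A heavily simplified sketch of the run-then-merge structure (fixed-size runs instead of detected natural runs; no galloping, no merge-stack invariants, both of which real TimSort relies on):

```python
def simplified_timsort(a, min_run=32):
    """Fixed-run simplification of the TimSort idea: insertion-sort short
    runs, then merge adjacent runs pairwise, bottom-up and stably."""
    n = len(a)
    for start in range(0, n, min_run):   # insertion sort each run in place
        end = min(start + min_run, n)
        for i in range(start + 1, end):
            key, j = a[i], i - 1
            while j >= start and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
    width = min_run
    while width < n:                     # merge runs of doubling width
        for lo in range(0, n, 2 * width):
            mid, hi = min(lo + width, n), min(lo + 2 * width, n)
            merged, i, j = [], lo, mid
            while i < mid and j < hi:
                if a[i] <= a[j]:         # <= on ties keeps the merge stable
                    merged.append(a[i]); i += 1
                else:
                    merged.append(a[j]); j += 1
            merged.extend(a[i:mid])
            merged.extend(a[j:hi])
            a[lo:hi] = merged
        width *= 2
    return a
```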
IntroSort (Hybrid)
Core Idea
Start with quicksort; if recursion depth exceeds limit, switch to heap sort; use insertion sort for small partitions
Time Complexity
Best/Average: O(n log n)
Worst: O(n log n) (due to heap sort fallback)
Space Complexity
Auxiliary: O(1) + recursion stack O(log n)
In-place: Yes (typical)
Stability
Unstable (inherits from quicksort/heap sort)
Adaptiveness
Not specifically; aims for robust worst-case
Pros
Quicksort speed with worst-case guarantee
Widely used in C++ std::sort
Cons
Unstable
Typical Scenarios
General-purpose in-place sorting with worst-case guarantees
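The three-phase design above can be sketched as follows (function names illustrative; the heap-sort fallback here reuses Python's `heapq` on the offending slice for brevity, where a real implementation heap-sorts the subarray in place):

```python
import heapq
import math
import random

def introsort(a):
    """Sketch of introspective sort: randomized quicksort until a depth
    limit of ~2*log2(n), then a heap-based sort on the offending slice;
    insertion sort finishes small slices."""
    SMALL = 16

    def insertion(lo, hi):
        for i in range(lo + 1, hi + 1):
            key, j = a[i], i - 1
            while j >= lo and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key

    def heap_fallback(lo, hi):           # guarantees O(m log m) on this slice
        part = a[lo:hi + 1]
        heapq.heapify(part)
        a[lo:hi + 1] = [heapq.heappop(part) for _ in range(len(part))]

    def sort(lo, hi, depth):
        while hi - lo > SMALL:
            if depth == 0:               # recursion too deep: abandon quicksort
                heap_fallback(lo, hi)
                return
            p = random.randint(lo, hi)   # randomized pivot
            a[p], a[hi] = a[hi], a[p]
            pivot, i = a[hi], lo
            for j in range(lo, hi):      # Lomuto partition
                if a[j] < pivot:
                    a[i], a[j] = a[j], a[i]
                    i += 1
            a[i], a[hi] = a[hi], a[i]
            sort(lo, i - 1, depth - 1)
            lo, depth = i + 1, depth - 1 # iterate on the right half (tail-call style)
        insertion(lo, hi)

    if a:
        sort(0, len(a) - 1, 2 * max(1, int(math.log2(len(a)))))
    return a
```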
Quick Reference Summary (Big-O)