Artificial Intelligence Basics
Introducing several major developments and basic principles of artificial intelligence.
Edited at 2020-08-17 08:52:49
Artificial Intelligence Basics
1. The beginning of a new era
machine learning
definition
Simulating human cognitive abilities through machines
how to learn
from data
supervised learning
unsupervised learning
from behavior
reinforcement learning
AI application
Medical/Safety/Manufacturing/Autonomous Driving
AI history
The first wave (1956-1974)
1964–1966: ELIZA, the first natural language conversation program
The second wave (1980-1987)
expert system
Solve problems in specific areas
The third wave (2011-)
Model and algorithm development
statistical learning
Support Vector Machines
Probabilistic graphical model
The birth of AI
1950: The Turing Test
1951: Minsky builds the first neural network machine, SNARC
1955: Search-based reasoning
2. Observe differences and identify flowers
2.1 Classification
Images, videos, text, sounds
2.2 Extracting features
Feature vector
vector
Feature points and feature space
Feature point: a sample represented as a point (its feature vector)
Feature space: the space containing all feature points
2.3 Classifier
function from feature vector to predicted class
train classifier
train
Algorithm: Perceptron
Optimization: Adjust parameters to minimize the loss function
Loss function: a mathematical measure of the error in a classifier’s output
Algorithm: Support Vector Machine
Classifier with the maximum classification margin in feature space
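The train-a-classifier loop above can be sketched with a minimal perceptron; the 2-D data and learning rate below are invented for illustration.

```python
import numpy as np

# Minimal perceptron sketch (illustrative): whenever a sample is
# misclassified, nudge the weights toward classifying it correctly.
def train_perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):          # labels yi are in {-1, +1}
            if yi * (xi @ w + b) <= 0:    # misclassified -> update
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy linearly separable data: class +1 sits far from the origin
X = np.array([[2.0, 2.0], [1.5, 2.5], [0.1, 0.2], [0.3, 0.1]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
preds = np.sign(X @ w + b)
```

On separable data like this the perceptron is guaranteed to converge; the loss-minimization view in the outline generalizes this idea to other classifiers.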
2.4 Applying a classifier
Application
Judging quality
Testing
Actual use
2.5 Multi-class classification
Normalized exponential function (softmax)
Maps one vector to another vector
Each element lies between 0 and 1
The elements sum to 1
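The two properties above can be checked directly with a small softmax sketch (scores are made up):

```python
import numpy as np

# Softmax (normalized exponential): maps a score vector to a
# probability vector whose entries lie in (0, 1) and sum to 1.
def softmax(z):
    e = np.exp(z - np.max(z))   # subtract max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
```

The largest score gets the largest probability, which is why softmax is the usual last step of a multi-class classifier.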
2.6 Practical applications of classification
Face prediction/cancer prediction
3. Image recognition
3.1 Image classification based on manual features
How a computer sees an image
Matrix
Pixel: one small grid cell
Resolution: number of grid rows × number of columns
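As a concrete illustration of the image-as-matrix idea (the pixel values below are invented):

```python
import numpy as np

# A grayscale image is just a matrix of pixel intensities;
# the resolution is rows x columns.
img = np.array([[0, 255, 128],
                [64, 32, 200]], dtype=np.uint8)

rows, cols = img.shape
resolution = rows * cols   # 2 rows x 3 columns = 6 pixels
```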
Image features
Humans: Quick Discrimination
Machine: convolution operation
Edge feature extraction
Example: Histogram of Oriented Gradients (HOG)
1. Extract features
2. Compute statistics over the features
3. Concatenate the histograms
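The convolution-based edge extraction mentioned above can be sketched as follows; the kernel and image are toy values, and real HOG goes on to build gradient histograms over cells.

```python
import numpy as np

# Edge extraction by convolution (sketch): slide a small kernel over
# the image; a horizontal-gradient kernel responds where intensity
# changes left-to-right, i.e. at a vertical edge.
def convolve2d(img, kernel):
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

# Toy image: dark left half, bright right half -> one vertical edge
img = np.zeros((4, 6))
img[:, 3:] = 1.0
kernel = np.array([[-1.0, 1.0]])   # simple horizontal-gradient filter
edges = convolve2d(img, kernel)
```

The response is zero everywhere except at the column where the intensity jumps, which is exactly the edge.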
3.2 Image classification based on neural network
From feature design to feature learning
Deep neural network structure
multiple sequentially connected layers
convolution layer
1. First convolutional layer: takes the image as input
2. Later convolutional layers: take the previous layer's feature maps as input
Convert feature map to feature vector
Fully connected layer
Transform the feature vector
Normalized exponential layer
The last layer of the classification network
nonlinear activation layer
Keeps stacked convolutional and fully connected layers from collapsing into a single linear transform
Pooling layer
Reduce the number of calculations and parameters
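The pooling idea can be shown concretely with a 2×2 max-pool sketch (the feature-map values are made up):

```python
import numpy as np

# 2x2 max pooling: keep only the strongest response in each window,
# shrinking the feature map and so reducing later computation.
def max_pool2x2(fmap):
    h, w = fmap.shape
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[1., 3., 2., 0.],
                 [4., 2., 1., 1.],
                 [0., 1., 5., 6.],
                 [2., 2., 7., 3.]])
pooled = max_pool2x2(fmap)   # 4x4 -> 2x2
```

A 4×4 map becomes 2×2: a quarter of the values, and every later layer works on the smaller map.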
Artificial neural network and biological neural network
Algorithm: Backpropagation
1. Chain propagation
2. Stochastic Gradient Descent
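Stochastic gradient descent can be illustrated on a one-parameter model (toy data, not the network above): each step follows the gradient of the loss on one randomly chosen sample.

```python
import random

# SGD sketch for the model y = w * x with squared loss.
# d/dw (w*x - y)^2 = 2 * (w*x - y) * x
random.seed(0)
data = [(x, 2.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]  # true w = 2

w, lr = 0.0, 0.05
for _ in range(200):
    x, y = random.choice(data)        # one random sample per step
    grad = 2 * (w * x - y) * x
    w -= lr * grad                    # step against the gradient
```

The parameter converges to the true value 2; in a deep network, backpropagation supplies this gradient for every weight at once.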
3.3 Development and Challenges of Deep Neural Networks
The "deep" in deep learning
2012
AlexNet: 5 convolutional layers
2016
PolyNet: hundreds of convolutional layers
The "difficulty" of depth
More is not always better
Overfitting
Performs well on the training set
Fits the training data too closely; performs poorly on new data
Underfitting
Limited model capacity
Poor performance even on the training set
Vanishing gradients
Batch normalization
Cross-layer (skip) connections
3.4 Practical application of image classification
3.4.1 Face recognition
4. Sound distinction
4.1 Listening
Human voice: fundamental frequency roughly 85–1100 Hz
sound digitization
Sampling/Quantization/Encoding
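The three digitization steps can be sketched on a pure tone (the rate, tone, and 8-bit depth are illustrative choices):

```python
import math

# Sound digitization sketch: sample a 440 Hz sine at 8 kHz, then
# quantize each sample to an 8-bit integer (0..255). Encoding would
# then pack these integers into a file format.
sample_rate = 8000          # samples per second
freq = 440.0                # the A4 tone
samples = [math.sin(2 * math.pi * freq * n / sample_rate)
           for n in range(sample_rate // 100)]   # 10 ms of audio

# Quantization: map the range [-1, 1] onto 256 integer levels
quantized = [round((s + 1) / 2 * 255) for s in samples]
```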
sound
1. Voice
2.Music
rhythmic
understand sounds
Loudness
Pitch
Spectrum
Timbre
Different perceived sound quality
Formant
A region of concentrated energy in the spectrum
4.2 Music style classification
How a computer "hears" musical style
Feature extraction
feature
Classification
Style classification
Mel-frequency cepstral coefficients (MFCC)
Low-dimensional
Describe the energy at different frequencies
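MFCCs measure energy on the mel scale, which spaces frequencies the way human hearing does; the standard Hz-to-mel conversion can be sketched directly:

```python
import math

# Standard mel-scale conversion used in MFCC pipelines.
def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

# Equal steps in Hz are NOT equal steps in mel: the scale has fine
# resolution at low frequencies and coarse resolution at high ones.
low_gap = hz_to_mel(200) - hz_to_mel(100)      # 100 Hz gap, low range
high_gap = hz_to_mel(4100) - hz_to_mel(4000)   # 100 Hz gap, high range
```

This compression of high frequencies is one reason MFCC features stay low-dimensional while still capturing perceptually relevant energy.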
9. Go master
9.1 AlphaGo chess network
Supervised Learning Policy Network
KGS platform obtains data
reinforcement learning
Feedback is evaluative
Interaction between agent and environment
Find the best strategy for maximum returns
Reinforcement Learning Policy Network
9.2 AlphaGo’s overall view
Valuation network
situation value judgment
Fast rollout network
9.3 AlphaGo Zero
Policy iteration
Policy evaluation
Policy improvement
Alternating repeatedly
self-play
Integrated into Monte Carlo tree search
Via random rollouts
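The evaluate–improve alternation can be shown on a toy problem, nothing like Go: a 3-state chain whose dynamics and reward are invented here, with state 2 terminal and worth reward 1.

```python
# Policy iteration sketch on a toy 3-state chain: the agent moves
# left or right; entering state 2 yields reward 1 and ends the game.
states, actions, gamma = [0, 1], ["left", "right"], 0.9

def step(s, a):                           # deterministic toy dynamics
    s2 = min(s + 1, 2) if a == "right" else max(s - 1, 0)
    return s2, (1.0 if s2 == 2 else 0.0)

policy = {0: "left", 1: "left"}           # deliberately bad start
for _ in range(10):                       # alternate evaluate / improve
    # Policy evaluation: iterate the Bellman equation for this policy
    V = {0: 0.0, 1: 0.0, 2: 0.0}          # state 2 is terminal
    for _ in range(50):
        for s in states:
            s2, r = step(s, policy[s])
            V[s] = r + gamma * V[s2]
    # Policy improvement: act greedily with respect to V
    improved = {s: max(actions,
                       key=lambda a: step(s, a)[1] + gamma * V[step(s, a)[0]])
                for s in states}
    if improved == policy:                # stable -> optimal policy
        break
    policy = improved
```

The loop settles on "always move right", the optimal policy; AlphaGo Zero runs the same alternation at scale, with self-play games as the evaluation and the search-guided network update as the improvement.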
8. Create pictures
8.1 Data space and data distribution
data space
The space where the data is located
Data distribution
The distribution of data in space
Generative adversarial network (GAN)
Generator network
Generates data from random inputs
Discriminator network
Distinguishes real data from generated data
Latent space
A simple distribution mapped onto the complex data distribution
8.2 Generator network
Maps random latent points to images resembling the dataset
8.3 Discriminator network
Trained on real and generated images to sharpen its judgment
8.4 Generative adversarial networks
The generator and discriminator train against each other
7. Understand the text
7.1 Task characteristics
latent semantic analysis
Corpus: a large collection of text data
Document: a single text
Topic: the main content of a document
7.2 Text features
bag of words model
Chinese word segmentation
Stop words and low frequency words
word frequency
The number of times a word appears in a document, divided by the total number of words in it
Inverse document frequency
The negative logarithm of the document frequency (the fraction of documents containing the word)
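The two statistics can be sketched on a tiny corpus (the documents are invented; conventions for the logarithm base and smoothing vary, natural log is used here):

```python
import math

# TF-IDF sketch: tf = count / document length;
# idf = -log(fraction of documents containing the word).
docs = [["cat", "sat", "mat"],
        ["cat", "cat", "ran"],
        ["dog", "ran", "far"]]

def tf(word, doc):
    return doc.count(word) / len(doc)

def idf(word):
    df = sum(word in d for d in docs) / len(docs)  # document frequency
    return -math.log(df)

# "cat" appears in most documents (low idf);
# "dog" appears in only one (high idf), so it is more distinctive.
```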
7.3 Discover potential themes in the text
topic model
Core: document–word frequency matrix = topic proportions × topic–word frequencies
Matrix multiplication
latent semantic analysis
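The factorization can be sketched with a truncated SVD, which is how latent semantic analysis is typically computed; the document–word counts below are made up, with two obvious "topics".

```python
import numpy as np

# LSA sketch: factor a document-word count matrix with a truncated
# SVD. The factors play the roles of "topic proportions" (rows) and
# "topic word frequencies" (columns) from the topic-model view above.
# Rows: 4 documents, columns: 4 words; docs 0-1 and 2-3 share topics.
X = np.array([[2., 1., 0., 0.],
              [1., 2., 0., 0.],
              [0., 0., 1., 1.],
              [0., 0., 1., 1.]])

U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                        # keep 2 latent topics
X_approx = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]
```

The rank-2 reconstruction keeps the block structure: each document is explained almost entirely by one latent topic, even though individual counts are smoothed.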
7.4 Text-based search and recommendation
question
synonyms
Polysemy
6. Clustering
6.1 First contact with things
supervised learning
Annotation information required for training data
unsupervised learning
No annotation information
6.2 K-means clustering
clustering
Data that group together in feature space are divided into different clusters
algorithm
Randomly select K samples as the initial cluster centers
Assign each sample to the category of the nearest cluster center, giving a new partition
Recompute the cluster center of each group; repeat the last two steps until assignments stop changing
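The steps above can be sketched directly; the two Gaussian blobs are synthetic, and for reproducibility the initial centers are picked deterministically rather than at random.

```python
import numpy as np

# K-means sketch: assign points to the nearest center, recompute
# the centers as cluster means, repeat.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)),    # blob near (0, 0)
               rng.normal(5, 0.3, (20, 2))])   # blob near (5, 5)

K = 2
centers = X[[0, 20]].copy()    # one initial center from each blob
for _ in range(10):
    # distances from every point to every center -> nearest center
    dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    # move each center to the mean of its assigned points
    centers = np.array([X[labels == k].mean(axis=0) for k in range(K)])
```

With well-separated blobs the assignments stabilize after the first pass; real k-means adds a convergence check and random restarts.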
6.3 Practical application-crowd classification
Face Detection
face straightening
Feature extraction
face clustering
6.4 Hierarchical clustering and biological clustering
Each sample starts as its own cluster
Repeatedly merge the two most similar clusters
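The merge loop can be sketched on 1-D points, using distance between cluster means as the similarity (one of several possible linkage choices; the data are invented):

```python
# Agglomerative (hierarchical) clustering sketch: start with every
# sample as its own cluster, repeatedly merge the two clusters whose
# means are closest, until K clusters remain.
def cluster_mean(c):
    return sum(c) / len(c)

def hierarchical(points, K):
    clusters = [[p] for p in points]      # each sample starts alone
    while len(clusters) > K:
        best = None                       # (distance, i, j) of best pair
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = abs(cluster_mean(clusters[i]) - cluster_mean(clusters[j]))
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]   # merge the pair
        del clusters[j]
    return clusters

points = [1.0, 1.2, 1.1, 8.0, 8.3, 25.0]
result = hierarchical(points, 3)
```

Unlike k-means, no K needs fixing up front: recording the merge order gives the full tree (dendrogram), and cutting it at any level yields a clustering.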
5. Identify videos
5.1 Image to video
persistence of vision
video
Hundreds of photos taken continuously
5.2 Video behavior recognition
difficulty
Large differences in behaviors in the same category
Behavior is not well defined
Environmental background varies greatly
behavioral identification features
sports
Optical flow
Computation: correspondence of the same points between two adjacent frames
Definition
The two-dimensional (2D) instantaneous velocity field formed by all pixels in the image
hypothesis
The motion between two adjacent frames is small
The brightness of two adjacent frames is essentially unchanged
5.3 Video behavior based on deep learning
Single frame recognition
widely varying behavior
Watching TV / writing messages
video
Recognition from pixel content
Horizontal
Vertical
Long-term: Temporal Segment Network (TSN)
Short-term: two-stream convolutional neural network
Static (appearance)
Spatial-stream convolutional network
Dynamic (motion)
Temporal-stream convolutional network