Neural Networks and Deep Learning: Convolutional Neural Networks
This mind map summarizes the main content of convolutional neural networks: basic concepts, the convolution operation, basic structures, parameter learning methods, and several example CNN architectures.
Edited at 2023-02-26 23:13:29
Neural Networks and Deep Learning: Convolutional Neural Networks
Introduction to CNN
Typical CNN structure preview
basic properties
sparse connection
Compared with a fully connected (FC) network, a CNN uses local connections: the output of a neuron in one layer is connected only to the inputs of a few neighboring neurons in the next layer, and each neuron in the next layer receives input only from a few neighboring neurons in the previous layer.
Parameter sharing
Receptive field (field of view)
The input of a neuron in the current layer comes from the outputs of several nearby neurons in the previous layer. This input region is called the receptive field of the neuron.
Convolution kernel
The signals within the receptive field are weighted to form the activation of the current neuron. Adjacent neurons have different but equally sized receptive fields (ignoring boundary effects).
The activation of each neuron is generated by weighting and summing the signals in its receptive field with the same set of weight coefficients; that is, every neuron uses the same weight vector. This shared set of weights is called a convolution kernel.
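The sparse connection and parameter sharing described above can be sketched in a few lines. Note that deep-learning frameworks typically implement "convolution" as cross-correlation (no kernel flip); the sketch below follows that convention.

```python
# A minimal sketch of parameter sharing: every output neuron applies the
# SAME kernel (shared weights) to its own K-wide receptive field.

def conv1d_valid(x, h):
    """Valid 1D cross-correlation: one shared weight vector h for all outputs."""
    K = len(h)
    return [sum(x[n + k] * h[k] for k in range(K))
            for n in range(len(x) - K + 1)]

x = [1, 2, 3, 4, 5]
h = [0.5, 0.5]             # the shared convolution kernel
print(conv1d_valid(x, h))  # -> [1.5, 2.5, 3.5, 4.5]
```

Each output depends on only two neighboring inputs (sparse connection), and the whole layer has just `len(h)` parameters regardless of the input length (parameter sharing).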
Approximate translation invariance
A translation of the input signal produces an equal translation of the output signal.
This is a property of the convolution operation itself, and a basic property of linear time-invariant systems.
With properly designed pooling units and a suitable choice of activation function, a CNN can approximately maintain translation invariance.
Example
Identifying a dog in an image: it is still a dog after translation.
Convolution operation and its physical meaning
Convolution operation
Input signal x(t)
System unit impulse response h(t) (CNN convolution kernel)
Output signal y(t)
Convolution properties
Commutativity
translation invariance
full convolution length
N+K-1
Effective convolution length
N-K+1
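For an input of length N and a kernel of length K, the two lengths above follow directly:

```python
# Output lengths of 1D convolution for input length N, kernel length K.
def full_length(N, K):
    return N + K - 1   # full convolution: every partial overlap counts

def valid_length(N, K):
    return N - K + 1   # effective (valid) convolution: full overlaps only

print(full_length(8, 3))   # -> 10
print(valid_length(8, 3))  # -> 6
```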
physical meaning
filter
low pass filter
Extract the slowly changing low-frequency components of the signal
h1[n]={1/2,1/2}
high pass filter
Extract rapidly changing high-frequency components of signals
h2[n]={1/2,-1/2}
bandpass filter
Extract components that vary at an intermediate rate
Convolution filter function
For a complex signal containing various frequency components, different filters implemented by different convolution kernels extract components of the signal that vary on different scales.
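The two example kernels above act as complementary filters on the same signal: h1 averages neighboring samples (low-pass), h2 differences them (high-pass). A small demonstration (cross-correlation convention, as in CNN frameworks):

```python
# Apply the low-pass kernel h1 = {1/2, 1/2} and the high-pass kernel
# h2 = {1/2, -1/2} to a signal that mixes a slow ramp with a fast
# alternating component.

def conv1d_valid(x, h):
    K = len(h)
    return [sum(x[n + k] * h[k] for k in range(K))
            for n in range(len(x) - K + 1)]

slow = [0, 1, 2, 3, 4, 5]          # slowly varying trend
fast = [1, -1, 1, -1, 1, -1]       # rapidly alternating component
x = [s + f for s, f in zip(slow, fast)]

low  = conv1d_valid(x, [0.5, 0.5])    # keeps the slow trend
high = conv1d_valid(x, [0.5, -0.5])   # keeps the fast alternation
print(low)   # -> [0.5, 1.5, 2.5, 3.5, 4.5]  (smooth ramp)
print(high)  # -> [0.5, -1.5, 0.5, -1.5, 0.5]  (oscillating part)
```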
adaptive filtering
The error between the network's output-layer output and the desired response is used to train the output layer.
The BP algorithm back-propagates the output-layer error to each earlier layer, and the convolution kernels of each layer are trained in turn using the back-propagated error.
The structure of basic CNN
One-dimensional convolution
Neuron activation value
neuron output
Rectified linear activation function ReLU
z=max{0,a}
convolution channel
Convolution of the input with the kernel, followed by the activation function
Compare with fully connected network
Far fewer parameters, thanks to weight sharing
Different kernels divide and conquer input features of different kinds
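Putting the two stages together, one convolution channel can be sketched as a valid convolution followed by the ReLU nonlinearity z = max{0, a}:

```python
# One convolution channel in sketch form: correlate the input with the
# kernel to get activations a, then apply ReLU, z = max(0, a).

def relu(a):
    return max(0.0, a)

def conv_channel(x, h):
    K = len(h)
    acts = [sum(x[n + k] * h[k] for k in range(K))
            for n in range(len(x) - K + 1)]
    return [relu(a) for a in acts]

print(conv_channel([1, -2, 3, -4, 5], [0.5, -0.5]))  # -> [1.5, 0.0, 3.5, 0.0]
```

Negative activations are clipped to zero by ReLU, which is what the "detection stage" of the convolutional layer does.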
2D convolution
formula
Image understanding
The two-dimensional convolution is equivalent to sliding the kernel h_ij over the data array x_ij: to compute a_mn, align h_00 with x_mn, then form the products x_{m+i,n+j} h_{ij} and sum them.
The effective (valid) convolution output size is (D1-K1+1)×(D2-K2+1)
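The sliding-window description and the output-size formula can be checked together with a small sketch:

```python
# Valid 2D convolution (cross-correlation convention): slide a K1×K2
# kernel over a D1×D2 input; output size is (D1-K1+1)×(D2-K2+1).

def conv2d_valid(x, h):
    D1, D2 = len(x), len(x[0])
    K1, K2 = len(h), len(h[0])
    return [[sum(x[m + i][n + j] * h[i][j]
                 for i in range(K1) for j in range(K2))
             for n in range(D2 - K2 + 1)]
            for m in range(D1 - K1 + 1)]

x = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
h = [[1, 0],
     [0, 1]]
out = conv2d_valid(x, h)
print(len(out), len(out[0]))  # -> 2 2  (i.e. (3-2+1)×(3-2+1))
print(out)                    # -> [[6, 8], [12, 14]]
```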
Detection stage
Compute the activation function to obtain the neuron outputs
multi-channel convolution
Convolution channel/convolution plane
The matrix generated by each convolution kernel h through convolution operation
Example
Input
32×32 image, 3 channels representing RGB three primary colors
Convolution kernel
Six 5×5 convolution kernels, two for each of the three input channels
output
Generates 6 28×28 convolution channels
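The output size in this example follows from the valid-convolution formula; a quick shape check:

```python
# Shape check for the example above: a 32×32 image, 5×5 kernels,
# valid convolution -> six 28×28 convolution channels.

def valid_size(D, K):
    return D - K + 1

D, K, n_kernels = 32, 5, 6
out = valid_size(D, K)
print(n_kernels, out, out)  # -> 6 28 28
```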
Pooling
max pooling
Takes the maximum value within a small window as the pooling result
average pooling
Average within window as pooling result
decimation pooling
Takes the value at a fixed position within the window as the pooling result (subsampling)
Window properties
size
M1×M2
pooling stride
S
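With a window of size M and stride S, the three pooling variants above differ only in how the window is reduced. A 1D sketch:

```python
# 1D pooling with window size M and stride S; reduce_fn picks how the
# window is summarized (max, average, or a fixed sample).

def pool1d(x, M, S, reduce_fn):
    return [reduce_fn(x[i:i + M]) for i in range(0, len(x) - M + 1, S)]

x = [1, 3, 2, 8, 5, 4]
print(pool1d(x, 2, 2, max))                        # max pooling     -> [3, 8, 5]
print(pool1d(x, 2, 2, lambda w: sum(w) / len(w)))  # average pooling -> [2.0, 5.0, 4.5]
print(pool1d(x, 2, 2, lambda w: w[0]))             # decimation      -> [1, 2, 5]
```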
Equal-length zero-padding convolution
K is an odd number
Add (K-1)/2 zeros to both ends of the input
K is an even number
Add K/2 zeros to one side and (K/2)-1 zeros to the other side.
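The two padding rules above both make the valid-convolution output the same length as the input. A quick check:

```python
# Zero-padding so that a valid convolution preserves the input length:
# pad (K-1)/2 zeros per side for odd K; K/2 and K/2-1 zeros for even K.

def same_pad(x, K):
    if K % 2 == 1:
        left = right = (K - 1) // 2
    else:
        left, right = K // 2, K // 2 - 1
    return [0] * left + x + [0] * right

x = [1, 2, 3, 4]
for K in (3, 4):
    padded = same_pad(x, K)
    out_len = len(padded) - K + 1   # valid-convolution output length
    print(K, out_len)               # -> 3 4, then 4 4 (length preserved)
```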
Constituting CNN
Composition of convolutional layers
Convolution operation stage
Detection stage (ReLU function)
Pooling (optional)
Typical CNN network structure
Some extended structures of convolution
tensor convolution
3D data volume
tensor convolution kernel
convolution plane
Channel-dimensional convolution
Extract different features of the channel dimension
1×1 convolution kernel
S-stride convolution
CNN parameter learning
CNN’s BP algorithm idea
forward propagation
Convolution layer convolution calculation
FC layer fully connected calculation activation output
Pooling layer performs pooling
Backpropagation
The FC layer is calculated according to the standard BP backpropagation algorithm.
Convolutional layer and pooling layer backpropagation algorithm
Backpropagation formula for convolutional layers
Backpropagation formula for pooling layer
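The map lists the convolutional-layer backpropagation formula without stating it. As a hedged sketch: for a valid 1D correlation y[n] = Σ_k x[n+k] h[k], the gradient of a loss L with respect to the kernel is ∂L/∂h[k] = Σ_n (∂L/∂y[n]) x[n+k], i.e. a correlation of the input with the upstream gradient. This can be verified numerically:

```python
# Verify dL/dh[k] = sum_n (dL/dy[n]) * x[n+k] for the loss L = sum(y),
# where y is the valid 1D correlation of x with kernel h.

def conv1d_valid(x, h):
    K = len(h)
    return [sum(x[n + k] * h[k] for k in range(K))
            for n in range(len(x) - K + 1)]

x = [1.0, 2.0, -1.0, 3.0]
h = [0.5, -0.25]
N_out = len(x) - len(h) + 1

# Analytic gradient, with upstream gradient dL/dy[n] = 1 for all n
grad = [sum(x[n + k] for n in range(N_out)) for k in range(len(h))]

# Numerical gradient by central finite differences
eps = 1e-6
num = []
for k in range(len(h)):
    hp = list(h); hp[k] += eps
    hm = list(h); hm[k] -= eps
    num.append((sum(conv1d_valid(x, hp)) - sum(conv1d_valid(x, hm))) / (2 * eps))

print(grad)                        # -> [2.0, 4.0]
print([round(g, 4) for g in num])  # matches the analytic gradient
```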
2D expansion
CNN example introduction
LeNet-5 network
AlexNet network and VGGNet network
activation function
Networks using the ReLU activation function train about 6 times faster than those using the tanh activation function
AlexNet structure
VGGNet structure
Uses deeper networks, smaller convolution kernels, and several convolution layers per pooling layer
Ideas for improving training effects
Obtain better training results by increasing the depth of CNN
Directly increasing the number of layers brings negative effects
Easy to overfit
Vanishing gradients
Exploding gradients
GoogLeNet network
Macro building block: the Inception module
4 parallel branches
The branch outputs are merged to form the module output
Each branch contains a 1×1 convolution
The purpose is to divide and conquer, reducing the parameter count and computational complexity
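The parameter savings from the 1×1 convolution can be seen with simple arithmetic. The channel counts below are illustrative, not taken from GoogLeNet itself:

```python
# Hedged sketch: a 1×1 convolution that shrinks channels before a 3×3
# convolution reduces the parameter count versus a direct 3×3 convolution.

def conv_params(c_in, c_out, k):
    return c_in * c_out * k * k   # weight count, ignoring biases

c_in, c_mid, c_out = 256, 64, 256          # illustrative channel counts
direct  = conv_params(c_in, c_out, 3)                          # 3×3 straight through
reduced = conv_params(c_in, c_mid, 1) + conv_params(c_mid, c_out, 3)
print(direct, reduced)  # -> 589824 163840  (roughly a 3.6× reduction)
```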
structure
Residual networks and dense networks
residual network
network degradation problem
The accuracy on the training set saturates and may even drop
Residual network characteristics
Easy to optimize and can improve accuracy by adding considerable depth
The residual blocks inside a residual network use skip connections, which alleviate the vanishing-gradient problem caused by increasing depth in deep neural networks.
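The skip connection can be sketched in one line: the block outputs F(x) + x, so when the learned mapping F is near zero the block simply passes its input through, which is what eases the optimization of very deep stacks. The transform F below is an illustrative stand-in, not a trained layer:

```python
# Minimal sketch of a residual block: output = F(x) + x (skip connection).

def residual_block(x, F):
    return [f + xi for f, xi in zip(F(x), x)]

# Illustrative, near-zero "learned" transform F
F = lambda x: [0.1 * xi for xi in x]
print(residual_block([1.0, 2.0, 3.0], F))  # close to the input itself
```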
residual building block
Residual network structure
dense network
Dense network characteristics
Maintains the feedforward structure, but connects the output of the input layer and of every layer to the input of each subsequent layer
For L-layer networks, there can be L(L-1)/2 connections
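The connection count follows from choosing every pair of (earlier, later) layers:

```python
# With every layer connected to every later layer, an L-layer dense
# network has L*(L-1)/2 pairwise connections.

def dense_connections(L):
    return L * (L - 1) // 2

for L in (3, 4, 5):
    print(L, dense_connections(L))  # -> 3 3, 4 6, 5 10
```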
dense network structure