Linear Algebra Chapter 5: Eigenvalues and Eigenvectors
A detailed yet concise overview of the key points on eigenvalues and eigenvectors in linear algebra, to help you get twice the result with half the effort. Everyone is welcome to study it together; it is well worth saving!
Edited at 2024-12-16 19:54:48
Linear Algebra Chapter 5 Eigenvalues and Eigenvectors
Eigenvectors and eigenvalues
Definition: Let A be an n×n matrix and x a non-zero vector. If there is a number λ such that Ax = λx, then λ is called an eigenvalue of A, and x is called an eigenvector of A corresponding to λ.
λ can be 0, but x cannot be 0; a matrix has finitely many eigenvalues, but infinitely many eigenvectors (any non-zero scalar multiple of an eigenvector is again an eigenvector).
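As a quick numerical check of the definition, `numpy.linalg.eig` returns eigenvalues and corresponding eigenvectors; each pair should satisfy Ax = λx. A minimal sketch (the matrix is my own example, not from the notes):

```python
import numpy as np

# A 2x2 symmetric matrix with easily verified eigenvalues 1 and 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs are eigenvectors

# Verify A x = lambda x for every (eigenvalue, eigenvector) pair
for i in range(len(eigvals)):
    x = eigvecs[:, i]
    assert np.allclose(A @ x, eigvals[i] * x)
    # Any non-zero scalar multiple is again an eigenvector
    assert np.allclose(A @ (5.0 * x), eigvals[i] * (5.0 * x))
```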
Eigenvalues
You cannot use row reduction to find eigenvalues! The echelon form of A usually does not reveal the eigenvalues of A!
λ is an eigenvalue of A if and only if (A − λI)x = 0 has a non-trivial solution.
The null space of (A − λI) is the eigenspace corresponding to λ.
0 is an eigenvalue of A if and only if A is not invertible.
Otherwise, Ax = 0 has only the trivial solution x = 0.
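The link between a zero eigenvalue and non-invertibility can be seen directly. A sketch using a rank-deficient matrix of my own choosing:

```python
import numpy as np

# Singular matrix: the second row is twice the first, so det(A) = 0
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

assert np.isclose(np.linalg.det(A), 0.0)   # A is not invertible ...
eigvals = np.linalg.eigvals(A)
assert np.any(np.isclose(eigvals, 0.0))    # ... so 0 is an eigenvalue

# Any non-zero x with Ax = 0 lies in the eigenspace for lambda = 0,
# i.e. the eigenspace for 0 is exactly the null space of A
x = np.array([2.0, -1.0])
assert np.allclose(A @ x, 0.0)
```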
Theorem 2: If λ1, ..., λr are distinct eigenvalues of an n×n matrix A and v1, ..., vr are eigenvectors corresponding to λ1, ..., λr, then the set {v1, ..., vr} is linearly independent.
If λ is an eigenvalue of A, that is, Ax = λx for some x ≠ 0, we have:
A² has eigenvalue λ²: A²x = A(λx) = λAx = λ²x
The same holds for higher powers: Aᵏ has eigenvalue λᵏ.
A⁻¹ has eigenvalue 1/λ (when A is invertible, so λ ≠ 0): x = A⁻¹Ax = A⁻¹(λx) = λA⁻¹x, so A⁻¹x = (1/λ)x
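These facts are easy to spot-check numerically. A sketch with a triangular matrix of my own choosing (so the eigenvalues can be read off the diagonal):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])            # triangular: eigenvalues 2 and 3

lam = 3.0
x = np.array([1.0, 1.0])              # eigenvector for lambda = 3
assert np.allclose(A @ x, lam * x)

# A^2 has eigenvalue lambda^2 with the same eigenvector
assert np.allclose(A @ A @ x, lam**2 * x)

# A^{-1} has eigenvalue 1/lambda with the same eigenvector
assert np.allclose(np.linalg.inv(A) @ x, (1 / lam) * x)
```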
characteristic equation
characteristic polynomial
The characteristic polynomial of A is det(A − λI), regarded as a polynomial in λ.
characteristic equation
det(A − λI) = 0 is the characteristic equation of A.
A number λ is an eigenvalue of the n×n matrix A if and only if λ is a root of the characteristic equation det(A − λI) = 0.
The multiplicity of λ as a root of the characteristic equation is called the algebraic multiplicity of λ.
The characteristic polynomial has degree n, so by the fundamental theorem of algebra the characteristic equation has n roots (counted with multiplicity, possibly complex); hence the algebraic multiplicities sum to n.
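For a concrete check, `numpy.poly` returns the coefficients of det(λI − A) (which differs from det(A − λI) only by a factor of (−1)ⁿ), and the roots of that polynomial are the eigenvalues. A sketch with a matrix of my own choosing:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Coefficients of det(lambda*I - A), highest degree first:
# lambda^2 - trace(A)*lambda + det(A) for a 2x2 matrix
coeffs = np.poly(A)
assert np.allclose(coeffs, [1.0, -7.0, 10.0])   # trace = 7, det = 10

# The roots of the characteristic polynomial are the eigenvalues
roots = np.roots(coeffs)
assert np.allclose(sorted(roots), [2.0, 5.0])
```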
similarity
definition
If A and B are n×n matrices and there is an invertible matrix P such that P⁻¹AP = B, or equivalently A = PBP⁻¹, then A is said to be similar to B (and B to A); in short, A and B are similar.
The map sending A to P⁻¹AP is called a similarity transformation.
Theorem 4: If the n×n matrices A and B are similar, then they have the same characteristic polynomial, and hence the same eigenvalues with the same multiplicities.
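Theorem 4 can be spot-checked: conjugating A by any invertible P leaves the characteristic polynomial unchanged. A sketch, with A and P chosen arbitrarily by me:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # any invertible matrix works

B = np.linalg.inv(P) @ A @ P          # B is similar to A

# Same characteristic polynomial => same eigenvalues, same multiplicities
assert np.allclose(np.poly(A), np.poly(B))
assert np.allclose(sorted(np.linalg.eigvals(A)),
                   sorted(np.linalg.eigvals(B)))
```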
diagonalization
definition
If a square matrix A is similar to a diagonal matrix, that is, if there are an invertible matrix P and a diagonal matrix D with A = PDP⁻¹, then A is said to be diagonalizable.
To take a power of a diagonal matrix, raise each diagonal entry to that power; consequently, if A = PDP⁻¹ then Aᵏ = PDᵏP⁻¹.
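This is the main computational payoff of diagonalization: Aᵏ reduces to exponentiating diagonal entries. A sketch (my own example matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # eigenvalues 1 and 3

eigvals, P = np.linalg.eig(A)         # columns of P are eigenvectors
D = np.diag(eigvals)

# A = P D P^{-1}
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# A^5 = P D^5 P^{-1}, where D^5 just exponentiates the diagonal entries
A5 = P @ np.diag(eigvals**5) @ np.linalg.inv(P)
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```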
Theorem 5 (Diagonalization Theorem): An n×n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors.
A = PDP⁻¹ with D diagonal if and only if the columns of P are n linearly independent eigenvectors of A; in that case the entries on the main diagonal of D are the eigenvalues of A corresponding to the respective columns of P.
Equivalently, A is diagonalizable if and only if there are enough eigenvectors to form a basis of Rⁿ; such a basis is called an eigenvector basis of Rⁿ.
If A is diagonalizable, then Aᵀ is diagonalizable, and so is A⁻¹ when A is invertible.
Theorem 1: The eigenvalues of a triangular matrix are the entries on its main diagonal.
For non-triangular matrices, do not read eigenvalues off after row operations; row operations change the eigenvalues!
To diagonalize A: first find the eigenvalues, then find n linearly independent eigenvectors in total by taking a basis of each eigenspace; use these vectors to build P and the corresponding eigenvalues (in the same order) to build D. (Confirm that the eigenvectors found are linearly independent!)
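The steps above can be walked through by hand on a small example. A sketch, with the eigenvectors solved from (A − λI)x = 0 on paper first (my own example matrix):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

# Step 1: eigenvalues (A is triangular, so read off the diagonal): 1 and 3
# Step 2: a basis for each eigenspace, solving (A - lambda*I)x = 0 by hand
v1 = np.array([1.0, 0.0])   # solves (A - 1*I)x = 0
v2 = np.array([1.0, 1.0])   # solves (A - 3*I)x = 0

# Step 3: P from the eigenvectors, D from the matching eigenvalues
P = np.column_stack([v1, v2])
D = np.diag([1.0, 3.0])

# Step 4: confirm the eigenvectors are independent and that A = P D P^{-1}
assert np.linalg.matrix_rank(P) == 2
assert np.allclose(A, P @ D @ np.linalg.inv(P))
```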
Theorem 6: An n×n matrix with n distinct eigenvalues is diagonalizable.
This condition is sufficient but not necessary: a matrix with fewer than n distinct eigenvalues may still be diagonalizable.
Theorem 7: Let A be an n×n matrix with distinct eigenvalues λ1, ..., λp.
a. For 1 ≤ k ≤ p, the dimension of the eigenspace of λk is less than or equal to the algebraic multiplicity of λk.
b. A is diagonalizable if and only if the sum of the dimensions of the distinct eigenspaces equals n.
Equivalently: the characteristic polynomial factors completely into linear factors, and
the dimension of the eigenspace of each λk equals the algebraic multiplicity of λk.
c. If A is diagonalizable and Bk is a basis of the eigenspace corresponding to λk, then the collection of all vectors in B1, ..., Bp forms an eigenvector basis of Rⁿ (n vectors in total, linearly independent).
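Theorem 7a is exactly where diagonalization can fail. A sketch contrasting a defective matrix (eigenspace smaller than the algebraic multiplicity) with a diagonalizable one; both matrices are my own examples, each with λ = 2 of algebraic multiplicity 2:

```python
import numpy as np

def geometric_multiplicity(A, lam):
    """Dimension of the eigenspace: nullity of (A - lam*I)."""
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

defective = np.array([[2.0, 1.0],
                      [0.0, 2.0]])     # eigenspace for 2 is 1-dimensional
diagonalizable = np.array([[2.0, 0.0],
                           [0.0, 2.0]])

assert geometric_multiplicity(defective, 2.0) == 1       # 1 < 2: not diagonalizable
assert geometric_multiplicity(diagonalizable, 2.0) == 2  # dimensions sum to n
```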
Eigenvectors and linear transformations
Linear transformation from V to W
definition
Let V be an n-dimensional vector space, W an m-dimensional vector space, and T a linear transformation from V to W. Let B be a basis of V and C a basis of W.
M, the matrix representation of T, is called the matrix for T relative to the bases B and C: M = [ [T(b1)]_C [T(b2)]_C ... [T(bn)]_C ]
Key idea
The coordinate mapping is a linear transformation that is both injective and surjective (an isomorphism).
The images of the basis vectors under T determine how [x]_B is transformed into [T(x)]_C.
Linear transformation from V to V
When W = V and C = B, M is called the matrix for T relative to B, or simply the B-matrix of T, denoted [T]_B.
The B-coordinates of the image of x are given by [T(x)]_B = [T]_B [x]_B.
[T]_B = [ [T(b1)]_B ... [T(bn)]_B ]; within the same space, the basis stays unchanged.
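A classic worked example of a B-matrix, not from the notes: differentiation on polynomials of degree at most 2, with basis B = {1, t, t²} and coefficient vectors as coordinates. Each column of [T]_B is the B-coordinate vector of T applied to a basis vector:

```python
import numpy as np

# T(1) = 0, T(t) = 1, T(t^2) = 2t, written in B-coordinates column by column
T_B = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 2.0],
                [0.0, 0.0, 0.0]])

# p(t) = 3 + 5t + 4t^2  ->  [x]_B = (3, 5, 4)
x_B = np.array([3.0, 5.0, 4.0])

# [T(x)]_B = [T]_B [x]_B should encode p'(t) = 5 + 8t
assert np.allclose(T_B @ x_B, [5.0, 8.0, 0.0])
```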
Linear transformation on Rn
Theorem 8 (Diagonal Matrix Representation): Let A = PDP⁻¹, where D is an n×n diagonal matrix. If B is the basis of Rⁿ formed by the columns of P, then D is the B-matrix of the linear transformation x ↦ Ax.
The characteristic polynomial is defined for every number λ (any value may be substituted); the characteristic equation holds only when λ is an eigenvalue.