Linear Algebra
Ch 1: Systems of Linear Equations and Matrices
Matrix
A matrix is a rectangular array of numbers. The numbers in the array are called the entries of the matrix. In general, a matrix with m rows and n columns is called an m x n matrix.
Types of Matrices :
Square Matrix
Column Matrix (Column Vector)
Row Matrix (Row Vector)
Triangular Matrix
Diagonal Matrix
Identity Matrix
Zero Matrix (Null Matrix)
Symmetric Matrix
Matrix Operation
In matrix notation, if A = [aij] and B = [bij] have the same size, then they can be added and subtracted entrywise:
Matrix Addition
(A + B)ij = aij + bij
Matrix Subtraction
(A − B)ij = aij − bij
Matrix Multiplication
A x B = AB
Trace: tr(A) = a11 + a22 + ... + ann (the sum of the entries on the main diagonal of a square matrix A)
Matrix Product as Linear Combinations
Ax = c1a1 + c2a2 + ... + cnan, where a1, a2, ..., an are the column vectors of A and c1, c2, ..., cn are the entries of x
Transpose of Matrix
(A^T)ij = (A)ji
Elementary Matrix
The algebraic operations are as follows: 1. Multiply an equation through by a nonzero constant 2. Interchange two equations 3. Add a constant times one equation to another
These three operations correspond to the following operations on the rows of the augmented matrix : 1. Multiply a row through by a nonzero constant 2. Interchange two rows 3. Add a constant times one row to another
Gaussian Elimination
Row Echelon Form
If a row does not consist entirely of zeros, then the first nonzero number in the row is a 1 (called a leading 1)
If there are any rows that consist entirely of zeros, then they are grouped together at the bottom of the matrix
In any two successive rows that do not consist entirely of zeros, the leading 1 in the lower row occurs farther to the right than the leading 1 in the higher row
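Below is a minimal Python sketch (assuming NumPy; the augmented matrix values are made up) of Gaussian elimination, reducing a matrix to row echelon form using the three elementary row operations listed above.

```python
import numpy as np

def row_echelon(M):
    """Reduce a matrix to row echelon form using elementary row operations."""
    A = M.astype(float)
    rows, cols = A.shape
    r = 0                                    # index of the next pivot row
    for c in range(cols):
        # find a row at or below r with a nonzero entry in column c
        pivot = next((i for i in range(r, rows) if A[i, c] != 0), None)
        if pivot is None:
            continue                         # no pivot in this column
        A[[r, pivot]] = A[[pivot, r]]        # operation 2: interchange two rows
        A[r] = A[r] / A[r, c]                # operation 1: scale the row to get a leading 1
        for i in range(r + 1, rows):         # operation 3: add a multiple of one row to another
            A[i] = A[i] - A[i, c] * A[r]
        r += 1
        if r == rows:
            break
    return A

# augmented matrix of a small linear system (illustrative values)
aug = np.array([[1, 1, 2, 9],
                [2, 4, -3, 1],
                [3, 6, -5, 0]])
print(row_echelon(aug))
```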
Matrix Inverse
If A is invertible, then its inverse will be denoted by the symbol A^-1
Reduced Row Echelon Form
If a row does not consist entirely of zeros, then the first nonzero number in the row is a 1 (called a leading 1)
In any two successive rows that do not consist entirely of zeros, the leading 1 in the lower row occurs farther to the right than the leading 1 in the higher row
Each column that contains a leading 1 has zeros everywhere else in that column
Diagonal Matrix
A square matrix in which all the entries off the main diagonal are zero is called a diagonal matrix
Triangular Matrix
A square matrix in which all the entries above the main diagonal are zero is called lower triangular and a square matrix in which all the entries below the main diagonal are zero is called upper triangular. A matrix that is either upper triangular or lower triangular is called triangular.
Symmetric Matrix
A square matrix A is said to be symmetric if A = A^T
A system of linear equations has three possible types of solutions:
1. no solution
2. exactly one solution
3. infinitely many solutions
Application of Matrices
A network is a set of branches through which something "flows"
> Examples: electrical wires, pipes, traffic lanes, or economic linkages
> Nodes or junctions: points where branches meet
> In the study of networks, there is generally some numerical measure of the rate at which the medium flows through a branch
> Our attention is on flow conservation at each node, i.e. the rate of flow into any node equals the rate of flow out of that node
1. Network Analysis
2. Design of Traffic Patterns
3. An Electric Circuit
4. Chemical Equations
5. Polynomial Interpolation
Other Applications
LEONTIEF INPUT OUTPUT MODELS
- Economic modelling
- Use matrix methods to study the relationships between different sectors in an economy
- Inputs and outputs are commonly measured in monetary units
- The ideas were developed by Leontief
- Inputs and outputs in economy
- Leontief Model of an Open Economy
- Productive Open Economies
Ch 2: Determinants
Cofactor of Determinants
Minor and Cofactors
General Determinant
Useful Techniques ( Arrow )
Row Reduction Form
Reduce the matrix to row echelon form first (keeping track of how the row operations change the determinant); for the resulting triangular matrix, det = a11 · a22 · ... · ann
Properties
det(kA) = k^n det(A)
det(A+B) not equal to det(A) + det(B)
Cramer's Rule
A method that uses determinants to solve a system of linear equations whose coefficient matrix is invertible
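A small illustration of Cramer's rule, assuming NumPy; the 2 x 2 system is hypothetical. Each unknown xi equals det(Ai)/det(A), where Ai is A with its i-th column replaced by b.

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule (requires det(A) != 0)."""
    d = np.linalg.det(A)
    if np.isclose(d, 0):
        raise ValueError("Cramer's rule needs an invertible coefficient matrix")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                      # replace column i with the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer(A, b))                       # same answer as np.linalg.solve(A, b)
```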
Ch 4: General Vector Spaces
Vector Spaces
1. Vector addition
2. Scalar multiplication
3. Vector space axioms
Subspaces
THEOREM 4.2.1 If W is a set of one or more vectors in a vector space V, then W is a subspace of V if and only if the following conditions hold. (a) If u and v are vectors in W, then u + v is in W. (b) If k is any scalar and u is any vector in W, then ku is in W
THEOREM 4.2.3 The solution set of a homogeneous system Ax = 0 of m equations in n unknowns is a subspace of R n .
Linear Combination
THEOREM 4.3.1 If S= {w1 , w2 , …, wr } is a nonempty set of vectors in a vector space V, then: (a) The set W of all possible linear combinations of the vectors in S is a subspace of V. (b) The set W in part (a) is the “smallest” subspace of V that contains all of the vectors in S in the sense that any other subspace that contains those vectors contains W.
Span of S
Guidance to identify Spanning Sets
Linear Independence
THEOREM 4.4.1 A nonempty set S = {v1 , v2 , …, vr } in a vector space V is linearly independent if and only if the only coefficients satisfying the vector equation k1v1+k2v2+ ... +krvr=0 are k1=0, k2=0,..., kr=0
THEOREM 4.4.2 (a) A set with finitely many vectors that contains 0 is linearly dependent. (b) A set with exactly two vectors is linearly independent if and only if neither vector is a scalar multiple of the other.
THEOREM 4.4.3 Let S= {v1 , v2 , …, vr } be a set of vectors in R n . If r > n, then S is linearly dependent.
THEOREM 4.4.4 If the functions f1 , f2 , …, fn have n − 1 continuous derivatives on the interval (−∞, ∞), and if the Wronskian of these functions is not identically zero on (−∞, ∞), then these functions form a linearly independent set of vectors in C(n −1) (−∞, ∞).
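A practical way to test linear independence numerically, sketched in Python with NumPy (the test vectors are made up): a set of vectors in R^n is linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors.

```python
import numpy as np

def is_linearly_independent(vectors):
    """Columns are linearly independent iff the stacked matrix has full column rank."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

v1, v2, v3 = np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([1, 1, 0])
print(is_linearly_independent([v1, v2]))      # True
print(is_linearly_independent([v1, v2, v3]))  # False: v3 = v1 + v2
```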
Coordinate and Basis
Basis for a vector space
Standard basis for R3
Standard basis for P3
Standard basis for M2x2
Coordinates relative to basis
THEOREM 4.5.1 Uniqueness of Basis Representation If S = {v1 , v2 , …, vn } is a basis for a vector space V, then every vector v in V can be expressed in the form v = c1 v1 + c2 v2 + ⋯ + cn vn in exactly one way
Dimension
The dimension of a finite-dimensional vector space V is denoted by dim(V) and is defined to be the number of vectors in a basis for V. In addition, the zero vector space is defined to have dimension zero.
THEOREM 4.6.1 All bases for a finite-dimensional vector space have the same number of vectors.
THEOREM 4.6.2 Let V be a finite-dimensional vector space, and let S = {v1, v2, …, vn} be any basis for V. (a) If a set has more than n vectors, then it is linearly dependent. (b) If a set has fewer than n vectors, then it does not span V.
THEOREM 4.6.4 Let V be an n-dimensional vector space, and let S be a set in V with exactly n vectors. Then S is a basis for V if and only if S spans V or S is linearly independent.
THEOREM 4.6.5 Let S be a finite set of vectors in a finite-dimensional vector space V. (a) If S spans V but is not a basis for V, then S can be reduced to a basis for V by removing appropriate vectors from S. (b) If S is a linearly independent set that is not already a basis for V, then S can be enlarged to a basis for V by inserting appropriate vectors into S.
THEOREM 4.6.6 If W is a subspace of a finite-dimensional vector space V, then: (a) W is finite-dimensional. (b) dim(W) is less or equal dim(V). (c) W=V if and only if dim (W)=dim (V)
Change of Basis
Transition Matrices
Invertibility of Transition Matrices
THEOREM 4.7.1 If P is the transition matrix from a basis B to a basis B’ for a finite-dimensional vector space V, then P is invertible and P^-1 is the transition matrix from B’ to B.
Guidance in Computing Transition Matrix
THEOREM 4.7.2 Let B = {u1, u2, ..., un} be any basis for the vector space R^n and let S = {e1, e2, ..., en} be the standard basis for R^n. Then the transition matrix from B to S is P(B→S) = [u1 | u2 | ... | un], the matrix whose columns are the vectors of B.
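A sketch, assuming NumPy and hypothetical bases for R^2, of computing a transition matrix: the old basis vectors are expressed in terms of the new basis by solving B'P = B, and Theorem 4.7.1 is then checked numerically.

```python
import numpy as np

def transition_matrix(old_basis, new_basis):
    """Transition matrix P from old_basis to new_basis: [v]_new = P @ [v]_old.
    Computed as (B')^-1 B, i.e. by solving B' P = B."""
    B = np.column_stack(old_basis)         # old basis vectors as columns
    B_new = np.column_stack(new_basis)     # new basis vectors as columns
    return np.linalg.solve(B_new, B)

# hypothetical bases for R^2
B = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
Bp = [np.array([2.0, 1.0]), np.array([0.0, 1.0])]
P = transition_matrix(B, Bp)
P_inv = transition_matrix(Bp, B)
print(np.allclose(P_inv, np.linalg.inv(P)))    # Theorem 4.7.1: True
```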
Row space, Column space and Null space
THEOREM 4.8.1 A system of linear equations Ax = b is consistent if and only if b is in the column space of A.
THEOREM 4.8.2 If x0 is any solution of a consistent linear system Ax = b, and if S = {v1, v2, ..., vk} is a basis for the null space of A, then every solution of Ax = b can be expressed in the form x = x0 + c1v1 + c2v2 + ... + ckvk. Conversely, for all choices of scalars c1, c2, …, ck, the vector x in this formula is a solution of Ax = b.
THEOREM 4.8.3 (a) Row equivalent matrices have the same row space (b)Row equivalent matrices have the same null space
THEOREM 4.8.4 If a matrix R is in row echelon form, then the row vectors with the leading 1’s (the nonzero row vectors) form a basis for the row space of R, and the column vectors with the leading 1’s of the row vectors form a basis for the column space of R.
THEOREM 4.8.5 If A and B are row equivalent matrices, then: (a) A given set of column vectors of A is linearly independent if and only if the corresponding column vectors of B are linearly independent. (b)A given set of column vectors of A forms a basis for the column space of A if and only if the corresponding column vectors of B form a basis for the column space of B.
Rank, Nullity and Fundamental Matrix Spaces
The common dimension of the row space and column space of a matrix A is called the rank of A and is denoted by rank(A); the dimension of the null space of A is called the nullity of A and is denoted by nullity(A)
THEOREM 4.9.1 The row space and column space of a matrix A have the same dimension.
THEOREM 4.9.2 Dimension Theorem for Matrices If A is a matrix with n columns, then rank (A)+nullity (A)=n
THEOREM 4.9.3 If A is an m x n matrix, then (a) rank(A) = the number of leading variables in the general solution of Ax = 0. (b) nullity(A) = the number of parameters in the general solution of Ax = 0
THEOREM 4.9.4 If Ax = b is a consistent linear system of m equations in n unknowns, and if A has rank r, then the general solution of the system contains n – r parameters
THEOREM 4.9.5 If A is any matrix, then rank(A) = rank(A^T)
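A quick numerical check of these rank and nullity relationships, assuming NumPy; the matrix is illustrative.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6]])          # made-up matrix with dependent rows

rank = np.linalg.matrix_rank(A)    # dimension of the row space = column space
nullity = A.shape[1] - rank        # Theorem 4.9.2: rank + nullity = n
print(rank, nullity)                          # 1 2
print(np.linalg.matrix_rank(A.T) == rank)     # Theorem 4.9.5: True
```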
Matrix Transformation
Let V and W be real vector spaces, and let T be a function with domain V and range in W (written T : V -> W). We say T is a linear transformation if T(u + v) = T(u) + T(v) and T(ku) = kT(u) for all vectors u, v in V and all scalars k.
Ch 5: Eigenvalues and Eigenvectors
Eigenvalues & Eigenvectors
Definition:
A nonzero vector x is called an eigenvector of a square matrix A if Ax = λx for some scalar λ; the scalar λ is called an eigenvalue of A.
The eigenvalues of A are the solutions λ of det(λI − A) = 0, which is referred to as the characteristic equation of A.
Definition
1. If A and B are square matrices, then we say that B is similar to A if there is an invertible matrix P such that B = P^-1 AP
2. A square matrix A is said to be diagonalizable if it is similar to some diagonal matrix; that is, if there exists an invertible matrix P such that P^-1 AP is diagonal. In this case the matrix P is said to diagonalize A
If k is a positive integer, λ is an eigenvalue of a matrix A, and x is a corresponding eigenvector, then λ^k is an eigenvalue of A^k and x is a corresponding eigenvector
Procedure Diagonalizing an n x n Matrix
STEP 1 Determine first whether the matrix is actually diagonalizable by searching for n linearly independent eigenvectors. One way to do this is to find a basis for each eigenspace and count the total number of vectors obtained. If there is a total of n vectors, then the matrix is diagonalizable, and if the total is less than n, then it is not
STEP 2 If you ascertained that the matrix is diagonalizable, then form the matrix P = [ p1 p2 p3 ... ] whose column vectors are the n basis vectors you obtained in Step 1
STEP 3 P −1AP will be a diagonal matrix whose successive diagonal entries are the eigenvalues λ1, λ2, …, λn that correspond to the successive columns of P
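The three-step procedure can be sketched with NumPy as follows (the 2 x 2 matrix is made up; numpy.linalg.eig supplies the eigenvectors that become the columns of P).

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                 # illustrative diagonalizable matrix

eigvals, P = np.linalg.eig(A)              # Steps 1-2: eigenvectors become the columns of P
if np.linalg.matrix_rank(P) == A.shape[0]: # n independent eigenvectors => diagonalizable
    D = np.linalg.inv(P) @ A @ P           # Step 3: P^-1 A P is diagonal
    print(np.round(D, 10))                 # diagonal entries are the eigenvalues
    print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # A = P D P^-1
```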
Similarity Invariants
A = PDP^-1
there exist n linearly independent eigenvectors
P = invertible, D = diagonal
D = P^-1 AP
For complex eigenvalues, we work in complex n-space C^n
Vector Cn
If n is a positive integer, then a complex n-tuple is a sequence of n complex numbers (v1, v2, ..., vn). The set of all complex n-tuples is called complex n-space and is denoted by C^n. Scalars are complex numbers, and the operations of addition, subtraction, and scalar multiplication are performed componentwise.
Complex Conjugate
If v = (v1, v2, ..., vn) is a vector in C^n, then its complex conjugate is the vector obtained by conjugating each component.
Complex Euclidean Norm on C^n
‖v‖ = √(|v1|^2 + |v2|^2 + ... + |vn|^2)
First Order Linear Systems
Solution of a Linear System with Initial Conditions
Solution by Diagonalization
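A sketch of solution by diagonalization for y' = Ay, assuming NumPy and a made-up diagonalizable coefficient matrix: writing y(0) in the eigenvector basis, each coordinate evolves by a factor e^(λt).

```python
import numpy as np

def solve_linear_system(A, y0, t):
    """Solve y' = A y, y(0) = y0 by diagonalization: y(t) = P e^{Dt} P^-1 y0."""
    eigvals, P = np.linalg.eig(A)          # assumes A is diagonalizable
    c = np.linalg.solve(P, y0)             # coordinates of y0 in the eigenvector basis
    return P @ (np.exp(eigvals * t) * c)   # each mode grows or decays like e^{lambda t}

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])                 # illustrative coefficient matrix
y0 = np.array([1.0, 0.0])                  # initial condition
print(solve_linear_system(A, y0, 0.5))
```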
Dynamic Systems & Markov Chains
Dynamical Systems
Markov Chains
is a dynamical system whose state vectors at a succession of equally spaced times are probability vectors and for which the state vectors at successive times are related by an equation of the form x(k + 1) = Px(k)
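A small NumPy sketch of a Markov chain (the transition matrix and initial probability vector are hypothetical), iterating x(k + 1) = Px(k) and comparing with powers of the transition matrix.

```python
import numpy as np

P = np.array([[0.9, 0.5],
              [0.1, 0.5]])              # hypothetical column-stochastic transition matrix
x0 = np.array([0.3, 0.7])               # initial probability vector

# iterate the relation x(k + 1) = P x(k)
x = x0
for _ in range(50):
    x = P @ x
print(x)                                # approaches the steady-state vector
print(np.linalg.matrix_power(P, 50) @ x0)   # same result via x(k) = P^k x(0)
```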
In Terms of Powers of the Transition Matrix
From x(k + 1) = Px(k) it follows that x(k) = P^k x(0)
Ch 9: Numerical Methods
LU-Decomposition
The Method of LU‐Decomposition
Step 1: Rewrite the system Ax = b as LUx = b
Step 2: Make the substitution y = Ux; then rewrite LUx = b as Ly = b and solve this system for y
Step 3: Substitute y into Ux = y and solve for x
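A minimal sketch of the method in Python with NumPy; the factorization below assumes no zero pivots are encountered (no row interchanges), and the system values are illustrative.

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU factorization A = LU (sketch: assumes no zero pivots, no pivoting)."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float)
    for c in range(n - 1):
        for r in range(c + 1, n):
            m = U[r, c] / U[c, c]      # multiplier used to eliminate U[r, c]
            L[r, c] = m                # record it in L
            U[r] -= m * U[c]
    return L, U

def lu_solve(L, U, b):
    y = np.linalg.solve(L, b)          # Step 2: solve Ly = b (forward substitution)
    return np.linalg.solve(U, y)       # Step 3: solve Ux = y (back substitution)

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])       # illustrative values
b = np.array([5.0, -2.0, 9.0])
L, U = lu_decompose(A)                 # Step 1: rewrite Ax = b as LUx = b
print(np.allclose(A, L @ U), lu_solve(L, U, b))
```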
The Power Method with Euclidean Scaling
Step 0: Choose an arbitrary nonzero vector and normalize it, if need be, to obtain a unit vector x0
Step 1: Compute Ax0 and normalize it to obtain the first approximation x1 to a dominant unit eigenvector. Compute Ax1 · x1 to obtain the first approximation to the dominant eigenvalue
Step 2: Compute Ax1 and normalize it to obtain the second approximation x2 to a dominant unit eigenvector. Compute Ax2 · x2 to obtain the second approximation to the dominant eigenvalue
Step 3: Compute Ax2 and normalize it to obtain the third approximation x3 to a dominant unit eigenvector. Compute Ax3 · x3 to obtain the third approximation to the dominant eigenvalue
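The steps above can be sketched in Python with NumPy as follows; the matrix is made up and is assumed to have a dominant eigenvalue.

```python
import numpy as np

def power_method(A, num_steps=20):
    """Power method with Euclidean scaling (a sketch; A should have a dominant eigenvalue)."""
    x = np.random.rand(A.shape[0])     # Step 0: arbitrary nonzero starting vector...
    x /= np.linalg.norm(x)             # ...normalized to a unit vector
    for _ in range(num_steps):
        y = A @ x
        x = y / np.linalg.norm(y)      # next approximation to a dominant unit eigenvector
        eigenvalue = (A @ x) @ x       # Ax . x approximates the dominant eigenvalue
    return eigenvalue, x

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # illustrative symmetric matrix, eigenvalues 3 and 1
print(power_method(A))                 # approximately (3.0, [0.707, 0.707])
```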
Ch 8: General Linear Transformations
8.1 General Linear Transformations
Definition 1
A function T : V → W is called a linear transformation if the following hold for all vectors u and v in V and for all scalars k:
(i) T(ku) = kT(u) [Homogeneity property]
(ii) T(u + v) = T(u) + T(v) [Additivity property]
Theorem 8.1.1
If T : V → W is a linear transformation, then:
a) T(0) = 0.
b) T(u − v) = T(u) − T(v) for all u and v in V.
c) T(−v) = −T(v) for all v in V.
Theorem 8.1.2
If S = {v1, v2, …, vn} is a basis for V and v = c1v1 + c2v2 + ... + cnvn is any vector in V, then T(v) = c1T(v1) + c2T(v2) + ... + cnT(vn)
Theorem 8.1.3
If T : V → W is a linear transformation, then: a) The kernel of T is a subspace of V. b) The range of T is a subspace of W.
Theorem 8.1.4
If T : V → W is a linear transformation from a finite-dimensional vector space V to a vector space W, then rank(T) + nullity(T) = dim(V)
Ch 7: Diagonalization and Quadratic Forms
Orthogonal Diagonalization
DEFINITION 1 If A and B are square matrices, then we say that A and B are orthogonally similar if there is an orthogonal matrix P such that P^T AP=B
THEOREM 7.2.1 CONDITIONS FOR ORTHOGONAL DIAGONALIZABILITY If A is an n x n matrix, then the following are equivalent. (a) A is orthogonally diagonalizable. (b) A has an orthonormal set of n eigenvectors. (c) A is symmetric.
THEOREM 7.2.2 PROPERTIES OF SYMMETRIC MATRICES If A is a symmetric matrix, then (a) The eigenvalues of A are all real numbers. (b) Eigenvectors from different eigenspaces are orthogonal.
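A short NumPy illustration of orthogonal diagonalization of a symmetric matrix (the matrix is made up); numpy.linalg.eigh returns an orthonormal set of eigenvectors, so P is orthogonal and P^T AP is diagonal with real eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # illustrative symmetric matrix

eigvals, P = np.linalg.eigh(A)         # orthonormal eigenvectors of a symmetric matrix
print(np.allclose(P.T @ P, np.eye(2))) # P is orthogonal: P^T P = I
print(np.round(P.T @ A @ P, 10))       # P^T A P is diagonal, entries are the (real) eigenvalues
```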
THEOREM 7.2.3 SCHUR’S THEOREM
THEOREM 7.2.4 HESSENBERG’S THEOREM
Orthogonal Matrices
DEFINITION 1 A square matrix A is said to be orthogonal if its transpose is the same as its inverse, that is, if A^-1=A^T or equivalently, if AA^T=A^T A=I
THEOREM 7.1.1 The following are equivalent for an n x n matrix A. (a) A is orthogonal. (b) The row vectors of A form an orthonormal set in R n with the Euclidean inner product. (c) The column vectors of A form an orthonormal set in R n with the Euclidean inner product.
THEOREM 7.1.2 (a) The inverse of an orthogonal matrix is orthogonal. (b) A product of orthogonal matrices is orthogonal. (c) If A is orthogonal, then det(A) = 1 or det(A) = −1.
THEOREM 7.1.3 If A is an n x n matrix, then the following are equivalent. (a) A is orthogonal. (b) ||Ax|| = ||x|| for all x in R^n. (c) Ax · Ay = x · y for all x and y in R^n.
THEOREM 7.1.4 If S is an orthonormal basis for an n-dimensional inner product space V, and if
THEOREM 7.1.5 Let V be a finite-dimensional inner product space. If P is the transition matrix from one orthonormal basis for V to another orthonormal basis for V, then P is an orthogonal matrix.
Ch 6: Inner Product Spaces
6.1 Inner Products
Definition 1
An inner product on a real vector space V associates a real number 〈u, v〉 with each pair of vectors u and v in V in such a way that the following axioms hold for all vectors u, v, w in V and all scalars k:
1) 〈u, v〉 = 〈v, u〉 [Symmetry axiom]
2) 〈u + v, w〉 = 〈u, w〉 + 〈v, w〉 [Additivity axiom]
3) 〈ku, v〉 = k〈u, v〉 [Homogeneity axiom]
4) 〈v, v〉 ≥ 0 and 〈v, v〉 = 0 if and only if v = 0 [Positivity axiom]
Definition 2
‖v‖ = √〈v, v〉
d(u, v) = ‖u − v‖ = √〈u − v, u − v〉
Theorem 6.1.1
‖v‖ ≥ 0 with equality if and only if v = 0.
‖kv‖ = |k|‖v‖.
d(u, v) = d(v, u).
d(u, v) ≥ 0 with equality if and only if u = v
Definition 3
If V is an inner product space, then the set of points in V that satisfy ‖u‖ = 1 is called the unit sphere (or unit circle) in V.
Theorem 6.1.2
〈0, v〉 = 〈v, 0〉 = 0
〈u, v + w〉 = 〈u, v〉 + 〈u, w〉
〈u, v − w〉 = 〈u, v〉 − 〈u, w〉
〈u − v, w〉 = 〈u, w〉 − 〈v, w〉
k〈u, v〉 = 〈u, kv〉
6.2 Angle and Orthogonality Inner Product Spaces
Cauchy-Schwarz Inequality
|〈u, v〉| ≤ ‖u‖ ‖v‖
Theorem 6.2.2
a) ‖u + v‖ ≤ ‖u‖ + ‖v‖ [Triangle inequality for vectors]
(b) d(u, v) ≤ d(u, w) + d(w, v) [Triangle inequality for distances]
Definition 1
Two vectors u and v in an inner product space V are called orthogonal if 〈u, v〉 = 0
Generalized Theorem of Pythagoras
If u and v are orthogonal vectors in an inner product space, then ‖u + v‖^2 = ‖u‖^2 + ‖v‖^2
If W is a subspace of a real inner product space V, then:
1. W⊥ is a subspace of V.
2. W ∩ W⊥ = {0}.
6.3 Gram-Schmidt Process; QR-Decomposition
Theorem 6.3.1
If S = {v1, v2, …, vn} is an orthogonal set of nonzero vectors, then S is linearly independent
Theorem 6.3.3
Projection Theorem: If W is a finite-dimensional subspace of an inner product space V, then every vector u in V can be expressed in exactly one way as u = w1 + w2, where w1 is in W and w2 is in W⊥
Theorem 6.3.4
Theorem 6.3.5
Every nonzero finite‐dimensional inner product space has an orthonormal basis.
Theorem 6.3.6
If W is a finite-dimensional inner product space, then: a) Every orthogonal set of nonzero vectors in W can be enlarged to an orthogonal basis for W b) Every orthonormal set in W can be enlarged to an orthonormal basis for W
Theorem 6.3.7
QR-Decomposition: If A is an m × n matrix with linearly independent column vectors, then A can be factored as A = QR, where Q is an m × n matrix with orthonormal column vectors and R is an n × n invertible upper triangular matrix
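A sketch of the Gram-Schmidt process producing a QR-decomposition, assuming NumPy and a made-up matrix with linearly independent columns.

```python
import numpy as np

def gram_schmidt(A):
    """Gram-Schmidt on the columns of A: Q has orthonormal columns, R is upper triangular."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float)
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]    # component of column j along earlier orthonormal vectors
            v -= R[i, j] * Q[:, i]         # subtract the projection
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]              # normalize (assumes independent columns)
    return Q, R

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])                 # made-up matrix with independent columns
Q, R = gram_schmidt(A)
print(np.allclose(A, Q @ R), np.allclose(Q.T @ Q, np.eye(2)))   # A = QR and Q^T Q = I
```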
6.4 Best Approximations; Least Squares
Best Approximation Theorem
‖b − projW b‖ < ‖b − w‖ for every vector w in W different from projW b
Theorem 6.4.2
For every linear system Ax = b, the associated normal system A^T Ax = A^T b is consistent, and all of its solutions are least squares solutions of Ax = b. Moreover, if x is any least squares solution, then Ax = projW b, where W is the column space of A.
If A is an m × n matrix, then the following are equivalent:
1. The column vectors of A are linearly independent.
2. A^T A is invertible.
Theorem 6.4.4
If A is an m × n matrix with linearly independent column vectors, then for every m × 1 matrix b, the linear system Ax = b has a unique least squares solution.
Theorem 6.4.6
If A is an m × n matrix with linearly independent column vectors, and if A = QR is a QR‐decomposition of A, then for each b in Rm the system Ax = b has a unique least squares solution
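A brief sketch, assuming NumPy and illustrative data, comparing the two routes to the unique least squares solution: the normal equations A^T Ax = A^T b and the QR-decomposition of A.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])               # illustrative matrix with independent columns
b = np.array([1.0, 2.0, 2.0])

# Theorem 6.4.2: least squares solutions satisfy the normal equations A^T A x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Theorem 6.4.6: equivalently, solve R x = Q^T b from a QR-decomposition of A
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print(np.allclose(x_normal, x_qr))       # both give the unique least squares solution
```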
6.5 Mathematical Modeling Using Least Squares
Uniqueness of the Regression Line
6.6 Applications: Function Approximations, Fourier Series
Float Chapter
POWER SERIES
Taylor and Maclaurin Series
The form of a convergent power series
Σ (from n = 0 to ∞) an(x − x0)^n = a0 + a1(x − x0) + a2(x − x0)^2 + ...
Definition of Taylor and Maclaurin Series
The Taylor series of f about x = x0 is Σ (from n = 0 to ∞) f^(n)(x0)/n! · (x − x0)^n; the Maclaurin series is the special case x0 = 0
Convergence of Taylor Series and how to solve it
Fourier Sine and Cosine
Radius of Convergence
The radius of convergence is a number ρ such that the series Σ (from n = 0 to ∞) an(x − x0)^n converges absolutely for |x − x0| < ρ and diverges for |x − x0| > ρ.
If the series converges only at x = x0, then ρ = 0; if the series converges for all values of x, then ρ is infinite. The radius of convergence can be calculated using the ratio test.
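A small Python sketch of estimating the radius of convergence with the ratio test, ρ = lim |an / an+1|; the series shown (coefficients an = n/3^n) is made up for illustration and has ρ = 3.

```python
from fractions import Fraction

# Ratio test estimate of the radius of convergence: rho = lim |a_n / a_{n+1}|.
# Illustrative series (not from the source): sum of (n / 3^n) x^n, which has rho = 3.
def coefficient(n):
    return Fraction(n, 3 ** n)

for n in (10, 100, 1000):
    ratio = abs(coefficient(n) / coefficient(n + 1))
    print(n, float(ratio))        # ratios equal 3n/(n+1), approaching the radius 3
```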