LINEAR ALGEBRA (KKKQ1223)


LINEAR ALGEBRA MIND MAP (KKKQ1223): System of Linear Equations and Matrices, Determinants, General Vector Spaces, Inner Product Space, Diagonalization and Quadratic Forms, Numerical Method, General Linear Transformation, Infinite Series, Eigenvalues & Eigenvectors

Edited at 2021-07-07 13:29:40

LINEAR ALGEBRA (KKKQ1223)

System of Linear Equations and Matrices

Systems of Linear Equations

￼

homogeneous

Ax = 0; always consistent. One solution (trivial, x = 0) when n = r; infinitely many solutions (non-trivial, x ≠ 0) when n > r.

non-homogeneous

Ax = b, where b is not the zero vector. Consistent: one solution when n = r, infinitely many solutions when n > r. Inconsistent: no solution when n < r.

Gaussian Elimination

used to solve linear systems with more than three equations and unknowns

Row Operation

the technique used in Gaussian elimination; row operations can also be used to find the inverse of a matrix

Row Echelon Form (REF)

Each nonzero row has a leading 1; each leading 1 lies to the right of the leading 1 in the row above it; rows of zeros are at the bottom.

Elimination method

Gauss-Jordan Elimination (Reduced Row Echelon Form, RREF)
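The elimination procedure above is easy to mechanize. As an illustrative sketch (not a reference implementation), a small Python routine that applies the three elementary row operations to reach RREF:

```python
def rref(M):
    """Reduce a matrix (list of rows) to reduced row echelon form using
    the three elementary row operations: swap, scale, add a multiple."""
    A = [row[:] for row in M]
    rows, cols = len(A), len(A[0])
    pivot = 0
    for col in range(cols):
        # find a row at or below `pivot` with a nonzero entry in this column
        pr = next((r for r in range(pivot, rows) if abs(A[r][col]) > 1e-12), None)
        if pr is None:
            continue
        A[pivot], A[pr] = A[pr], A[pivot]          # row swap
        pv = A[pivot][col]
        A[pivot] = [x / pv for x in A[pivot]]      # scale to get a leading 1
        for r in range(rows):
            if r != pivot and abs(A[r][col]) > 1e-12:
                f = A[r][col]                      # eliminate above and below
                A[r] = [a - f * b for a, b in zip(A[r], A[pivot])]
        pivot += 1
        if pivot == rows:
            break
    return A

# augmented matrix of x + y = 3, 2x - y = 0; the RREF reads off x = 1, y = 2
print(rref([[1, 1, 3], [2, -1, 0]]))
```

Gaussian elimination stops at REF and finishes with back substitution; the Gauss-Jordan variant above also eliminates above each pivot.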

Elementary Matrices

A matrix E is called an elementary matrix if it can be obtained from an identity matrix by performing a single elementary row operation. Every elementary matrix is invertible, and its inverse is also an elementary matrix.

inverse matrices

A^-1 can be found by row reducing the augmented matrix [A | I] to [I | A^-1].

equivalent statement

For an n x n matrix A, the following are equivalent: A is invertible; Ax = 0 has only the trivial solution; the RREF of A is the identity matrix; A is a product of elementary matrices; Ax = b is consistent for every b; det(A) ≠ 0; rank(A) = n.

Applications of Linear System

Network analysis

linear system

traffic patterns

Electrical circuits

Balancing chemical equations

Polynomial interpolation

Leontief Input-Output Models


Determinants

2 x 2 matrices

For A with rows [a b] and [c d], det(A) = ad - bc.

3 x 3 matrices and above (Cofactor Expansion)

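Cofactor expansion along the first row translates directly into a short recursive routine; a minimal sketch (fine for small matrices, though row reduction is far cheaper for large ones):

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    total = 0
    for j in range(len(A)):
        # minor: delete row 0 and column j, then recurse
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[2, 0, 1], [1, 3, -1], [0, 5, 2]]))  # 27
```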

Row Reduction


Cramer’s Rule

used to solve linear systems Ax = b; requires det(A) ≠ 0
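As a sketch of Cramer's rule on a small system (the `det` helper is a cofactor expansion written just for this example):

```python
def det(A):
    """Cofactor expansion along the first row (helper for this example)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def cramer(A, b):
    """Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with
    column i replaced by b. Requires det(A) != 0."""
    d = det(A)
    return [det([row[:i] + [b[r]] + row[i + 1:] for r, row in enumerate(A)]) / d
            for i in range(len(A))]

# 2x + y = 5, x - y = 1  ->  x = 2, y = 1
print(cramer([[2, 1], [1, -1]], [5, 1]))  # [2.0, 1.0]
```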

General Vector Spaces

Real Vector Spaces

A vector space is a set that is closed under finite vector addition and scalar multiplication. In layman's terms, a vector space is a collection of objects called vectors, which may be added together and multiplied by numbers, called scalars. When the scalars are real numbers, the result is a real vector space.

vector spaces axioms


Subspaces


homogeneous system


line through the origin

plane through the origin

Spanning Sets


linear combinations


Linear Independence


Coordinates and Basis


Dimension

All bases for a finite-dimensional vector space have the same number of vectors. If a set in an n-dimensional space V has more than n vectors, it is linearly dependent. If a set in V has fewer than n vectors, it does not span V.

Change of Basis


Row Space, Column Space and Null Space


Rank, Nullity and the Fundamental Matrix Spaces


Inner Product Space

Inner products

An inner product on a real vector space V is a function that associates a real number < u, v > with each pair of vectors in V in such a way that the inner product axioms are satisfied for all vectors u, v, and w in V and all scalars k. A real vector space with an inner product is called a real inner product space. Norm: || v || = sqrt(< v, v >). Distance: d(u, v) = || u - v ||.

Basic properties of the norm, distance, and inner product hold for all vectors u, v, and w in a real inner product space V and every scalar k (for example, < u + v, w > = < u, w > + < v, w > and d(u, v) = d(v, u)).

Angle and orthogonality

Angle: cos θ = < u, v > / (|| u || || v ||). Two vectors u and v in an inner product space V are called orthogonal if < u, v > = 0.

If W is a subspace of a real inner product space V, then its orthogonal complement W⊥ is also a subspace, and W ∩ W⊥ = {0}. If W is a subspace of a real finite-dimensional inner product space V, then the orthogonal complement of W⊥ is W itself.

Gram-Schmidt

A set of vectors is orthogonal if all pairs of distinct vectors in the set are orthogonal, and orthonormal if it is an orthogonal set in which each vector has norm 1.

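The Gram-Schmidt process can be sketched in a few lines: subtract from each vector its projections onto the orthonormal vectors already built, then normalize. This sketch assumes the input vectors are linearly independent.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Produce an orthonormal basis for span(vectors): remove from each
    vector its projections onto the vectors already accepted, normalize."""
    basis = []
    for v in vectors:
        w = v[:]
        for q in basis:                 # q is already unit length,
            c = dot(w, q)               # so the projection of w onto q is <w,q> q
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(dot(w, w))
        basis.append([wi / norm for wi in w])
    return basis

Q = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
```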

Best approximation ; Least square

If A is an m x n matrix with linearly independent column vectors, then for every m x 1 matrix b, the linear system Ax = b has a unique least squares solution x = (A^T A)^-1 A^T b.
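For the common case of fitting a line y = c0 + c1*x, the normal equations (A^T A)c = A^T b reduce to a 2 x 2 system that can be solved directly; a minimal sketch (assuming the x-values are not all equal):

```python
def least_squares_line(points):
    """Fit y = c0 + c1*x by solving the normal equations (A^T A)c = A^T b
    for the design matrix A with rows [1, x_i]."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sxx = sum(x * x for x, _ in points)
    sy = sum(y for _, y in points)
    sxy = sum(x * y for x, y in points)
    # A^T A = [[n, sx], [sx, sxx]],  A^T b = [sy, sxy]; solve the 2x2 system
    d = n * sxx - sx * sx
    c0 = (sy * sxx - sx * sxy) / d
    c1 = (n * sxy - sx * sy) / d
    return c0, c1

print(least_squares_line([(0, 1), (1, 3), (2, 5)]))  # exact fit: y = 1 + 2x
```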

Equivalent statement


Mathematical modelling using least square


Function approximation ; Fourier series


Power Series


Taylor & Maclaurin Method


Fourier Sine & Cosine Series

Notes on Fourier Sine & Cosine series: http://www.sosmath.com/fourier/fourier2/fourier2.html

Diagonalization and Quadratic Forms

Orthogonal matrices

A square matrix A is orthogonal if A^-1 = A^T, or equivalently A^T A = I. The transpose of an orthogonal matrix is orthogonal. The inverse of an orthogonal matrix is orthogonal. A product of orthogonal matrices is orthogonal. If A is orthogonal, then det(A) = 1 or det(A) = -1. If S is an orthonormal basis for an n-dimensional inner product space V, then norms, distances, and inner products can be computed from the coordinate vectors with respect to S.
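The defining condition A^T A = I is easy to check numerically; a small sketch that verifies it for a rotation matrix, a standard example of an orthogonal matrix:

```python
import math

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def is_orthogonal(A, tol=1e-9):
    """A is orthogonal iff A^T A = I (equivalently, A^-1 = A^T)."""
    n = len(A)
    P = matmul(transpose(A), A)
    return all(abs(P[i][j] - (1.0 if i == j else 0.0)) < tol
               for i in range(n) for j in range(n))

t = math.pi / 6
rotation = [[math.cos(t), -math.sin(t)],
            [math.sin(t), math.cos(t)]]
print(is_orthogonal(rotation))  # True
```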

Orthogonal diagonalization


Schur's theorem


Hessenberg's theorem


Quadratic forms


Optimization using quadratic forms


Numerical Method

LU-Decomposition

LU-Decomposition Method

Let Ax = b where A = LU (step 1). Ax = b can then be rewritten as LUx = b (step 2). Let Ux = y, so LUx = b becomes Ly = b (step 3). Solve Ly = b for y (step 4), then substitute y into Ux = y and solve for x (step 5).

Procedure for Constructing an LU-Decomposition

Step 1. Reduce A to a row echelon form U by Gaussian elimination without row interchanges, keeping track of the multipliers used to introduce the leading 1's and the multipliers used to introduce the zeros below the leading 1's.
Step 2. In each position along the main diagonal of L, place the reciprocal of the multiplier that introduced the leading 1 in that position in U.
Step 3. In each position below the main diagonal of L, place the negative of the multiplier used to introduce the zero in that position in U.
Step 4. Form the decomposition A = LU.

In this procedure, A is reduced to a row echelon form U, and at each step an entry of L is filled in according to the four steps above.
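A minimal sketch of the same idea in Python, using the common Doolittle variant (1's on the diagonal of L rather than in U, so the stored entries are the elimination multipliers; no row interchanges, so it assumes no zero pivot appears):

```python
def lu_decompose(A):
    """Doolittle LU factorization without row interchanges:
    L is unit lower triangular, U is upper triangular, A = LU."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]          # multiplier that zeros U[i][k]
            L[i][k] = m                    # store the multiplier in L
            U[i] = [a - m * b for a, b in zip(U[i], U[k])]
    return L, U

def lu_solve(L, U, b):
    """Solve Ax = b in two triangular steps: Ly = b (forward), Ux = y (back)."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L, U = lu_decompose([[2.0, 1.0], [4.0, 5.0]])
print(lu_solve(L, U, [3.0, 9.0]))  # [1.0, 1.0]
```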

The power method

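The power method estimates the dominant eigenvalue by repeatedly applying A and normalizing; a minimal sketch (assumes A has a single dominant eigenvalue and the starting vector has a component along its eigenvector):

```python
import math

def power_method(A, x, iterations=50):
    """Estimate the dominant eigenvalue/eigenvector pair of A by
    repeatedly applying A and normalizing the result."""
    lam = 0.0
    for _ in range(iterations):
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]   # y = A x
        norm = math.sqrt(sum(v * v for v in y))
        x = [v / norm for v in y]
        Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        lam = sum(a * b for a, b in zip(x, Ax))   # Rayleigh quotient estimate
    return lam, x

lam, v = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
print(round(lam, 6))  # 3.0, the dominant eigenvalue of [[2, 1], [1, 2]]
```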

General Linear Transformation


Infinite Series

Sequence, Infinite Series, and Partial Sums

Sequence

Definition of a sequence: stated informally, an infinite sequence, or more simply a sequence, is an unending succession of numbers, called terms. It is understood that the terms have a definite order; that is, there is a first term a1, a second term a2, a third term a3, a fourth term a4, and so forth. Such a sequence would typically be written as a1, a2, a3, a4, ..., where the dots indicate that the sequence continues indefinitely.

The most common way to specify a sequence is to give a rule or formula that relates each term in the sequence to its term number. For example, in the sequence 2, 4, 6, 8, ... each term is twice its term number; that is, the nth term is given by the formula 2n, and we write the sequence as 2, 4, 6, 8, ..., 2n, ... We refer to 2n as the general term of this sequence. To find a specific term, substitute its term number into the formula for the general term: for example, the 37th term is 2(37) = 74.

Infinite Series

Definition of an infinite series: an infinite series is an expression that can be written in the form a1 + a2 + a3 + ... + ak + ..., where the numbers a1, a2, a3, ... are the terms of the series.

Geometric, Harmonic, P-Series, and Alternating Series

Geometric Series

In many important series, each term is obtained by multiplying the preceding term by some fixed constant. Thus, if the initial term of the series is a and each term is obtained by multiplying the preceding term by r, then the series has the form a + ar + ar^2 + ar^3 + ... + ar^k + ... (a ≠ 0). Such series are called geometric series, and the number r is called the ratio for the series. A geometric series converges to a / (1 - r) if |r| < 1 and diverges if |r| >= 1.

Harmonic Series

The harmonic series 1 + 1/2 + 1/3 + 1/4 + ... diverges.

P-Series

https://study.com/academy/lesson/p-series-definition-examples.html

Alternating Series
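The convergence claim for |r| < 1 is easy to check numerically; a small sketch comparing a partial sum of a geometric series against the closed form a / (1 - r):

```python
def geometric_partial_sum(a, r, n):
    """Partial sum a + a*r + a*r**2 + ... + a*r**(n - 1)."""
    return sum(a * r ** k for k in range(n))

# for |r| < 1 the partial sums approach the closed form a / (1 - r)
a, r = 1.0, 0.5
print(geometric_partial_sum(a, r, 50), a / (1 - r))
```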

Integral, Comparison, Ratio, & Root Test

Integral Test: https://tutorial.math.lamar.edu/classes/calcii/IntegralTest.aspx

Eigenvalues & Eigenvectors

Diagonalization

A Procedure for Diagonalizing an n × n Matrix

Step 1. Determine first whether the matrix is actually diagonalizable by searching for n linearly independent eigenvectors. One way to do this is to find a basis for each eigenspace and count the total number of vectors obtained. If there is a total of n vectors, then the matrix is diagonalizable; if the total is less than n, then it is not.
Step 2. If the matrix is diagonalizable, form the matrix P whose column vectors are the n basis vectors obtained in Step 1.
Step 3. P^-1 A P will be a diagonal matrix whose successive diagonal entries are the eigenvalues λ1, λ2, ..., λn that correspond to the successive columns of P.
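For a 2 x 2 matrix the three steps can be carried out by hand, since the characteristic polynomial is a quadratic; the sketch below (assuming distinct real eigenvalues) returns the eigenvalues and the matrix P whose columns are the eigenvectors:

```python
import math

def diagonalize_2x2(A):
    """Diagonalize a 2x2 matrix with distinct real eigenvalues.
    Step 1: eigenvalues from lam^2 - tr(A)*lam + det(A) = 0.
    Step 2: one eigenvector per eigenvalue, stored as the columns of P."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)   # assumes distinct real eigenvalues
    lams = [(tr + disc) / 2, (tr - disc) / 2]
    cols = []
    for lam in lams:
        # a nonzero solution of (A - lam*I)v = 0
        if abs(b) > 1e-12:
            cols.append([b, lam - a])
        elif abs(c) > 1e-12:
            cols.append([lam - d, c])
        else:                             # A is already diagonal
            cols.append([1.0, 0.0] if abs(lam - a) < 1e-12 else [0.0, 1.0])
    P = [[cols[0][0], cols[1][0]],
         [cols[0][1], cols[1][1]]]
    return lams, P

lams, P = diagonalize_2x2([[4.0, 1.0], [2.0, 3.0]])
print(lams)  # [5.0, 2.0]
```

Each column v of P satisfies Av = λv, so P^-1 A P = diag(λ1, λ2).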

Dynamic system and Markov Chain

Dynamical Systems

A dynamical system is a finite set of variables whose values change with time. The value of a variable at a point in time is called the state of the variable at that time, and the vector formed from these states is called the state vector (or state) of the dynamical system at that time. The primary objective is to analyze how the state vector of a dynamical system changes with time.

Markov Chain

Definition 1: A Markov chain is a dynamical system whose state vectors at a succession of equally spaced times are probability vectors, and for which the state vectors at successive times are related by an equation of the form x(k+1) = P x(k), in which P = [pij] is a stochastic matrix and pij is the probability that the system will be in state i at time t = k + 1 if it is in state j at time t = k. The matrix P is called the transition matrix for the system.

Definition 2: A stochastic matrix P is said to be regular if P or some positive power of P has all positive entries, and a Markov chain whose transition matrix is regular is said to be a regular Markov chain.
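The defining recurrence x(k+1) = P x(k) is a one-liner; the sketch below iterates a hypothetical 2-state chain (the transition matrix is made up for illustration) until the state vector settles at the steady state:

```python
def markov_step(P, x):
    """One transition x(k+1) = P x(k), with column-stochastic P."""
    n = len(x)
    return [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]

def steady_state(P, x, steps=200):
    """Iterate a regular Markov chain; the state vector converges to the
    steady-state probability vector q with Pq = q."""
    for _ in range(steps):
        x = markov_step(P, x)
    return x

# hypothetical 2-state transition matrix (columns sum to 1)
P = [[0.9, 0.5],
     [0.1, 0.5]]
x = steady_state(P, [1.0, 0.0])
print([round(v, 4) for v in x])
```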