# MATH 323 Lecture 24

Tuesday, November 27, 2012

## Theorem 5.6.1 (Gram-Schmidt Process)

Let ${\displaystyle \{x_{1},\ldots ,x_{n}\}}$ be a basis for the inner product space ${\displaystyle (V,\langle \cdot ,\cdot \rangle )}$. Let ${\displaystyle u_{1}={\tfrac {x_{1}}{\|x_{1}\|}}}$ and define ${\displaystyle u_{2},\ldots ,u_{n}}$ recursively by

${\displaystyle u_{k+1}={\frac {x_{k+1}-p_{k}}{\|x_{k+1}-p_{k}\|}}}$

where ${\displaystyle p_{k}=\langle x_{k+1},u_{1}\rangle u_{1}+\langle x_{k+1},u_{2}\rangle u_{2}+\dots +\langle x_{k+1},u_{k}\rangle u_{k}}$ is the projection of ${\displaystyle x_{k+1}}$ onto ${\displaystyle V_{k}=\mathrm {Span} (u_{1},\ldots ,u_{k})=\mathrm {Span} (x_{1},\ldots ,x_{k})}$.

The set ${\displaystyle \{u_{1},\ldots ,u_{n}\}}$ is an orthonormal basis for ${\displaystyle V}$.
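The recursion above can be sketched in code. A minimal NumPy implementation (NumPy and the standard Euclidean inner product on ${\displaystyle \mathbb {R} ^{m}}$ are assumptions, not part of the notes):

```python
import numpy as np

def gram_schmidt(X):
    """Orthonormalize the columns of X (assumed linearly independent)
    via the classical Gram-Schmidt process of Theorem 5.6.1."""
    m, n = X.shape
    U = np.zeros((m, n))
    for k in range(n):
        # p: projection of x_{k+1} onto Span(u_1, ..., u_k)
        p = U[:, :k] @ (U[:, :k].T @ X[:, k])
        v = X[:, k] - p          # component orthogonal to the span
        U[:, k] = v / np.linalg.norm(v)
    return U
```

The columns of the returned matrix are orthonormal and span the same subspaces, column by column, as the input.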

### Notation

From now on,

{\displaystyle {\begin{aligned}r_{kk}&=\|x_{k}-p_{k-1}\|\\r_{ik}&=\langle x_{k},u_{i}\rangle \end{aligned}}}

### Proof by Induction

Base case: ${\displaystyle \mathrm {Span} (x_{1})=\mathrm {Span} (u_{1})}$, so ${\displaystyle \{u_{1}\}}$ is an orthonormal basis for ${\displaystyle V_{1}=\mathrm {Span} (x_{1})}$.

Suppose ${\displaystyle u_{1},\ldots ,u_{k}}$ have been constructed so that they form an orthonormal basis for ${\displaystyle V_{k}}$ and ${\displaystyle \mathrm {Span} \{x_{1},\ldots ,x_{k}\}=\mathrm {Span} \{u_{1},\ldots ,u_{k}\}}$.

${\displaystyle p_{k}\in \mathrm {Span} \{u_{1},\ldots ,u_{k}\}\implies x_{k+1}-p_{k}\in \mathrm {Span} \{x_{1},\ldots ,x_{k},x_{k+1}\}}$

${\displaystyle x_{k+1}-p_{k}\neq 0}$ because ${\displaystyle x_{1},\ldots ,x_{k+1}}$ are linearly independent.

${\displaystyle x_{k+1}-p_{k}=x_{k+1}-\sum _{i=1}^{k}c_{i}\,u_{i}}$, where ${\displaystyle c_{i}=\langle x_{k+1},u_{i}\rangle }$

Moreover, for ${\displaystyle i\leq k}$ we have ${\displaystyle \langle x_{k+1}-p_{k},u_{i}\rangle =\langle x_{k+1},u_{i}\rangle -\langle x_{k+1},u_{i}\rangle =0}$, so ${\displaystyle u_{k+1}}$ is orthogonal to each of ${\displaystyle u_{1},\ldots ,u_{k}}$. Therefore ${\displaystyle u_{1},\ldots ,u_{k+1}}$ is an orthonormal set of vectors in ${\displaystyle \mathrm {Span} \{x_{1},\ldots ,x_{k+1}\}}$, and since that subspace has dimension ${\displaystyle k+1}$, the set is a basis for it.

Thus the theorem holds by induction.

Q.E.D.

### Example

Find an orthonormal basis for the range of the following matrix:

${\displaystyle A={\begin{bmatrix}1&-1&4\\1&4&-2\\1&4&2\\1&-1&0\end{bmatrix}}}$

Column space is defined by ${\displaystyle \mathrm {Span} \left\{{\begin{pmatrix}1\\1\\1\\1\end{pmatrix}},{\begin{pmatrix}-1\\4\\4\\-1\end{pmatrix}},{\begin{pmatrix}4\\-2\\2\\0\end{pmatrix}}\right\}=\mathrm {Span} \{{\vec {a}}_{1},{\vec {a}}_{2},{\vec {a}}_{3}\}}$

{\displaystyle {\begin{aligned}r_{11}&=\left\|{\vec {a}}_{1}\right\|={\sqrt {1^{2}+1^{2}+1^{2}+1^{2}}}=2\\{\vec {q}}_{1}&={\frac {{\vec {a}}_{1}}{r_{11}}}=\left\langle {\tfrac {1}{2}},{\tfrac {1}{2}},{\tfrac {1}{2}},{\tfrac {1}{2}}\right\rangle \\r_{12}&=\langle {\vec {a}}_{2},{\vec {q}}_{1}\rangle =3\\{\vec {p}}_{1}&=r_{12}\,{\vec {q}}_{1}=3{\vec {q}}_{1}\\{\vec {a}}_{2}-{\vec {p}}_{1}&=\left\langle -{\frac {5}{2}},{\frac {5}{2}},{\frac {5}{2}},-{\frac {5}{2}}\right\rangle \\r_{22}&=\|{\vec {a}}_{2}-{\vec {p}}_{1}\|=5\\{\vec {q}}_{2}&={\frac {{\vec {a}}_{2}-{\vec {p}}_{1}}{r_{22}}}=\left\langle -{\frac {1}{2}},{\frac {1}{2}},{\frac {1}{2}},-{\frac {1}{2}}\right\rangle \\r_{13}&=\langle {\vec {a}}_{3},{\vec {q}}_{1}\rangle =2\\r_{23}&=\langle {\vec {a}}_{3},{\vec {q}}_{2}\rangle =-2\\{\vec {p}}_{2}&=r_{13}\,{\vec {q}}_{1}+r_{23}\,{\vec {q}}_{2}=\left\langle 2,0,0,2\right\rangle \\{\vec {a}}_{3}-{\vec {p}}_{2}&=\left\langle 2,-2,2,-2\right\rangle \\r_{33}&=\|{\vec {a}}_{3}-{\vec {p}}_{2}\|=4\\{\vec {q}}_{3}&={\frac {{\vec {a}}_{3}-{\vec {p}}_{2}}{r_{33}}}=\left\langle {\frac {1}{2}},-{\frac {1}{2}},{\frac {1}{2}},-{\frac {1}{2}}\right\rangle \end{aligned}}}

Therefore, ${\displaystyle \{{\vec {q}}_{1},{\vec {q}}_{2},{\vec {q}}_{3}\}=\left\{{\begin{pmatrix}{\frac {1}{2}}\\{\frac {1}{2}}\\{\frac {1}{2}}\\{\frac {1}{2}}\end{pmatrix}},{\begin{pmatrix}-{\frac {1}{2}}\\{\frac {1}{2}}\\{\frac {1}{2}}\\-{\frac {1}{2}}\end{pmatrix}},{\begin{pmatrix}{\frac {1}{2}}\\-{\frac {1}{2}}\\{\frac {1}{2}}\\-{\frac {1}{2}}\end{pmatrix}}\right\}}$ is an orthonormal basis for the range of ${\displaystyle A}$.
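The arithmetic above can be retraced numerically. A NumPy sketch (NumPy is an assumption, not part of the notes) that follows the same steps:

```python
import numpy as np

# Columns of A from the example
A = np.array([[1., -1, 4],
              [1,  4, -2],
              [1,  4,  2],
              [1, -1,  0]])
a1, a2, a3 = A.T

q1 = a1 / np.linalg.norm(a1)                           # r11 = 2
r12 = q1 @ a2                                          # 3
q2 = (a2 - r12 * q1) / np.linalg.norm(a2 - r12 * q1)   # r22 = 5
r13, r23 = q1 @ a3, q2 @ a3                            # 2, -2
v3 = a3 - r13 * q1 - r23 * q2
q3 = v3 / np.linalg.norm(v3)                           # r33 = 4
```

Each `q` agrees with the hand computation, and the three vectors are pairwise orthogonal unit vectors.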

## Theorem 5.6.2 (Gram-Schmidt QR Factorization)

If ${\displaystyle A}$ is an ${\displaystyle m\times n}$ matrix of rank ${\displaystyle n}$, then ${\displaystyle A}$ can be factored into a product ${\displaystyle QR}$, where ${\displaystyle Q}$ is an ${\displaystyle m\times n}$ matrix with orthonormal column vectors and ${\displaystyle R}$ is an upper triangular ${\displaystyle n\times n}$ matrix whose diagonal entries are all positive.

In the Gram-Schmidt Orthogonalization Process, ${\displaystyle {\vec {p}}_{i-1}}$ denotes the projection of ${\displaystyle {\vec {x}}_{i}}$ onto ${\displaystyle \mathrm {Span} ({\vec {q}}_{1},\ldots ,{\vec {q}}_{i-1})}$. The entries of ${\displaystyle R}$ are

${\displaystyle r_{ij}={\begin{cases}\|{\vec {x}}_{1}\|&i=j=1\\\|{\vec {x}}_{i}-{\vec {p}}_{i-1}\|&i=j>1\\\langle {\vec {q}}_{i},{\vec {x}}_{j}\rangle &i<j\end{cases}}}$

We use the defined values above to form an upper triangular matrix ${\displaystyle R}$

${\displaystyle R=\left({\begin{cases}r_{ij}&i\leq j\\0&{\text{otherwise}}\end{cases}}\right)}$

such that ${\displaystyle A=QR}$, where ${\displaystyle Q=\left({\vec {q}}_{1},\ldots ,{\vec {q}}_{n}\right)}$.
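A library routine produces the same factorization. A sketch using NumPy's `numpy.linalg.qr` on the matrix from the earlier example (NumPy is an assumption; a sign fix is included because the routine does not guarantee positive diagonal entries in ${\displaystyle R}$):

```python
import numpy as np

A = np.array([[1., -1, 4],
              [1,  4, -2],
              [1,  4,  2],
              [1, -1,  0]])
Q, R = np.linalg.qr(A)   # reduced QR: Q is 4x3, R is 3x3 upper triangular

# Flip signs so every diagonal entry of R is positive, matching the
# Gram-Schmidt convention of Theorem 5.6.2 (Q's columns flip with them)
s = np.sign(np.diag(R))
Q, R = Q * s, (R.T * s).T
```

Because a full-rank ${\displaystyle QR}$ factorization with positive diagonal is unique, the result matches the Gram-Schmidt computation exactly.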

### Example

From the previous example,

{\displaystyle {\begin{aligned}R&={\begin{bmatrix}r_{11}&r_{12}&r_{13}\\0&r_{22}&r_{23}\\0&0&r_{33}\end{bmatrix}}={\begin{bmatrix}2&3&2\\0&5&-2\\0&0&4\end{bmatrix}}\\Q&=({\vec {q}}_{1},{\vec {q}}_{2},{\vec {q}}_{3})\end{aligned}}}

### Polynomial Space Example

Consider ${\displaystyle P_{3}}$, the space of polynomials of degree less than 3 (i.e. quadratic and lower), with the inner product:

${\displaystyle \langle p,q\rangle =\sum _{i=1}^{3}p(c_{i})\,q(c_{i})}$, where ${\displaystyle (c_{1},c_{2},c_{3})=(-1,0,1)}$

${\displaystyle 1}$, ${\displaystyle x}$, and ${\displaystyle x^{2}}$ form a basis for ${\displaystyle P_{3}}$, but this basis is not orthonormal with respect to the inner product just defined.

Find an orthonormal basis ${\displaystyle \{u_{1},u_{2},u_{3}\}}$ for ${\displaystyle P_{3}}$.

{\displaystyle {\begin{aligned}\|x_{1}\|&=\|1\|={\sqrt {\langle 1,1\rangle }}={\sqrt {1\cdot 1+1\cdot 1+1\cdot 1}}={\sqrt {3}}\\u_{1}&={\frac {x_{1}}{\|x_{1}\|}}={\frac {1}{\sqrt {3}}}\\p_{1}&=\left\langle x,{\frac {1}{\sqrt {3}}}\right\rangle {\frac {1}{\sqrt {3}}}=0\\x_{2}-p_{1}&=x\\\|x_{2}-p_{1}\|&={\sqrt {\langle x,x\rangle }}={\sqrt {2}}\\u_{2}&={\frac {x}{\sqrt {2}}}\\p_{2}&=\left\langle x^{2},{\frac {1}{\sqrt {3}}}\right\rangle {\frac {1}{\sqrt {3}}}+\left\langle x^{2},{\frac {x}{\sqrt {2}}}\right\rangle {\frac {x}{\sqrt {2}}}={\frac {2}{3}}\\\|x^{2}-p_{2}\|&={\sqrt {\left\langle x^{2}-{\frac {2}{3}},x^{2}-{\frac {2}{3}}\right\rangle }}={\sqrt {\frac {2}{3}}}={\frac {\sqrt {6}}{3}}\\u_{3}&={\frac {x^{2}-{\frac {2}{3}}}{\sqrt {\frac {2}{3}}}}\end{aligned}}}
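The three resulting polynomials can be checked against the discrete inner product directly. A NumPy sketch (NumPy and the lambda-based representation of polynomials are assumptions for illustration):

```python
import numpy as np

# Discrete inner product <p, q> = sum_i p(c_i) q(c_i) at c = (-1, 0, 1)
c = np.array([-1.0, 0.0, 1.0])

def ip(p, q):
    return np.sum(p(c) * q(c))

# The orthonormal basis computed above
u1 = lambda t: np.ones_like(t) / np.sqrt(3)    # ||1|| = sqrt(3)
u2 = lambda t: t / np.sqrt(2)                  # <x, u1> = 0, ||x|| = sqrt(2)
u3 = lambda t: (t**2 - 2/3) / np.sqrt(2/3)     # ||x^2 - 2/3|| = sqrt(6)/3
```

All three have unit norm and are pairwise orthogonal under this inner product.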

## Eigenvalues and Eigenvectors

Let ${\displaystyle A}$ be an ${\displaystyle n\times n}$ matrix.

A scalar ${\displaystyle \lambda }$ is an eigenvalue of ${\displaystyle A}$ if there is a nonzero vector ${\displaystyle {\vec {x}}\in \mathbb {C} ^{n}}$ such that

${\displaystyle A\,{\vec {x}}=\lambda \,{\vec {x}}}$

${\displaystyle {\vec {x}}}$ is said to be an eigenvector belonging to ${\displaystyle \lambda }$.

${\displaystyle \lambda }$ is an eigenvalue of ${\displaystyle A}$ if and only if any (hence all) of the following hold:

• ${\displaystyle (A-\lambda \,I){\vec {x}}={\vec {0}}}$ has a nontrivial solution
• ${\displaystyle \left|A-\lambda \,I\right|=0}$ (important: this gives the characteristic equation)
• ${\displaystyle A-\lambda \,I}$ is singular
• ${\displaystyle N(A-\lambda \,I)\neq \{{\vec {0}}\}}$

### Example

${\displaystyle A={\begin{bmatrix}4&-2\\1&1\end{bmatrix}}}$, ${\displaystyle x={\begin{bmatrix}2\\1\end{bmatrix}}}$

${\displaystyle A\,{\vec {x}}={\begin{bmatrix}6\\3\end{bmatrix}}=3{\vec {x}}}$.

Therefore, ${\displaystyle \lambda =3}$ is an eigenvalue, and ${\displaystyle {\vec {x}}=\langle 2,1\rangle }$ is an eigenvector belonging to ${\displaystyle \lambda =3}$.

If ${\displaystyle {\vec {x}}}$ is an eigenvector belonging to ${\displaystyle \lambda }$, then ${\displaystyle \mu \,{\vec {x}}}$ is also an eigenvector belonging to ${\displaystyle \lambda }$ (for nonzero ${\displaystyle \mu }$):

${\displaystyle A(\mu \,{\vec {x}})=\mu \,A\,{\vec {x}}=\mu \,\lambda \,{\vec {x}}}$
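These facts can be verified numerically. A NumPy sketch (NumPy is an assumption) using the example matrix above:

```python
import numpy as np

A = np.array([[4., -2],
              [1,  1]])
x = np.array([2., 1])

# The eigenpair from the example: A x = 3 x
assert np.allclose(A @ x, 3 * x)
# Any nonzero scalar multiple of x is also an eigenvector for lambda = 3
assert np.allclose(A @ (7 * x), 3 * (7 * x))
# A - 3 I is singular, so its determinant vanishes
assert np.isclose(np.linalg.det(A - 3 * np.eye(2)), 0)
```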

### Characteristic Equation

${\displaystyle p(\lambda )=\left|A-\lambda \,I\right|=\det(A-\lambda \,I)}$ is called the characteristic polynomial

${\displaystyle \left|A-\lambda \,I\right|=0}$ is called the characteristic equation

Either way, the form is

${\displaystyle {\begin{vmatrix}a_{11}-\lambda &\dots &a_{1n}\\\vdots &\ddots &\vdots \\a_{n1}&\dots &a_{nn}-\lambda \end{vmatrix}}}$

#### Example 1

${\displaystyle A={\begin{bmatrix}3&2\\3&-2\end{bmatrix}}}$

The characteristic equation is ${\displaystyle {\begin{vmatrix}3-\lambda &2\\3&-2-\lambda \end{vmatrix}}=\lambda ^{2}-\lambda -12=0}$.

Solutions are ${\displaystyle \lambda =4,-3}$.

Eigenvectors can be obtained by solving ${\displaystyle (A-\lambda \,I){\vec {x}}={\vec {0}}}$ for each eigenvalue.
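That final step can also be done numerically. A sketch (NumPy assumed) that extracts a null-space vector of ${\displaystyle A-\lambda \,I}$ from the SVD for each eigenvalue of Example 1's matrix:

```python
import numpy as np

A = np.array([[3., 2],
              [3, -2]])

def null_vector(M):
    """A unit vector spanning N(M), taken as the right singular vector
    belonging to the smallest singular value of M."""
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]

# For each eigenvalue, the null vector of A - lambda I is an eigenvector
for lam in (4.0, -3.0):
    v = null_vector(A - lam * np.eye(2))
    assert np.allclose(A @ v, lam * v)
```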