# MATH 323 Lecture 20

« previous | Tuesday, November 6, 2012 | next »

## Orthogonality

${\displaystyle {\vec {x}}\perp {\vec {y}}\iff {\vec {x}}\cdot {\vec {y}}=0\iff \theta ={\frac {\pi }{2}}}$

Subspaces ${\displaystyle X,Y\subset \mathbb {R} ^{n}}$ are orthogonal, written ${\displaystyle X\perp Y}$, iff ${\displaystyle {\vec {x}}\perp {\vec {y}}}$ for all ${\displaystyle {\vec {x}}\in X}$ and all ${\displaystyle {\vec {y}}\in Y}$.

If ${\displaystyle X\cap Y\neq \{{\vec {0}}\}}$, take any nonzero ${\displaystyle {\vec {v}}\in X\cap Y}$; then ${\displaystyle {\vec {v}}\cdot {\vec {v}}=\left\|{\vec {v}}\right\|^{2}\neq 0}$, so ${\displaystyle X}$ is not orthogonal to ${\displaystyle Y}$. Hence orthogonal subspaces intersect only in ${\displaystyle \{{\vec {0}}\}}$.

For ${\displaystyle X\subset \mathbb {R} ^{n}}$, the set ${\displaystyle X^{\perp }=\left\{{\vec {y}}\in \mathbb {R} ^{n}:{\vec {y}}\perp {\vec {x}}{\text{ for all }}{\vec {x}}\in X\right\}}$ is the orthogonal complement of ${\displaystyle X}$. For example, the orthogonal complement of a plane through the origin in ${\displaystyle \mathbb {R} ^{3}}$ is the line spanned by its normal vector.
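The plane-and-normal example can be checked numerically; a small sketch with numpy (not part of the original notes, using the xy-plane in ${\displaystyle \mathbb {R} ^{3}}$):

```python
import numpy as np

# S = Span{(1,0,0), (0,1,0)} (the xy-plane) in R^3;
# its orthogonal complement S-perp is the z-axis, spanned by the normal vector.
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
n = np.array([0.0, 0.0, 1.0])  # normal vector spanning S-perp

# Every vector in S is orthogonal to every vector in S-perp:
x = 2.0 * u - 3.0 * v          # arbitrary element of S
y = 5.0 * n                    # arbitrary element of S-perp
print(np.dot(x, y))            # 0.0
```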

## Range

The range ${\displaystyle R(A)}$ of an ${\displaystyle m\times n}$ matrix ${\displaystyle A}$ (equivalently, of the linear transformation ${\displaystyle L_{A}:\mathbb {R} ^{n}\to \mathbb {R} ^{m}}$, ${\displaystyle L_{A}({\vec {x}})=A{\vec {x}}}$) is defined as

${\displaystyle R(L_{A})=R(A)=\left\{{\vec {y}}\in \mathbb {R} ^{m}:{\vec {y}}=A{\vec {x}}{\text{ for some }}{\vec {x}}\in \mathbb {R} ^{n}\right\}\subset \mathbb {R} ^{m}}$

For the transpose, ${\displaystyle R(A^{T})\subset \mathbb {R} ^{n}}$.

Note: the range of ${\displaystyle A}$ is nothing more than the column space of ${\displaystyle A}$, since ${\displaystyle A{\vec {x}}}$ is a linear combination of the columns of ${\displaystyle A}$.
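The column-space identity can be seen directly; a quick numpy sketch (not part of the original notes):

```python
import numpy as np

# A @ x is the linear combination of A's columns with weights x,
# so R(A) = column space of A.
A = np.array([[1.0, 0.0],
              [2.0, 0.0]])
x = np.array([3.0, 7.0])

combo = x[0] * A[:, 0] + x[1] * A[:, 1]  # same combination, written out
print(np.allclose(A @ x, combo))         # True
```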

## Theorem 5.2.1

Fundamental subspaces theorem

1. ${\displaystyle N(A)=R(A^{T})^{\perp }}$
2. ${\displaystyle N(A^{T})=R(A)^{\perp }}$

### Proof

For the first statement: ${\displaystyle A{\vec {x}}={\vec {0}}}$ iff ${\displaystyle {\vec {x}}}$ is orthogonal to every row of ${\displaystyle A}$, i.e. iff ${\displaystyle {\vec {x}}\in R(A^{T})^{\perp }}$. The second then follows from the first: let ${\displaystyle B=A^{T}}$, then ${\displaystyle N(A^{T})=N(B)=R(B^{T})^{\perp }=R(A)^{\perp }}$

### Example

{\displaystyle {\begin{aligned}A&={\begin{bmatrix}1&0\\2&0\end{bmatrix}}&A^{T}&={\begin{bmatrix}1&2\\0&0\end{bmatrix}}\\R(A)&=\mathrm {Span} {\begin{pmatrix}1\\2\end{pmatrix}}=\left\{\alpha {\begin{pmatrix}1\\2\end{pmatrix}}:\alpha \in \mathbb {R} \right\}\\N(A)&=\ldots =\mathrm {Span} ({\vec {e}}_{2})\\R(A^{T})&=\mathrm {Span} \left({\begin{pmatrix}1\\0\end{pmatrix}},{\begin{pmatrix}2\\0\end{pmatrix}}\right)=\mathrm {Span} ({\vec {e}}_{1})\\N(A^{T})&=\ldots =\mathrm {Span} {\begin{pmatrix}-2\\1\end{pmatrix}}\end{aligned}}}

1. ${\displaystyle N(A)\perp R(A^{T})}$
2. ${\displaystyle N(A^{T})\perp R(A)}$
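These orthogonality relations can be verified numerically for the example above; a sketch with numpy (not part of the original notes):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 0.0]])

n_A  = np.array([0.0, 1.0])    # basis of N(A)   = Span(e2)
r_At = np.array([1.0, 0.0])    # basis of R(A^T) = Span(e1)
n_At = np.array([-2.0, 1.0])   # basis of N(A^T)
r_A  = np.array([1.0, 2.0])    # basis of R(A)

print(np.allclose(A @ n_A, 0))     # True: n_A is in the null space of A
print(np.allclose(A.T @ n_At, 0))  # True: n_At is in the null space of A^T
print(np.dot(n_A, r_At))           # 0.0: N(A) is orthogonal to R(A^T)
print(np.dot(n_At, r_A))           # 0.0: N(A^T) is orthogonal to R(A)
```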

## Theorem 5.2.2

If ${\displaystyle S}$ is a subspace of ${\displaystyle \mathbb {R} ^{n}}$, then ${\displaystyle \dim S+\dim S^{\perp }=n=\dim \mathbb {R} ^{n}}$

Furthermore, if ${\displaystyle \{{\vec {x}}_{1},\ldots ,{\vec {x}}_{r}\}}$ is a basis for ${\displaystyle S}$ and ${\displaystyle \{{\vec {x}}_{r+1},\ldots ,{\vec {x}}_{n}\}}$ is a basis for ${\displaystyle S^{\perp }}$, then ${\displaystyle \{{\vec {x}}_{1},\ldots ,{\vec {x}}_{r},{\vec {x}}_{r+1},\ldots ,{\vec {x}}_{n}\}}$ is a basis for ${\displaystyle \mathbb {R} ^{n}}$

### Proof

If ${\displaystyle S\neq \{{\vec {0}}\}}$ and ${\displaystyle \{{\vec {x}}_{1},\ldots ,{\vec {x}}_{r}\}}$ is a basis for ${\displaystyle S}$, then ${\displaystyle \dim S=r}$.

Let ${\displaystyle X}$ be the ${\displaystyle r\times n}$ matrix whose ${\displaystyle i}$th row is ${\displaystyle {\vec {x}}_{i}^{T}}$. The rank of ${\displaystyle X}$ is ${\displaystyle r}$, and ${\displaystyle R(X^{T})=S}$.

${\displaystyle S^{\perp }=R(X^{T})^{\perp }=N(X)}$ by part 1 of Theorem 5.2.1, so ${\displaystyle \dim S^{\perp }=\dim N(X)=n-r}$ by the rank–nullity theorem.

Therefore ${\displaystyle \dim S+\dim S^{\perp }=r+(n-r)=n}$. This proves the first part of the theorem.

For the second part, the combined set ${\displaystyle \{{\vec {x}}_{1},\ldots ,{\vec {x}}_{n}\}}$ has ${\displaystyle n}$ elements, so it is a basis of ${\displaystyle \mathbb {R} ^{n}}$ iff it is linearly independent. Suppose

${\displaystyle \underbrace {c_{1}\,{\vec {x}}_{1}+\dots +c_{r}\,{\vec {x}}_{r}} _{y}+\underbrace {c_{r+1}\,{\vec {x}}_{r+1}+\dots +c_{n}\,{\vec {x}}_{n}} _{z}=0}$

From this equation, ${\displaystyle {\vec {y}}=-{\vec {z}}}$, so ${\displaystyle {\vec {y}}\in S}$ and ${\displaystyle {\vec {y}}=-{\vec {z}}\in S^{\perp }}$, i.e. ${\displaystyle {\vec {y}},{\vec {z}}\in S\cap S^{\perp }}$. Since ${\displaystyle S}$ and ${\displaystyle S^{\perp }}$ are orthogonal subspaces, ${\displaystyle S\cap S^{\perp }=\{{\vec {0}}\}}$, so ${\displaystyle {\vec {y}}={\vec {z}}={\vec {0}}}$, and linear independence of each basis forces all ${\displaystyle c_{i}=0}$.
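The dimension count in the first part can be checked numerically via the rank and null space; a sketch with numpy (not part of the original notes, using an example 2-dimensional subspace of ${\displaystyle \mathbb {R} ^{3}}$):

```python
import numpy as np

X = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])   # rows are a basis of S, a subspace of R^3
n = X.shape[1]

r = np.linalg.matrix_rank(X)      # dim S = rank(X)

# S-perp = N(X); read off dim N(X) = n - (number of nonzero singular values)
s = np.linalg.svd(X, compute_uv=False)
dim_null = n - int(np.sum(s > 1e-10))

print(r, dim_null, r + dim_null)  # 2 1 3  -- dim S + dim S-perp = n
```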

## Direct Sum

If ${\displaystyle U,V\subset W}$ are subspaces of a vector space ${\displaystyle W}$, and each ${\displaystyle w\in W}$ can be written uniquely as a sum ${\displaystyle u+v}$, where ${\displaystyle u\in U}$ and ${\displaystyle v\in V}$, then ${\displaystyle W}$ is the direct sum of ${\displaystyle U}$ and ${\displaystyle V}$, written ${\displaystyle W=U\oplus V}$

### Theorem 5.2.3

If ${\displaystyle S}$ is a subspace of ${\displaystyle \mathbb {R} ^{n}}$, then ${\displaystyle \mathbb {R} ^{n}=S\oplus S^{\perp }}$. In other words (or lack thereof):

${\displaystyle S\subset \mathbb {R} ^{n}\implies \mathbb {R} ^{n}=S\oplus S^{\perp }}$

#### Proof

Let ${\displaystyle \{{\vec {x}}_{1},\ldots ,{\vec {x}}_{r}\}}$ be a basis for ${\displaystyle S}$ and ${\displaystyle \{{\vec {x}}_{r+1},\ldots ,{\vec {x}}_{n}\}}$ a basis for ${\displaystyle S^{\perp }}$; by Theorem 5.2.2, together they form a basis ${\displaystyle \{{\vec {x}}_{1},\ldots ,{\vec {x}}_{n}\}}$ for ${\displaystyle \mathbb {R} ^{n}}$. Then any ${\displaystyle {\vec {x}}\in \mathbb {R} ^{n}}$ can be written as

${\displaystyle {\vec {x}}=\underbrace {c_{1}\,{\vec {x}}_{1}+\dots +c_{r}\,{\vec {x}}_{r}} _{\vec {y}}+\underbrace {c_{r+1}\,{\vec {x}}_{r+1}+\dots +c_{n}\,{\vec {x}}_{n}} _{\vec {z}}={\vec {y}}+{\vec {z}}}$

For uniqueness, suppose ${\displaystyle {\vec {x}}={\vec {u}}+{\vec {v}}}$ and ${\displaystyle {\vec {x}}={\vec {y}}+{\vec {z}}}$ with ${\displaystyle {\vec {u}},{\vec {y}}\in S}$ and ${\displaystyle {\vec {v}},{\vec {z}}\in S^{\perp }}$. Then ${\displaystyle {\vec {u}}-{\vec {y}}={\vec {z}}-{\vec {v}}\in S\cap S^{\perp }=\{{\vec {0}}\}}$, so ${\displaystyle {\vec {u}}={\vec {y}}}$ and ${\displaystyle {\vec {v}}={\vec {z}}}$: the decomposition is unique.
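The direct-sum decomposition is easy to see concretely; a numpy sketch (not part of the original notes, again using the xy-plane and z-axis in ${\displaystyle \mathbb {R} ^{3}}$):

```python
import numpy as np

# Decompose x in R^3 as y + z with y in S (the xy-plane) and
# z in S-perp (the z-axis): this realizes R^3 = S (+) S-perp.
x = np.array([3.0, -1.0, 4.0])
y = np.array([3.0, -1.0, 0.0])  # component of x in S
z = np.array([0.0, 0.0, 4.0])   # component of x in S-perp

print(np.allclose(x, y + z))    # True: x = y + z
print(np.dot(y, z))             # 0.0: the two components are orthogonal
```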

## Theorem 5.2.4

${\displaystyle (S^{\perp })^{\perp }=S}$

### Example

${\displaystyle A={\begin{bmatrix}1&1&2\\0&1&1\\1&3&4\end{bmatrix}}}$

Find basis for ${\displaystyle N(A)}$, ${\displaystyle R(A^{T})}$, ${\displaystyle N(A^{T})}$, and ${\displaystyle R(A)}$

${\displaystyle \mathrm {rref} A={\begin{bmatrix}1&0&1\\0&1&1\\0&0&0\end{bmatrix}}}$

Therefore, the nonzero rows ${\displaystyle \left\langle 1,0,1\right\rangle ,\left\langle 0,1,1\right\rangle }$ form a basis for ${\displaystyle R(A^{T})}$ (row operations preserve the row space).

${\displaystyle N(A)=\left\{\alpha \left\langle -1,-1,1\right\rangle :\alpha \in \mathbb {R} \right\}}$, so ${\displaystyle \left\langle -1,-1,1\right\rangle }$ is a basis for ${\displaystyle N(A)}$.

Repeat the above steps for ${\displaystyle A^{T}={\begin{bmatrix}1&0&1\\1&1&3\\2&1&4\end{bmatrix}}}$ to obtain bases for ${\displaystyle R(A)}$ and ${\displaystyle N(A^{T})}$.
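The bases found so far can be checked numerically; a numpy sketch (not part of the original notes):

```python
import numpy as np

A = np.array([[1.0, 1.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0]])

n_A  = np.array([-1.0, -1.0, 1.0])  # basis vector for N(A)
row1 = np.array([1.0, 0.0, 1.0])    # basis of R(A^T), from the rref rows
row2 = np.array([0.0, 1.0, 1.0])

print(np.allclose(A @ n_A, 0))               # True: n_A really is in N(A)
print(np.dot(n_A, row1), np.dot(n_A, row2))  # 0.0 0.0: N(A) ⊥ R(A^T)
```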

## Section 5.3: Least Squares

Goal: given a vector ${\displaystyle {\vec {b}}}$ that may lie outside a subspace, find the vector ${\displaystyle {\vec {p}}}$ in the subspace that best approximates ${\displaystyle {\vec {b}}}$.

### Theorem 5.3.1

Let ${\displaystyle S\subset \mathbb {R} ^{m}}$ be a subspace.

For each ${\displaystyle {\vec {b}}\in \mathbb {R} ^{m}}$, there is a unique element ${\displaystyle {\vec {p}}}$ of ${\displaystyle S}$ that is closest to ${\displaystyle {\vec {b}}}$, i.e.

${\displaystyle \|{\vec {b}}-{\vec {y}}\|>\|{\vec {b}}-{\vec {p}}\|}$ for all ${\displaystyle {\vec {y}}\in S}$ with ${\displaystyle {\vec {y}}\neq {\vec {p}}}$

Furthermore, ${\displaystyle {\vec {b}}-{\vec {p}}\in S^{\perp }}$

### Definition: Residual Vector

A vector ${\displaystyle {\hat {x}}}$ is a solution to the least squares problem ${\displaystyle A{\vec {x}}={\vec {b}}}$ iff ${\displaystyle {\vec {p}}=A{\hat {x}}}$ is the vector in ${\displaystyle R(A)}$ that is closest to ${\displaystyle {\vec {b}}}$.

Thus we know that ${\displaystyle {\vec {p}}}$ is the projection of ${\displaystyle {\vec {b}}}$ onto ${\displaystyle R(A)}$

${\displaystyle {\vec {b}}-{\vec {p}}={\vec {b}}-A{\hat {x}}=r({\hat {x}})\in R(A)^{\perp }}$, where ${\displaystyle r({\hat {x}})}$ is the residual vector.

Thus ${\displaystyle {\hat {x}}}$ is a solution of the least squares problem iff ${\displaystyle r({\hat {x}})\in R(A)^{\perp }}$.
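The orthogonality condition on the residual can be illustrated with a small overdetermined system; a numpy sketch (not part of the original notes, with an example matrix chosen for illustration):

```python
import numpy as np

# Overdetermined system: b does not lie in R(A), so solve in the
# least-squares sense.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
p = A @ x_hat              # projection of b onto R(A)
r = b - p                  # residual vector r(x_hat)

# The residual is orthogonal to R(A), i.e. A^T r = 0
print(x_hat)                    # [1/3, 1/3]
print(np.allclose(A.T @ r, 0))  # True
```

Equivalently, ${\displaystyle A^{T}r({\hat {x}})={\vec {0}}}$ gives the normal equations ${\displaystyle A^{T}A{\hat {x}}=A^{T}{\vec {b}}}$, which `lstsq` solves in a numerically stable way.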