MATH 323 Lecture 5

« previous | Tuesday, September 11, 2012 | next »

Elementary Matrices

System of equations now in form ${\displaystyle A{\vec {x}}={\vec {b}}}$, where ${\displaystyle A}$ is ${\displaystyle m\times n}$, ${\displaystyle {\vec {x}}\in \mathbb {R} ^{n}}$, and ${\displaystyle {\vec {b}}\in \mathbb {R} ^{m}}$

For a non-singular (invertible) ${\displaystyle m\times m}$ matrix ${\displaystyle M}$, the system ${\displaystyle M\,A\,{\vec {x}}=M\,{\vec {b}}}$ is equivalent to the original. ${\displaystyle M}$ could even be a product of several matrices: ${\displaystyle M=E_{k}\,E_{k-1}\,\dots \,E_{2}\,E_{1}}$, where each ${\displaystyle E_{i}}$ is an elementary matrix.

Types of elementary matrices (each corresponds to one of the elementary row operations):

1. ${\displaystyle E}$ obtained by interchanging two rows of identity matrix ${\displaystyle I}$
• ${\displaystyle EA}$ permutes rows
• ${\displaystyle AE}$ permutes columns
2. ${\displaystyle E}$ obtained by multiplying row of identity matrix ${\displaystyle I}$ by a nonzero constant ${\displaystyle \alpha }$
• ${\displaystyle EA}$ scales corresponding row by ${\displaystyle \alpha }$
• ${\displaystyle AE}$ scales corresponding column by ${\displaystyle \alpha }$.
3. ${\displaystyle E}$ obtained from ${\displaystyle I}$ by adding a multiple of one row ${\displaystyle i}$ to another row ${\displaystyle i'}$:
• ${\displaystyle EA}$ adds multiple of row ${\displaystyle i}$ to row ${\displaystyle i'}$
• ${\displaystyle AE}$ adds multiple of column ${\displaystyle i}$ to column ${\displaystyle i'}$
Note: ${\displaystyle I}$ (and thus ${\displaystyle E}$) is always a square matrix, but ${\displaystyle A}$ can be of any size

${\displaystyle E}$ is an ${\displaystyle n\times n}$ matrix: we can think of it as being obtained from ${\displaystyle I}$ by either a row operation or a column operation. If ${\displaystyle A}$ is an ${\displaystyle n\times r}$ matrix, premultiplying ${\displaystyle A}$ by ${\displaystyle E}$ has the effect of performing the same row operation on ${\displaystyle A}$. If ${\displaystyle B}$ is an ${\displaystyle m\times n}$ matrix, postmultiplying ${\displaystyle B}$ by ${\displaystyle E}$ is equivalent to performing that same column operation on ${\displaystyle B}$.
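A quick numpy sketch (my own check, not from lecture) of all three types acting on a ${\displaystyle 3\times 3}$ matrix. Note that premultiplying performs the row operation, while postmultiplying performs the corresponding column operation:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
I = np.eye(3)

# Type 1: interchange rows 0 and 1 of I
E1 = I[[1, 0, 2]]
# Type 2: scale row 1 of I by alpha = 5
E2 = I.copy(); E2[1, 1] = 5.0
# Type 3: add 2 * row 0 to row 2 of I
E3 = I.copy(); E3[2, 0] = 2.0

# Premultiplying applies the row operation to A ...
assert np.allclose(E1 @ A, A[[1, 0, 2]])       # rows 0, 1 swapped
assert np.allclose((E2 @ A)[1], 5.0 * A[1])    # row 1 scaled by 5
assert np.allclose((E3 @ A)[2], 2.0 * A[0] + A[2])

# ... while postmultiplying applies the corresponding column operation
assert np.allclose(A @ E1, A[:, [1, 0, 2]])    # columns 0, 1 swapped
assert np.allclose((A @ E2)[:, 1], 5.0 * A[:, 1])
```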

Theorem 1.4.1

If ${\displaystyle E}$ is an elementary matrix, then ${\displaystyle E}$ is nonsingular and ${\displaystyle E^{-1}}$ is an elementary matrix of the same type as ${\displaystyle E}$

1. ${\displaystyle EE=I}$, since interchanging two rows twice undoes the effect, so ${\displaystyle E=E^{-1}}$
2. if ${\displaystyle E}$ has an ${\displaystyle \alpha }$ at position ${\displaystyle i,i}$, then ${\displaystyle E^{-1}}$ will have ${\displaystyle 1/\alpha }$ at ${\displaystyle i,i}$, since the two scalings cancel
3. if ${\displaystyle E}$ has an ${\displaystyle m}$ at position ${\displaystyle i,j}$, where ${\displaystyle i\neq j}$, then ${\displaystyle E^{-1}}$ will have ${\displaystyle -m}$ at ${\displaystyle i,j}$. In multiplying ${\displaystyle E}$ by ${\displaystyle E^{-1}}$, the entry at position ${\displaystyle i,j}$ is the product of the ${\displaystyle i}$th row and ${\displaystyle j}$th column: ${\displaystyle \sum _{k=1}^{n}e_{i,k}\,\left(E^{-1}\right)_{k,j}=m-m=0}$, and every other entry agrees with ${\displaystyle I}$
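The three cases can be verified numerically; here is a numpy sketch (my own illustration) of each inverse:

```python
import numpy as np

I = np.eye(3)

# Type 1: a row interchange is its own inverse
E1 = I[[1, 0, 2]]
assert np.allclose(E1 @ E1, I)

# Type 2: scaling row i by alpha inverts to scaling by 1/alpha
E2 = I.copy(); E2[1, 1] = 5.0
E2inv = I.copy(); E2inv[1, 1] = 1 / 5.0
assert np.allclose(E2 @ E2inv, I)

# Type 3: adding m * row i inverts to adding -m * row i
E3 = I.copy(); E3[2, 0] = 2.0
E3inv = I.copy(); E3inv[2, 0] = -2.0
assert np.allclose(E3 @ E3inv, I)
```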

Row Equivalence

${\displaystyle B}$ is row equivalent to ${\displaystyle A}$ (written ${\displaystyle B\sim A}$; note ${\displaystyle B\sim A\leftrightarrow A\sim B}$) iff there exists a finite sequence ${\displaystyle E_{1},E_{2},\ldots ,E_{k}}$ of elementary matrices such that ${\displaystyle B=E_{k}\,E_{k-1}\,\dots \,E_{2}\,E_{1}\,A}$. In other words, ${\displaystyle B}$ can be obtained from ${\displaystyle A}$ by a finite number of row operations.

By extension, two augmented matrices ${\displaystyle (A|{\vec {b}})}$ and ${\displaystyle (B|{\vec {c}})}$ are row equivalent iff ${\displaystyle A{\vec {x}}={\vec {b}}}$ and ${\displaystyle B{\vec {x}}={\vec {c}}}$ are equivalent systems.
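A small numpy sketch (my own example) of why row operations on the augmented matrix preserve the solution set:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 3.0]])
b = np.array([3.0, 8.0])

# Apply one row operation to the augmented system: add -2 * row 0 to row 1
E = np.eye(2); E[1, 0] = -2.0
B, c = E @ A, E @ b

# (A | b) ~ (B | c), and both systems have the same solution
x1 = np.linalg.solve(A, b)
x2 = np.linalg.solve(B, c)
assert np.allclose(x1, x2)
```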

Theorem 1.4.2

Equivalent Conditions for Nonsingularity

Let ${\displaystyle A}$ be an ${\displaystyle n\times n}$ matrix. Then the following are equivalent:

1. ${\displaystyle A}$ is nonsingular
2. ${\displaystyle A{\vec {x}}={\vec {0}}}$ has only the trivial solution ${\displaystyle {\vec {0}}}$
3. ${\displaystyle A}$ is row equivalent to identity matrix of size ${\displaystyle n}$ (${\displaystyle A\sim I_{n}}$)

(1) → (2): Let ${\displaystyle {\hat {x}}}$ be any solution of ${\displaystyle A{\vec {x}}={\vec {0}}}$. Then ${\displaystyle A{\hat {x}}={\vec {0}}}$ gives ${\displaystyle A^{-1}A{\hat {x}}=A^{-1}{\vec {0}}}$, i.e. ${\displaystyle I{\hat {x}}={\vec {0}}}$, so ${\displaystyle {\hat {x}}={\vec {0}}}$

(2) → (3): Rewrite ${\displaystyle A{\vec {x}}={\vec {0}}}$ in row echelon form as ${\displaystyle U{\vec {x}}={\vec {0}}}$. If any diagonal entry of ${\displaystyle U}$ were 0, there would be at least one free variable and infinitely many solutions, at least one of them nonzero, contradicting (2). So every diagonal entry of ${\displaystyle U}$ must be a leading 1, and the reduced row echelon form is the identity matrix.

(3) → (1): If ${\displaystyle A}$ is row equivalent to ${\displaystyle I}$, then there exists a sequence of elementary matrices ${\displaystyle E_{1},\ldots ,E_{k}}$ such that ${\displaystyle E_{k}\,E_{k-1}\,\dots \,E_{1}\,A=I}$. Therefore ${\displaystyle A=E_{1}^{-1}\,E_{2}^{-1}\,\dots \,E_{k}^{-1}}$ (note reverse order), a product of nonsingular matrices, so ${\displaystyle A}$ is nonsingular with ${\displaystyle A^{-1}=E_{k}\,E_{k-1}\,\dots \,E_{1}}$.
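The (3) → (1) direction is exactly the Gauss–Jordan inversion procedure: the product of the elementary matrices that reduce ${\displaystyle A}$ to ${\displaystyle I}$ is ${\displaystyle A^{-1}}$. A numpy sketch with a hand-picked ${\displaystyle 2\times 2}$ example (my own, not from lecture):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 4.0]])
I = np.eye(2)

# Row-reduce A to I, recording each elementary matrix:
E1 = I.copy(); E1[0, 0] = 0.5    # scale row 0 by 1/2
E2 = I.copy(); E2[1, 0] = -6.0   # add -6 * row 0 to row 1
E3 = I.copy(); E3[0, 1] = -0.5   # add -1/2 * row 1 to row 0

# E3 E2 E1 A = I, so the same product is A^{-1}
assert np.allclose(E3 @ E2 @ E1 @ A, I)
assert np.allclose(E3 @ E2 @ E1, np.linalg.inv(A))
```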

Corollary 1.4.3

The ${\displaystyle n\times n}$ system of equations ${\displaystyle A{\vec {x}}={\vec {b}}}$ has a unique solution iff ${\displaystyle A}$ is nonsingular.

• If ${\displaystyle A}$ is nonsingular, ${\displaystyle A{\vec {x}}={\vec {b}}}$ has the unique solution ${\displaystyle {\hat {x}}=A^{-1}{\vec {b}}}$
• Assume a unique solution ${\displaystyle {\hat {x}}}$ exists. If ${\displaystyle A}$ were singular, then the homogeneous system ${\displaystyle A{\vec {x}}={\vec {0}}}$ would have a nonzero solution ${\displaystyle {\vec {y}}\neq {\vec {0}}}$ (by Theorem 1.4.2). But then ${\displaystyle {\hat {x}}+{\vec {y}}}$ would be another solution, since ${\displaystyle A({\hat {x}}+{\vec {y}})=A{\hat {x}}+A{\vec {y}}={\vec {b}}+{\vec {0}}={\vec {b}}}$, contradicting uniqueness.
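The corollary in numpy terms (my own sketch): when ${\displaystyle A}$ is nonsingular, the system has exactly one solution, which `numpy.linalg.solve` computes directly.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 4.0]])
b = np.array([5.0, 16.0])

# A is nonsingular (nonzero determinant), so the solution is unique
assert not np.isclose(np.linalg.det(A), 0.0)

x = np.linalg.solve(A, b)   # x = A^{-1} b
assert np.allclose(A @ x, b)
```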