# MATH 323 Theorems

## Chapter 1

### Theorem 1.3.2

For all ${\displaystyle \alpha ,\beta \in \mathbb {R} }$ and all matrices ${\displaystyle A,B,C}$ for which the indicated operations are defined:

1. ${\displaystyle A+B=B+A}$ (commutativity of addition)
2. ${\displaystyle (A+B)+C=A+(B+C)}$ (associativity of addition)
3. ${\displaystyle (A\,B)\,C=A\,(B\,C)}$ (associativity of multiplication)
4. ${\displaystyle A\,(B+C)=A\,B+A\,C}$ (left distributivity)
5. ${\displaystyle (A+B)\,C=A\,C+B\,C}$ (right distributivity)
6. ${\displaystyle (\alpha \,\beta )A=\alpha (\beta \,A)}$
7. ${\displaystyle \alpha \,(A\,B)=(\alpha \,A)\,B=A\,(\alpha \,B)}$
8. ${\displaystyle (\alpha +\beta )\,A=\alpha \,A+\beta \,A}$
9. ${\displaystyle \alpha \,(A+B)=\alpha \,A+\alpha \,B}$

### Theorem 1.3.3

If ${\displaystyle A}$ and ${\displaystyle B}$ are nonsingular ${\displaystyle n\times n}$ matrices, then ${\displaystyle A\,B}$ is also nonsingular and ${\displaystyle (A\,B)^{-1}=B^{-1}\,A^{-1}}$

(note the opposite order on the right-hand side)

#### Corollary

For nonsingular matrices ${\displaystyle A_{1},\ldots ,A_{k}}$, ${\displaystyle A_{1}\,A_{2}\,\dots \,A_{k}}$ is also nonsingular and ${\displaystyle (A_{1}\,A_{2}\,\dots \,A_{k})^{-1}=A_{k}^{-1}\,\dots \,A_{2}^{-1}\,A_{1}^{-1}}$
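The reversed order is easy to sanity-check numerically. A minimal sketch (the matrices ${\displaystyle A,B}$ and the `inv2` helper are arbitrary illustrations, not from the text), using exact `Fraction` arithmetic:

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    """Inverse of a nonsingular 2x2 matrix via the adjugate formula."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    d = Fraction(1) / det
    return [[ A[1][1] * d, -A[0][1] * d],
            [-A[1][0] * d,  A[0][0] * d]]

A = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(1)]]
B = [[Fraction(1), Fraction(3)], [Fraction(0), Fraction(1)]]

# (AB)^-1 equals B^-1 A^-1 (reversed order), not A^-1 B^-1.
lhs = inv2(matmul(A, B))
rhs = matmul(inv2(B), inv2(A))
print(lhs == rhs)  # True
```

With these particular matrices, `matmul(inv2(A), inv2(B))` differs from `lhs`, which shows the order on the right-hand side really does matter.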

### Theorem 1.4.2

Equivalent Conditions for Nonsingularity

Let ${\displaystyle A}$ be an ${\displaystyle n\times n}$ matrix. Then the following are equivalent:

1. ${\displaystyle A}$ is nonsingular
2. ${\displaystyle A{\vec {x}}={\vec {0}}}$ has only the trivial solution ${\displaystyle {\vec {x}}={\vec {0}}}$
3. ${\displaystyle A}$ is row equivalent to the identity matrix of size ${\displaystyle n}$ (${\displaystyle A\sim I_{n}}$)

#### Proof

(1) → (2): Let ${\displaystyle {\hat {x}}}$ be any solution of ${\displaystyle A{\vec {x}}={\vec {0}}}$. Multiplying ${\displaystyle A{\hat {x}}={\vec {0}}}$ on the left by ${\displaystyle A^{-1}}$ gives ${\displaystyle A^{-1}A{\hat {x}}=A^{-1}{\vec {0}}}$, i.e. ${\displaystyle I{\hat {x}}={\vec {0}}}$, so ${\displaystyle {\hat {x}}={\vec {0}}}$.

(2) → (3): Reduce ${\displaystyle A{\vec {x}}={\vec {0}}}$ to row echelon form ${\displaystyle U{\vec {x}}={\vec {0}}}$. If some diagonal entry of ${\displaystyle U}$ were 0, there would be at least one free variable and hence infinitely many solutions, some of them nonzero, which contradicts (2). So every diagonal entry of ${\displaystyle U}$ is a leading 1, and the reduced row echelon form of ${\displaystyle A}$ is the identity matrix.

(3) → (1): If ${\displaystyle A}$ is row equivalent to ${\displaystyle I}$, then there exist elementary matrices ${\displaystyle E_{1},\ldots ,E_{k}}$ such that ${\displaystyle E_{k}\cdots E_{1}\,A=I}$. Each ${\displaystyle E_{i}}$ is nonsingular, so ${\displaystyle A=E_{1}^{-1}\cdots E_{k}^{-1}}$ is a product of nonsingular matrices and is therefore nonsingular, with ${\displaystyle A^{-1}=E_{k}\cdots E_{1}}$ (note the reverse order).

Q.E.D.

#### Corollary

The ${\displaystyle n\times n}$ system of equations ${\displaystyle A{\vec {x}}={\vec {b}}}$ has a unique solution iff ${\displaystyle A}$ is nonsingular.

##### Proof

- If ${\displaystyle A}$ is nonsingular, multiplying ${\displaystyle A{\vec {x}}={\vec {b}}}$ on the left by ${\displaystyle A^{-1}}$ gives the unique solution ${\displaystyle {\hat {x}}=A^{-1}{\vec {b}}}$.
- Conversely, assume a unique solution ${\displaystyle {\hat {x}}}$ exists. If ${\displaystyle A}$ were singular, the homogeneous system ${\displaystyle A{\vec {x}}={\vec {0}}}$ would have a nonzero solution ${\displaystyle {\vec {y}}\neq {\vec {0}}}$ (by Theorem 1.4.2). But then ${\displaystyle {\hat {x}}+{\vec {y}}}$ would be a second solution, since ${\displaystyle A({\hat {x}}+{\vec {y}})=A{\hat {x}}+A{\vec {y}}={\vec {b}}+{\vec {0}}={\vec {b}}}$, contradicting uniqueness.

Q.E.D.
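A minimal sketch of the corollary in action (the ${\displaystyle 2\times 2}$ system below is a made-up example): when ${\displaystyle A}$ is nonsingular, the unique solution is ${\displaystyle {\hat {x}}=A^{-1}{\vec {b}}}$, computed here via the ${\displaystyle 2\times 2}$ adjugate formula.

```python
from fractions import Fraction

# Hypothetical system A x = b with det(A) = 2*2 - 1*1 = 3 != 0.
A = [[Fraction(2), Fraction(1)],
     [Fraction(1), Fraction(2)]]
b = [Fraction(5), Fraction(4)]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert det != 0  # A is nonsingular, so the solution is unique

# x = A^-1 b, where A^-1 = (1/det) * [[a22, -a12], [-a21, a11]].
x = [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
     (A[0][0] * b[1] - A[1][0] * b[0]) / det]
print(x)  # [Fraction(2, 1), Fraction(1, 1)], i.e. x = (2, 1)
```

Substituting back confirms ${\displaystyle A{\hat {x}}={\vec {b}}}$ exactly, with no floating-point error, thanks to `Fraction`.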

## Chapter 2

### Theorem 2.1.1

The determinant can be expressed as a cofactor expansion using any row or column of ${\displaystyle A}$:

${\displaystyle \left|A\right|=\sum _{k=1}^{n}a_{ik}\,A_{ik}=\sum _{k=1}^{n}a_{kj}\,A_{kj}}$ for any fixed ${\displaystyle 1\leq i,j\leq n}$
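Theorem 2.1.1 can be checked directly with a small recursive determinant (the matrix and helper names below are illustrative): expanding along any row ${\displaystyle i}$ or any column ${\displaystyle j}$ yields the same value.

```python
def minor(A, i, j):
    """The submatrix with row i and column j deleted."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

def det_row(A, i):
    """Cofactor expansion along row i: sum_k a_ik * A_ik."""
    return sum((-1) ** (i + k) * A[i][k] * det(minor(A, i, k))
               for k in range(len(A)))

def det_col(A, j):
    """Cofactor expansion along column j: sum_k a_kj * A_kj."""
    return sum((-1) ** (k + j) * A[k][j] * det(minor(A, k, j))
               for k in range(len(A)))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]

# All six expansions (3 rows, 3 columns) agree.
print({det_row(A, i) for i in range(3)} | {det_col(A, j) for j in range(3)})  # {-3}
```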

### Theorem 2.1.2

If ${\displaystyle A}$ is an ${\displaystyle n\times n}$ matrix, then ${\displaystyle \left|A^{T}\right|=\left|A\right|}$

### Theorem 2.1.3

If ${\displaystyle A}$ is a triangular matrix, then ${\displaystyle \left|A\right|}$ is equal to the product of the diagonal entries of ${\displaystyle A}$.

#### Proof

Proof by induction on ${\displaystyle n}$.

Basis step: the determinant of a ${\displaystyle 2\times 2}$ triangular matrix is the product of its diagonal entries.

Inductive step: assume the result holds for ${\displaystyle (n-1)\times (n-1)}$ triangular matrices. For an ${\displaystyle n\times n}$ upper triangular matrix ${\displaystyle A}$, cofactor expansion along the first column gives ${\displaystyle \left|A\right|=a_{11}\,A_{11}}$, since ${\displaystyle a_{11}}$ is the only entry in that column that can be nonzero. The corresponding minor is an ${\displaystyle (n-1)\times (n-1)}$ upper triangular matrix with diagonal entries ${\displaystyle a_{22},\ldots ,a_{nn}}$, so ${\displaystyle \left|A\right|=a_{11}\,a_{22}\cdots a_{nn}}$. (The lower triangular case follows from Theorem 2.1.2.)

Q.E.D.
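A quick numerical check of Theorem 2.1.3 (the upper triangular matrix `U` below is a made-up example), comparing a cofactor-expansion determinant against the product of the diagonal entries:

```python
import math

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

# Hypothetical upper triangular matrix; diagonal product is 2*3*7 = 42.
U = [[2, 5, -1],
     [0, 3,  4],
     [0, 0,  7]]

print(det(U), math.prod(U[i][i] for i in range(3)))  # 42 42
```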

### Theorem 2.1.4

For an ${\displaystyle n\times n}$ matrix ${\displaystyle A}$:

1. If ${\displaystyle A}$ has a row or column consisting entirely of 0s, then ${\displaystyle \left|A\right|=0}$
2. If ${\displaystyle A}$ has two identical rows or two identical columns, then ${\displaystyle \left|A\right|=0}$

### Theorem x.x.x

An ${\displaystyle n\times n}$ matrix ${\displaystyle A}$ is singular iff ${\displaystyle |A|=0}$.

#### Proof

${\displaystyle A}$ can be reduced to row echelon form by a finite number of row operations, so ${\displaystyle U=\left(\prod _{i=k}^{1}E_{i}\right)\,A}$ is an upper triangular matrix with determinant ${\displaystyle \left(\prod _{i=k}^{1}|E_{i}|\right)\,|A|=\prod _{i=1}^{n}u_{ii}}$. The determinant of an elementary matrix is never zero, so ${\displaystyle |A|=0}$ if and only if some diagonal entry ${\displaystyle u_{ii}}$ is 0, in which case the last row of ${\displaystyle U}$ is all zeros.

Such a ${\displaystyle U}$ cannot be row equivalent to ${\displaystyle I}$, so ${\displaystyle A}$ is singular by Theorem 1.4.2. Conversely, if every ${\displaystyle u_{ii}}$ is nonzero, then ${\displaystyle A}$ is row equivalent to ${\displaystyle I}$ and hence nonsingular.

Q.E.D.

### Theorem x.x.x

${\displaystyle \left|A\,B\right|=\left|A\right|\,\left|B\right|}$

#### Proof

If ${\displaystyle B}$ is singular, it follows from Theorem 1.5.2 that ${\displaystyle A\,B}$ is also singular (see exercise 14 of Section 1.5). Therefore ${\displaystyle \left|A\,B\right|=0=\left|A\right|\,\left|B\right|}$ if ${\displaystyle A}$ or ${\displaystyle B}$ is singular.

Now let ${\displaystyle B}$ be nonsingular, so that ${\displaystyle B=\prod _{i=k}^{1}E_{i}}$ is a product of elementary matrices. Then ${\displaystyle \left|A\,B\right|=\left|A\,\prod _{i=k}^{1}E_{i}\right|}$. Factoring out the determinant of one elementary matrix at a time, we obtain ${\displaystyle \left|A\,B\right|=\left|A\right|\,\prod _{i=k}^{1}\left|E_{i}\right|=\left|A\right|\,\left|B\right|}$.

Q.E.D.
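The multiplicative property is also easy to verify numerically. A minimal sketch with two arbitrary ${\displaystyle 2\times 2}$ example matrices (not from the text):

```python
def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def matmul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]   # det(A) = -2
B = [[0, 1], [5, 6]]   # det(B) = -5

# |AB| = |A| |B| = (-2) * (-5) = 10
print(det2(matmul(A, B)), det2(A) * det2(B))  # 10 10
```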