# MATH 323 Theorems

## Chapter 1

### Theorem 1.3.2

For all $\alpha ,\beta \in \mathbb {R}$ and all matrices $A,B,C$ for which the indicated operations are defined:

1. $A+B=B+A$ (commutativity of addition)
2. $(A+B)+C=A+(B+C)$ (associativity of addition)
3. $(A\,B)\,C=A\,(B\,C)$ (associativity of multiplication)
4. $A\,(B+C)=A\,B+A\,C$ (left distributivity)
5. $(A+B)\,C=A\,C+B\,C$ (right distributivity)
6. $(\alpha \,\beta )A=\alpha (\beta \,A)$
7. $\alpha \,(A\,B)=(\alpha \,A)\,B=A\,(\alpha \,B)$
8. $(\alpha +\beta )\,A=\alpha \,A+\beta \,A$
9. $\alpha \,(A+B)=\alpha \,A+\alpha \,B$

### Theorem 1.3.3

If $A$ and $B$ are nonsingular $n\times n$ matrices, then $A\,B$ is also nonsingular and $(A\,B)^{-1}=B^{-1}\,A^{-1}$ (note the reversed order on the right-hand side).
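A quick numerical sketch of this identity with exact rational arithmetic; the 2×2 example matrices and helper functions below are illustrative, not from the course:

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    assert d != 0, "matrix is singular"
    return [[ A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d,  A[0][0] / d]]

A = [[F(1), F(2)], [F(3), F(5)]]   # arbitrary nonsingular examples
B = [[F(2), F(1)], [F(7), F(4)]]

# (AB)^{-1} equals B^{-1} A^{-1} -- note the reversed order
assert inv2(matmul(A, B)) == matmul(inv2(B), inv2(A))
```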

#### Corollary

For nonsingular matrices $A_{1},\ldots ,A_{k}$, the product $A_{1}\,A_{2}\cdots A_{k}$ is also nonsingular and $(A_{1}\,A_{2}\cdots A_{k})^{-1}=A_{k}^{-1}\cdots A_{2}^{-1}\,A_{1}^{-1}$.

### Theorem 1.4.2

Equivalent Conditions for Nonsingularity

Let $A$ be an $n\times n$ matrix. Then the following are equivalent:

1. $A$ is nonsingular
2. $A{\vec {x}}={\vec {0}}$ has only the trivial solution ${\vec {x}}={\vec {0}}$
3. $A$ is row equivalent to the identity matrix of size $n$ ($A\sim I_{n}$)

#### Proof

(1) → (2): Let ${\hat {x}}$ be a solution of $A{\vec {x}}={\vec {0}}$, so $A{\hat {x}}={\vec {0}}$. Multiplying both sides by $A^{-1}$ gives $A^{-1}A{\hat {x}}=A^{-1}{\vec {0}}$, i.e. $I{\hat {x}}={\vec {0}}$, so ${\hat {x}}={\vec {0}}$.

(2) → (3): Rewrite $A{\vec {x}}={\vec {0}}$ in row echelon form as $U{\vec {x}}={\vec {0}}$. If some diagonal entry of $U$ were 0, there would be at least one free variable and hence infinitely many solutions, at least one of them nonzero, contradicting (2). So every diagonal entry of $U$ is 1, and the reduced row echelon form of $A$ is the identity matrix.

(3) → (1): If $A$ is row equivalent to $I$, then there exist elementary matrices $E_{1},\ldots ,E_{k}$ such that $\prod _{i=k}^{1}E_{i}=A$. Therefore $A^{-1}=\prod _{i=1}^{k}E_{i}^{-1}$ (note the reversed order), so $A$ is nonsingular.

Q.E.D.
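The equivalence (1) ⟷ (3) can be spot-checked with a small Gauss–Jordan reduction; the `rref` helper and the example matrices are an illustrative sketch, not course code:

```python
from fractions import Fraction as F

def rref(M):
    """Reduce M to reduced row echelon form with exact rational arithmetic."""
    M = [row[:] for row in M]
    n, m = len(M), len(M[0])
    r = 0
    for c in range(m):
        # find a pivot in column c at or below row r
        piv = next((i for i in range(r, n) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]           # scale pivot row to 1
        for i in range(n):
            if i != r and M[i][c] != 0:              # clear the rest of column c
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

def is_row_equiv_to_identity(A):
    """Theorem 1.4.2 (3): check whether A ~ I_n."""
    n = len(A)
    return rref(A) == [[F(int(i == j)) for j in range(n)] for i in range(n)]

assert is_row_equiv_to_identity([[F(1), F(2)], [F(3), F(5)]])       # nonsingular
assert not is_row_equiv_to_identity([[F(1), F(2)], [F(2), F(4)]])   # singular
```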

#### Corollary

The $n\times n$ system of equations $A{\vec {x}}={\vec {b}}$ has a unique solution iff $A$ is nonsingular.

##### Proof
- If $A$ is nonsingular, then $A{\vec {x}}={\vec {b}}$ has the unique solution ${\hat {x}}=A^{-1}{\vec {b}}$.
- Conversely, assume a unique solution ${\hat {x}}$ exists. If $A$ were singular, the homogeneous system $A{\vec {x}}={\vec {0}}$ would have a nonzero solution ${\vec {y}}\neq {\vec {0}}$ (by Theorem 1.4.2). But then ${\hat {x}}+{\vec {y}}$ would be a second solution of $A{\vec {x}}={\vec {b}}$, a contradiction.
Q.E.D.
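A numerical sketch of the forward direction on an arbitrary 2×2 system (the `inv2` helper is a hypothetical example, not course code):

```python
from fractions import Fraction as F

def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    assert d != 0, "matrix is singular"
    return [[ A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d,  A[0][0] / d]]

A = [[F(1), F(2)],
     [F(3), F(5)]]        # nonsingular: det = -1
b = [F(4), F(9)]

# the unique solution is x = A^{-1} b
Ainv = inv2(A)
x = [Ainv[0][0] * b[0] + Ainv[0][1] * b[1],
     Ainv[1][0] * b[0] + Ainv[1][1] * b[1]]

# check that A x = b
assert [A[0][0] * x[0] + A[0][1] * x[1],
        A[1][0] * x[0] + A[1][1] * x[1]] == b
```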

## Chapter 2

### Theorem 2.1.1

The determinant can be expressed as a cofactor expansion using any row or column of $A$ :

$\left|A\right|=\sum _{k=1}^{n}a_{ik}\,A_{ik}=\sum _{k=1}^{n}a_{kj}\,A_{kj}$ for $1\leq i,j\leq n$.

### Theorem 2.1.2

If $A$ is an $n\times n$ matrix, then $\left|A^{T}\right|=\left|A\right|$.

### Theorem 2.1.3

If $A$ is a triangular matrix, then $\left|A\right|$ is equal to the product of the diagonal entries of $A$.

#### Proof

Proof by Induction.

Basis step. The determinant of a $2\times 2$ triangular matrix is $a_{11}\,a_{22}-a_{12}\,a_{21}=a_{11}\,a_{22}$, the product of the diagonal entries, since one of the off-diagonal entries is 0.

Inductive step. Assume the determinant of any $(n-1)\times (n-1)$ triangular matrix is the product of its diagonal entries. For an $n\times n$ lower triangular matrix $A$, $a_{11}$ is the only entry of the first row that can be nonzero, so cofactor expansion along that row gives $\left|A\right|=a_{11}\,A_{11}$. The corresponding minor is an $(n-1)\times (n-1)$ triangular matrix with diagonal entries $a_{22},\ldots ,a_{nn}$, so by the inductive hypothesis $\left|A\right|=a_{11}\,a_{22}\cdots a_{nn}$. The upper triangular case follows by expanding along the first column (or from Theorem 2.1.2).

Q.E.D.
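Theorems 2.1.1–2.1.3 can be sanity-checked together with a recursive cofactor-expansion determinant; the function and the example matrix are illustrative, not from the notes:

```python
def det(A):
    """Determinant via cofactor expansion along the first row (Theorem 2.1.1)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    # sum of (-1)^k * a_{1k} * (minor obtained by deleting row 1 and column k)
    return sum((-1) ** k * A[0][k] * det([row[:k] + row[k + 1:] for row in A[1:]])
               for k in range(n))

def transpose(A):
    return [list(col) for col in zip(*A)]

T = [[2, 7, 1],
     [0, 3, 5],
     [0, 0, 4]]                        # upper triangular

assert det(T) == 2 * 3 * 4             # Theorem 2.1.3: product of the diagonal
assert det(transpose(T)) == det(T)     # Theorem 2.1.2
```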

### Theorem 2.1.4

For an $n\times n$ matrix $A$:

1. If $A$ has a row or column consisting entirely of 0s, then $\left|A\right|=0$
2. If $A$ has two identical rows or two identical columns, then $\left|A\right|=0$

### Theorem x.x.x

An $n\times n$ matrix $A$ is singular iff $|A|=0$.

#### Proof

$A$ can be reduced to row echelon form by a finite number of row operations, so $U=\left(\prod _{i=k}^{1}E_{i}\right)\,A$ is an upper triangular matrix whose determinant is $\left(\prod _{i=k}^{1}|E_{i}|\right)\,|A|=\prod _{i=1}^{n}u_{ii}$. The determinant of an elementary matrix is never zero, so if $|A|=0$ then some $u_{ii}=0$, and since $U$ is in row echelon form its last row must then consist entirely of 0s.

Such a matrix cannot be row-equivalent to $I$ and therefore cannot have an inverse.

Q.E.D.
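A minimal illustration with an arbitrary 2×2 example: the rows are linearly dependent, so the determinant vanishes and, consistent with Theorem 1.4.2, the homogeneous system has a nonzero solution:

```python
def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

A = [[1, 2],
     [2, 4]]              # second row is twice the first

assert det2(A) == 0       # identical-direction rows force |A| = 0

# a nonzero solution of A x = 0, confirming that A is singular
x = (2, -1)
assert all(A[i][0] * x[0] + A[i][1] * x[1] == 0 for i in range(2))
```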

### Theorem x.x.x

$\left|A\,B\right|=\left|A\right|\,\left|B\right|$

#### Proof

If $B$ is singular, it follows from Theorem 1.5.2 that $A\,B$ is also singular (see Exercise 14 of Section 1.5); the same holds if $A$ is singular. Therefore $\left|A\,B\right|=0=\left|A\right|\,\left|B\right|$ if $A$ or $B$ is singular.

Now let $B$ be nonsingular, so $B$ can be written as a product of elementary matrices, $B=\prod _{i=k}^{1}E_{i}$. Then $\left|A\,B\right|=\left|A\,\prod _{i=k}^{1}E_{i}\right|$. Since multiplying a matrix by an elementary matrix multiplies its determinant by the determinant of that elementary matrix, $\left|A\,B\right|=\left|A\right|\,\prod _{i=k}^{1}\left|E_{i}\right|=\left|A\right|\,\left|B\right|$.

Q.E.D.
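A quick numerical check of the product rule on arbitrary 2×2 examples (helper names are illustrative):

```python
def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def matmul2(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 5]]   # arbitrary examples
B = [[2, 1], [7, 4]]

# |AB| = |A||B| (and likewise |BA|, since scalars commute)
assert det2(matmul2(A, B)) == det2(A) * det2(B)
```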