# MATH 323 Lecture 16

« previous | Tuesday, October 23, 2012 | next »

## Linear Transformations

Let $L:V\to W$ be a linear transformation. Then for all $v_{i}\in V$ and $\alpha _{i}\in \mathbb {R}$:

1. $L({\vec {0}}_{V})={\vec {0}}_{W}$
2. $L(\sum _{i=1}^{n}\alpha _{i}\,v_{i})=\sum _{i=1}^{n}\alpha _{i}\,L(v_{i})$
3. $L(-{\vec {v}})=-L({\vec {v}})$

In general, linear transformations are of the form

$L_{A}({\vec {x}})=A{\vec {x}}$ .

### Transdimensional Transformations

If $L_{A}$ is a transformation from $\mathbb {R} ^{n}$ to $\mathbb {R} ^{m}$, then $A$ will be an $m\times n$ matrix.
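A quick dimensional sketch of this fact in NumPy (the particular matrix and vector below are arbitrary, chosen only to illustrate the shapes):

```python
import numpy as np

# An m x n matrix gives a transformation from R^n to R^m:
# here m = 2 and n = 3, so A maps R^3 down to R^2.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])
x = np.array([1.0, 1.0, 1.0])  # a vector in R^3
y = A @ x                      # its image lands in R^2
print(y)  # [3. 4.]
```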

Let $L({\vec {x}})=x_{1}+x_{2}$ be a linear transformation from 2D space to 1D space.

${\begin{aligned}L(\alpha \,{\vec {x}}+\beta \,{\vec {y}})&=L\left({\begin{bmatrix}\alpha \,x_{1}+\beta \,y_{1}\\\alpha \,x_{2}+\beta \,y_{2}\end{bmatrix}}\right)\\&=\alpha \,x_{1}+\beta \,y_{1}+\alpha \,x_{2}+\beta \,y_{2}\\&=\alpha (x_{1}+x_{2})+\beta (y_{1}+y_{2})\\&=\alpha \,L({\vec {x}})+\beta \,L({\vec {y}})\end{aligned}}$

The following transformation is not linear because it fails (2) above: $M(\alpha \,{\vec {x}})=\left|\alpha \right|\,M({\vec {x}})\neq \alpha \,M({\vec {x}})$ when $\alpha <0$. (Note that it does satisfy (1), since $M({\vec {0}})=0$.)
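The derivation above can be spot-checked numerically. A minimal sketch in NumPy, where `L` is this example's map $L({\vec {x}})=x_{1}+x_{2}$ and the scalars and vectors are arbitrary:

```python
import numpy as np

# The example map L: R^2 -> R, L(x) = x1 + x2
def L(x):
    return x[0] + x[1]

alpha, beta = 2.0, -3.0
x = np.array([1.0, 4.0])
y = np.array([5.0, -2.0])

lhs = L(alpha * x + beta * y)     # transform the combination...
rhs = alpha * L(x) + beta * L(y)  # ...or combine the transformed vectors
print(lhs, rhs)  # 1.0 1.0
```

A check at two particular points does not prove linearity, of course; the algebraic argument above is the proof.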

$M({\vec {x}})={\sqrt {x_{1}^{2}+x_{2}^{2}}}$

### Identity Transformation

$I:V\to V$ such that $I({\vec {v}})={\vec {v}}$ .

### Image and Kernel

Given $L:V\to W$,

kernel
: The set of vectors $v\in V$ such that $L(v)={\vec {0}}_{W}$. Very analogous to the null space of a matrix.

image
: Written $L(V)$; the set of vectors $w\in W$ such that $w=L(v)$ for some $v\in V$. A subspace of $V$ will have its image contained in $L(V)$.

#### Theorem 4.1.1

Let $L:V\to W$ be a linear transformation, and $S\subseteq V$ be a subspace. Then

1. The kernel of $L$ is a subspace of $V$.
2. $L(S)$ is a subspace of $W$. In particular, $L(V)$ is a subspace of $W$.

### Example

Let $L:\mathbb {R} ^{3}\to \mathbb {R} ^{2}$ be defined as follows:

$L({\vec {x}})={\begin{pmatrix}x_{1}+x_{2}\\x_{2}+x_{3}\end{pmatrix}}$

The kernel is $\{{\vec {x}}~|~L({\vec {x}})={\vec {0}}\}$.

Setting $x_{1}+x_{2}=x_{2}+x_{3}=0$ gives $x_{1}=-x_{2}$ and $x_{3}=-x_{2}$, so the kernel of $L$ is the set of vectors $\left\langle a,-a,a\right\rangle =Span\left\{\left\langle 1,-1,1\right\rangle \right\}$.

For the subspace $S=Span({\vec {e}}_{1},{\vec {e}}_{2})=\left\{\left\langle a,b,0\right\rangle \right\}$, the image is $L(S)=\left\{\left\langle a+b,b\right\rangle \right\}=\mathbb {R} ^{2}$.
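Both claims in this example can be verified numerically. A sketch in NumPy, using this example's map $L$ (the scalars `a`, `t1`, `t2` are arbitrary test values):

```python
import numpy as np

# The example map L: R^3 -> R^2
def L(x):
    return np.array([x[0] + x[1], x[1] + x[2]])

# Kernel: every multiple of (1, -1, 1) maps to the zero vector.
a = 7.0
ker_image = L(a * np.array([1.0, -1.0, 1.0]))
print(ker_image)  # [0. 0.]

# Image of Span(e1, e2): vectors (a, b, 0) map to (a + b, b), and any
# target (t1, t2) in R^2 is hit by choosing b = t2 and a = t1 - t2.
t1, t2 = 4.0, 9.0
hit = L(np.array([t1 - t2, t2, 0.0]))
print(hit)  # [4. 9.]
```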

## Matrix Representations of Linear Transformations

Let $A$ be an $m\times n$ matrix, and define $L_{A}:\mathbb {R} ^{n}\to \mathbb {R} ^{m}$ by $L_{A}({\vec {x}})=A\,{\vec {x}}$. $A$ is called the standard matrix representation of $L_{A}$.

### Theorem 4.2.1

If $L$ is a linear transformation mapping $\mathbb {R} ^{n}$ into $\mathbb {R} ^{m}$, there is an $m\times n$ matrix $A$ such that $L({\vec {x}})=A\,{\vec {x}}$ for each ${\vec {x}}\in \mathbb {R} ^{n}$. In fact, the $j$ th column vector of $A$ is given by

${\vec {a}}_{j}=L({\vec {e}}_{j})$ for $j=1,\ldots ,n$.

### Example

$L({\vec {x}})={\begin{pmatrix}x_{1}+x_{2}\\x_{2}+x_{3}\end{pmatrix}}$ . Find the standard matrix representation.

• ${\vec {a}}_{1}=L({\vec {e}}_{1})={\begin{pmatrix}1+0\\0+0\end{pmatrix}}={\begin{pmatrix}1\\0\end{pmatrix}}$
• ${\vec {a}}_{2}=L({\vec {e}}_{2})={\begin{pmatrix}0+1\\1+0\end{pmatrix}}={\begin{pmatrix}1\\1\end{pmatrix}}$
• ${\vec {a}}_{3}=L({\vec {e}}_{3})={\begin{pmatrix}0+0\\0+1\end{pmatrix}}={\begin{pmatrix}0\\1\end{pmatrix}}$

So $A=({\vec {a}}_{1},{\vec {a}}_{2},{\vec {a}}_{3})={\begin{pmatrix}1&1&0\\0&1&1\end{pmatrix}}$
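The column-by-column construction in Theorem 4.2.1 translates directly to code. A NumPy sketch for this example's map (the test vector `x` is arbitrary):

```python
import numpy as np

# The example map L: R^3 -> R^2
def L(x):
    return np.array([x[0] + x[1], x[1] + x[2]])

# Theorem 4.2.1: column j of A is L(e_j).
# Iterating over np.eye(3) yields the standard basis vectors e1, e2, e3.
A = np.column_stack([L(e) for e in np.eye(3)])
print(A)
# [[1. 1. 0.]
#  [0. 1. 1.]]

# Multiplying by A reproduces L on any vector.
x = np.array([2.0, 3.0, 5.0])
print(A @ x, L(x))  # [5. 8.] [5. 8.]
```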