Thursday, April 24, 2014
Pre-Exam Discussion
- 6 problems total, worth more than 100 points combined (but 100 is a perfect score)
- 1-2 problems from first exam's sections
Chapter 5
Initial Value Problem (IVP): $\frac{dy}{dt} = f(t, y)$, with $y(a) = \alpha$, for $a \le t \le b$.
Is the IVP well-posed?
- The domain should be convex: $D = \{\, (t, y) \mid a \le t \le b,\ -\infty < y < \infty \,\}$ — you should be able to draw a straight line between any two points that never leaves the set.
- $f$ is continuous on $D$.
- $f$ is Lipschitz with respect to $y$, i.e. there is a constant $L$ with $|f(t, y_1) - f(t, y_2)| \le L\, |y_1 - y_2|$ for all $(t, y_1), (t, y_2) \in D$.
Theorem. If $\left| \frac{\partial f}{\partial y}(t, y) \right| \le L$ for all $(t, y) \in D$, then $f$ is Lipschitz in $y$ with Lipschitz constant $L$.
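For example (an illustrative instance, not from the lecture): take $f(t, y) = t\, y$ on $D = \{ (t, y) \mid 0 \le t \le 2 \}$. Then $\left| \frac{\partial f}{\partial y}(t, y) \right| = |t| \le 2$ on $D$, so $f$ is Lipschitz in $y$ with constant $L = 2$.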
Numerical Methods
Over interval $[a, b]$ with $N$ sample points, we define the step size $h = \frac{b - a}{N}$. Let $t_i = a + i h$, where $i = 0, 1, \dots, N$, with $w_i \approx y(t_i)$ and $w_0 = \alpha$. Integrating $y' = f(t, y)$ over $[t_i, t_{i+1}]$ gives $y(t_{i+1}) = y(t_i) + \int_{t_i}^{t_{i+1}} f(t, y(t))\, dt$, so each rule is of the form: $w_{i+1} = w_i$ plus a quadrature approximation of this integral.
Euler's Method
Approximate $\int_{t_i}^{t_{i+1}} f(t, y(t))\, dt$ by the left-endpoint Riemann rule: $w_{i+1} = w_i + h\, f(t_i, w_i)$.
Modified Euler's Method
Approximate $\int_{t_i}^{t_{i+1}} f(t, y(t))\, dt$ by the trapezoidal rule, using an Euler step to predict the right-endpoint value: $w_{i+1} = w_i + \frac{h}{2} \left[ f(t_i, w_i) + f(t_{i+1}, w_i + h\, f(t_i, w_i)) \right]$.
Midpoint Method
Approximate $\int_{t_i}^{t_{i+1}} f(t, y(t))\, dt$ by the midpoint rule, using an Euler half-step to predict the midpoint value: $w_{i+1} = w_i + h\, f\left( t_i + \frac{h}{2},\ w_i + \frac{h}{2}\, f(t_i, w_i) \right)$.
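Here is a minimal Python sketch of these three one-step rules (my own illustration; the function names are not from the lecture):

```python
def euler_step(f, t, w, h):
    # Left-endpoint Riemann rule.
    return w + h * f(t, w)

def modified_euler_step(f, t, w, h):
    # Trapezoidal rule with an Euler prediction at the right endpoint.
    k = f(t, w)
    return w + (h / 2) * (k + f(t + h, w + h * k))

def midpoint_step(f, t, w, h):
    # Midpoint rule with an Euler half-step prediction at the midpoint.
    return w + h * f(t + h / 2, w + (h / 2) * f(t, w))

def solve_ivp(step, f, a, b, alpha, N):
    # Apply a one-step rule on N subintervals of [a, b], starting at w_0 = alpha.
    h = (b - a) / N
    t, w = a, alpha
    ws = [w]
    for _ in range(N):
        w = step(f, t, w, h)
        t += h
        ws.append(w)
    return ws
```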
Heun's Method
Too complicated for the exam.
Example
This is well-posed:
- the domain of the RHS is the entire real plane
Let's use Modified Euler...
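The sketch above runs like this on a hypothetical well-posed IVP (my example choice, not necessarily the one worked in lecture): $y' = y - t^2 + 1$, $y(0) = 0.5$, on $[0, 2]$.

```python
# Hypothetical IVP (illustrative choice): y' = y - t^2 + 1, y(0) = 0.5 on [0, 2].
f = lambda t, y: y - t**2 + 1
ws = solve_ivp(modified_euler_step, f, a=0.0, b=2.0, alpha=0.5, N=10)
print(ws[-1])  # approximation of y(2); the exact solution is (t + 1)^2 - 0.5 e^t
```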
Chapter 6
(skip linear algebra review)
We wish to find a lower-triangular matrix $L$ and an upper-triangular matrix $U$ such that $A = LU$.
Standard Gaussian elimination sets the pivot elements equal to 1, and this requires a little extra modification to find $L$. If we stick with just elementary operation #3 (adding a multiple of one row to another), then computation of $L$ is easy:
The type-3 elementary matrix is given by $E_{ij}(\lambda)$, the identity matrix with its $(i, j)$-th entry set to $\lambda$: left-multiplying by it adds $\lambda$ times row $j$ to row $i$. Computing the inverse is incredibly straightforward: just negate $\lambda$, so $E_{ij}(\lambda)^{-1} = E_{ij}(-\lambda)$.
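A small numpy sketch of this (my own illustration):

```python
import numpy as np

def E(n, i, j, lam):
    # Type-3 elementary matrix: the n x n identity with lam in position (i, j).
    # Left-multiplying by it adds lam * (row j) to row i.
    M = np.eye(n)
    M[i, j] = lam
    return M

A = np.array([[2., 1.],
              [6., 4.]])
E1 = E(2, 1, 0, -3.0)            # add -3 * row 0 to row 1
U = E1 @ A                       # upper triangular
L = E(2, 1, 0, 3.0)              # the inverse of E1: just negate lambda
assert np.allclose(L @ U, A)     # A = LU
```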
Positive Definite Matrices
Theorem. A symmetric matrix $A$ is positive definite if and only if the determinants of all the leading principal minors of $A$ are positive: $\det(A_k) > 0$ for $k = 1, \dots, n$, where $A_k$ is the upper-left $k \times k$ submatrix of $A$.
Hence, for a matrix with unknown entries, writing out these determinant conditions yields a system of inequalities; from these constraints, we get the admissible values of the unknowns.
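A quick numpy check of this criterion (an illustrative sketch; the matrix and names are my own):

```python
import numpy as np

def is_positive_definite(A):
    # Sylvester's criterion: every leading principal minor has positive determinant.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
print(is_positive_definite(A))   # True: the minors are 2, 3, 4
```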
Cholesky Factorization
Given a symmetric positive definite matrix $A$, find a lower-triangular matrix $L$ such that $A = L L^{\mathsf T}$.
To get this, perform LU decomposition, but replace each diagonal pivot element $d_i$ with $\sqrt{d_i}$ (and multiply the corresponding column of $L$ by $\sqrt{d_i}$).
This is often called the square root of a matrix, since the closest we can get to an abstract "square" of any matrix (of any size) is $L L^{\mathsf T}$.
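A minimal Python sketch of the procedure (my own code, written to mirror the pivot-to-square-root description above; numpy also provides `np.linalg.cholesky`):

```python
import numpy as np

def cholesky(A):
    # Elimination as in LU, but each pivot d becomes sqrt(d)
    # and its column is scaled by sqrt(d).
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        d = A[j, j] - L[j, :j] @ L[j, :j]
        L[j, j] = np.sqrt(d)                              # pivot -> sqrt(pivot)
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

A = np.array([[4., 2.],
              [2., 3.]])
L = cholesky(A)
assert np.allclose(L @ L.T, A)   # A = L L^T
```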
Matrix Norms
All norms discussed here are the "natural" (induced) norms, defined as $\|A\| = \max_{\|x\| = 1} \|A x\|$, but this is a pain to compute directly. We settle for the following theorems:
- $\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|$ (maximum absolute row sum)
- $\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{n} |a_{ij}|$ (maximum absolute column sum)
- $\|A\|_2 = \sqrt{\rho(A^{\mathsf T} A)}$, where $\rho(A^{\mathsf T} A)$ is the largest-magnitude eigenvalue of $A^{\mathsf T} A$
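In numpy, the two sum formulas look like this (illustrative matrix, my own):

```python
import numpy as np

A = np.array([[ 1., 2.],
              [-3., 4.]])
inf_norm = np.abs(A).sum(axis=1).max()   # max absolute row sum    -> 7
one_norm = np.abs(A).sum(axis=0).max()   # max absolute column sum -> 6
assert inf_norm == np.linalg.norm(A, np.inf)
assert one_norm == np.linalg.norm(A, 1)
```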
Theorem. $\|x\|_A = \sqrt{x^{\mathsf T} A x}$ is a norm if $A$ is symmetric positive definite.
Proof. By definition, $\|\cdot\|$ is a norm if and only if it satisfies the following:
- $\|x\| \ge 0$, and $\|x\| = 0$ if and only if $x = 0$.
- $\|\lambda x\| = |\lambda|\, \|x\|$ for all constants $\lambda$.
- $\|x + y\| \le \|x\| + \|y\|$ (the triangle inequality), which for $\|\cdot\|_A$ holds if and only if the Cauchy–Schwarz inequality holds ($|x^{\mathsf T} A y| \le \|x\|_A\, \|y\|_A$).
Since $A$ is symmetric positive definite, we can find an invertible $B$ such that $A = B^{\mathsf T} B$ (e.g. from the Cholesky factorization). Thus $\|x\|_A = \sqrt{x^{\mathsf T} B^{\mathsf T} B x} = \|B x\|_2$.
We know this norm is always greater than or equal to zero, and equal to zero only when $B x = 0$, i.e. when $x = 0$ (since $B$ is invertible).
Next, $\|\lambda x\|_A = \|B (\lambda x)\|_2 = |\lambda|\, \|B x\|_2 = |\lambda|\, \|x\|_A$.
Finally, $\|x + y\|_A = \|B x + B y\|_2 \le \|B x\|_2 + \|B y\|_2 = \|x\|_A + \|y\|_A$.
quod erat demonstrandum
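A quick numeric sanity check of the key identity $\|x\|_A = \|B x\|_2$ (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))          # invertible with probability 1
A = B.T @ B                              # symmetric positive definite
x = rng.standard_normal(3)
norm_A = np.sqrt(x @ A @ x)              # ||x||_A
assert np.isclose(norm_A, np.linalg.norm(B @ x))   # equals ||Bx||_2
```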
Example
Find the $\|\cdot\|_2$ norm of a given matrix $A$.
The eigenvalues of $A^{\mathsf T} A$ are found by solving $\det(A^{\mathsf T} A - \lambda I) = 0$. So $\rho(A^{\mathsf T} A)$ is the largest of these in magnitude, and $\|A\|_2 = \sqrt{\rho(A^{\mathsf T} A)}$.
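In numpy this computation looks like the following (illustrative matrix, not the one from lecture):

```python
import numpy as np

A = np.array([[1., 1.],
              [0., 1.]])
eigvals = np.linalg.eigvalsh(A.T @ A)    # eigenvalues of A^T A (symmetric)
two_norm = np.sqrt(eigvals.max())        # ||A||_2 = sqrt(rho(A^T A))
assert np.isclose(two_norm, np.linalg.norm(A, 2))
```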
Chapter 8
Given sample points $(x_i, y_i)$ (or a function $f$ on $[a, b]$), approximate $f(x) \approx \sum_k c_k\, \phi_k(x)$ for some basis functions $\phi_k$.
To do this, we find the best least squares fit (i.e. minimize Euclidean distance) $\left\| f - \sum_k c_k \phi_k \right\|$, where $\|g\| = \sqrt{\langle g, g \rangle}$.
Normal equations given by $\sum_k \langle \phi_j, \phi_k \rangle\, c_k = \langle \phi_j, f \rangle$ for each $j$.
- Given continuous functions, the inner product is given by $\langle f, g \rangle = \int_a^b f(x)\, g(x)\, dx$
- Given point samples, the inner product is given by $\langle f, g \rangle = \sum_i f(x_i)\, g(x_i)$
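A discrete-sample sketch in numpy (my own illustration: fitting $c_0 + c_1 x$ to hypothetical points):

```python
import numpy as np

# Hypothetical samples; fit f(x) ~ c0 * 1 + c1 * x.
x = np.array([0., 1., 2., 3.])
y = np.array([1.1, 1.9, 3.2, 3.8])
Phi = np.vstack([np.ones_like(x), x]).T   # columns are phi_0 = 1 and phi_1 = x

# Normal equations: <phi_j, phi_k> c_k = <phi_j, y>, i.e. (Phi^T Phi) c = Phi^T y.
c = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
print(c)                                  # [intercept, slope]
```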