MATH 417 Lecture 25


« previous | Tuesday, April 22, 2014 | next »


Sections 8.1 and 8.2

Least Squares Approximation

We approximate a given function $f$ by a polynomial $P(x) = a_0 + a_1 x + \dots + a_n x^n$, usually $P(x) = a_0 + a_1 x$ (linear) or $P(x) = a_0 + a_1 x + a_2 x^2$ (quadratic).

$f$ is given, so we wish to minimize the error $E = \|f - P\|^2$.

In 8.1, $\|g\|^2 = \sum_{i=1}^{m} w_i\,[g(x_i)]^2$, where $(w_i)$ is a probability weight distribution.

In 8.2, $\|g\|^2 = \int_a^b w(x)\,[g(x)]^2\,dx$.

Common definitions of the weight $w$:

  • $w(x) = 1$ (classic definition)
  • $w(x) = \frac{1}{\sqrt{1 - x^2}}$ (Chebyshev) on $[-1, 1]$
  • $w(x) = e^{-x}$ (Laguerre) on $[0, \infty)$
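To make the two settings concrete, here is a minimal sketch (not from the lecture, written with sympy rather than Sage) that evaluates both versions of $\langle f, g \rangle$ for $f(x) = x^2$ and $g(x) = x$; the nodes and uniform weights in the discrete case are hypothetical choices.

from sympy import Rational, integrate, symbols

x = symbols('x')
f = x**2
g = x

# 8.1-style inner product: a weighted sum over sample points x_i
nodes = [Rational(i, 4) for i in range(5)]      # hypothetical nodes 0, 1/4, ..., 1
weights = [Rational(1, 5)] * 5                  # uniform probability weights
ip_discrete = sum(w * f.subs(x, xi) * g.subs(x, xi) for w, xi in zip(weights, nodes))

# 8.2-style inner product: a weighted integral, here with w(x) = 1 on [0, 1]
ip_continuous = integrate(f * g, (x, 0, 1))

print(ip_discrete, ip_continuous)               # 5/16 and 1/4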

Basic Concept

We compute $E(a_0, \dots, a_n) = \|f - P\|^2$ and find the minimum. To find this minimum, we solve $\frac{\partial E}{\partial a_j} = 0$ for $j = 0, 1, \dots, n$. This gives a system of normal equations:

$$\sum_{k=0}^{n} a_k \int_a^b x^{j} x^{k}\,w(x)\,dx = \int_a^b x^{j} f(x)\,w(x)\,dx, \qquad j = 0, 1, \dots, n.$$

The coefficient matrix of this system is symmetric positive definite, so it has an inverse.

This system is solvable, but it is a pain to solve: with $w(x) = 1$ on $[0, 1]$ and the monomial basis, the coefficient matrix is the notoriously ill-conditioned Hilbert matrix.

[Plot: the original function and its best quadratic fit.]

For example, find the best fit to a given $f$ by a quadratic $P_2(x) = a_0 + a_1 x + a_2 x^2$ on $[0, 1]$.
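As a sketch of what this looks like in practice (not from the lecture), the sympy snippet below builds and solves the $3 \times 3$ normal equations on $[0, 1]$ with $w(x) = 1$; the target $f(x) = \sin(\pi x)$ is a hypothetical choice, not necessarily the one used in class.

from sympy import Matrix, integrate, pi, sin, symbols

x = symbols('x')
f = sin(pi * x)                  # hypothetical target function
basis = [1, x, x**2]             # monomial basis for quadratics

# Coefficient matrix: H[j, k] = integral_0^1 x^j * x^k dx (a 3x3 Hilbert matrix)
H = Matrix(3, 3, lambda j, k: integrate(basis[j] * basis[k], (x, 0, 1)))
# Right-hand side: integral_0^1 f(x) * x^j dx
rhs = Matrix(3, 1, lambda j, _: integrate(f * basis[j], (x, 0, 1)))

a0, a1, a2 = H.LUsolve(rhs)      # coefficients of P(x) = a0 + a1*x + a2*x^2
print(a0, a1, a2)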


Simplifying Normal Equations

What if the coefficient matrix were a diagonal matrix? That is, what if $\int_a^b \phi_j(x)\,\phi_k(x)\,w(x)\,dx = 0$ for $j \neq k$?

Then $\phi_0$, $\phi_1$, $\phi_2$, etc. would be an orthogonal basis for our regression space.


Continuing with our example, let $P(x) = b_0\,\phi_0(x) + b_1\,\phi_1(x) + b_2\,\phi_2(x)$, where $\phi_0(x) = 1$, $\phi_1(x) = 2x - 1$, and $\phi_2(x) = 6x^2 - 6x + 1$. $\{\phi_0, \phi_1, \phi_2\}$ is an orthogonal basis for our regression space.


This gives a much easier (diagonal) system to solve:

$$b_j = \frac{\langle f, \phi_j \rangle}{\langle \phi_j, \phi_j \rangle} = \frac{\int_0^1 f(x)\,\phi_j(x)\,dx}{\int_0^1 \phi_j(x)^2\,dx}, \qquad j = 0, 1, 2.$$

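A minimal sympy sketch of this computation, assuming the interval $[0, 1]$, the orthogonal basis $\{\phi_0, \phi_1, \phi_2\}$ above, and the same hypothetical target $f(x) = \sin(\pi x)$ as before; each coefficient comes from one independent formula instead of a coupled system.

from sympy import integrate, pi, sin, symbols

x = symbols('x')
f = sin(pi * x)                               # hypothetical target function
phis = [1, 2*x - 1, 6*x**2 - 6*x + 1]         # orthogonal basis on [0, 1]

# b_j = <f, phi_j> / <phi_j, phi_j>, one independent computation per coefficient
b = [integrate(f * phi, (x, 0, 1)) / integrate(phi**2, (x, 0, 1)) for phi in phis]
P = sum(bj * phi for bj, phi in zip(b, phis))
print(b)
print(P.expand())

The resulting $P$ is the same least-squares quadratic the normal equations would produce, just obtained coefficient by coefficient.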
There are two ways to find an orthogonal basis for our regression space:

  1. Legendre polynomials
  2. Gram-Schmidt

Legendre Polynomials

For $[-1, 1]$ (with $w(x) = 1$), we have:

  • $P_0(x) = 1$,
  • $P_1(x) = x$, and
  • $P_2(x) = \frac{1}{2}(3x^2 - 1)$.

These only work on $[-1, 1]$, so we shift and scale to $[0, 1]$ by substituting $t = 2x - 1$ (or $x = \frac{t + 1}{2}$). Performing this substitution gives our $\phi$'s above.
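The substitution is easy to check with sympy (a sketch, not from the lecture): plugging $t = 2x - 1$ into the Legendre polynomials yields polynomials that are orthogonal on $[0, 1]$.

from sympy import S, expand, integrate, symbols

t, x = symbols('t x')
P = [S(1), t, (3*t**2 - 1)/2]                   # P0, P1, P2 on [-1, 1]

# Shift and scale: substitute t = 2x - 1 to move from [-1, 1] to [0, 1]
shifted = [expand(p.subs(t, 2*x - 1)) for p in P]
print(shifted)                                  # [1, 2*x - 1, 6*x**2 - 6*x + 1]

# Still orthogonal on the new interval:
print(integrate(shifted[1] * shifted[2], (x, 0, 1)))   # 0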


Gram-Schmidt

This will always give an orthonormal basis, so $\langle \phi_j, \phi_j \rangle = 1$:

# Sage-style helpers: assumes a symbolic variable x = var('x') and Sage's
# symbolic integrate, sqrt, and simplify.
def ip(f, g, a, b):
    """Inner product <f, g> = integral_a^b f(x) g(x) dx (weight w = 1)."""
    return integrate(f*g, x, a, b)

def norm(f, a, b):
    """Norm induced by the inner product."""
    return sqrt(ip(f, f, a, b))

def normalize(f, a, b):
    """Scale f to have norm 1."""
    return f/norm(f, a, b)

def gram_schmidt(basis, a, b):
    """Orthonormalize `basis` on [a, b]: subtract projections, then normalize."""
    on_basis = [normalize(basis[0], a, b)]
    for i in range(1, len(basis)):
        # projection of basis[i] onto the span of the vectors found so far
        p = sum(ip(basis[i], e, a, b) * e for e in on_basis)
        next_e = normalize(basis[i] - p, a, b)
        on_basis.append(next_e)
    return [simplify(e) for e in on_basis]
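For example (assuming the Sage-style environment above with x = var('x') and the interval $[0, 1]$), the call below orthonormalizes the monomial basis; up to formatting, the output should match the scaled shifted Legendre polynomials.

x = var('x')
gram_schmidt([x^0, x, x^2], 0, 1)
# expect, up to formatting: [1, sqrt(3)*(2*x - 1), sqrt(5)*(6*x^2 - 6*x + 1)]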


Now we have $\phi_0(x) = 1$, $\phi_1(x) = \sqrt{3}\,(2x - 1)$, and $\phi_2(x) = \sqrt{5}\,(6x^2 - 6x + 1)$ on $[0, 1]$.


Quiz Discussion: Matrix Norms

"This is horrible." ~ Bojan Popov


Find $\|A\|_\infty$, $\|A\|_1$, and $\|A\|_2$ for a given matrix $A$.

$\|A\|_\infty$ is the maximum absolute row sum.

$\|A\|_1$ is just the maximum absolute column sum.

$\|A\|_2 = \sqrt{\rho(A^{\top} A)}$, the square root of the largest eigenvalue of $A^{\top} A$.

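The matrix from the quiz is not recorded above, so the numpy sketch below uses a hypothetical $2 \times 2$ matrix just to show how each norm is computed.

import numpy as np

A = np.array([[1.0, 2.0],
              [-3.0, 4.0]])                    # hypothetical example matrix

norm_inf = np.abs(A).sum(axis=1).max()         # max absolute row sum
norm_1 = np.abs(A).sum(axis=0).max()           # max absolute column sum
norm_2 = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())   # sqrt(rho(A^T A))

print(norm_inf, norm_1, norm_2)                # 7.0  6.0  ~5.117
# same values as np.linalg.norm(A, np.inf), np.linalg.norm(A, 1), np.linalg.norm(A, 2)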