Tuesday, January 29, 2013
Euler Totient Function
The Euler totient function $\varphi(n)$ counts the integers in $\{1, \dots, n\}$ that are relatively prime to $n$. It is a very important function in number theory:
- It's fundamental in the arithmetic of $\mathbb{Z}/n\mathbb{Z}$ (e.g. Euler's theorem: $a^{\varphi(n)} \equiv 1 \pmod{n}$ whenever $\gcd(a, n) = 1$)
- It has fundamental applications in RSA
For any prime $p$ and any $k \ge 1$, $\varphi(p^k) = p^k - p^{k-1}$.
Proof
To count the number of integers in $\{1, \dots, p^k\}$ that are relatively prime to $p^k$, just expand each one in base $p$.
Any such $x$ in base $p$:
$x = a_{k-1} p^{k-1} + \dots + a_1 p + a_0$ with $0 \le a_i \le p - 1$,
where $a_0$ is the remainder of $x$ divided by $p$.
Now $\gcd(x, p^k) = 1$ if and only if $p \nmid x$, i.e. if and only if the last digit $a_0 \neq 0$.
There are exactly $p^{k-1}$ numbers with last digit $a_0 = 0$, so therefore $\varphi(p^k) = p^k - p^{k-1}$.
Q.E.D.
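As a quick sanity check, here is a minimal Python sketch (the helper name phi_brute_force is mine, and it is just the definition of $\varphi$, not an efficient algorithm) that verifies the formula on small prime powers:

```python
from math import gcd

def phi_brute_force(n):
    """Count the integers in {1, ..., n} that are relatively prime to n."""
    return sum(1 for x in range(1, n + 1) if gcd(x, n) == 1)

# Verify phi(p^k) = p^k - p^(k-1) on a few small prime powers.
for p in (2, 3, 5, 7):
    for k in (1, 2, 3):
        assert phi_brute_force(p**k) == p**k - p**(k - 1)
print("formula checked for small prime powers")
```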
Lemma
$\varphi$ is weakly multiplicative; that is, if $\gcd(m, n) = 1$, then $\varphi(mn) = \varphi(m)\,\varphi(n)$.
Proof. The map $x \bmod mn \mapsto (x \bmod m, x \bmod n)$ is a bijection (this is the Chinese Remainder Theorem, below), and $x$ is a unit mod $mn$ exactly when it is a unit both mod $m$ and mod $n$, so the two sides count the same set.
Q.E.D.
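Together with the prime-power formula above, the lemma gives a way to compute $\varphi(n)$ from a factorization of $n$. A small sketch (the function name and example values are mine):

```python
def phi_from_factorization(factors):
    """Compute phi(n) from n's prime factorization, given as a dict {p: k},
    using phi(p^k) = p^k - p^(k-1) and weak multiplicativity."""
    result = 1
    for p, k in factors.items():
        result *= p**k - p**(k - 1)
    return result

# 360 = 2^3 * 3^2 * 5, so phi(360) = (8 - 4) * (9 - 3) * (5 - 1) = 96
print(phi_from_factorization({2: 3, 3: 2, 5: 1}))  # 96
```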
RSA Review
Start with primes $p$ and $q$ with approx. equal number of digits, and let $n = pq$. Compute $\varphi(n) = (p-1)(q-1)$, then choose $e$ and $d$ with $ed \equiv 1 \pmod{\varphi(n)}$.
- Public: (n, e)
- Private: (p,q,d)
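A toy key-generation sketch in Python, assuming the setup above (the tiny primes and the function name are mine; pow(e, -1, m) for a modular inverse needs Python 3.8+):

```python
from math import gcd

def rsa_keygen(p, q, e):
    """Toy RSA key generation from two already-chosen primes p and q."""
    n = p * q
    phi_n = (p - 1) * (q - 1)
    assert gcd(e, phi_n) == 1          # e must be invertible mod phi(n)
    d = pow(e, -1, phi_n)              # d = e^{-1} mod phi(n)
    return (n, e), (p, q, d)           # (public key, private key)

public, private = rsa_keygen(1009, 2003, e=17)
print("public:", public)
print("private:", private)
```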
Why Does Decryption Work?
Why is $(m^e)^d \equiv m \pmod{n}$ for every message $m$?
Proof. Write $ed = 1 + k\,\varphi(n)$. Then $(m^e)^d = m^{ed} = m \cdot (m^{\varphi(n)})^k \equiv m \cdot 1^k = m \pmod{n}$ by Euler's theorem, assuming $\gcd(m, n) = 1$ (the case where $p$ or $q$ divides $m$ is checked separately, prime by prime).
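A numeric round trip with the same toy numbers as above (arbitrary values I chose; real RSA uses huge primes and pads messages):

```python
# (m^e)^d ≡ m (mod n): encrypting then decrypting recovers the message.
p, q, e = 1009, 2003, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))
m = 123456                      # any message with 0 <= m < n
c = pow(m, e, n)                # encrypt
assert pow(c, d, n) == m        # decrypt
print("decryption recovers the message")
```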
How to break RSA
If you can factor $n$ (which is apparently very difficult to do, by the way), then you can get $p$ and $q$. You can then do this (sketched in code below):
- Compute $\varphi(n) = (p-1)(q-1)$
- Solve $ed \equiv 1 \pmod{\varphi(n)}$ for $d$
- Doing the Extended Euclidean Algorithm is much faster than factoring!!!
If you know $\varphi(n)$, you can do the same...
Oddly, computing $\varphi(n)$ and factoring $n$ are actually of almost equivalent complexity!
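A sketch of that attack, assuming you have somehow obtained the factors (extended_gcd and recover_d are my own helper names; the toy numbers match the example above):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def recover_d(e, p, q):
    """Given the factors p, q of n, solve e*d ≡ 1 (mod phi(n)) for d."""
    phi_n = (p - 1) * (q - 1)
    g, d, _ = extended_gcd(e, phi_n)
    assert g == 1
    return d % phi_n

# Leaked factors of the toy modulus n = 1009 * 2003 break the key instantly.
print(recover_d(17, 1009, 2003))
```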
Lemma
If $n = pq$ for distinct primes $p$ and $q$, then $p$ and $q$ are exactly the roots of $x^2 - (n - \varphi(n) + 1)x + n = 0$.
Proof.
Our quadratic $= (x - p)(x - q) = x^2 - (p + q)x + pq$, since $pq = n$ and $p + q = n - \varphi(n) + 1$ (because $\varphi(n) = (p-1)(q-1) = n - p - q + 1$).
Quadratics are easy to solve approximately over $\mathbb{R}$, but square roots are hard to calculate in $\mathbb{Z}/n\mathbb{Z}$.
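In code, assuming you know both $n$ and $\varphi(n)$ (a sketch with my own names and toy numbers; math.isqrt needs Python 3.8+):

```python
from math import isqrt

def factor_from_phi(n, phi_n):
    """Recover p and q from n = p*q and phi(n) by solving
    x^2 - (n - phi(n) + 1)x + n = 0 with the quadratic formula."""
    s = n - phi_n + 1            # s = p + q
    disc = s * s - 4 * n         # discriminant = (p - q)^2, a perfect square
    root = isqrt(disc)
    assert root * root == disc
    p, q = (s + root) // 2, (s - root) // 2
    assert p * q == n
    return p, q

print(factor_from_phi(1009 * 2003, 1008 * 2002))  # (2003, 1009)
```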
Chinese Remainder Theorem
If $m$ and $n$ are relatively prime integers and $a$ and $b$ are any integers, then the simultaneous congruences $x \equiv a \pmod{m}$ and $x \equiv b \pmod{n}$
have a unique solution mod $mn$, in particular $x \equiv a\,n\,(n^{-1} \bmod m) + b\,m\,(m^{-1} \bmod n) \pmod{mn}$.
Proof
Existence (the formula works):
The stated formula, mod $m$, reduces to $a\,n\,(n^{-1} \bmod m) \equiv a \pmod{m}$, since the second term is divisible by $m$.
Similarly, mod $n$, it reduces to $b\,m\,(m^{-1} \bmod n) \equiv b \pmod{n}$.
Uniqueness: if $x_1$ and $x_2$ are both solutions, then $x_1 - x_2$ is divisible by both $m$ and $n$, hence by $mn$ (since $\gcd(m, n) = 1$), so $x_1 \equiv x_2 \pmod{mn}$.
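A direct implementation of the formula (a minimal sketch; crt is my own name, and pow(n, -1, m) needs Python 3.8+):

```python
def crt(a, m, b, n):
    """Solve x ≡ a (mod m), x ≡ b (mod n) for coprime m and n, using
    x = a*n*(n^{-1} mod m) + b*m*(m^{-1} mod n)  (mod mn)."""
    x = a * n * pow(n, -1, m) + b * m * pow(m, -1, n)
    return x % (m * n)

print(crt(2, 3, 3, 5))   # 8: the unique x < 15 with x ≡ 2 (mod 3), x ≡ 3 (mod 5)
```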
Example
If you have fewer than 255 books and
- after stacking by 15, you have 3 left over
- after stacking by 17, you have 4 left over
How many books do you have?
Let $x$ be the number of books. Equivalently, $x \equiv 3 \pmod{15}$ and $x \equiv 4 \pmod{17}$.
Therefore, by the formula, $x \equiv 3 \cdot 17 \cdot (17^{-1} \bmod 15) + 4 \cdot 15 \cdot (15^{-1} \bmod 17) = 3 \cdot 17 \cdot 8 + 4 \cdot 15 \cdot 8 = 888 \equiv 123 \pmod{255}$, so you have 123 books.
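A quick brute-force confirmation of that answer (sketch):

```python
# The unique solution below 255 = 15 * 17 to x ≡ 3 (mod 15), x ≡ 4 (mod 17).
print([x for x in range(255) if x % 15 == 3 and x % 17 == 4])   # [123]
```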
Kerckhoffs' Principle
(See wikipedia:Kerckhoffs's principle→)
Any cryptosystem should remain secure even if everything about the system (except the key) is known.
(cf. security by obscurity)
This is why RSA is so good.
Complexity
How hard is it to...
- multiply two $k$-digit integers? ($O(k^2)$ bit operations with the grade school algorithm)
- reduce one $k$-digit integer mod another? ($O(k^2)$ bit operations)
- compute ? ()
Karatsuba's $O(k^{\log_2 3}) \approx O(k^{1.585})$ seems better than $O(k^2)$. For large $k$, you want to NOT use the grade school multiplication algorithm. Use Karatsuba or FFT-based multiplication instead (see the sketch after the notes below).
Rule of thumb, by approx. number of digits:
- < 100: grade school is best.
- 100..200: Karatsuba
- > 200: FFT
Note: Architecture matters:
- Humans: Grade school multiplication is best for fewer than 4 digits (with some tricks); anything beyond that, we use computers.
- Computers: Depending on chips/memory, the number of digits dictates the algorithm.
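Here is a sketch of Karatsuba's idea in Python (purely illustrative: Python's built-in integers already multiply fast natively; the recursion shows where the $O(k^{\log_2 3})$ bound comes from, since each step does three half-size multiplications instead of four):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers x and y with Karatsuba's algorithm."""
    if x < 10 or y < 10:
        return x * y                      # single-digit base case
    half = max(len(str(x)), len(str(y))) // 2
    base = 10 ** half
    x_hi, x_lo = divmod(x, base)          # x = x_hi * base + x_lo
    y_hi, y_lo = divmod(y, base)
    hi = karatsuba(x_hi, y_hi)
    lo = karatsuba(x_lo, y_lo)
    # (x_hi + x_lo)(y_hi + y_lo) - hi - lo = x_hi*y_lo + x_lo*y_hi
    mid = karatsuba(x_hi + x_lo, y_hi + y_lo) - hi - lo
    return hi * base * base + mid * base + lo

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
print("Karatsuba agrees with built-in multiplication")
```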
Bit Operation?
- bit-by-bit operation: 1+1, 1×0, shift by a bit
- (up to a constant factor) a digit-by-digit operation also counts as a bit operation
Big-Oh Notation
(See Asymptotic Analysis→)
We say two functions $f$ and $g$ satisfy $f(n) = O(g(n))$ iff there are constants $C$ and $N$ with $|f(n)| \le C\,|g(n)|$ for all $n \ge N$.
Examples:
- Grade school multiplication: every digit is multiplied by every other digit, with added carry digits. $O(k^2)$ bit operations for $k$-digit inputs.
- Gaussian Elimination takes $O(n^3)$ arithmetic operations to reduce an $n \times n$ matrix to row-echelon form.
Until Next Time
Quiz on Tuesday
Be able to compute stuff like