Math 21b: Linear Algebra with Differential Equations (Fall 2006)
Errata and notes for Prof. Taubes' ``Chapter 10'',
Math 21b Differential Equations, Spring 2006

Page 2, line 7, and later: missing e in ``forewarned''; and anyway
(with the exception of the occurrence in the middle of page 19)
the intention must be not ``be forewarned'' but something like
``be advised'' or ``rest assured''...

Page 5, last line before final (``Subspaces'') paragraph:
change ``particularly'' to ``particular''.

Page 6, near the middle: an alternative way to see that if p(t)
is always zero then p is the zero polynomial is to use
the algebraic fact noted in connection with Chapter 7.2
of the textbook: any (nonzero) polynomial p(t) of degree n
has at most n roots. Since R is infinite,
it follows that there are infinitely many choices of t for which
p(t) is not zero.
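[Aside, not from the text: the at-most-n-roots fact is easy to spot-check numerically. A short Python sketch of my own, assuming numpy is available:

```python
import numpy as np

# A nonzero polynomial of degree n has at most n roots; so a cubic
# vanishing at 4 distinct points must be the zero polynomial.
p = np.poly([1.0, 2.0, 3.0])       # coefficients of (t-1)(t-2)(t-3)
roots = np.roots(p)
print(len(roots))                  # 3: a cubic has exactly 3 roots, with multiplicity

# Fitting a cubic to the zero function at 4 distinct points forces
# every coefficient to zero:
pts = np.array([0.0, 1.0, 2.0, 3.0])
coeffs = np.polyfit(pts, np.zeros(4), 3)
print(np.allclose(coeffs, 0.0))    # True
```
]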

Page 7, top: Note that ``D^{2}'' is not an arbitrary notation:
D^{2}f is the same as D(D(f)); that is,
D^{2} is the linear transformation DD.
Likewise for D^{3}, D^{4}, etc.
(We shall come back to this in pages 12 and later, in connection
with what this chapter calls ``audacity''.)
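[A small sympy sketch of my own (sympy assumed available): differentiating twice is literally composing D with itself.

```python
import sympy as sp

t = sp.symbols('t')
f = sp.exp(t) * sp.sin(3*t)      # any smooth function will do

Df = sp.diff(f, t)
DDf = sp.diff(Df, t)             # D(D(f))
D2f = sp.diff(f, t, 2)           # D^2 f
print(sp.simplify(DDf - D2f))    # 0
```
]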

Page 7, seventh line from the bottom: the roles of n and n+1
should be switched here. That is, if f'=g
then for any n=1,2,3,... the (n+1)st derivative of f
is the nth derivative of g, not the other way around.
Using the D notation, this is saying that if Df=g
then D^{n+1}f=D^{n}g, as expected.

Page 7, second line from the bottom:
``A little thought should convince you...'' 
this is not a matter of thought but of memory; the statement that
``a linear transformation of R^{n} whose
image is R^{n} must have trivial kernel''
is part of the textbook's ``Summary 3.3.9: Various characterizations of
invertible matrices'' (page 133), namely the equivalence of conditions
(v) and (vi).

Page 8, tenth line from the bottom: omit comma from ``The general, form''.

Page 9, Fact 10.1.2: NB the ``leading coefficient'' (the coefficient
of the nth derivative d^{n}f/dt^{n}) must be 1,
else the assertion might fail. See also Exercise 5 on the next page.

Page 12, line 5: insert ``degree'' before
``from zero up to m_{k}-1''.

Page 12, line 6: The polynomial should be D^{2}+2D-3,
not D^{2}-2D+3, else the desired factorization fails
(thanks to D. Lietz).

Page 12, line 13/14: There is no ``Fact 10.1.3'' in this chapter;
it should say Fact 10.2.1, stated two paragraphs earlier
(thanks to E. Beh).

Page 12, third line from the bottom: this ``audacity'' is in fact
not a heedless manipulation of symbols but an application of the
composition of linear transformations. That is, (D-1)(D-1)(D-1)
is the linear transformation sending any smooth function f to
the smooth function
f''' - 3 f'' + 3 f' - f =
D^{3} f - 3 D^{2} f + 3 D f - f
= (D^{3} - 3 D^{2} + 3 D - 1) f,
where the last ``1'' is the identity transformation.
In general, if p and q are any polynomials then one can check
without too much difficulty that
p(D)q(D) is the same as (pq)(D)
where pq is the product of the polynomials p and q.
(This only uses the fact that D is a linear transformation from
a space to itself; we have already been using this, at least
implicitly, for linear transformations given by square matrices.)
In particular, p(D) and q(D) commute
for all p and q.
This explains what's happening in pages 14-15, and should allay
your possible nervousness about the ``outrageous move'' of
`` `factorizing T by manipulating D as if it were just a `number'
or a variable like lambda rather than the much more subtle object
that says `take the derivative of whatever is in front of me' ''
(page 15, lines 8-10).
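[For the skeptical, the identity p(D)q(D)=(pq)(D) can be spot-checked symbolically. A sketch of my own (sympy assumed available): applying (D-1) three times to a generic function agrees with the expanded operator.

```python
import sympy as sp

t = sp.symbols('t')
f = sp.Function('f')(t)          # a generic smooth function

D = lambda g: sp.diff(g, t)      # the differentiation operator

# Apply (D - 1) three times ...
g = D(f) - f
g = D(g) - g
g = D(g) - g

# ... and compare with the expanded operator D^3 - 3 D^2 + 3 D - 1:
h = sp.diff(f, t, 3) - 3*sp.diff(f, t, 2) + 3*sp.diff(f, t) - f
print(sp.simplify(g - h))        # 0
```
]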

Page 12, second line from the bottom: several students note that
the formula for (D-1)f is not correct. It should be
(c_{2} + 2 c_{3} t) e^{t},
with the t inside the parentheses.

Page 13, near the middle: omit ``until'' from
``Continue until sequentially''.

Page 13, second bullet point (just below the middle):
one must assume not only that no two of the lambda's
are the same, but also that no two are complex conjugates.

Page 13, sixth line from the end: change ``procede'' to ``proceed''.

Page 14, displayed equation near the top: needs a second closing
parenthesis ``)'' before the first equals sign.

Page 14, two lines above the second displayed equation:
the formula for the derivative of
t^{k} e^{nt}
is missing a factor t^{k} in the second term;
that is, the correct formula for this derivative is
k t^{k-1} e^{nt}
+ n t^{k} e^{nt}
(thanks to G. Lazarevski for this and the next; I use the oversized
n to represent the lowercase Greek letter eta
in the text).
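[The corrected product-rule formula is easy to confirm with sympy (my own check, writing eta for the Greek letter; sympy assumed available):

```python
import sympy as sp

t, eta = sp.symbols('t eta')
k = 4                              # any fixed positive integer works

lhs = sp.diff(t**k * sp.exp(eta*t), t)
rhs = k * t**(k-1) * sp.exp(eta*t) + eta * t**k * sp.exp(eta*t)
print(sp.simplify(lhs - rhs))      # 0
```
]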

Page 15, middle: this should start with
D^{2}f + 2 D f + 2 f,
not D^{2}f - 2 D f + 2,
to get the desired solutions -1+i and -1-i.
[Not D^{2}f + 2 D f + 2
as I wrote earlier; that, like the text's
D^{2}f - 2 D f + 2, isn't even linear.]
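[One can confirm the roots with numpy (my own check): the characteristic polynomial of D^{2}f + 2 D f + 2 f is lambda^{2} + 2 lambda + 2.

```python
import numpy as np

roots = np.roots([1, 2, 2])                   # lambda^2 + 2*lambda + 2
print(sorted(roots, key=lambda z: z.imag))    # [-1-1j, -1+1j] up to rounding
```
]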

Page 15, bottom half (before the exercises): note that all those
prescribed values, derivatives, etc. are linear conditions
on the space of solutions; thus such problems can be solved using
the elimination techniques we developed back in Chapter 1.

Page 16, problem 4:
the third word in this sentence is the scalar a;
the fifth and ninth words are the indefinite article ``a''.

Page 17 and thereafter: instead of functions f on [-PI,PI]
it can be useful to think of functions on all of R
that are periodic with period 2*PI: f(t)=f(t')
for all real t and t' such that t'-t is an integer multiple of 2*PI.
[It is enough to check that f(t+2*PI)=f(t) for all real t.]
Note that this is in fact true of the functions
sqrt(1/2), cos(kt), and sin(kt),
and that it is true of the linear combination af+bg of f,g
(a,b any scalars) if it is true of f and g.
Once we know the values of such a function
at all t in [-PI,PI],
we know it on all of R by periodicity.
A Fourier series can then be regarded as a sequence
of approximations to an arbitrary periodic function by
(finite) linear combinations of our basic periodic functions
sqrt(1/2), cos(kt), and sin(kt).
Conversely, if we begin from a function f on [-PI,PI],
we can recover from it a periodic function,
provided the values f(-PI) and f(PI)
at the endpoints are equal.
That is why, in the key Fact 10.3.2 (pages 22-23), the condition
f(-PI)=f(PI) is imposed:
if a continuous function on [-PI,PI]
does not satisfy this condition, we must change one of
f(-PI) or f(PI)
(thus making f discontinuous) to recover a periodic function.
Likewise, a function like f(t)=t^{2}
that satisfies f(-PI)=f(PI)
but not f'(-PI)=f'(PI)
corresponds to a periodic function that is continuous
but not smooth everywhere (not differentiable at PI),
so the Fourier series converges but not very quickly,
especially near the kinks at t=PI, -PI, 3*PI, etc.
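[A numeric spot-check of these periodicity claims (my own illustration, numpy assumed available):

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 101)

# The basic functions cos(kt) and sin(kt) have period 2*pi ...
for k in (1, 2, 3):
    assert np.allclose(np.cos(k * (t + 2*np.pi)), np.cos(k * t))
    assert np.allclose(np.sin(k * (t + 2*np.pi)), np.sin(k * t))

# ... and so does any linear combination of them (and the constant):
f = lambda s: 0.5 + 2*np.cos(s) - 3*np.sin(2*s)
print(np.allclose(f(t + 2*np.pi), f(t)))   # True
```
]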

Page 17, text between the two displayed equations: the formula for
a_{0} is the coefficient of 1, not of sqrt(1/2).
(For the coefficient of sqrt(1/2), multiply the given
a_{0} by sqrt(2).)

Page 17, second bullet point: should be `dot product'
rather than `dot' product.

Page 17, beginning of last paragraph: NB here we require only that
the function be continuous, not necessarily differentiable,
let alone smooth. Note for instance that the absolute value function
|t| is explicitly allowed even though it is not differentiable at t=0.

Page 19-20, examples involving e^{Rt}:
the point of these examples is not so much the exact formulas
but the qualitative behavior for very large R:
f-g has an increasingly thin spike at t=0
whose height is constant or even increasing (``blowing up'')
towards infinity, and yet f-g approaches zero using the
``distance'' we have defined.

Page 20, around line 12: Those ``other notions of distance''
do not let us do Fourier analysis; there's also a subtler approach that
changes the notion of ``function''; if you're curious about this,
look up ``Hilbert space''.
(And yes, ``this phenomena illustrates...'' should be
``this phenomenon illustrates...'' or
``these phenomena illustrate...''.)

Page 20, a few lines above the bottom formula: the fact that
the functions 1 and t are orthogonal can also be seen using
the fact that 1 is an even function and t is an odd function,
so their product t is odd and has zero integral over an interval
such as [-PI,PI] that is symmetrical about the origin.
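[sympy confirms the even-times-odd argument exactly (my own check):

```python
import sympy as sp

t = sp.symbols('t')
# 1 is even and t is odd, so their product t is odd and integrates
# to zero over the symmetric interval [-pi, pi]:
print(sp.integrate(1 * t, (t, -sp.pi, sp.pi)))   # 0
```
]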

Page 20, bottom formula: This set of functions is not as contrived
as it looks. It is the result of applying the Gram-Schmidt process
to the basis (1,t,t^{2}) of the space of polynomials of degree
at most 2.

Page 21, line 13 from the end: missing t in Schmidt.

Page 21, lines 5 and 6 to the end: ``the function that you get by
dividing the function
t^{2} - (1/3) Pi^{2}
by the square root of the integral from -Pi to Pi of
(t^{2} - (1/3) Pi^{2})^{2} ''
Not quite: we want to divide by the norm of that function, which
is the square root not of that integral but of 1/Pi times the integral.
(Thanks again to G. Lazarevski for the correction; this would
have been much easier to see if the English description of this
factor were supplemented or replaced by a displayed formula.)
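[The corrected normalizing factor can be computed with sympy (my own check; note that the inner product here carries the 1/Pi factor):

```python
import sympy as sp

t = sp.symbols('t')
g = t**2 - sp.pi**2 / 3

# Norm with respect to <f,g> = (1/pi) * integral over [-pi, pi]:
norm = sp.sqrt(sp.integrate(g**2, (t, -sp.pi, sp.pi)) / sp.pi)
print(sp.simplify(norm**2))        # 8*pi**4/45

# Dividing by this norm really does give a unit vector:
h = g / norm
print(sp.simplify(sp.integrate(h**2, (t, -sp.pi, sp.pi)) / sp.pi))   # 1
```
]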

Page 21, last line: ``infinite orthonormal basis''; this should say
``infinite orthonormal set'', since ``basis'' would imply that
it spans our space C[-PI,PI],
which is what the next sentence indicates.
Note that even then this is not the kind of
``spanning'' that we saw in finite-dimensional spaces, because
typically an ``infinite linear combination'' of basis vectors
is needed.

Page 22, first line: ``many such basis'' should be
``many such bases''.

Page 23, line 3: see the paragraph above ``on page 17 and thereafter''
concerning the condition f(-PI)=f(PI).

Page 23, bottom half: in fact the convergence is already
a consequence of the fact that ||f|| exceeds the norm of
each projection of f; it is the stronger claim that
the sum converges to ||f||^{2} that requires
some of the subtle facts in 10.3.2 (pages 22-23).
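[A standard worked example of the stronger claim, my own and not from the text: for f(t)=t on [-PI,PI] the Fourier coefficients are b_k = 2(-1)^{k+1}/k, and the sum of their squares converges to the squared norm (1/Pi) times the integral of t^{2}, which is 2 Pi^{2}/3.

```python
import math

# Parseval for f(t) = t on [-pi, pi]: b_k = 2*(-1)^(k+1)/k, so
# sum of b_k^2 = 4 * sum 1/k^2 should approach
# ||f||^2 = (1/pi) * integral of t^2 over [-pi, pi] = 2*pi^2/3.
partial = sum((2.0 / k) ** 2 for k in range(1, 100001))
print(partial, 2 * math.pi**2 / 3)   # both about 6.5797
```
]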

Page 24, line 3: this formula requires the integrals of
e^{t} cos(kt) dt and
e^{t} sin(kt) dt.
Now that you know that
cos(x) = (e^{ix} + e^{-ix})/2 and
sin(x) = (e^{ix} - e^{-ix})/2i,
you can write those integrals as linear combinations of
e^{(1+ki)t} and e^{(1-ki)t},
which explains the denominators of 1+k^{2}:
they arise from rationalizing the denominators of
1/(1+ik) and 1/(1-ik).
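[The rationalization can be spot-checked with sympy (my own check):

```python
import sympy as sp

k = sp.symbols('k', real=True)
# Multiplying top and bottom by the complex conjugate gives the
# real denominator 1 + k^2:
print(sp.simplify(1/(1 + sp.I*k) - (1 - sp.I*k)/(1 + k**2)))   # 0
print(sp.simplify(1/(1 - sp.I*k) - (1 + sp.I*k)/(1 + k**2)))   # 0
```
]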

Page 26, problem 3: in case you haven't seen this already,
``cosh'' is the hyperbolic cosine function, defined by
cosh(x)=(e^{x}+e^{-x})/2.
[You can check that this is the cosine of the imaginary number ix
using Euler's formula
cos(t) = (e^{it} + e^{-it})/2.
Cf. ``sinh'' (hyperbolic sine) on p.31-32 below.]
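[A quick numeric check of my own that cosh(x) really is the cosine of the imaginary number ix, using Python's complex cosine:

```python
import cmath, math

x = 0.7                                              # any real test point
print(abs(cmath.cos(1j * x) - math.cosh(x)) < 1e-12)   # True
```
]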

Page 27, line 1: extraneous s in ``Fourier seriess''.

Page 27, ``sample problem'': The choice of [-PI,PI] here
is a convenient normalization so that we can use our
formulas involving sin(nx) and cos(nx) from 10.3.
See problem 5 on page 26 for what we would do
for general a,b.

Page 29, first few lines: note that this Laplacian was also
what we called the linear operator D^{2}
earlier in this chapter.

Page 31: note that these examples T(t,x) of solutions of the heat
equation are not periodic. Periodicity is another kind of
``boundary condition'' (corresponding to a circular rod
of variable temperature) that can be imposed to make
the solution of the heat equation unique.
[Also, line 4: missing space between the words ``in spite''.]

Page 3132, problem 2: in case you haven't seen this already,
``sinh'' is the hyperbolic sine function, defined by
sinh(x)=(e^{x}-e^{-x})/2.
[You can check that this is (1/i) times the sine of the imaginary
number ix using Euler's formula
sin(t) = (e^{it} - e^{-it})/2i.
Cf. ``cosh'' (hyperbolic cosine) on p.27 above.
Can you verify the hyperbolic Pythagorean formula
(cosh(t))^{2} - (sinh(t))^{2} = 1?]
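[And a numeric check of the hyperbolic Pythagorean formula (my own):

```python
import math

# cosh^2 - sinh^2 = 1: expanding the exponential definitions gives
# ((e^x + e^-x)^2 - (e^x - e^-x)^2) / 4 = (4 * e^x * e^-x) / 4 = 1.
for x in (-2.0, 0.0, 0.5, 3.0):
    assert abs(math.cosh(x)**2 - math.sinh(x)**2 - 1.0) < 1e-9
print("verified")
```
]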

Page 33, line 7 from the end (in italics): missing i in ``function''.

Page 38, Fact 10.5.3: this is also a consequence of the fact that
the solutions to the Laplace equation constitute the kernel
of the linear operator Delta, which is linear and thus has
a linear space for its kernel. (Also, in line 2 of this Fact
``define'' should be ``defined''.)

Page 42, final paragraph: note that (as stated explicitly on the next page)
this model of string vibration also assumes small displacements
between the string's position and its equilibrium (rest) state.

If you find more corrections, please let me know.