Section 35 Inner Product Spaces

Subsection Application: Fourier Series

In calculus, a Taylor polynomial for a function f is a polynomial approximation that fits f well around the center of the approximation. For this reason, Taylor polynomials are good local approximations, but they are not in general good global approximations. In particular, if a function f has periodic behavior it is impossible to model f well globally with polynomials that have infinite limits at infinity. For these kinds of functions, trigonometric polynomials are better choices. Trigonometric polynomials lead us to Fourier series, and we will investigate how inner products allow us to use trigonometric polynomials to model musical tones later in this section.

Subsection Introduction

In Section 23 we were introduced to inner products in Rn. The concept of an inner product can be extended to vector spaces, as we will see in this section. This will allow us to measure lengths and angles between vectors and define orthogonality in certain vector spaces.

Recall that an inner product on Rn assigns to each pair of vectors u and v the scalar ⟨u,v⟩. Thus, an inner product on Rn defines a mapping from Rn×Rn to R. Recall also that an inner product on Rn is commutative, distributes over vector addition, and respects scalar multiplication, and the inner product of a vector in Rn with itself is always non-negative and is equal to 0 only when the vector is the zero vector. We will investigate these ideas in vector spaces in Preview Activity 35.1.

Preview Activity 35.1.

Consider the vector space P2 of polynomials of degree less than or equal to 2 with real coefficients. Define a mapping from P2×P2 to R by

⟨p(t), q(t)⟩ = ∫_0^1 p(t)q(t) dt.
(a)

Calculate ⟨1, t⟩.

(b)

If p(t) and q(t) are in P2, is it true that

⟨p(t), q(t)⟩ = ⟨q(t), p(t)⟩?

Verify your answer.

(c)

If p(t), q(t), and r(t) are in P2, is it true that

⟨p(t)+q(t), r(t)⟩ = ⟨p(t), r(t)⟩ + ⟨q(t), r(t)⟩?

Verify your answer.

(d)

If p(t) and q(t) are in P2 and c is a scalar, is it true that

⟨cp(t), q(t)⟩ = c⟨p(t), q(t)⟩?

Verify your answer.

(e)

If p(t) is in P2, must it be the case that ⟨p(t), p(t)⟩ ≥ 0? When is ⟨p(t), p(t)⟩ = 0?
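The integrals in this preview activity can be checked exactly. The sketch below (a Python illustration of our own; the name ip is not from the text) represents polynomials in P2 as coefficient lists and evaluates ⟨p,q⟩ = ∫_0^1 p(t)q(t) dt term by term:

```python
from fractions import Fraction

def ip(p, q):
    # <p, q> = integral from 0 to 1 of p(t) q(t) dt, where a polynomial is
    # stored as a coefficient list [c0, c1, ...] meaning c0 + c1*t + ...
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += Fraction(a) * Fraction(b)
    # integral of t^k over [0, 1] is 1/(k + 1)
    return sum(c / (k + 1) for k, c in enumerate(prod))

one, t = [1], [0, 1]
print(ip(one, t))  # <1, t> = 1/2

# spot-check the four inner product properties on sample polynomials
p, q, r = [1, 2, 3], [0, -1, 1], [2, 0, 5]
assert ip(p, q) == ip(q, p)                           # symmetry
assert ip([a + b for a, b in zip(p, q)], r) == ip(p, r) + ip(q, r)
assert ip([3 * c for c in p], q) == 3 * ip(p, q)      # homogeneity
assert ip(p, p) > 0                                   # positivity, p != 0
```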

Subsection Inner Product Spaces

As we saw in Preview Activity 35.1, we can define a mapping from P2×P2 to R that has the same properties as inner products on Rn. So we can extend the definition of inner product to arbitrary vector spaces.

Definition 35.1.

An inner product ⟨ , ⟩ on a vector space V is a mapping from V×V to R satisfying

  1. ⟨u,v⟩ = ⟨v,u⟩ for all u and v in V,

  2. ⟨u+v,w⟩ = ⟨u,w⟩ + ⟨v,w⟩ for all u, v, and w in V,

  3. ⟨cu,v⟩ = c⟨u,v⟩ for all u, v in V and all scalars c,

  4. ⟨u,u⟩ ≥ 0 for all u in V and ⟨u,u⟩ = 0 if and only if u = 0.

An inner product space is a vector space on which an inner product is defined.

Activity 35.2.

Consider the mapping ⟨ , ⟩ from P1×P1 to R defined by

⟨p(t), q(t)⟩ = p(0)q(0).
(a)

Show that this mapping satisfies the second property of an inner product.

(b)

Although this mapping satisfies the first three properties of an inner product, show that this mapping does not satisfy the fourth property and so is not an inner product.

Since inner products in vector spaces are defined in the same way as inner products in Rn, they will satisfy the same properties. Some of these properties are summarized in the following theorem.

One special inner product is indicated in Preview Activity 35.1. Recall that C[a,b] is the vector space of continuous functions on the closed interval [a,b]. Let ⟨ , ⟩ from C[a,b]×C[a,b] to R be defined by

⟨f(x), g(x)⟩ = ∫_a^b f(x)g(x) dx.

The verification that this mapping is an inner product is left to the exercises.

Subsection The Length of a Vector

We can use inner products to define the length of any vector in an inner product space and the distance between two vectors in an inner product space. The idea comes right from the relationship between lengths of vectors in Rn and inner products on Rn (compare to Definition 23.5).

Definition 35.3.

Let v be a vector in an inner product space. The length of v is the real number

||v|| = √⟨v,v⟩.

The length of a vector in a vector space is also called magnitude or norm. Just as with inner products on Rn we can use the notion of length to define unit vectors in inner product spaces (compare to Definition 23.7).

Definition 35.4.

A vector v in an inner product space is a unit vector if ||v||=1.

We can find a unit vector in the direction of a nonzero vector v in an inner product space by dividing by the norm of the vector. That is, the vector v/||v|| is a unit vector in the direction of the vector v, provided that v is not zero.

We define the distance between vectors u and v in an inner product space in the same way we defined distance using the dot product (compare to Definition 23.8).

Definition 35.5.

Let u and v be vectors in an inner product space. The distance between u and v is the length of the difference uv or

||uv||.

Activity 35.3.

The trace (see Definition 19.8) of an n×n matrix A=[aij] is the sum of the diagonal entries of A. That is,

trace(A) = a11 + a22 + ⋯ + ann.

If A and B are in the space Mn×n of n×n matrices with real entries, we define the mapping ⟨ , ⟩ from Mn×n×Mn×n to R by

⟨A, B⟩ = trace(AB^T).

This mapping is an inner product on the space Mn×n called the Frobenius inner product (details are in the exercises). Let A=[1022] and B=[3421] in M2×2.

(a)

Find the length of the vectors A and B using the Frobenius inner product.

(b)

Find the distance between A and B using the Frobenius inner product.
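Since the printed entries of A and B may not have survived formatting, the sketch below uses matrices of our own choosing (A0 and B0 are illustrative only, not necessarily the matrices of Activity 35.3) to show how the Frobenius inner product produces lengths and distances:

```python
import math

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def frobenius_ip(A, B):
    # <A, B> = trace(A B^T), which equals the sum of entrywise products
    n = len(A)
    ABt = [[sum(A[i][k] * B[j][k] for k in range(n)) for j in range(n)]
           for i in range(n)]
    return trace(ABt)

A0 = [[1, 0], [2, -2]]   # illustrative matrices, chosen by us
B0 = [[3, 4], [2, 1]]

length_A = math.sqrt(frobenius_ip(A0, A0))   # ||A0|| = sqrt(1+0+4+4) = 3
diff = [[A0[i][j] - B0[i][j] for j in range(2)] for i in range(2)]
dist = math.sqrt(frobenius_ip(diff, diff))   # distance ||A0 - B0||
print(length_A, dist)
```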

Subsection Orthogonality in Inner Product Spaces

We defined orthogonality in Rn using inner products in Rn (see Definition 23.11) and the angle between vectors. We can extend those ideas to any inner product space.

If u and v are nonzero vectors in an inner product space, then the angle θ between u and v is such that

cos(θ) = ⟨u,v⟩ / (||u|| ||v||)

and 0 ≤ θ ≤ π. This angle is well-defined due to the Cauchy-Schwarz inequality |⟨u,v⟩| ≤ ||u|| ||v||, whose proof is left to the exercises.

With the angle between vectors in mind, we can define orthogonal vectors in an inner product space.

Definition 35.6.

Vectors u and v in an inner product space are orthogonal if

⟨u, v⟩ = 0.

Note that this defines the zero vector to be orthogonal to every vector.

Activity 35.4.

In this activity we use the Frobenius inner product (see Activity 35.3). Let A=[0110] and B=[0100] in M2×2.

(a)

Find a nonzero vector in M2×2 orthogonal to A.

(b)

Find the angle between A and B.

Using orthogonality we can generalize the notions of orthogonal sets and bases, orthonormal bases and orthogonal complements we defined in Rn to all inner product spaces in a natural way.

Subsection Orthogonal and Orthonormal Bases in Inner Product Spaces

As we did with inner products in Rn, we define an orthogonal set to be one in which all of the vectors in the set are orthogonal to each other (compare to Definition 24.1).

Definition 35.7.

A subset S of an inner product space for which ⟨u,v⟩ = 0 for all u ≠ v in S is called an orthogonal set.

As in Rn, an orthogonal set of nonzero vectors is always linearly independent. The proof is similar to that of Theorem 24.3 and is left to the exercises.

A basis that is also an orthogonal set is given a special name (compare to Definition 24.2).

Definition 35.9.

An orthogonal basis B for a subspace W of an inner product space is a basis of W that is also an orthogonal set.

Using inner products in Rn, we saw that the representation of a vector as a linear combination of vectors in an orthogonal basis was quite elegant. The same is true in any inner product space. To see this, let B={v1, v2, …, vm} be an orthogonal basis for a subspace W of an inner product space and let x be any vector in W. We know that

x = x1 v1 + x2 v2 + ⋯ + xm vm

for some scalars x1, x2, …, xm. If 1 ≤ k ≤ m, then, using inner product properties and the orthogonality of the vectors vi, we have

⟨vk, x⟩ = x1⟨vk,v1⟩ + x2⟨vk,v2⟩ + ⋯ + xm⟨vk,vm⟩ = xk⟨vk,vk⟩.

So

xk = ⟨x,vk⟩ / ⟨vk,vk⟩.

Thus, we can calculate each weight individually with two simple inner product calculations.

We summarize this discussion in the next theorem (compare to Theorem 24.4).
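As an illustration of this weight formula (our own example: the shifted Legendre polynomials 1, 2t−1, and 6t^2−6t+1, which are orthogonal under ⟨p,q⟩ = ∫_0^1 p(t)q(t) dt), each weight comes from just two inner products:

```python
from fractions import Fraction

def ip(p, q):
    # <p, q> = integral_0^1 p(t)q(t) dt for coefficient lists [c0, c1, ...]
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += Fraction(a) * Fraction(b)
    return sum(c / (k + 1) for k, c in enumerate(prod))

# an orthogonal basis of P2 for this inner product (shifted Legendre)
v1, v2, v3 = [1], [-1, 2], [1, -6, 6]
assert ip(v1, v2) == ip(v1, v3) == ip(v2, v3) == 0

x = [0, 0, 1]  # x(t) = t^2
weights = [ip(x, v) / ip(v, v) for v in (v1, v2, v3)]  # x_k = <x,v_k>/<v_k,v_k>
print(weights)

# reconstruct x from the weights to confirm the decomposition
recon = [Fraction(0)] * 3
for c, v in zip(weights, (v1, v2, v3)):
    for k, coef in enumerate(v):
        recon[k] += c * coef
assert recon == [0, 0, 1]
```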

Activity 35.5.

Let p1(t) = 1−t, p2(t) = −2+4t+4t^2, and p3(t) = 7−41t+40t^2 be vectors in the inner product space P2 with inner product defined by ⟨p(t),q(t)⟩ = ∫_0^1 p(t)q(t) dt. Let B={p1(t), p2(t), p3(t)}. You may assume that B is an orthogonal basis for P2. Let z(t) = 4−2t^2. Find the weight x3 so that z(t) = x1 p1(t) + x2 p2(t) + x3 p3(t). Use technology as appropriate to evaluate any integrals.

The decomposition (35.1) is even simpler if ⟨vk,vk⟩ = 1 for each k. Recall that

⟨v,v⟩ = ||v||^2,

so the condition ⟨v,v⟩ = 1 implies that the vector v has norm 1. As with inner products in Rn, an orthogonal basis with this additional condition is given a special name (compare to Definition 24.5).

Definition 35.11.

An orthonormal basis B={v1, v2, …, vm} for a subspace W of an inner product space is an orthogonal basis such that ||vk|| = 1 for 1 ≤ k ≤ m.

If B={v1, v2, …, vm} is an orthonormal basis for a subspace W of an inner product space and x is a vector in W, then (35.1) becomes

(35.2) x = ⟨x,v1⟩ v1 + ⟨x,v2⟩ v2 + ⋯ + ⟨x,vm⟩ vm.

Recall that we can construct an orthonormal basis from an orthogonal basis by dividing each basis vector by its magnitude.

Subsection Orthogonal Projections onto Subspaces

In Section 25 we saw how to project a vector v in Rn onto a subspace W of Rn. The same process works for vectors in any inner product space.

Definition 35.12.

Let W be a subspace of an inner product space V and let B={w1, w2, …, wm} be an orthogonal basis for W. For a vector v in V, the orthogonal projection of v onto W is the vector

proj_W v = (⟨v,w1⟩/⟨w1,w1⟩) w1 + (⟨v,w2⟩/⟨w2,w2⟩) w2 + ⋯ + (⟨v,wm⟩/⟨wm,wm⟩) wm.

The projection of v orthogonal to W is the vector

proj_{W⊥} v = v − proj_W v.

The notation proj_{W⊥} v indicates that we expect this vector to be orthogonal to every vector in W.
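A minimal sketch of this definition for vectors in R^n (function names are ours; the standard dot product is used here, but any inner product could be passed in its place):

```python
from fractions import Fraction

def dot(u, v):
    # the standard dot product on R^n
    return sum(Fraction(x) * Fraction(y) for x, y in zip(u, v))

def project(v, basis, ip=dot):
    # proj_W v = sum_i (<v, w_i>/<w_i, w_i>) w_i for an orthogonal basis of W
    out = [Fraction(0)] * len(v)
    for w in basis:
        c = ip(v, w) / ip(w, w)
        out = [o + c * wi for o, wi in zip(out, w)]
    return out

v = [1, 2, 3]
W = [[1, 1, 0], [1, -1, 0]]            # orthogonal basis for the xy-plane
p = project(v, W)
perp = [x - y for x, y in zip(v, p)]   # the projection of v orthogonal to W
assert p == [1, 2, 0]
assert all(dot(perp, w) == 0 for w in W)  # perp is orthogonal to every basis vector
```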

Activity 35.6.

In Section 25 we showed that in the inner product space Rn using the dot product as inner product, if W is a subspace of Rn and v is in Rn, then proj_{W⊥} v is orthogonal to every vector in W. In this activity we verify that same fact in an inner product space. That is, assume that B={w1, w2, …, wm} is an orthogonal basis for a subspace W of an inner product space V and v is a vector in V. Follow the indicated steps to show that proj_{W⊥} v is orthogonal to every vector in B.

(a)

Let z be the projection of v onto W. Write z in terms of the basis vectors in B.

(b)

The vector v − z is the projection of v orthogonal to W. Let k be between 1 and m. Use the result of part (a) to show that v − z is orthogonal to wk. Exercise 16 then shows that v − z is orthogonal to every vector in W.

Activity 35.6 shows that the vector v − proj_W v is orthogonal to every vector in W. So, in fact, proj_{W⊥} v is the projection of v onto the orthogonal complement of W, which will be defined shortly.

Subsection Best Approximations in Inner Product Spaces

We have seen, for example in using linear regression to fit a line to a set of data, that we often want to find a vector in a subspace that “best” approximates a given vector in a vector space. As in Rn, the projection of a vector onto a subspace has this important property. That is, proj_W v is the vector in W closest to v and therefore the best approximation of v by a vector in W. To see that this is true in any inner product space, we first need a generalization of the Pythagorean Theorem that holds in inner product spaces.

Proof.

Let u and v be orthogonal vectors in an inner product space V. Then

||u−v||^2 = ⟨u−v, u−v⟩ = ⟨u,u⟩ − 2⟨u,v⟩ + ⟨v,v⟩ = ⟨u,u⟩ − 2(0) + ⟨v,v⟩ = ||u||^2 + ||v||^2.

Note that replacing v with −v in the theorem also shows that ||u+v||^2 = ||u||^2 + ||v||^2 if u and v are orthogonal.

Now we will prove that the projection of a vector u onto a subspace W of an inner product space V is the best approximation in W to the vector u.

Proof.

Let W be a subspace of an inner product space V and let u be a vector in V. Let x be a vector in W. Now

u − x = (u − proj_W u) + (proj_W u − x).

Since both proj_W u and x are in W, we know that proj_W u − x is in W. Since proj_{W⊥} u = u − proj_W u is orthogonal to every vector in W, we have that u − proj_W u is orthogonal to proj_W u − x. We can now use the Generalized Pythagorean Theorem to conclude that

||u − x||^2 = ||u − proj_W u||^2 + ||proj_W u − x||^2.

Since x ≠ proj_W u, it follows that ||proj_W u − x||^2 > 0 and

||u − x||^2 > ||u − proj_W u||^2.

Since norms are nonnegative, we can conclude that ||u − proj_W u|| < ||u − x|| as desired.

Theorem 35.14 shows that the distance from proj_W v to v is less than the distance from any other vector in W to v. So proj_W v is the best approximation to v of all the vectors in W.

In Rn using the dot product as inner product, if v = [v1 v2 v3 ⋯ vn]^T and proj_W v = [w1 w2 w3 ⋯ wn]^T, then the square of the error in approximating v by proj_W v is given by

||v − proj_W v||^2 = Σ_{i=1}^{n} (vi − wi)^2.

So proj_W v minimizes this sum of squares over all vectors in W. As a result, we call proj_W v the least squares approximation to v.
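The least squares property can be sanity-checked numerically: no vector in W sampled at random gets closer to v than the projection does. A self-contained sketch with the dot product on R^3 (all names are ours):

```python
import random

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def dist2(u, v):
    # squared distance ||u - v||^2
    d = [x - y for x, y in zip(u, v)]
    return dot(d, d)

w1, w2 = [1, 1, 0], [1, -1, 0]   # an orthogonal basis for W (the xy-plane)
v = [1.0, 2.0, 3.0]

proj = [0.0, 0.0, 0.0]
for w in (w1, w2):
    c = dot(v, w) / dot(w, w)    # <v, w> / <w, w>
    proj = [p + c * wi for p, wi in zip(proj, w)]

best = dist2(v, proj)            # squared distance from v to proj_W v
random.seed(0)
for _ in range(1000):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    x = [a * p + b * q for p, q in zip(w1, w2)]   # a random vector in W
    assert dist2(v, x) >= best - 1e-9             # never closer than the projection
print(proj, best)
```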

Activity 35.7.

The set B = {1, t − 1/2, t^3 − (9/10)t + 1/5} is an orthogonal basis for a subspace W of the inner product space P3 using the inner product ⟨p(t),q(t)⟩ = ∫_0^1 p(t)q(t) dt. Find the polynomial in W that is closest to the polynomial r(t) = t^2 and give a numeric estimate of how good this approximation is.

Subsection Orthogonal Complements

If we have a set of vectors S in an inner product space V, we can define the orthogonal complement of S as we did in Rn (see Definition 23.14).

Definition 35.15.

The orthogonal complement of a subset S of an inner product space V is the set

S⊥ = {v ∈ V : ⟨v,u⟩ = 0 for all u ∈ S}.

As we saw in Rn, to show that a vector is in the orthogonal complement of a subspace, it is enough to show that the vector is orthogonal to every vector in a basis for that subspace. The same is true in any inner product space. The proof is left to the exercises.

Activity 35.8.

Consider P2 with the inner product p(t),q(t)=01p(t)q(t) dt.

(a)

Find ⟨p(t), 1−t⟩ where p(t) = a + bt + ct^2 is in P2.

(b)

Describe as best you can the orthogonal complement of Span{1−t} in P2. Is p(t) = 1 − 2t − 2t^2 in this orthogonal complement? Is p(t) = 1 + t − t^2?

As was the case in Rn, given a subspace W of an inner product space V, any vector in V can be written uniquely as a sum of a vector in W and a vector in W⊥.

Activity 35.9.

Let V be an inner product space of dimension n, and let W be a subspace of V. Let x be any vector in V. We will demonstrate that x can be written uniquely as a sum of a vector in W and a vector in W⊥.

(a)

Explain why proj_W x is in W.

(b)

Explain why proj_{W⊥} x is in W⊥.

(c)

Explain why x can be written as a sum of vectors, one in W and one in W⊥.

(d)

Now we demonstrate the uniqueness of this decomposition. Suppose x = w + w1 and x = u + u1, where w and u are in W and w1 and u1 are in W⊥. Show that w = u and w1 = u1, so that the representation of x as a sum of a vector in W and a vector in W⊥ is unique. (Hint: What is W ∩ W⊥?)

We summarize the result of Activity 35.9.

Theorem 35.17 is useful in many applications. For example, to compress an image using wavelets, we store the image as a collection of data, then rewrite the data using a succession of subspaces and their orthogonal complements. This new representation allows us to visualize the data in a way that compression is possible.

Subsection Examples

What follows are worked examples that use the concepts from this section.

Example 35.18.

Let V=P3 be the inner product space with inner product

⟨p(t), q(t)⟩ = ∫_{−1}^{1} p(t)q(t) dt.

Let p1(t) = 1+t, p2(t) = 1−3t, p3(t) = 3t−5t^3, and p4(t) = 1−3t^2.

(a)

Show that the set B={p1(t),p2(t),p3(t),p4(t)} is an orthogonal basis for V.

Solution.

All calculations are done by hand or with a computer algebra system, so we leave those details to the reader.

If we show that the set B is an orthogonal set, then Theorem 35.8 shows that B is linearly independent. Since dim(P3) = 4, the linearly independent set B that contains four vectors must be a basis for P3. To determine if the set B is an orthogonal set, we must calculate the inner products of pairs of distinct vectors in B. Since ⟨1+t, 1−3t⟩ = 0, ⟨1+t, 3t−5t^3⟩ = 0, ⟨1+t, 1−3t^2⟩ = 0, ⟨1−3t, 3t−5t^3⟩ = 0, ⟨1−3t, 1−3t^2⟩ = 0, and ⟨3t−5t^3, 1−3t^2⟩ = 0, we conclude that B is an orthogonal basis for P3.

(b)

Use 35.10 to write the polynomial q(t) = t^2 + t^3 as a linear combination of the basis vectors in B.

Solution.

All calculations are done by hand or with a computer algebra system, so we leave those details to the reader.

We can write the polynomial q(t) = t^2 + t^3 as a linear combination of the basis vectors in B as follows:

q(t) = (⟨q(t),p1(t)⟩/⟨p1(t),p1(t)⟩) p1(t) + (⟨q(t),p2(t)⟩/⟨p2(t),p2(t)⟩) p2(t) + (⟨q(t),p3(t)⟩/⟨p3(t),p3(t)⟩) p3(t) + (⟨q(t),p4(t)⟩/⟨p4(t),p4(t)⟩) p4(t).

Now

⟨q(t),p1(t)⟩ = 16/15, ⟨q(t),p2(t)⟩ = −8/15, ⟨q(t),p3(t)⟩ = −8/35, ⟨q(t),p4(t)⟩ = −8/15, ⟨p1(t),p1(t)⟩ = 8/3, ⟨p2(t),p2(t)⟩ = 8, ⟨p3(t),p3(t)⟩ = 8/7, ⟨p4(t),p4(t)⟩ = 8/5,

so

q(t) = (16/15)/(8/3) p1(t) + (−8/15)/8 p2(t) + (−8/35)/(8/7) p3(t) + (−8/15)/(8/5) p4(t) = (2/5) p1(t) − (1/15) p2(t) − (1/5) p3(t) − (1/3) p4(t).
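These inner products and weights can be verified exactly. The sketch below (our own script) integrates over [−1,1] using ∫_{−1}^{1} t^k dt = 2/(k+1) for even k and 0 for odd k:

```python
from fractions import Fraction

def ip(p, q):
    # <p, q> = integral_{-1}^{1} p(t)q(t) dt, coefficient lists [c0, c1, ...]
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += Fraction(a) * Fraction(b)
    # only even powers contribute on the symmetric interval [-1, 1]
    return sum(Fraction(2, k + 1) * c for k, c in enumerate(prod) if k % 2 == 0)

p1, p2, p3, p4 = [1, 1], [1, -3], [0, 3, 0, -5], [1, 0, -3]
q = [0, 0, 1, 1]  # q(t) = t^2 + t^3
coeffs = [ip(q, p) / ip(p, p) for p in (p1, p2, p3, p4)]
print(coeffs)  # the weights 2/5, -1/15, -1/5, -1/3
```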

Example 35.19.

Let V be the inner product space R4 with inner product defined by

⟨[u1 u2 u3 u4]^T, [v1 v2 v3 v4]^T⟩ = u1v1 + 2u2v2 + 3u3v3 + 4u4v4.
(a)

Let W be the plane spanned by [1 1 0 1]^T and [−6 1 −7 1]^T in V. Find the vector in W that is closest to the vector [−2 0 1 3]^T. Exactly how close is your best approximation to the vector [−2 0 1 3]^T?

Solution.

The vector we're looking for is the projection of [−2 0 1 3]^T onto the plane. A spanning set for the plane is B = {[1 1 0 1]^T, [−6 1 −7 1]^T}. Neither vector in B is a scalar multiple of the other, so B is a basis for the plane. Since

⟨[1 1 0 1]^T, [−6 1 −7 1]^T⟩ = −6 + 2 + 0 + 4 = 0,

the set B is an orthogonal basis for the plane. The projection of the vector v = [−2 0 1 3]^T onto the plane spanned by w1 = [1 1 0 1]^T and w2 = [−6 1 −7 1]^T is given by

(⟨v,w1⟩/⟨w1,w1⟩) w1 + (⟨v,w2⟩/⟨w2,w2⟩) w2 = (10/7)[1 1 0 1]^T + (3/189)[−6 1 −7 1]^T = (1/189)[252 273 −21 273]^T = [4/3 13/9 −1/9 13/9]^T.

To measure how close [4/3 13/9 −1/9 13/9]^T is to [−2 0 1 3]^T, we calculate

||[4/3 13/9 −1/9 13/9]^T − [−2 0 1 3]^T|| = ||[10/3 13/9 −10/9 −14/9]^T|| = √(100/9 + 338/81 + 300/81 + 784/81) = (1/9)√2322 ≈ 5.35.
(b)

Express the vector [−2 0 1 3]^T as the sum of a vector in W and a vector orthogonal to W.

If v = [−2 0 1 3]^T, then proj_W v is in W and

proj_{W⊥} v = v − proj_W v = [−10/3 −13/9 10/9 14/9]^T

is in W⊥, and v = proj_W v + proj_{W⊥} v.
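The arithmetic in this example can be checked exactly. The sketch below (our own script) uses one consistent choice of signs for the example's vectors, which is an assumption on our part since signs may not have survived formatting; with that choice it verifies the orthogonality of the basis, the projection, and the decomposition:

```python
from fractions import Fraction

WEIGHTS = [1, 2, 3, 4]

def ip(u, v):
    # <u, v> = u1 v1 + 2 u2 v2 + 3 u3 v3 + 4 u4 v4
    return sum(Fraction(w * a * b) for w, a, b in zip(WEIGHTS, u, v))

# vectors of the example, signs reconstructed (an assumption)
w1, w2 = [1, 1, 0, 1], [-6, 1, -7, 1]
v = [-2, 0, 1, 3]
assert ip(w1, w2) == 0   # {w1, w2} is an orthogonal basis of the plane W

c1 = ip(v, w1) / ip(w1, w1)   # 10/7
c2 = ip(v, w2) / ip(w2, w2)   # 3/189 = 1/63
proj = [c1 * a + c2 * b for a, b in zip(w1, w2)]
perp = [a - b for a, b in zip(v, proj)]

assert proj == [Fraction(4, 3), Fraction(13, 9), Fraction(-1, 9), Fraction(13, 9)]
assert ip(perp, w1) == 0 and ip(perp, w2) == 0   # perp lies in W-perp
print(ip(perp, perp))  # squared distance 2322/81 = 86/3
```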

Subsection Summary

  • An inner product ⟨ , ⟩ on a vector space V is a mapping from V×V to R satisfying

    1. ⟨u,v⟩ = ⟨v,u⟩ for all u and v in V,

    2. ⟨u+v,w⟩ = ⟨u,w⟩ + ⟨v,w⟩ for all u, v, and w in V,

    3. ⟨cu,v⟩ = c⟨u,v⟩ for all u, v in V and c ∈ R,

    4. ⟨u,u⟩ ≥ 0 for all u in V and ⟨u,u⟩ = 0 if and only if u = 0.

  • An inner product space is a pair (V, ⟨ , ⟩), where V is a vector space and ⟨ , ⟩ is an inner product on V.

  • The length of a vector v in an inner product space V is defined to be the real number ||v|| = √⟨v,v⟩.

  • The distance between two vectors u and v in an inner product space V is the scalar ||uv||.

  • The angle θ between two nonzero vectors u and v is the angle which satisfies 0 ≤ θ ≤ π and

    cos(θ) = ⟨u,v⟩ / (||u|| ||v||).
  • Two vectors u and v in an inner product space V are orthogonal if ⟨u,v⟩ = 0.

  • A subset S of an inner product space is an orthogonal set if ⟨u,v⟩ = 0 for all u ≠ v in S.

  • A basis for a subspace of an inner product space is an orthogonal basis if the basis is also an orthogonal set.

  • Let B={v1, v2, …, vm} be an orthogonal basis for a subspace W of an inner product space V. Let x be a vector in W. Then

    x = Σ_{i=1}^{m} ci vi,

    where

    ci = ⟨x,vi⟩ / ⟨vi,vi⟩

    for each i.

  • An orthogonal basis B={v1, v2, …, vm} for a subspace W of an inner product space V is an orthonormal basis if ||vk|| = 1 for each k from 1 to m.

  • If B={w1, w2, …, wm} is an orthogonal basis for V and x ∈ V, then

    [x]_B = [⟨x,w1⟩/⟨w1,w1⟩  ⟨x,w2⟩/⟨w2,w2⟩  ⋯  ⟨x,wm⟩/⟨wm,wm⟩]^T.
  • The projection of the vector v in an inner product space V onto a subspace W of V is the vector

    proj_W v = (⟨v,w1⟩/⟨w1,w1⟩) w1 + (⟨v,w2⟩/⟨w2,w2⟩) w2 + ⋯ + (⟨v,wm⟩/⟨wm,wm⟩) wm,

    where {w1, w2, …, wm} is an orthogonal basis of W. Projections are important in that proj_W v is the best approximation of the vector v by a vector in W in the least squares sense.

  • With W as above, the projection of v orthogonal to W is the vector

    proj_{W⊥} v = v − proj_W v.

    The norm of proj_{W⊥} v provides a measure of how well proj_W v approximates the vector v.

  • The orthogonal complement of a subset S of an inner product space V is the set

    S⊥ = {v ∈ V : ⟨v,u⟩ = 0 for all u ∈ S}.

Exercises Exercises

1.

Let C[a,b] be the set of all continuous real valued functions on the interval [a,b]. If f is in C[a,b], we can extend f to a continuous function from R to R by letting F be the function defined by

F(x) = { f(a) if x < a,  f(x) if a ≤ x ≤ b,  f(b) if b < x }.

In this way we can view C[a,b] as a subset of F, the vector space of all functions from R to R. Verify that C[a,b] is a vector space.

Hint.

Use properties of continuous functions.

2.

Use the definition of an inner product to determine which of the following defines an inner product on the indicated space. Verify your answers.

(a)

⟨u,v⟩ = u1v1 − u2v1 − u1v2 + 3u2v2 for u = [u1 u2]^T and v = [v1 v2]^T in R^2

(b)

⟨f,g⟩ = ∫_a^b f(x)g(x) dx for f, g ∈ C[a,b] (where C[a,b] is the vector space of all continuous functions on the interval [a,b])

(c)

⟨f,g⟩ = f(0)g(0) for f, g ∈ D(−1,1) (where D(a,b) is the vector space of all differentiable functions on the interval (a,b))

(d)

⟨u,v⟩ = (Au)·(Av) for u, v ∈ R^n and A an invertible n×n matrix

3.

We can sometimes visualize an inner product in R2 or R3 (or other spaces) by describing the unit circle S1, where

S1 = {v ∈ V : ||v|| = 1}

in that inner product space. For example, in the inner product space R^2 with the dot product as inner product, the unit circle is just our standard unit circle. Inner products, however, can distort this familiar picture of the unit circle. Describe the points on the unit circle S1 in the inner product space R^2 with inner product ⟨[u1 u2]^T, [v1 v2]^T⟩ = 2u1v1 + 3u2v2 using the following steps.

(a)

Let x = [x y]^T ∈ R^2. Set up an equation in x and y that is equivalent to the vector equation ||x|| = 1.

(b)

Describe the graph of the equation you found in R2. It should have a familiar form. Draw a picture to illustrate. What do you think of calling this graph a “circle”?

4.

Define ⟨ , ⟩ on R^2 by ⟨[u1 u2]^T, [v1 v2]^T⟩ = 4u1v1 + 2u2v2.

(a)

Show that ⟨ , ⟩ is an inner product.

(b)

The inner product ⟨ , ⟩ can be represented as a matrix transformation ⟨u,v⟩ = u^T A v, where u and v are written as column vectors. Find a matrix A that represents this inner product.

5.

This exercise is a generalization of Exercise 4. Define ⟨ , ⟩ on R^n by

⟨[u1 u2 ⋯ un]^T, [v1 v2 ⋯ vn]^T⟩ = a1u1v1 + a2u2v2 + ⋯ + anunvn

for some positive scalars a1, a2, …, an.

(a)

Show that ⟨ , ⟩ is an inner product.

(b)

The inner product ⟨ , ⟩ can be represented as a matrix transformation ⟨u,v⟩ = u^T A v, where u and v are written as column vectors. Find a matrix A that represents this inner product.

6.

Is the sum of two inner products on an inner product space V an inner product on V? If yes, prove it. If no, provide a counterexample. (By the sum of inner products we mean a function ⟨ , ⟩ satisfying

⟨u,v⟩ = ⟨u,v⟩_1 + ⟨u,v⟩_2

for all u and v in V, where ⟨ , ⟩_1 and ⟨ , ⟩_2 are inner products on V.)

7.

(a)

Does ⟨u,v⟩ = u^T A v define an inner product on R^n for every n×n matrix A? Verify your answer.

(b)

If your answer to part (a) is no, are there any types of matrices for which ⟨u,v⟩ = u^T A v defines an inner product?

8.

The trace of an n×n matrix A=[aij] has some useful properties.

(a)

Show that trace(A+B)=trace(A)+trace(B) for any n×n matrices A and B.

(b)

Show that trace(cA) = c·trace(A) for any n×n matrix A and any scalar c.

(c)

Show that trace(AT)=trace(A) for any n×n matrix.

9.

Let V be an inner product space and u,v be two vectors in V.

(a)

Check that if v=0, the Cauchy-Schwarz inequality

|⟨u,v⟩| ≤ ||u|| ||v||

holds.

Hint.

Evaluate each side of the inequality.

(b)

Assume v ≠ 0. Let λ = ⟨u,v⟩/||v||^2 and w = u − λv. Use the fact that ||w||^2 ≥ 0 to conclude the Cauchy-Schwarz inequality in this case.

Hint.

Write ||w||^2 as an inner product and expand.

10.

The Frobenius inner product is defined as

⟨A,B⟩ = trace(AB^T)

for n×n matrices A and B. Verify that ⟨A,B⟩ defines an inner product on Mn×n.

11.

Let A=[aij] and B=[bij] be two n×n matrices.

(a)

Show that if n=2, then the Frobenius inner product (see Exercise 10) of A and B is

⟨A,B⟩ = a11 b11 + a12 b12 + a21 b21 + a22 b22.
Hint.

Expand the inner product.

(b)

Extend part (a) to the general case. That is, show that for an arbitrary n,

⟨A,B⟩ = Σ_{i=1}^{n} Σ_{j=1}^{n} aij bij.
Hint.

Expand the inner product.

(c)

Compare the Frobenius inner product to the scalar product of two vectors.

Hint.

Convert A and B to vectors in R^(n^2) whose entries are the entries in the first row followed by the entries in the second row and so on.

12.

Let B = {[1 1 1]^T, [1 −1 0]^T} and let W = Span B in R^3.

(a)

Show that B is an orthogonal basis for W, using the dot product as inner product.

(b)

Explain why the vector v=[0 2 2]T is not in W.

(c)

Find the vector in W that is closest to v. How close is this vector to v?

13.

Let R3 be the inner product space with inner product

⟨[u1 u2 u3]^T, [v1 v2 v3]^T⟩ = u1v1 + 2u2v2 + u3v3.

Let B = {[1 1 1]^T, [1 −1 1]^T} and let W = Span B in R^3.

(a)

Show that B is an orthogonal basis for W, using the given inner product.

(b)

Explain why the vector v=[0 2 2]T is not in W.

Hint.

Try to write v in terms of the basis vectors for W.

(c)

Find the vector in W that is closest to v. How close is this vector to v?

14.

Let P2 be the inner product space with inner product

⟨p(t), q(t)⟩ = ∫_0^1 p(t)q(t) dt.

Let B = {1, 1−2t} and let W = Span B in P2.

(a)

Show that B is an orthogonal basis for W, using the given inner product.

(b)

Explain why the polynomial q(t) = t^2 is not in W.

(c)

Find the vector in W that is closest to q(t). How close is this vector to q(t)?

15.

Prove the remaining properties of Theorem 35.2. That is, if ⟨ , ⟩ is an inner product on a vector space V and u, v, and w are vectors in V and c is any scalar, then

(a)

⟨0,v⟩ = ⟨v,0⟩ = 0

Hint.

Use the fact that 0=0+0.

(b)

⟨u,cv⟩ = c⟨u,v⟩

Hint.

Use the fact that ⟨u,v⟩ = ⟨v,u⟩.

(c)

⟨v+w,u⟩ = ⟨v,u⟩ + ⟨w,u⟩

Hint.

Same hint as part (b).

(d)

⟨u−v,w⟩ = ⟨w,u−v⟩ = ⟨u,w⟩ − ⟨v,w⟩ = ⟨w,u⟩ − ⟨w,v⟩

Hint.

Use the fact that u − v = u + (−v).

16.

Prove the following theorem referenced in Activity 35.6.

18.

Let V be a vector space with basis {v1, v2, …, vn}. Define ⟨ , ⟩ as follows:

⟨u,w⟩ = Σ_{i=1}^{n} ui wi

if u = Σ_{i=1}^{n} ui vi and w = Σ_{i=1}^{n} wi vi in V. (Since the representation of a vector as a linear combination of basis elements is unique, this mapping is well-defined.) Show that ⟨ , ⟩ is an inner product on V and conclude that any finite dimensional vector space can be made into an inner product space.

19.

Label each of the following statements as True or False. Provide justification for your response.

(a) True/False.

An inner product on a vector space V is a function from V to the real numbers.

(b) True/False.

If ⟨ , ⟩ is an inner product on a vector space V, and if v is a vector in V, then the set W = {x ∈ V : ⟨x,v⟩ = 0} is a subspace of V.

(c) True/False.

There is exactly one inner product on each inner product space.

(d) True/False.

If x, y, and z are vectors in an inner product space with ⟨x,z⟩ = ⟨y,z⟩, then x = y.

(e) True/False.

If ⟨x,y⟩ = 0 for all vectors y in an inner product space V, then x = 0.

(f) True/False.

If u and v are vectors in an inner product space and the distance from u to v is the same as the distance from u to −v, then u and v are orthogonal.

(g) True/False.

If W is a subspace of an inner product space and a vector v is orthogonal to every vector in a basis of W, then v is in W⊥.

(h) True/False.

If {v1,v2,v3} is an orthogonal basis for an inner product space V, then so is {cv1,v2,v3} for any nonzero scalar c.

(i) True/False.

An inner product ⟨u,v⟩ in an inner product space V results in another vector in V.

(j) True/False.

An inner product in an inner product space V is a function that maps pairs of vectors in V to the set of non-negative real numbers.

(k) True/False.

The vector space of all n×n matrices can be made into an inner product space.

(l) True/False.

Any non-zero multiple of an inner product on space V is also an inner product on V.

(m) True/False.

Every set of k non-zero orthogonal vectors in a vector space V of dimension k is a basis for V.

(n) True/False.

For any finite-dimensional inner product space V and a subspace W of V, W is a subspace of (W⊥)⊥.

(o) True/False.

If W is a subspace of an inner product space, then W ∩ W⊥ = {0}.

Subsection Project: Fourier Series and Musical Tones

Joseph Fourier first studied trigonometric polynomials to understand the flow of heat in metallic plates and rods. The resulting series, called Fourier series, now have applications in a variety of areas including electrical engineering, vibration analysis, acoustics, optics, signal processing, image processing, geology, quantum mechanics, and many more. For our purposes, we will focus on synthesized music.

Pure musical tones are periodic sine waves. Simple electronic circuits can be designed to generate alternating current. Alternating current is current that is periodic, and hence is described by a combination of sin(kx) and cos(kx) for integer values of k. To synthesize an instrument like a violin, we can project the instrument's tones onto trigonometric polynomials — and then we can produce them electronically. As we will see, these projections are least squares approximations onto certain vector spaces. The website falstad.com/fourier/ provides a tool for hearing sounds digitally created by certain functions. For example, you can listen to the sound generated by a sawtooth function f of the form

f(x) = { x if −π < x ≤ π,  f(x − 2π) if π < x,  f(x + 2π) if x ≤ −π }.

Try out some of the tones on this website (click on the Sound button to hear the tones). You can also alter the tones by clicking on any one of the white dots and moving it up or down, and by playing with the buttons. We will learn much about what this website does in this project.

Pure tones are periodic and so are modeled by trigonometric functions. In general, trigonometric polynomials can be used to produce good approximations to periodic phenomena. A trigonometric polynomial is an object of the form

c0 + c1 cos(x) + d1 sin(x) + c2 cos(2x) + d2 sin(2x) + ⋯ + cn cos(nx) + dn sin(nx) + ⋯,

where the ci and dj are real constants. With judicious choices of these constants, we can approximate periodic and other behavior with trigonometric polynomials. The first step for us will be to understand the relationships between the summands of these trigonometric polynomials in the inner product space C[π,π] 61  of continuous functions from [π,π] to R with the inner product

(35.3) ⟨f,g⟩ = (1/π) ∫_{−π}^{π} f(x)g(x) dx.

Our first order of business is to verify that (35.3) is, in fact, an inner product.

Project Activity 35.10.

Let C[a,b] be the set of continuous real-valued functions on the interval [a,b]. In Exercise 1 in Section 35 we are asked to show that C[a,b] is a vector space, while Exercise 2 in Section 35 asks us to show that ⟨f,g⟩ = ∫_a^b f(x)g(x) dx defines an inner product on C[a,b]. However, (35.3) is slightly different from this inner product. Show that any positive scalar multiple of an inner product is an inner product, and conclude that (35.3) defines an inner product on C[−π,π]. (We will see why we introduce the factor of 1/π later.)

Now we return to our inner product space C[−π,π] with inner product (35.3). Given a function g in C[−π,π], we approximate g using only a finite number of the terms in a trigonometric polynomial. Let Wn be the subspace of C[−π,π] spanned by the functions

1, cos(x), sin(x), cos(2x), sin(2x), …, cos(nx), sin(nx).

One thing we need to know is the dimension of Wn.

Project Activity 35.11.

We start with the initial case of W1.

(a)

Show directly that the functions 1, cos(x), and sin(x) are orthogonal.

(b)

What is the dimension of W1? Explain.

Now we need to see if what happened in Project Activity 35.11 happens in general. A few tables of integrals and some basic facts from trigonometry can help.

Project Activity 35.12.

A table of integrals shows the following for k ≠ m (up to a constant):

(35.4) ∫ cos(mx)cos(kx) dx = (1/2)( sin((k−m)x)/(k−m) + sin((k+m)x)/(k+m) )
(35.5) ∫ sin(mx)sin(kx) dx = (1/2)( sin((k−m)x)/(k−m) − sin((k+m)x)/(k+m) )
(35.6) ∫ cos(mx)sin(kx) dx = (1/2)( cos((m−k)x)/(m−k) − cos((m+k)x)/(m+k) )
(35.7) ∫ cos(mx)sin(mx) dx = −(1/(2m)) cos^2(mx)
(a)

Use (35.4) to show that cos(mx) and cos(kx) are orthogonal in C[−π,π] if k ≠ m.

(b)

Use (35.5) to show that sin(mx) and sin(kx) are orthogonal in C[−π,π] if k ≠ m.

(c)

Use (35.6) to show that cos(mx) and sin(kx) are orthogonal in C[−π,π] if k ≠ m.

(d)

Use (35.7) to show that cos(mx) and sin(mx) are orthogonal in C[−π,π].

(e)

What is \(\dim(W_n)\)? Explain.
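The orthogonality relations above are also easy to test numerically. The sketch below (the helper `inner` and the loop bounds are our choices, not part of the activity) approximates the inner product (35.3) with a midpoint rule and checks the pairings from (35.4)–(35.7) for a few small values of \(m\) and \(k\):

```python
import math

def inner(f, g, n=20000):
    """Midpoint-rule approximation of (1/pi) * integral_{-pi}^{pi} f g dx."""
    h = 2 * math.pi / n
    return sum(f(-math.pi + (i + 0.5) * h) * g(-math.pi + (i + 0.5) * h)
               for i in range(n)) * h / math.pi

cos_ = lambda k: (lambda x: math.cos(k * x))
sin_ = lambda k: (lambda x: math.sin(k * x))

for m in range(1, 4):
    for k in range(1, 4):
        if k != m:
            assert abs(inner(cos_(m), cos_(k))) < 1e-6   # (35.4)
            assert abs(inner(sin_(m), sin_(k))) < 1e-6   # (35.5)
        # cos(mx) and sin(kx) are orthogonal even when k = m: (35.6)/(35.7)
        assert abs(inner(cos_(m), sin_(k))) < 1e-6
```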

Once we have an orthogonal basis for \(W_n\), we might want to create an orthonormal basis for \(W_n\). Throughout the remainder of this project, unless otherwise specified, you should use a table of integrals or any appropriate technological tool to find integrals for any functions you need.

Project Activity 35.13.

Show that the set

\begin{equation*}
B_n = \left\{\frac{1}{\sqrt{2}}, \ \cos(x), \ \cos(2x), \ \ldots, \ \cos(nx), \ \sin(x), \ \sin(2x), \ \ldots, \ \sin(nx)\right\}
\end{equation*}

is an orthonormal basis for \(W_n\). Use the fact that the norm of a vector \(\mathbf{v}\) in an inner product space with inner product \(\langle \ , \ \rangle\) is defined to be \(||\mathbf{v}|| = \sqrt{\langle \mathbf{v}, \mathbf{v}\rangle}\). (This is where the factor of \(\frac{1}{\pi}\) will be helpful.)
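As a quick numeric check of the normalization (again with a midpoint-rule helper `inner` of our own), each function in \(B_n\) should come out with norm \(1\). In particular, notice how the \(\frac{1}{\pi}\) factor in (35.3) makes \(\langle \cos(kx), \cos(kx)\rangle = 1\), and why the constant function must be \(\frac{1}{\sqrt{2}}\) rather than \(1\):

```python
import math

def inner(f, g, n=20000):
    """Midpoint-rule approximation of (1/pi) * integral_{-pi}^{pi} f g dx."""
    h = 2 * math.pi / n
    return sum(f(-math.pi + (i + 0.5) * h) * g(-math.pi + (i + 0.5) * h)
               for i in range(n)) * h / math.pi

const = lambda x: 1 / math.sqrt(2)

# The constant function 1/sqrt(2) has norm 1, since <1, 1> = (1/pi)(2*pi) = 2.
assert abs(math.sqrt(inner(const, const)) - 1) < 1e-6
# Each cos(kx) and sin(kx) also has norm 1 under this inner product.
for k in range(1, 5):
    assert abs(inner(lambda x: math.cos(k * x), lambda x: math.cos(k * x)) - 1) < 1e-6
    assert abs(inner(lambda x: math.sin(k * x), lambda x: math.sin(k * x)) - 1) < 1e-6
```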

Now we need to recall how to find the best approximation to a vector by a vector in a subspace, and apply that idea to approximate an arbitrary function \(g\) with a trigonometric polynomial in \(W_n\). Recall that the best approximation to a function \(g\) in \(C[-\pi,\pi]\) by a function in \(W_n\) is the projection of \(g\) onto \(W_n\). If we have an orthonormal basis \(\{h_0, h_1, h_2, \ldots, h_{2n}\}\) of \(W_n\), then the projection of \(g\) onto \(W_n\) is

\begin{equation*}
\operatorname{proj}_{W_n} g = \langle g, h_0\rangle h_0 + \langle g, h_1\rangle h_1 + \langle g, h_2\rangle h_2 + \cdots + \langle g, h_{2n}\rangle h_{2n}.
\end{equation*}

With this idea, we can find formulas for the coefficients when we project an arbitrary function onto \(W_n\).
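The projection formula translates directly into a short computation. In the sketch below (the helper names `inner`, `basis`, and `project` are ours, and the inner product is approximated with a midpoint rule), projecting a function that already lies in \(W_n\) reproduces it, as it should:

```python
import math

def inner(f, g, n=20000):
    """Midpoint-rule approximation of (1/pi) * integral_{-pi}^{pi} f g dx."""
    h = 2 * math.pi / n
    return sum(f(-math.pi + (i + 0.5) * h) * g(-math.pi + (i + 0.5) * h)
               for i in range(n)) * h / math.pi

def basis(n):
    """The orthonormal basis B_n of W_n: 1/sqrt(2), cos(kx), sin(kx) for k = 1..n."""
    fns = [lambda x: 1 / math.sqrt(2)]
    for k in range(1, n + 1):
        fns.append(lambda x, k=k: math.cos(k * x))
        fns.append(lambda x, k=k: math.sin(k * x))
    return fns

def project(g, n):
    """proj_{W_n} g = sum over the orthonormal basis of <g, h_i> h_i."""
    coeffs = [(inner(g, h), h) for h in basis(n)]
    return lambda x: sum(c * h(x) for c, h in coeffs)

# sin(x) already lies in W_1, so its projection onto W_1 is itself.
p = project(math.sin, 1)
assert abs(p(1.0) - math.sin(1.0)) < 1e-5
```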

Project Activity 35.14.

If \(g\) is an arbitrary function in \(C[-\pi,\pi]\), we will write the projection of \(g\) onto \(W_n\) as

\begin{equation*}
a_0\left(\frac{1}{\sqrt{2}}\right) + a_1\cos(x) + b_1\sin(x) + a_2\cos(2x) + b_2\sin(2x) + \cdots + a_n\cos(nx) + b_n\sin(nx).
\end{equation*}

The \(a_i\) and \(b_j\) are the Fourier coefficients for \(g\). The expression \(a_n\cos(nx) + b_n\sin(nx)\) is called the \(n\)th harmonic of \(g\). The first harmonic is called the fundamental frequency. The human ear cannot hear tones whose frequencies exceed 20,000 Hz, so we only hear finitely many harmonics (the projections onto \(W_n\) for some \(n\)).

(a)

Show that

\begin{equation}
a_0 = \frac{1}{\sqrt{2}\,\pi}\int_{-\pi}^{\pi} g(x) \, dx. \tag{35.8}
\end{equation}

Explain why \(\frac{a_0}{\sqrt{2}}\) gives the average value of \(g\) on \([-\pi,\pi]\). You may want to go back and review average value from calculus. This says that the best constant approximation of \(g\) on \([-\pi,\pi]\) is its average value, which makes sense.

(b)

Show that for \(m \geq 1\),

\begin{equation}
a_m = \frac{1}{\pi}\int_{-\pi}^{\pi} g(x)\cos(mx) \, dx. \tag{35.9}
\end{equation}
(c)

Show that for \(m \geq 1\),

\begin{equation}
b_m = \frac{1}{\pi}\int_{-\pi}^{\pi} g(x)\sin(mx) \, dx. \tag{35.10}
\end{equation}
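Formulas (35.8)–(35.10) can be implemented with any numerical integrator. The sketch below (the helpers `integrate` and `fourier_coeffs` are our names) uses a midpoint rule; a function such as \(\cos(2x)\), which is already a single basis function, should give \(a_2 = 1\) and all other coefficients \(0\):

```python
import math

def integrate(f, n=20000):
    """Midpoint-rule approximation of the integral of f over [-pi, pi]."""
    h = 2 * math.pi / n
    return sum(f(-math.pi + (i + 0.5) * h) for i in range(n)) * h

def fourier_coeffs(g, N):
    """Fourier coefficients a_0, a_1..a_N, b_1..b_N from (35.8)-(35.10)."""
    a0 = integrate(g) / (math.sqrt(2) * math.pi)
    a = [integrate(lambda x: g(x) * math.cos(m * x)) / math.pi for m in range(1, N + 1)]
    b = [integrate(lambda x: g(x) * math.sin(m * x)) / math.pi for m in range(1, N + 1)]
    return a0, a, b

# g(x) = cos(2x): only the a_2 coefficient survives.
a0, a, b = fourier_coeffs(lambda x: math.cos(2 * x), 3)
assert abs(a[1] - 1) < 1e-6          # a_2 = 1
assert abs(a0) < 1e-6 and abs(a[0]) < 1e-6 and abs(b[1]) < 1e-6
```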

Let us return to the sawtooth function defined earlier and find its Fourier coefficients.

Project Activity 35.15.

Let \(f\) be defined by \(f(x) = x\) on \([-\pi,\pi]\) and repeated periodically afterwards with period \(2\pi\). Let \(p_n\) be the projection of \(f\) onto \(W_n\).

(a)

Evaluate the integrals to find the projection \(p_1\).

(b)

Use appropriate technology to find the projections \(p_{10}\), \(p_{20}\), and \(p_{30}\) for the sawtooth function \(f\). Draw pictures of these approximations against \(f\) and explain what you see.

(c)

Now we find formulas for all the Fourier coefficients. Use the fact that \(x\cos(mx)\) is an odd function to explain why \(a_m = 0\) for each \(m\). Then show that \(b_m = (-1)^{m+1}\frac{2}{m}\) for each \(m\).
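A numeric check of these formulas is straightforward (the midpoint-rule helper `integrate` below is our own; this confirms the claimed coefficients but does not replace the derivation):

```python
import math

def integrate(f, n=40000):
    """Midpoint-rule approximation of the integral of f over [-pi, pi]."""
    h = 2 * math.pi / n
    return sum(f(-math.pi + (i + 0.5) * h) for i in range(n)) * h

saw = lambda x: x  # one period of the sawtooth function on [-pi, pi]

for m in range(1, 6):
    am = integrate(lambda x: saw(x) * math.cos(m * x)) / math.pi
    bm = integrate(lambda x: saw(x) * math.sin(m * x)) / math.pi
    assert abs(am) < 1e-6                                 # x*cos(mx) is odd
    assert abs(bm - (-1) ** (m + 1) * 2 / m) < 1e-3       # b_m = (-1)^{m+1} * 2/m
```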

(d)

Go back to the website falstad.com/fourier/ and replay the sawtooth tone. Explain what the white buttons represent.

Project Activity 35.16.

This activity is not connected to the idea of musical tones, so it can safely be skipped if desired. We conclude with a derivation of a fascinating formula that you may have seen for \(\sum_{n=1}^{\infty} \frac{1}{n^2}\). To do so, we need to analyze the error in approximating a function \(g\) with a function in \(W_n\).

Let \(p_n\) be the projection of \(g\) onto \(W_n\). Notice that \(p_n\) is also in \(W_{n+1}\). It is beyond the scope of this project, but in “nice” situations we have \(||g - p_n|| \to 0\) as \(n \to \infty\). Now \(g - p_n\) is orthogonal to \(p_n\), so the Pythagorean theorem shows that

\begin{equation*}
||g - p_n||^2 + ||p_n||^2 = ||g||^2.
\end{equation*}

Since \(||g - p_n||^2 \to 0\) as \(n \to \infty\), we can conclude that

\begin{equation}
\lim_{n \to \infty} ||p_n||^2 = ||g||^2. \tag{35.11}
\end{equation}

We use these ideas to derive a formula for \(\sum_{n=1}^{\infty} \frac{1}{n^2}\).

(a)

Use the fact that \(B_n\) is an orthonormal basis to show that

\begin{equation*}
||p_n||^2 = a_0^2 + a_1^2 + b_1^2 + \cdots + a_n^2 + b_n^2.
\end{equation*}

Conclude that

\begin{equation}
||g||^2 = a_0^2 + a_1^2 + b_1^2 + \cdots + a_n^2 + b_n^2 + \cdots. \tag{35.12}
\end{equation}
(b)

For the remainder of this activity, let \(f\) be the sawtooth function defined by \(f(x) = x\) on \([-\pi,\pi]\) and repeated periodically afterwards. We determined the Fourier coefficients \(a_i\) and \(b_j\) of this function in Project Activity 35.15.

(i)

Show that

\begin{equation*}
a_0^2 + a_1^2 + b_1^2 + \cdots + a_n^2 + b_n^2 + \cdots = 4\sum_{n=1}^{\infty} \frac{1}{n^2}.
\end{equation*}
(ii)

Calculate \(||f||^2\) using the inner product and compare to (35.12) to find a surprising formula for \(\sum_{n=1}^{\infty} \frac{1}{n^2}\).
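The convergence in (35.11) can also be watched numerically for the sawtooth function. The sketch below (variable names are ours, and we use the value \(||f||^2 = \frac{1}{\pi}\int_{-\pi}^{\pi} x^2 \, dx\) that part (ii) asks you to compute) compares a large partial sum of \(4\sum \frac{1}{n^2}\) against \(||f||^2\); the two agree, which previews the formula you are asked to find:

```python
import math

# ||f||^2 for the sawtooth: (1/pi) * integral_{-pi}^{pi} x^2 dx = 2*pi^2/3.
norm_sq = (1 / math.pi) * (2 * math.pi ** 3 / 3)

# Partial sums of 4 * sum 1/n^2 climb toward ||f||^2, as (35.12) predicts.
partial = 4 * sum(1 / n ** 2 for n in range(1, 200001))
assert partial < norm_sq                 # partial sums approach from below
assert abs(partial - norm_sq) < 1e-4     # tail of the series is about 4/N
```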

With suitable adjustments, we can work over any interval that is convenient, but for the sake of simplicity in this project we will restrict ourselves to the interval \([-\pi,\pi]\).