Section 39 Eigenvalues of Linear Transformations

Subsection Application: Linear Differential Equations

A body in motion obeys Newton's second law that force equals mass times acceleration, or F=ma. Here F is the force acting on the object, m the mass of the object, and a the acceleration of the object. For example, if a mass is hanging from a spring, gravity acts to pull the mass downward and the spring acts to pull the mass up. Hooke's law says that the force of the spring acting on the mass is proportional to the displacement y of the spring from equilibrium. There is also a damping force that weakens the action of the spring, which might be due to air resistance or the medium in which the system is enclosed. If the velocity is not too large, then the resistance can be taken to be proportional to the velocity v of the mass. This produces a force F=ma acting to pull the mass down and forces ky and bv acting in the opposite direction, where k and b are constants. A similar type of analysis would apply to an object falling through the air, where the resistance to falling could be proportional to the velocity of the object. Since v=y' and a=y'', equating the forces produces the equation

(39.1)   my'' = -ky - by'.

Any equation that involves one or more derivatives of a function is called a differential equation. Differential equations are used to model many different types of behavior in disciplines including engineering, physics, economics, and biology. Later in this section we will see how differential equations can be represented by linear transformations, and see how we can exploit this idea to solve certain types of differential equations (including (39.1)).

Subsection Introduction

Recall that a scalar λ is an eigenvalue of an n×n matrix A if Ax=λx for some nonzero vector x in R^n. Now that we have seen how to represent a linear transformation T from a finite dimensional vector space V to itself with a matrix transformation, we can exploit this idea to define and find eigenvalues and eigenvectors for T just as we did for matrices.

Definition 39.1.

Let V be an n dimensional vector space and T:V→V a linear transformation. A scalar λ is an eigenvalue for T if there is a nonzero vector v in V such that T(v)=λv. The vector v is an eigenvector for T corresponding to the eigenvalue λ.

We can exploit the fact that we can represent linear transformations as matrix transformations to find eigenvalues and eigenvectors of a linear transformation.

Preview Activity 39.1.

Let T:P1→P1 be defined by T(a0+a1t)=(a0+2a1)+(3a0+2a1)t. Assume T is a linear transformation.

(a)

Let S={1,t} be the standard basis for P1. Find the matrix [T]S. (Recall that we use the shorthand notation [T]S for [T]SS.)

(b)

Check that λ1=4 and λ2=-1 are the eigenvalues of [T]S. Find an eigenvector v1 for [T]S corresponding to the eigenvalue 4 and an eigenvector v2 of [T]S corresponding to the eigenvalue -1.

(c)

Find the vector in P1 corresponding to the eigenvector of [T]S for the eigenvalue λ=4. Check that this is an eigenvector of T.

(d)

Explain why in general, if V is a vector space with basis B and S:V→V is a linear transformation, and if v in V satisfies [v]B=w, where w is an eigenvector of [S]B with eigenvalue λ, then v is an eigenvector of S with eigenvalue λ.

Subsection Finding Eigenvalues and Eigenvectors of Linear Transformations

Preview Activity 39.1 presents a method for finding eigenvalues and eigenvectors of linear transformations. That is, if T:V→V is a linear transformation and B is a basis for V, and if x is an eigenvector of [T]B with eigenvalue λ, then the vector v in V satisfying [v]B=x has the property that

[T(v)]B=[T]B[v]B=[T]Bx=λx=[λv]B.

Since the coordinate transformation is one-to-one, it follows that T(v)=λv and v is an eigenvector of T with eigenvalue λ. So every eigenvector of [T]B corresponds to an eigenvector of T. The fact that every eigenvector and eigenvalue of T can be obtained from the eigenvectors and eigenvalues of [T]B is left as Exercise 2. Now we address the question of how eigenvalues of matrices of T with respect to different bases are related.
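
The coordinate-vector argument above can be checked computationally. The following is a minimal sketch for the transformation of Preview Activity 39.1; Python with the sympy library is our assumption here, since the text prescribes no particular tool.

    import sympy as sp

    # Matrix of T(a0 + a1*t) = (a0 + 2a1) + (3a0 + 2a1)t relative to the
    # standard basis S = {1, t} of P1 (from Preview Activity 39.1).
    M = sp.Matrix([[1, 2],
                   [3, 2]])

    # Each eigenvector [v]_S = (a, b) of [T]_S corresponds to the
    # polynomial v = a + b*t, which is then an eigenvector of T.
    for val, mult, vecs in M.eigenvects():
        for v in vecs:
            print(f"eigenvalue {val}: polynomial {v[0]} + {v[1]}*t")

For the eigenvalue 4 this recovers (up to scaling) the polynomial 2+3t, and indeed T(2+3t) = 8+12t = 4(2+3t).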

Activity 39.2.

Let T:P1→P1 be defined by T(a0+a1t)=(a0+2a1)+(2a1+3a0)t. Assume T is a linear transformation.

(a)

Let B={1+t, 1-t} be a basis for P1. Find the matrix [T]B.

(b)

Check that λ1=4 and λ2=-1 are the eigenvalues of [T]B. Find an eigenvector w1 for [T]B corresponding to the eigenvalue 4.

(c)

Use the matrix [T]B to find an eigenvector for T corresponding to the eigenvalue λ1=4.

You should notice that the matrices [T]S and [T]B in Preview Activity 39.1 and Activity 39.2 are similar matrices. In fact, if P=PBS is the change of basis matrix from S to B, then P^{-1}[T]BP=[T]S. So it must be the case that [T]S and [T]B have the same eigenvalues. What Preview Activity 39.1, Exercise 2, and Activity 39.2 demonstrate is that the eigenvalues of a linear transformation T:V→V on an n dimensional vector space V are the eigenvalues of the matrix for T relative to any basis for V. We can then find corresponding eigenvectors for T using the methods demonstrated in Preview Activity 39.1 and Activity 39.2. That is, if v is an eigenvector of T with eigenvalue λ, and B is any basis for V, then

[T]B[v]B=[T(v)]B=[λv]B=λ[v]B.

Thus, once we find an eigenvector [v]B for [T]B, the vector v is an eigenvector for T. This is summarized in the following theorem.

Theorem 39.2.

Let V be a finite dimensional vector space with basis B and let T:V→V be a linear transformation. A vector v in V is an eigenvector of T with eigenvalue λ if and only if [v]B is an eigenvector of [T]B with eigenvalue λ.
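
A quick computational check of this relationship, using the matrices from Preview Activity 39.1 and Activity 39.2, is sketched below. Python with sympy is an assumption on our part, and the matrix [T]B is the one found by hand in Activity 39.2.

    import sympy as sp

    TS = sp.Matrix([[1, 2], [3, 2]])    # [T]_S, Preview Activity 39.1
    TB = sp.Matrix([[4, 0], [-1, -1]])  # [T]_B, computed by hand in Activity 39.2
    Q  = sp.Matrix([[1, 1], [1, -1]])   # columns: S-coordinates of 1+t and 1-t

    # Similarity: Q^{-1} [T]_S Q = [T]_B, so the two matrices share eigenvalues.
    assert Q.inv() * TS * Q == TB
    assert TS.eigenvals() == TB.eigenvals()   # {4: 1, -1: 1} in both cases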

Subsection Diagonalization

If T:V→V is a linear transformation, one important question is that of finding a basis for V in which the matrix of T has as simple a form as possible. In other words, can we find a basis B for V for which the matrix [T]B is a diagonal matrix? Since any two matrices for T with respect to different bases are similar, this will happen if we can diagonalize any one matrix for T.

Definition 39.3.

Let V be a vector space of dimension n. A linear transformation T from V to V is diagonalizable if there is a basis B for V for which [T]B is a diagonalizable matrix.

Now the question is, if T is diagonalizable, how do we find a basis B so that [T]B is a diagonal matrix? Recall that to diagonalize a matrix A means to find a matrix P so that P^{-1}AP is a diagonal matrix.

Activity 39.3.

Let V=P1 and let T:V→V be defined by T(a0+a1t)=(a0+2a1)+(2a0+a1)t. Let S={1,t} be the standard basis for V.

(a)

Find the matrix [T]S for T relative to the basis S.

(b)

Use the fact that the eigenvalues of [T]S are -1 and 3 with corresponding eigenvectors [1 -1]T and [1 1]T, respectively, to find a matrix P so that P^{-1}[T]SP is a diagonal matrix D.

(c)

To find a basis B for which [T]B is diagonal, note that if P=PSB of the previous part is the change of basis matrix from B to S, then the matrices of T with respect to S and B are related by P^{-1}[T]SP=[T]B, which makes [T]B a diagonal matrix. Use the fact that P[x]B=[x]S to find the vectors in the basis B.

(d)

Now show directly that [T]B=D, where B={v1,v2}, and verify that we have found a basis for V for which the matrix for T is diagonal.

The general idea for diagonalizing a linear transformation is contained in Activity 39.3. Let V be an n dimensional vector space and assume the linear transformation T:V→V is diagonalizable. So there exists a basis B={v1, v2, …, vn} for which [T]B is a diagonal matrix. To find this basis, note that for any other basis C we know that [T]C and [T]B are similar. That means that there is an invertible matrix P so that P^{-1}[T]CP=[T]B, where P=PCB is the change of basis matrix from B to C. So to find B, we choose B so that P is the matrix that diagonalizes [T]C. Using the definition of the change of basis matrix, we then know that each basis vector vi in B satisfies [vi]C=P[vi]B=Pei. From this we can find vi. Note that a standard basis S is often a convenient choice to make for the basis C. This is the process we used in Activity 39.3.
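
This procedure can be carried out with a CAS. Here is a short sketch for the transformation of Activity 39.3, again assuming Python with sympy (our choice, not something the text specifies).

    import sympy as sp

    TS = sp.Matrix([[1, 2], [2, 1]])  # [T]_S for T(a0+a1*t) = (a0+2a1) + (2a0+a1)t

    # Columns of P are eigenvectors of [T]_S; D is diagonal with the eigenvalues.
    P, D = TS.diagonalize()
    assert P.inv() * TS * P == D

    # Each column P e_i is [v_i]_S, so v_i = a + b*t gives the basis B.
    for i in range(P.cols):
        a, b = P.col(i)
        print(f"v{i+1} = {a} + ({b})t,  eigenvalue {D[i, i]}")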

Recall that an n×n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors. We can apply this idea to a linear transformation as well, as summarized in the following theorem.

Theorem 39.4.

Let V be an n dimensional vector space and let T:V→V be a linear transformation. Then T is diagonalizable if and only if T has n linearly independent eigenvectors.

Subsection Examples

What follows are worked examples that use the concepts from this section.

Example 39.5.

Let T:M2×2→M2×2 be defined by T(A)=A^T. Let M1=[1 1; 0 0], M2=[0 1; 1 0], M3=[1 0; 1 0], and M4=[0 0; 1 1] (rows separated by semicolons), and let B be the ordered basis {M1,M2,M3,M4}.

(a)

Show that T is a linear transformation.

Solution.

Let A and B be 2×2 matrices and let c be a scalar. The properties of the transpose show that

T(A+B)=(A+B)^T=A^T+B^T=T(A)+T(B)

and

T(cA)=(cA)^T=cA^T=cT(A).

Thus, T is a linear transformation.

(b)

Find [T]B.

Solution.

Notice that T(M1)=M3, T(M2)=M2, T(M3)=M1, and T(M4)=M1-M3+M4. So

[T]B=[[T(M1)]B [T(M2)]B [T(M3)]B [T(M4)]B]=[0 0 1 1; 0 1 0 0; 1 0 0 -1; 0 0 0 1].
(c)

Is the matrix [T]B diagonalizable? If so, find a matrix P such that P^{-1}[T]BP is a diagonal matrix. Use technology as appropriate.

Solution.

Technology shows that the characteristic polynomial of [T]B is p(λ)=(λ+1)(λ-1)^3. It follows that the eigenvalues of [T]B are -1 and 1 (with multiplicity 3). Technology also shows that {[-1 0 1 0]T} is a basis for the eigenspace corresponding to the eigenvalue -1 and the vectors [1 0 0 1]T, [1 0 1 0]T, [0 1 0 0]T form a basis for the eigenspace corresponding to the eigenvalue 1. If we let P=[1 1 0 -1; 0 0 1 0; 0 1 0 1; 1 0 0 0], then P^{-1}[T]BP=[1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 -1].
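
The computations reported above can be reproduced as follows. This is one sketch of what "technology" might look like here, with Python's sympy as our assumed choice.

    import sympy as sp

    TB = sp.Matrix([[0, 0, 1,  1],
                    [0, 1, 0,  0],
                    [1, 0, 0, -1],
                    [0, 0, 0,  1]])   # [T]_B from part (b)

    lam = sp.symbols('lam')
    print(sp.factor(TB.charpoly(lam).as_expr()))   # (lam - 1)**3 * (lam + 1)

    # P's columns are the four eigenvectors; D is diagonal with the
    # eigenvalue -1 once and the eigenvalue 1 three times.
    P, D = TB.diagonalize()
    assert P.inv() * TB * P == D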

(d)

Use part (c) to find a basis C for M2×2 for which [T]C is a diagonal matrix.

Solution.

Suppose C={Q1,Q2,Q3,Q4} is a basis for M2×2 satisfying P^{-1}[T]BP=[T]C. This makes [T]C a diagonal matrix with the eigenvalues of [T]B along the diagonal. In this case we have

[T]C[A]C=P^{-1}[T]BP[A]C

for any matrix A in M2×2. So P is the change of basis matrix from C to B. That is, P[A]C=[A]B. It follows that P[Qi]C=[Qi]B for i from 1 to 4. Since [Qi]C=ei, we can see that [Qi]B is the ith column of P. So the columns of P provide the weights for the basis vectors in C in terms of the basis B. Letting Q1=M1+M4, Q2=M1+M3, Q3=M2, and Q4=M3-M1, we conclude that [T]C is a diagonal matrix.
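
We can also verify this basis directly: Q1, Q2, and Q3 are symmetric (fixed by the transpose) while Q4 is skew-symmetric (negated by it). A short check, again assuming Python with sympy:

    import sympy as sp

    M1 = sp.Matrix([[1, 1], [0, 0]]); M2 = sp.Matrix([[0, 1], [1, 0]])
    M3 = sp.Matrix([[1, 0], [1, 0]]); M4 = sp.Matrix([[0, 0], [1, 1]])

    # The basis C read off from the columns of P.
    Qs = [M1 + M4, M1 + M3, M2, M3 - M1]

    # T(A) = A^T fixes Q1, Q2, Q3 (eigenvalue 1) and negates Q4 (eigenvalue -1),
    # so [T]_C = diag(1, 1, 1, -1).
    for Q, lam in zip(Qs, [1, 1, 1, -1]):
        assert Q.T == lam * Q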

Example 39.6.

We have shown that every linear transformation from a finite dimensional vector space V to V can be represented as a matrix transformation. As a consequence, all such linear transformations have eigenvalues. In this example we consider the problem of determining eigenvalues and eigenvectors of the linear transformation T:V→V defined by T(f)(x)=∫_0^x f(t) dt, where V is the vector space of all infinitely differentiable functions from R to R. Suppose that f is an eigenvector of T with a nonzero eigenvalue.

(a)

Use the Fundamental Theorem of Calculus to show that f must satisfy the equation f'(x)=(1/λ)f(x) for some nonzero scalar λ.

Solution.

Assuming that f is an eigenvector of T with nonzero eigenvalue λ, then

(39.2)   λf(x)=T(f)(x)=∫_0^x f(t) dt.

Recall that (d/dx)∫_0^x f(t) dt=f(x) by the Fundamental Theorem of Calculus. Differentiating both sides of (39.2) with respect to x leaves us with λf'(x)=f(x), or f'(x)=(1/λ)f(x).

(b)

From calculus, we know that the functions f that satisfy the equation f'(x)=(1/λ)f(x) all have the form f(x)=Ae^{x/λ} for some scalar A. So if f is an eigenvector of T, then f(x)=Ae^{x/λ} for some scalar A. Show, however, that Ae^{x/λ} cannot be an eigenvector of T. Thus, T has no eigenvectors with nonzero eigenvalues.

Solution.

We can directly check from the definition that T(f) is not a multiple of f(x)=Ae^{x/λ} unless A=0, which is not allowed. Another method is to note that T(f)(0)=∫_0^0 f(t) dt=0 by definition. But if f(x)=Ae^{x/λ}, then

0=T(f)(0)=λf(0)=λAe^{0/λ}=λA.

Since λ is nonzero, this forces A=0, and so f(x)=0e^{x/λ}=0. But 0 can never be an eigenvector by definition. So T has no eigenvectors with nonzero eigenvalues.
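
The failed eigenvector equation can also be seen symbolically. In the sketch below (Python with sympy assumed), T(f) differs from λf by the constant -Aλ, which vanishes only when A=0.

    import sympy as sp

    x, t = sp.symbols('x t')
    A, lam = sp.symbols('A lam', nonzero=True)

    f = A * sp.exp(x / lam)                       # the candidate eigenfunction
    Tf = sp.integrate(f.subs(x, t), (t, 0, x))    # T(f)(x) = integral of f from 0 to x

    # T(f)(x) = lam*A*(e^{x/lam} - 1), so T(f) - lam*f = -A*lam, never 0 for A != 0.
    print(sp.simplify(Tf - lam * f))              # prints -A*lam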

(c)

Now show that 0 is not an eigenvalue of T. Conclude that T has no eigenvalues or eigenvectors.

Solution.

Suppose that 0 is an eigenvalue of T. Then there is a nonzero function g such that T(g)=0. In other words, 0=∫_0^x g(t) dt. Again, differentiating with respect to x yields the equation g(x)=0. So T has no eigenvectors with eigenvalue 0. Since T has neither a nonzero eigenvalue nor zero as an eigenvalue, T has no eigenvalues (and no eigenvectors).

(d)

Explain why this example does not contradict the statement that every linear transformation from a finite dimensional vector space V to V has eigenvalues.

Solution.

The reason this example does not contradict the statement is that V is an infinite dimensional vector space. In fact, the linearly independent monomials t^m are all in V for every positive integer m.

Subsection Summary

  • We can define eigenvalues and eigenvectors of a linear transformation T:V→V, where V is a finite dimensional vector space. In this case, a scalar λ is an eigenvalue for T if there exists a nonzero vector v in V so that T(v)=λv.

  • To find eigenvalues and eigenvectors of a linear transformation T:V→V, where V is a finite dimensional vector space, we find the eigenvalues and eigenvectors for [T]B, where B is a basis for V. If C is any other basis for V, then [T]B and [T]C are similar matrices and have the same eigenvalues. Once we find an eigenvector [v]B for [T]B, the vector v is an eigenvector for T.

  • A linear transformation T:V→V, where V is a finite dimensional vector space, is diagonalizable if there is a basis C for V for which [T]C is a diagonalizable matrix.

  • To determine if a linear transformation T:V→V on an n dimensional vector space V is diagonalizable, we pick a basis C for V. If the matrix [T]C has n linearly independent eigenvectors, then T is diagonalizable.

Exercises Exercises

1.

Let V=P2 and define T:V→V by T(p(t))=d/dt((1-t)p(t)).

(a)

Show that T is a linear transformation.

Hint.

Use properties of the derivative.

(b)

Let S be the standard basis for P2. Find [T]S.

(c)

Find the eigenvalues and a basis for each eigenspace of T.

(d)

Is T diagonalizable? If so, find a basis B for P2 so that [T]B is a diagonal matrix. If not, explain why not.

2.

Let T:V→V be a linear transformation, and let B be a basis for V. Show that every eigenvector v of T with eigenvalue λ corresponds to an eigenvector of [T]B with eigenvalue λ.

3.

Let C be the set of all functions from R to R that have derivatives of all orders.

(a)

Explain why C is a subspace of F.

Hint.

Use properties of differentiable functions.

(b)

Let D:C→C be defined by D(f)=f'. Explain why D is a linear transformation.

Hint.

Use properties of the derivative.

(c)

Let λ be any real number and let fλ be the exponential function defined by fλ(x)=e^{λx}. Show that fλ is an eigenvector of D. What is the corresponding eigenvalue? How many eigenvalues does D have?

Hint.

What is D(e^{λx})?

4.

Consider D:P2→P2, where D is the derivative operator defined as in Exercise 3. Find all of the eigenvalues of D and a basis for each eigenspace of D.

5.

Let n be a positive integer and define T:Mn×n→Mn×n by T(A)=A^T.

(a)

Show that T is a linear transformation.

Hint.

Use properties of the matrix transpose.

(b)

Is λ=1 an eigenvalue of T? Explain. If so, describe in detail the vectors in the corresponding eigenspace.

Hint.

For which matrices is T(A)=A?

(c)

Does T have any other eigenvalues? If so, what are they and what are the vectors in the corresponding eigenspaces? If not, why not?

Hint.

When is it possible to have A^T=λA?

6.

Label each of the following statements as True or False. Provide justification for your response.

(a) True/False.

The number 0 cannot be an eigenvalue of a linear transformation.

(b) True/False.

The zero vector cannot be an eigenvector of a linear transformation.

(c) True/False.

If v is an eigenvector of a linear transformation T, then so is 2v.

(d) True/False.

If v is an eigenvector of a linear transformation T, then v is also an eigenvector of the transformation T^2=T∘T.

(e) True/False.

If v and u are eigenvectors of a linear transformation T with the same eigenvalue, then v+u is also an eigenvector of T with the same eigenvalue.

(f) True/False.

If λ is an eigenvalue of a linear transformation T, then λ^2 is an eigenvalue of T^2.

(g) True/False.

Let S and T be two linear transformations with the same domain and codomain. If v is an eigenvector of both S and T, then v is an eigenvector of S+T.

(h) True/False.

Let V be a vector space and let T:VV be a linear transformation. Then T has 0 as an eigenvalue if and only if [T]B has linearly dependent columns for any basis B of V.

Subsection Project: Linear Transformations and Differential Equations

There are many different types of differential equations, but we will focus on differential equations of the form my'' = -ky - by' presented in the introduction: second order (the highest derivative in the equation is a second order derivative) linear (the coefficients are constants) differential equations, also called damped harmonic oscillators.

To solve a differential equation means to find all solutions to the differential equation. That is, find all functions y that satisfy the differential equation. For example, since (d/dt)t^2=2t, we see that y=t^2 satisfies the differential equation y'=2t. But t^2 is not the only solution to this differential equation. In fact, y=t^2+C is a solution to y'=2t for any scalar C. We will see how to represent solutions to the differential equation my'' = -ky - by' in this project.
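
For instance, a one-line symbolic check (Python with sympy, our assumed tool) confirms that every function y=t^2+C satisfies y'=2t:

    import sympy as sp

    t, C = sp.symbols('t C')
    y = t**2 + C

    # dy/dt = 2t regardless of the constant C, so each such y is a solution.
    assert sp.diff(y, t) == 2 * t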

The next activity shows that the set of solutions to the linear differential equation my'' = -ky - by' is a subspace of the vector space F of all functions from R to R. So we should expect close connections between differential equations and linear algebra, and we will make some of these connections as we proceed.

Project Activity 39.4.

We can represent differential equations using linear transformations. To see how, let D be the function from the space C^1 of all differentiable real-valued functions to F given by D(f)=df/dt.

(a)

Show that D is a linear transformation.

(b)

In order for a function f to be a solution to a differential equation of the form (39.1), it is necessary for f to be twice differentiable. We will assume that D acts only on such functions from this point on. Use the fact that D is a linear transformation to show that the differential equation my'' = -ky - by' can be written in the form

(mD^2+bD)(y) = -ky.

Project Activity 39.4 shows that any solution to the differential equation my'' = -ky - by' is an eigenvector for the linear transformation mD^2+bD. That is, the solutions to the differential equation my'' = -ky - by' form the eigenspace E_{-k} of mD^2+bD with eigenvalue -k. The eigenvectors for a linear transformation acting on a function space are also called eigenfunctions. To build up to solutions to the second order differential equation we have been considering, we start with solutions to a first order equation.

Project Activity 39.5.

Let k be a scalar. Show that the solutions of the differential equation D(y)=ky form a one-dimensional subspace of F. Find a basis for this subspace. Note that this is the eigenspace of the transformation D corresponding to the eigenvalue k.

Hint.

To find the solutions to y'=ky, write y' as dy/dt and express the equation y'=ky in the form dy/dt=ky. Divide by y and integrate with respect to t.
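
A symbolic solver reproduces the one-dimensional solution space. The sketch below assumes Python with sympy; dsolve returns the general solution with a single arbitrary constant, which reflects the fact that e^{kt} spans the eigenspace.

    import sympy as sp

    t, k = sp.symbols('t k')
    y = sp.Function('y')

    # Solve D(y) = k*y; the one free constant C1 signals a one-dimensional space.
    print(sp.dsolve(sp.Eq(y(t).diff(t), k * y(t)), y(t)))   # y(t) = C1*exp(k*t)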

Before considering the general second order differential equation, we start with a simpler example.

Project Activity 39.6.

As a specific example of a second order linear equation, as discussed at the beginning of this section, Hooke's law states that if a mass is hanging from a spring, the force acting on the spring is proportional to the displacement of the spring from equilibrium. If we let y be the displacement of the object from its equilibrium, and ignore any resistance, the position of the mass-spring system can be represented by the differential equation

mD^2(y) = -ky,

where m is the mass of the object and k is a positive constant that depends on the spring. Assuming that the mass is positive, we can divide both sides by m and rewrite this differential equation in the form

(39.3)   D^2(y) = -cy

where c=k/m. So the solutions to the differential equation (39.3) make up the eigenspace E_{-c} for D^2 with eigenvalue -c.

(a)

Since the second derivatives of sin(√c t) and cos(√c t) are scalar multiples of sin(√c t) and cos(√c t), it is reasonable to expect that these functions give solutions to (39.3). Show that y1=cos(√c t) and y2=sin(√c t) are both in E_{-c}.

(b)

As functions, the cosine and sine are related in many ways (e.g., the Pythagorean Identity). An important property for this application is the linear independence of the cosine and sine. Show, using the definition of linear independence, that the cosine and sine functions are linearly independent in F.

(c)

Part (a) shows that there are at least two different functions in E_{-c}. To solve a differential equation is to find all of the solutions to the differential equation. In other words, we want to completely determine the eigenspace E_{-c}. We have already seen that any function y of the form y(t)=c1cos(√c t)+c2sin(√c t) is a solution to the differential equation mD^2(y) = -ky. The theory of linear differential equations tells us that there is a unique solution to mD^2(y) = -ky if we specify two initial conditions. What this means is that to show that any solution z to the differential equation mD^2(y) = -ky with two initial values z(t0) and z'(t0) for some scalar t0 is of the form y=c1cos(√c t)+c2sin(√c t), we need to verify that there are values of c1 and c2 such that y(t0)=z(t0) and y'(t0)=z'(t0). Here we will use this idea to show that any function in E_{-c} is a linear combination of cos(√c t) and sin(√c t). That is, the set {cos(√c t), sin(√c t)} spans E_{-c}. Let y=c1cos(√c t)+c2sin(√c t). Show that there are values for c1 and c2 such that

y(0)=z(0)  and  y'(0)=z'(0).

This result, along with part (b), shows that {cos(√c t), sin(√c t)} is a basis for E_{-c}. (Note: That the solutions to differential equation (39.3) involve sines and cosines models the situation that a mass hanging from a spring will oscillate up and down.)
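
As a check on this basis, a symbolic solver (Python with sympy assumed) returns exactly a two-parameter family spanned by cos(√c t) and sin(√c t):

    import sympy as sp

    t = sp.symbols('t')
    c = sp.symbols('c', positive=True)
    y = sp.Function('y')

    # Solve D^2(y) = -c*y; the two constants C1, C2 reflect the
    # two-dimensional eigenspace E_{-c}.
    sol = sp.dsolve(sp.Eq(y(t).diff(t, 2), -c * y(t)), y(t))
    print(sol)   # y(t) = C1*sin(sqrt(c)*t) + C2*cos(sqrt(c)*t)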

As we saw in Project Activity 39.5, the eigenspace E_k of the linear transformation D is one-dimensional. The key idea in Project Activity 39.6 that allowed us to find a basis for the eigenspace of D^2 with eigenvalue -c is that cos(√c t) and sin(√c t) are linearly independent eigenfunctions that span E_{-c}. We won't prove this result, but the general theory of linear differential equations states that if y1, y2, …, yn are linearly independent solutions to the nth order linear differential equation

(a_nD^n + a_{n-1}D^{n-1} + ⋯ + a_1D)(y) = a_0y,

then {y1, y2, …, yn} is a basis for the eigenspace of the linear transformation a_nD^n + a_{n-1}D^{n-1} + ⋯ + a_1D with eigenvalue a_0. Any basis for the solution set to a differential equation is called a fundamental set of solutions for the differential equation. Consequently, it is important to be able to determine when a set of functions is linearly independent. One tool for doing so is the Wronskian, which we study in the next activity.

Project Activity 39.7.

Suppose we have n functions f1, f2, …, fn, each with n-1 derivatives. To determine the independence of the functions we must understand the solutions to the equation

(39.4)   c1f1(t) + c2f2(t) + ⋯ + cnfn(t) = 0.

We can differentiate both sides of Equation (39.4) to obtain the new equation

c1f1'(t) + c2f2'(t) + ⋯ + cnfn'(t) = 0.

We can continue to differentiate as long as the functions are differentiable to obtain the system

f1(t)c1 + f2(t)c2 + ⋯ + fn(t)cn = 0
f1'(t)c1 + f2'(t)c2 + ⋯ + fn'(t)cn = 0
  ⋮
f1^{(n-1)}(t)c1 + f2^{(n-1)}(t)c2 + ⋯ + fn^{(n-1)}(t)cn = 0
(a)

Write this system in matrix form, with coefficient matrix

A = [ f1(t)          f2(t)          ⋯  fn(t)
      f1'(t)         f2'(t)         ⋯  fn'(t)
        ⋮              ⋮                 ⋮
      f1^{(n-1)}(t)  f2^{(n-1)}(t)  ⋯  fn^{(n-1)}(t) ].
(b)

The matrix in part (a) is called the Wronskian matrix of the system. The scalar

W(f1, f2, …, fn) = det(A)

is called the Wronskian of f1, f2, …, fn. What must be true about the Wronskian for our system to have a unique solution? If the system has a unique solution, what is the solution? What does this result tell us about the functions f1, f2, …, fn?

(c)

Use the Wronskian to show that the cosine and sine functions are linearly independent.
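
The Wronskian computation in part (c) can be automated. The helper below builds the Wronskian matrix exactly as in part (a) and takes its determinant; Python with sympy is our assumption.

    import sympy as sp

    t = sp.symbols('t')

    def wronskian(fs, t):
        """Determinant of the Wronskian matrix: row i holds the ith derivatives."""
        n = len(fs)
        W = sp.Matrix(n, n, lambda i, j: sp.diff(fs[j], t, i))
        return sp.simplify(W.det())

    # For cosine and sine: W = cos^2(t) + sin^2(t) = 1, which is never zero,
    # so the two functions are linearly independent.
    print(wronskian([sp.cos(t), sp.sin(t)], t))   # 1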

We can apply the Wronskian to help find bases for the eigenspace of the linear transformation mD^2+bD with eigenvalue -k.

Project Activity 39.8.

The solution to the Hooke's Law differential equation in Project Activity 39.6 indicates that the spring will continue to oscillate forever. In reality, we know that this does not happen. In the non-ideal case, there is always some force (e.g., friction, air resistance, a physical damper as in a piston) that acts to dampen the motion of the spring, causing the oscillations to die off. Damping acts to oppose the motion, and we generally assume that the faster an object moves, the higher the damping. For this reason we assume the damping force is proportional to the velocity. That is, the damping force has the form -by' for some positive constant b. This produces the differential equation

(39.5)   my'' + by' + ky = 0

or (mD^2+bD)(y) = -ky. We will find bases for the eigenspace E_{-k} of the linear transformation mD^2+bD with eigenvalue -k in this activity.

(a)

Since derivatives of exponential functions are still exponential functions, it seems reasonable to try an exponential function as a solution to (39.5). Show that if y=e^{rt} for some constant r is a solution to (39.5), then mr^2+br+k=0. The equation mr^2+br+k=0 is the characteristic or auxiliary equation for the differential equation.

(b)

Part (a) shows that solutions to the differential equation (39.5) are exponential functions of the form e^{rt}, where r is a solution to the auxiliary equation. Recall that if we can find two linearly independent solutions to (39.5), then we have found a basis for the eigenspace E_{-k} of mD^2+bD with eigenvalue -k. The quadratic formula shows that the roots of the auxiliary equation are

r = (-b ± √(b^2 - 4mk)) / (2m).

As we will see, our basis depends on the types of roots the auxiliary equation has; a short computational sketch of all three cases appears after this activity.

(i)

Assume that the roots r1 and r2 of the auxiliary equation are real and distinct. That means that y1=e^{r1 t} and y2=e^{r2 t} are eigenfunctions in E_{-k}. Use the Wronskian to show that {y1, y2} is a basis for E_{-k} in this case. Then describe the behavior of an arbitrary eigenfunction in E_{-2} if mD^2+bD=D^2+3D and how it relates to damping. Draw a representative solution to illustrate. (In this case we say that the system is overdamped. These systems can oscillate at most once, then they quickly damp to 0.)

(ii)

Now suppose that we have a repeated real root r of the auxiliary equation. Then there is only one exponential function y1=e^{rt} in E_{-k}. In this case, show that y2=te^{rt} is also in E_{-k} and that {y1, y2} is a basis for E_{-k}. Then describe the behavior of an arbitrary eigenfunction in E_{-1} if mD^2+bD=D^2+2D and how it relates to damping. Draw a representative solution to illustrate. (In this case we say that the system is critically damped. These systems behave similarly to overdamped systems in that they do not oscillate. However, if the damping is reduced just a little, the system can oscillate.)

(iii)

The last case is when the auxiliary equation has complex roots z1=u+vi and z2=u-vi. We want to work with real valued functions, so we need to determine real valued solutions from these complex roots. To resolve this problem, we note that if x is a real number, then e^{ix}=cos(x)+i sin(x). So

e^{(u+vi)t} = e^{ut}e^{ivt} = e^{ut}cos(vt) + e^{ut}sin(vt)i.

Show that {e^{ut}cos(vt), e^{ut}sin(vt)} is a basis for E_{-k} in this case. Then describe the behavior of an arbitrary eigenfunction in E_{-5} if mD^2+bD=D^2+2D and how it relates to damping. Draw a representative solution to illustrate. (In this case we say that the system is underdamped. These systems typically exhibit some oscillation.)
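
The three cases of Project Activity 39.8 can be summarized computationally. The sketch below assumes Python with sympy, and fundamental_set is our own hypothetical helper, not something from the text: it finds the roots of the auxiliary equation mr^2+br+k=0 and returns the corresponding basis of E_{-k} in each case.

    import sympy as sp

    r, t = sp.symbols('r t')

    def fundamental_set(m, b, k):
        """Basis of E_{-k} for m*D^2 + b*D, from the roots of m*r^2 + b*r + k = 0."""
        rts = list(sp.roots(m * r**2 + b * r + k, r))
        if len(rts) == 1:                               # repeated real root
            return [sp.exp(rts[0] * t), t * sp.exp(rts[0] * t)]
        if all(root.is_real for root in rts):           # distinct real roots
            return [sp.exp(rts[0] * t), sp.exp(rts[1] * t)]
        u, v = sp.re(rts[0]), abs(sp.im(rts[0]))        # complex roots u +/- vi
        return [sp.exp(u * t) * sp.cos(v * t), sp.exp(u * t) * sp.sin(v * t)]

    print(fundamental_set(1, 3, 2))   # overdamped: roots -1 and -2
    print(fundamental_set(1, 2, 1))   # critically damped: repeated root -1
    print(fundamental_set(1, 2, 5))   # underdamped: roots -1 +/- 2i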

Project Activity 39.7 tells us that if W(f1, f2, …, fn) is not zero, then f1, f2, …, fn are linearly independent. You might wonder what conclusion we can draw if W(f1, f2, …, fn) is zero.

Project Activity 39.9.

In this activity we consider the Wronskian of two different pairs of functions.

(a)

Calculate W(t,2t). Are t and 2t linearly independent or dependent? Explain.

(b)

Now let f(t)=t|t| and g(t)=t^2.

(i)

Calculate f'(t) and g'(t).

Hint.

Recall that |x|=x if x≥0 and |x|=-x if x<0.

(ii)

Calculate W(f,g). Are f and g linearly independent or dependent in F? Explain.

Hint.

Consider the cases when t≥0 and t<0.

(iii)

What conclusion can we draw about the functions f1, f2, …, fn if W(f1, f2, …, fn) is zero? Explain.