
Section 39 Eigenvalues of Linear Transformations

Subsection Application: Linear Differential Equations

A body in motion obeys Newton's second law that force equals mass times acceleration, or \(F = ma\text{.}\) Here \(F\) is the force acting on the object, \(m\) the mass of the object, and \(a\) the acceleration of the object. For example, if a mass is hanging from a spring, gravity acts to pull the mass downward and the spring acts to pull the mass up. Hooke's law says that the force of the spring acting on the mass is proportional to the displacement \(y\) of the spring from equilibrium. There is also a damping force that weakens the action of the spring, which might be due to air resistance or the medium in which the system is enclosed. If the velocity is not too large, then the resistance can be taken to be proportional to the velocity \(v\) of the mass. The net force \(F = ma\) on the mass is then the sum of the spring force \(-ky\) and the damping force \(-bv\text{,}\) where \(k\) and \(b\) are positive constants. A similar type of analysis would apply to an object falling through the air, where the resistance to falling could be proportional to the velocity of the object. Since \(v = y'\) and \(a = y''\text{,}\) equating the forces produces the equation

\begin{equation} my'' = -ky-by'\text{.}\tag{39.1} \end{equation}

Any equation that involves one or more derivatives of a function is called a differential equation. Differential equations are used to model many different types of behavior in disciplines including engineering, physics, economics, and biology. Later in this section we will see how differential equations can be represented by linear transformations, and see how we can exploit this idea to solve certain types of differential equations (including (39.1)).

Subsection Introduction

Recall that a scalar \(\lambda\) is an eigenvalue of an \(n \times n\) matrix \(A\) if \(A \vx = \lambda \vx\) for some nonzero vector \(\vx\) in \(\R^n\text{.}\) Now that we have seen how to represent a linear transformation \(T\) from a finite dimensional vector space \(V\) to itself with a matrix transformation, we can exploit this idea to define and find eigenvalues and eigenvectors for \(T\) just as we did for matrices.

Definition 39.1.

Let \(V\) be an \(n\) dimensional vector space and \(T : V \to V\) a linear transformation. A scalar \(\lambda\) is an eigenvalue for \(T\) if there is a nonzero vector \(\vv\) in \(V\) such that \(T(\vv) = \lambda \vv\text{.}\) The vector \(\vv\) is an eigenvector for \(T\) corresponding to the eigenvalue \(\lambda\text{.}\)

We can exploit the fact that we can represent linear transformations as matrix transformations to find eigenvalues and eigenvectors of a linear transformation.

Preview Activity 39.1.

Let \(T: \pol_1 \to \pol_1\) be defined by \(T(a_0+a_1t) = (a_0+2a_1) + (3a_0+2a_1)t\text{.}\) Assume \(T\) is a linear transformation.

(a)

Let \(\CS = \{1,t\}\) be the standard basis for \(\pol_1\text{.}\) Find the matrix \([T]_{\CS}\text{.}\) (Recall that we use the shorthand notation \([T]_{\CS}\) for \([T]_{\CS}^{\CS}\text{.}\))

(b)

Check that \(\lambda_1 = 4\) and \(\lambda_2 = -1\) are the eigenvalues of \([T]_{\CS}\text{.}\) Find an eigenvector \(\vv_1\) for \([T]_{\CS}\) corresponding to the eigenvalue \(4\) and an eigenvector \(\vv_2\) of \([T]_{\CS}\) corresponding to the eigenvalue \(-1\text{.}\)

(c)

Find the vector in \(\pol_1\) corresponding to the eigenvector of \([T]_{\CS}\) for the eigenvalue \(\lambda=4\text{.}\) Check that this is an eigenvector of \(T\text{.}\)

(d)

Explain why in general, if \(V\) is a vector space with basis \(\CB\) and \(S: V \to V\) is a linear transformation, and if \(\vv\) in \(V\) satisfies \([\vv]_{\CB}=\vw\text{,}\) where \(\vw\) is an eigenvector of \([S]_{\CB}\) with eigenvalue \(\lambda\text{,}\) then \(\vv\) is an eigenvector of \(S\) with eigenvalue \(\lambda\text{.}\)

Subsection Finding Eigenvalues and Eigenvectors of Linear Transformations

Preview Activity 39.1 presents a method for finding eigenvalues and eigenvectors of linear transformations. That is, if \(T: V \to V\) is a linear transformation and \(\CB\) is a basis for \(V\text{,}\) and if \(\vx\) is an eigenvector of \([T]_{\CB}\) with eigenvalue \(\lambda\text{,}\) then the vector \(\vv\) in \(V\) satisfying \([\vv]_{\CB}=\vx\) has the property that

\begin{equation*} [T(\vv)]_{\CB} = [T]_{\CB}[\vv]_{\CB} = [T]_{\CB} \vx = \lambda \vx = [\lambda \vv]_{\CB}\text{.} \end{equation*}

Since the coordinate transformation is one-to-one, it follows that \(T(\vv) = \lambda \vv\) and \(\vv\) is an eigenvector of \(T\) with eigenvalue \(\lambda\text{.}\) So every eigenvector of \([T]_{\CB}\) corresponds to an eigenvector of \(T\text{.}\) The fact that every eigenvector and eigenvalue of \(T\) can be obtained from the eigenvectors and eigenvalues of \([T]_{\CB}\) is left as Exercise 2. Now we address the question of how eigenvalues of matrices of \(T\) with respect to different bases are related.
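For example, the following computation (a sketch using Python's SymPy library; any computer algebra system will do) carries out this process for the transformation of Preview Activity 39.1, translating each eigenvector of \([T]_{\CS}\) back to a polynomial eigenvector of \(T\text{.}\)

import sympy as sp

# [T]_S for T(a0 + a1*t) = (a0 + 2*a1) + (3*a0 + 2*a1)*t
# relative to the standard basis S = {1, t} of P_1
TS = sp.Matrix([[1, 2], [3, 2]])

t = sp.symbols('t')
# eigenvects() returns triples (eigenvalue, multiplicity, eigenspace basis)
for lam, mult, vecs in TS.eigenvects():
    for v in vecs:
        # v = [v]_S, so the corresponding polynomial is v[0] + v[1]*t
        p = v[0] + v[1] * t
        print(f"eigenvalue {lam}: eigenvector {p}")
# eigenvalue -1: eigenvector t - 1
# eigenvalue 4:  eigenvector t + 2/3  (any nonzero scalar multiple works)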

Activity 39.2.

Let \(T: \pol_1 \to \pol_1\) be defined by \(T(a_0+a_1t) = (a_0+2a_1) + (2a_1+3a_0)t\text{.}\) Assume \(T\) is a linear transformation.

(a)

Let \(\CB = \{1+t,1-t\}\) be a basis for \(\pol_1\text{.}\) Find the matrix \([T]_{\CB}\text{.}\)

(b)

Check that \(\lambda_1 = 4\) and \(\lambda_2 = -1\) are the eigenvalues of \([T]_{\CB}\text{.}\) Find an eigenvector \(\vw_1\) for \([T]_{\CB}\) corresponding to the eigenvalue \(4\text{.}\)

(c)

Use the matrix \([T]_{\CB}\) to find an eigenvector for \(T\) corresponding to the eigenvalue \(\lambda_1 = 4\text{.}\)

You should notice that the matrices \([T]_{\CS}\) and \([T]_{\CB}\) in Preview Activity 39.1 and Activity 39.2 are similar matrices. In fact, if \(P = \underset{\CB \leftarrow \CS}{P}\) is the change of basis matrix from \(\CS\) to \(\CB\text{,}\) then \(P^{-1}[T]_{\CB}P = [T]_{\CS}\text{.}\) So it must be the case that \([T]_{\CS}\) and \([T]_{\CB}\) have the same eigenvalues. What Preview Activity 39.1, Exercise 2, and Activity 39.2 demonstrate is that the eigenvalues of a linear transformation \(T : V \to V\) for any \(n\) dimensional vector space \(V\) are the eigenvalues of the matrix for \(T\) relative to any basis for \(V\text{.}\) We can then find corresponding eigenvectors for \(T\) using the methods demonstrated in Preview Activity 39.1 and Activity 39.2. That is, if \(\vv\) is an eigenvector of \(T\) with eigenvalue \(\lambda\text{,}\) and \(\CB\) is any basis for \(V\text{,}\) then

\begin{equation*} [T]_{\CB}[\vv]_{\CB} = [T(\vv)]_{\CB} = [\lambda \vv]_{\CB} = \lambda [\vv]_{\CB}\text{.} \end{equation*}

Thus, once we find an eigenvector \([\vv]_{\CB}\) for \([T]_{\CB}\text{,}\) the vector \(\vv\) is an eigenvector for \(T\text{.}\) This is summarized in the following theorem.

Theorem 39.2.

Let \(V\) be a finite dimensional vector space with basis \(\CB\text{,}\) and let \(T : V \to V\) be a linear transformation. A scalar \(\lambda\) is an eigenvalue of \(T\) if and only if \(\lambda\) is an eigenvalue of \([T]_{\CB}\text{.}\) Moreover, a vector \(\vv\) in \(V\) is an eigenvector of \(T\) corresponding to \(\lambda\) if and only if \([\vv]_{\CB}\) is an eigenvector of \([T]_{\CB}\) corresponding to \(\lambda\text{.}\)
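As a quick illustration of this correspondence (a SymPy sketch; the matrices are those of Preview Activity 39.1 and Activity 39.2), we can verify that the matrices of \(T\) relative to the two bases are similar and share their eigenvalues:

import sympy as sp

TS = sp.Matrix([[1, 2], [3, 2]])  # [T]_S relative to S = {1, t}
Q = sp.Matrix([[1, 1], [1, -1]])  # columns: S-coordinates of the basis B = {1+t, 1-t}

TB = Q.inv() * TS * Q  # matrix of T relative to B
print(TB)              # Matrix([[4, 0], [-1, -1]])
print(TS.eigenvals() == TB.eigenvals())  # True: similar matrices share eigenvalues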

Subsection Diagonalization

If \(T : V \to V\) is a linear transformation, one important question is that of finding a basis for \(V\) in which the matrix of \(T\) has as simple a form as possible. In other words, can we find a basis \(\CB\) for \(V\) for which the matrix \([T]_{\CB}\) is a diagonal matrix? Since any matrices for \(T\) with respect to different bases are similar, this will happen if we can diagonalize any matrix for \(T\text{.}\)

Definition 39.3.

Let \(V\) be a vector space of dimension \(n\text{.}\) A linear transformation \(T\) from \(V\) to \(V\) is diagonalizable if there is a basis \(\CB\) for \(V\) for which \([T]_{\CB}\) is a diagonalizable matrix.

Now the question is, if \(T\) is diagonalizable, how do we find a basis \(\CB\) so that \([T]_{\CB}\) is a diagonal matrix? Recall that to diagonalize a matrix \(A\) means to find a matrix \(P\) so that \(P^{-1}AP\) is a diagonal matrix.

Activity 39.3.

Let \(V = \pol_1\text{,}\) \(T : V \to V\) defined by \(T(a_0+a_1t) = (a_0+2a_1) + (2a_0+a_1)t\text{.}\) Let \(\CS = \{1, t\}\) be the standard basis for \(V\text{.}\)

(a)

Find the matrix \([T]_{\CS}\) for \(T\) relative to the basis \(\CS\text{.}\)

(b)

Use the fact that the eigenvalues of \([T]_{\CS}\) are \(-1\) and 3 with corresponding eigenvectors \(\left[ \begin{array}{r} -1 \\ 1 \end{array} \right]\) and \(\left[ \begin{array}{c} 1 \\ 1 \end{array} \right]\text{,}\) respectively, to find a matrix \(P\) so that \(P^{-1}[T]_{\CS}P\) is a diagonal matrix \(D\text{.}\)

(c)

To find a basis \(\CB\) for which \([T]_{\CB}\) is diagonal, note that if \(P= \underset{\CS \leftarrow \CB}{P}\) is the matrix of the previous part, viewed as the change of basis matrix from \(\CB\) to \(\CS\text{,}\) then the matrices of \(T\) with respect to \(\CS\) and \(\CB\) are related by \(P^{-1} [T]_{\CS} P = [T]_{\CB}\text{,}\) which makes \([T]_{\CB}\) a diagonal matrix. Use the fact that \(P[\vx]_{\CB}=[\vx]_{\CS}\) to find the vectors in the basis \(\CB\text{.}\)

(d)

Now show directly that \([T]_{\CB} = D\text{,}\) where \(\CB = \{\vv_1, \vv_2\}\) is the basis found in the previous part, and verify that we have found a basis for \(V\) for which the matrix for \(T\) is diagonal.

The general idea for diagonalizing a linear transformation is contained in Activity 39.3. Let \(V\) be an \(n\) dimensional vector space and assume the linear transformation \(T : V \to V\) is diagonalizable. So there exists a basis \(\CB = \{\vv_1, \vv_2, \ldots, \vv_n\}\) for which \([T]_{\CB}\) is a diagonal matrix. To find this basis, note that for any other basis \(\CC\) we know that \([T]_{\CC}\) and \([T]_{\CB}\) are similar. That means that there is an invertible matrix \(P\) so that \(P^{-1}[T]_{\CC}P = [T]_{\CB}\text{,}\) where \(P= \underset{\CC \leftarrow \CB}{P}\) is the change of basis matrix from \(\CB\) to \(\CC\text{.}\) So to find \(\CB\text{,}\) we choose \(\CB\) so that \(P\) is the matrix that diagonalizes \([T]_{\CC}\text{.}\) Using the definition of the change of basis matrix, we then know that each basis vector \(\vv_i\) in \(\CB\) satisfies \([\vv_i]_{\CC}=P[\vv_i]_{\CB}= P \ve_i\text{.}\) From this we can find \(\vv_i\text{.}\) Note that a standard basis \(\CS\) is often a convenient choice to make for the basis \(\CC\text{.}\) This is the process we used in Activity 39.3.
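The process can be carried out with a computer algebra system; here is a sketch of Activity 39.3 in SymPy (output ordering and scaling of eigenvectors may differ):

import sympy as sp

TS = sp.Matrix([[1, 2], [2, 1]])  # [T]_S for T(a0 + a1*t) = (a0 + 2*a1) + (2*a0 + a1)*t
P, D = TS.diagonalize()           # P satisfies P^{-1} [T]_S P = D
print(P)  # e.g. Matrix([[-1, 1], [1, 1]])
print(D)  # e.g. Matrix([[-1, 0], [0, 3]])

# The columns of P are the S-coordinates of the new basis vectors,
# since [v_i]_S = P e_i.  Translate them back to polynomials:
t = sp.symbols('t')
basis = [P[0, j] + P[1, j] * t for j in range(P.cols)]
print(basis)  # [t - 1, t + 1], a basis B of P_1 with [T]_B = D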

Recall that an \(n \times n\) matrix \(A\) is diagonalizable if and only if \(A\) has \(n\) linearly independent eigenvectors. We can apply this idea to linear transformations as well, as summarized in the following theorem.

Theorem 39.4.

Let \(V\) be an \(n\) dimensional vector space and let \(T : V \to V\) be a linear transformation. Then \(T\) is diagonalizable if and only if \(T\) has \(n\) linearly independent eigenvectors, that is, if and only if there is a basis of \(V\) consisting of eigenvectors of \(T\text{.}\)

Subsection Examples

What follows are worked examples that use the concepts from this section.

Example 39.5.

Let \(T : \M_{2 \times 2} \to \M_{2 \times 2}\) be defined by \(T(A) = A^{\tr}\text{.}\) Let \(M_1 = \left[ \begin{array}{cc} 1\amp 1\\0\amp 0 \end{array} \right]\text{,}\) \(M_2 = \left[ \begin{array}{cc} 0\amp 1\\1\amp 0 \end{array} \right]\text{,}\) \(M_3 = \left[ \begin{array}{cc} 1\amp 0\\1\amp 0 \end{array} \right]\text{,}\) and \(M_4 = \left[ \begin{array}{cc} 0\amp 0\\1\amp 1 \end{array} \right]\text{,}\) and let \(\CB\) be the ordered basis \(\{M_1, M_2, M_3, M_4\}\text{.}\)

(a)

Show that \(T\) is a linear transformation.

Solution.

Let \(A\) and \(B\) be \(2 \times 2\) matrices and let \(c\) be a scalar. The properties of the transpose show that

\begin{equation*} T(A+B) = (A+B)^{\tr} = A^{\tr}+B^{\tr} = T(A) + T(B) \end{equation*}

and

\begin{equation*} T(cA) = (cA)^{\tr} = cA^{\tr} = cT(A)\text{.} \end{equation*}

Thus, \(T\) is a linear transformation.

(b)

Find \([T]_{\CB}\text{.}\)

Solution.

Notice that \(T(M_1) = M_3\text{,}\) \(T(M_2) = M_2\text{,}\) \(T(M_3) = M_1\text{,}\) and \(T(M_4) = M_1-M_3+M_4\text{.}\) So

\begin{equation*} [T]_{\CB} = \left[ [T(M_1)]_{\CB} \ [T(M_2)]_{\CB} \ [T(M_3)]_{\CB} \ [T(M_4)]_{\CB} \right] = \left[ \begin{array}{cccr} 0\amp 0\amp 1\amp 1 \\ 0\amp 1\amp 0\amp 0 \\ 1\amp 0\amp 0\amp -1 \\ 0\amp 0\amp 0\amp 1 \end{array} \right]\text{.} \end{equation*}
(c)

Is the matrix \([T]_{\CB}\) diagonalizable? If so, find a matrix \(P\) such that \(P^{-1}[T]_{\CB}P\) is a diagonal matrix. Use technology as appropriate.

Solution.

Technology shows that the characteristic polynomial of \([T]_{\CB}\) is \(p(\lambda) = (\lambda+1)(\lambda-1)^3\text{.}\) It follows that the eigenvalues of \([T]_{\CB}\) are \(-1\) and \(1\) (with multiplicity \(3\)). Technology also shows that \(\{[-1 \ 0 \ 1 \ 0]^{\tr}\}\) is a basis for the eigenspace corresponding to the eigenvalue \(-1\) and the vectors \([1 \ 0 \ 0 \ 1]^{\tr}\text{,}\) \([1 \ 0 \ 1 \ 0]^{\tr}\text{,}\) \([0 \ 1 \ 0 \ 0]^{\tr}\) form a basis for the eigenspace corresponding to the eigenvalue \(1\text{.}\) If we let \(P = \left[ \begin{array}{cccr} 1\amp 1\amp 0\amp -1 \\ 0\amp 0\amp 1\amp 0 \\ 0\amp 1\amp 0\amp 1 \\ 1\amp 0\amp 0\amp 0 \end{array} \right]\text{,}\) then \(P^{-1}[T]_{\CB}P = \left[ \begin{array}{cccr} 1\amp 0\amp 0\amp 0 \\ 0\amp 1\amp 0\amp 0 \\ 0\amp 0\amp 1\amp 0 \\ 0\amp 0\amp 0\amp -1 \end{array} \right]\text{.}\)
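The reported computations can be reproduced with, for instance, SymPy (a sketch; the ordering of eigenvalues and eigenvectors in the output may differ from the matrices displayed above):

import sympy as sp

TB = sp.Matrix([
    [0, 0, 1,  1],
    [0, 1, 0,  0],
    [1, 0, 0, -1],
    [0, 0, 0,  1],
])
lam = sp.symbols('lambda')
print(sp.factor(TB.charpoly(lam).as_expr()))  # (lambda - 1)**3*(lambda + 1)

P, D = TB.diagonalize()
print(D)                      # diagonal matrix of eigenvalues -1, 1, 1, 1 (order may vary)
print(P.inv() * TB * P == D)  # True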

(d)

Use part (c) to find a basis \(\CC\) for \(\M_{2 \times 2}\) for which \([T]_{\CC}\) is a diagonal matrix.

Solution.

Suppose \(\CC = \{Q_1, Q_2, Q_3, Q_4\}\) is a basis for \(\M_{2 \times 2}\) satisfying \(P^{-1}[T]_{\CB}P = [T]_{\CC}\text{.}\) This makes \([T]_{\CC}\) a diagonal matrix with the eigenvalues of \([T]_{\CB}\) along the diagonal. In this case we have

\begin{equation*} [T]_{\CC}[A]_{\CC} = P^{-1}[T]_{\CB}P[A]_{\CC} \end{equation*}

for any matrix \(A \in \M_{2 \times 2}\text{.}\) So \(P\) is the change of basis matrix from \(\CC\) to \(\CB\text{.}\) That is, \(P[A]_{\CC} = [A]_{\CB}\text{.}\) It follows that \(P[Q_i]_{\CC} = [Q_i]_{\CB}\) for \(i\) from \(1\) to \(4\text{.}\) Since \([Q_i]_{\CC} = \ve_i\text{,}\) we can see that \([Q_i]_{\CB}\) is the \(i\)th column of \(P\text{.}\) So the columns of \(P\) provide the weights for basis vectors in \(\CC\) in terms of the basis \(\CB\text{.}\) Letting \(Q_1 = M_1+M_4\text{,}\) \(Q_2 = M_1+M_3\text{,}\) \(Q_3=M_2\text{,}\) and \(Q_4 = M_3-M_1\text{,}\) we conclude that \([T]_{\CC}\) is a diagonal matrix.

Example 39.6.

We have shown that every linear transformation from a finite dimensional vector space \(V\) to \(V\) can be represented as a matrix transformation. As a consequence, all such linear transformations have eigenvalues. In this example we consider the problem of determining eigenvalues and eigenvectors of the linear transformation \(T : V \to V\) defined by \(T(f)(x) = \int_0^x f(t) \, dt\text{,}\) where \(V\) is the vector space of all infinitely differentiable functions from \(\R\) to \(\R\text{.}\) Suppose that \(f\) is an eigenvector of \(T\) with a nonzero eigenvalue.

(a)

Use the Fundamental Theorem of Calculus to show that \(f\) must satisfy the equation \(f'(x) = \frac{1}{\lambda} f(x)\) for some nonzero scalar \(\lambda\text{.}\)

Solution.

If \(f\) is an eigenvector of \(T\) with nonzero eigenvalue \(\lambda\text{,}\) then

\begin{equation} \lambda f(x) = T(f)(x) = \int_0^x f(t) \, dt\text{.}\tag{39.2} \end{equation}

Recall that \(\frac{d}{dx} \int_0^x f(t) \, dt = f(x)\) by the Fundamental Theorem of Calculus. Differentiating both sides of (39.2) with respect to \(x\) leaves us with \(\lambda f'(x) = f(x)\text{,}\) or \(f'(x) = \frac{1}{\lambda} f(x)\text{.}\)

(b)

From calculus, we know that the functions \(f\) that satisfy the equation \(f'(x) = \frac{1}{\lambda} f(x)\) all have the form \(f(x) = Ae^{x/\lambda}\) for some scalar \(A\text{.}\) So if \(f\) is an eigenvector of \(T\text{,}\) then \(f(x) = Ae^{x/\lambda}\) for some scalar \(A\text{.}\) Show, however, that \(Ae^{x/\lambda}\) cannot be an eigenvector of \(T\text{.}\) Thus, \(T\) has no eigenvectors with nonzero eigenvalues.

Solution.

We can directly check from the definition that \(T(f)\) is not a multiple of \(f(x)=Ae^{x/\lambda}\) unless \(A=0\text{,}\) which is not allowed. Another method is to note that \(T(f)(0) = \int_0^0 f(t) \, dt = 0\) by definition. But if \(f(x) = Ae^{x/\lambda}\) is an eigenvector with eigenvalue \(\lambda\text{,}\) then

\begin{equation*} 0 = T(f)(0) = \lambda f(0) = \lambda Ae^{0/\lambda} = \lambda A\text{.} \end{equation*}

Since \(\lambda \neq 0\text{,}\) this forces \(A = 0\text{,}\) and so \(f(x) = 0e^{x/\lambda} = 0\text{.}\) But \(0\) can never be an eigenvector by definition. So \(T\) has no eigenvectors with nonzero eigenvalues.

(c)

Now show that \(0\) is not an eigenvalue of \(T\text{.}\) Conclude that \(T\) has no eigenvalues or eigenvectors.

Solution.

Suppose that \(0\) is an eigenvalue of \(T\text{.}\) Then there is a nonzero function \(g\) such that \(T(g) = 0\text{.}\) In other words, \(0 = \int_0^x g(t) \, dt\text{.}\) Again, differentiating with respect to \(x\) yields the equation \(g(x) = 0\text{.}\) So \(T\) has no eigenvectors with eigenvalue \(0\text{.}\) Since \(T\) does not have a nonzero eigenvalue or zero as an eigenvalue, \(T\) has no eigenvalues (and eigenvectors).

(d)

Explain why this example does not contradict the statement that every linear transformation from a finite dimensional vector space \(V\) to \(V\) has eigenvalues.

Solution.

The reason this example does not contradict the statement is that \(V\) is an infinite dimensional vector space. In fact, the monomials \(t^m\) for positive integers \(m\) form an infinite linearly independent set in \(V\text{.}\)

Subsection Summary

  • We can define eigenvalues and eigenvectors of a linear transformation \(T : V \to V\text{,}\) where \(V\) is a finite dimensional vector space. In this case, a scalar \(\lambda\) is an eigenvalue for \(T\) if there exists a non-zero vector \(\vv\) in \(V\) so that \(T(\vv) = \lambda \vv\text{.}\)

  • To find eigenvalues and eigenvectors of a linear transformation \(T : V \to V\text{,}\) where \(V\) is a finite dimensional vector space, we find the eigenvalues and eigenvectors for \([T]_{\CB}\text{,}\) where \(\CB\) is any basis for \(V\text{.}\) If \(\CC\) is any other basis for \(V\text{,}\) then \([T]_{\CB}\) and \([T]_{\CC}\) are similar matrices and have the same eigenvalues. Once we find an eigenvector \([\vv]_{\CB}\) for \([T]_{\CB}\text{,}\) then \(\vv\) is an eigenvector for \(T\text{.}\)

  • A linear transformation \(T: V \to V\text{,}\) where \(V\) is a finite dimensional vector space, is diagonalizable if there is a basis \(\CC\) for \(V\) for which \([T]_{\CC}\) is a diagonalizable matrix.

  • To determine if a linear transformation \(T : V \to V\) is diagonalizable, we pick a basis \(\CC\) for \(V\text{.}\) If the matrix \([T]_{\CC}\) has \(n\) linearly independent eigenvectors, then \(T\) is diagonalizable.

Exercises Exercises

1.

Let \(V = \pol_2\) and define \(T: V \to V\) by \(T(p(t)) = \frac{d}{dt} \left((1-t)p(t)\right)\text{.}\)

(a)

Show that \(T\) is a linear transformation.

Hint.

Use properties of the derivative.

(b)

Let \(\CS\) be the standard basis for \(\pol_2\text{.}\) Find \([T]_{\CS}\text{.}\)

(c)

Find the eigenvalues and a basis for each eigenspace of \(T\text{.}\)

(d)

Is \(T\) diagonalizable? If so, find a basis \(\CB\) for \(\pol_2\) so that \([T]_{\CB}\) is a diagonal matrix. If not, explain why not.

2.

Let \(T: V \to V\) be a linear transformation, and let \(\CB\) be a basis for \(V\text{.}\) Show that every eigenvector \(\vv\) of \(T\) with eigenvalue \(\lambda\) corresponds to an eigenvector of \([T]_{\CB}\) with eigenvalue \(\lambda\text{.}\)

3.

Let \(C^{\infty}\) be the set of all functions from \(\R\) to \(\R\) that have derivatives of all orders.

(a)

Explain why \(C^{\infty}\) is a subspace of \(\F\text{.}\)

Hint.

Use properties of differentiable functions.

(b)

Let \(D : C^{\infty} \to C^{\infty}\) be defined by \(D(f) = f'\text{.}\) Explain why \(D\) is a linear transformation.

Hint.

Use properties of the derivative.

(c)

Let \(\lambda\) be any real number and let \(f_{\lambda}\) be the exponential function defined by \(f_{\lambda}(x) = e^{\lambda x}\text{.}\) Show that \(f_{\lambda}\) is an eigenvector of \(D\text{.}\) What is the corresponding eigenvalue? How many eigenvalues does \(D\) have?

Hint.

What is \(D\left(e^{\lambda x}\right)\text{?}\)

4.

Consider \(D : \pol_2 \to \pol_2\text{,}\) where \(D\) is the derivative operator defined as in Exercise 3. Find all of the eigenvalues of \(D\) and a basis for each eigenspace of \(D\text{.}\)

5.

Let \(n\) be a positive integer and define \(T : \M_{n \times n} \to \M_{n \times n}\) by \(T(A) = A^{\tr}\text{.}\)

(a)

Show that \(T\) is a linear transformation.

Hint.

Use properties of the matrix transpose.

(b)

Is \(\lambda = 1\) an eigenvalue of \(T\text{?}\) Explain. If so, describe in detail the vectors in the corresponding eigenspace.

Hint.

For which matrices is \(T(A) = A\text{?}\)

(c)

Does \(T\) have any other eigenvalues? If so, what are they and what are the vectors in the corresponding eigenspaces? If not, why not?

Hint.

When is it possible to have \(A^{\tr} = \lambda A\text{?}\)

6.

Label each of the following statements as True or False. Provide justification for your response.

(a) True/False.

The number 0 cannot be an eigenvalue of a linear transformation.

(b) True/False.

The zero vector cannot be an eigenvector of a linear transformation.

(c) True/False.

If \(\vv\) is an eigenvector of a linear transformation \(T\text{,}\) then so is \(2\vv\text{.}\)

(d) True/False.

If \(\vv\) is an eigenvector of a linear transformation \(T\text{,}\) then \(\vv\) is also an eigenvector of the transformation \(T^2 = T \circ T\text{.}\)

(e) True/False.

If \(\vv\) and \(\vu\) are eigenvectors of a linear transformation \(T\) with the same eigenvalue, then \(\vv+\vu\) is also an eigenvector of \(T\) with the same eigenvalue.

(f) True/False.

If \(\lambda\) is an eigenvalue of a linear transformation \(T\text{,}\) then \(\lambda^2\) is an eigenvalue of \(T^2\text{.}\)

(g) True/False.

Let \(S\) and \(T\) be two linear transformations with the same domain and codomain. If \(\vv\) is an eigenvector of both \(S\) and \(T\text{,}\) then \(\vv\) is an eigenvector of \(S+T\text{.}\)

(h) True/False.

Let \(V\) be a vector space and let \(T : V \to V\) be a linear transformation. Then \(T\) has 0 as an eigenvalue if and only if \([T]_{\CB}\) has linearly dependent columns for any basis \(\CB\) of \(V\text{.}\)

Subsection Project: Linear Transformations and Differential Equations

There are many different types of differential equations, but we will focus on differential equations of the form \(my'' = -ky-by'\) presented in the introduction. This is a second order (the highest derivative in the equation is a second order derivative) linear (the unknown function and its derivatives appear only to the first power) differential equation with constant coefficients; equations of this type are also called damped harmonic oscillators.

To solve a differential equation means to find all solutions to the differential equation. That is, find all functions \(y\) that satisfy the differential equation. For example, since \(\frac{d}{dt} t^2 = 2t\text{,}\) we see that \(y = t^2\) satisfies the differential equation \(y' = 2t\text{.}\) But \(t^2\) is not the only solution to this differential equation. In fact, \(y = t^2 + C\) is a solution to \(y' = 2t\) for any scalar \(C\text{.}\) We will see how to represent solutions to the differential equation \(my'' = -ky-by'\) in this project.

The next activity shows that the set of solutions to the linear differential equation \(my'' = -ky-by'\) is a subspace of the vector space \(\F\) of all functions from \(\R\) to \(\R\text{.}\) So we should expect close connections between differential equations and linear algebra, and we will make some of these connections as we proceed.

Project Activity 39.4.

We can represent differential equations using linear transformations. To see how, let \(D\) be the function from the space \(C^1\) of all differentiable real-valued functions to \(\F\) given by \(D(f) = \frac{df}{dt}\text{.}\)

(a)

Show that \(D\) is a linear transformation.

(b)

In order for a function \(f\) to be a solution to a differential equation of the form (39.1), it is necessary for \(f\) to be twice differentiable. We will assume that \(D\) acts only on such functions from this point on. Use the fact that \(D\) is a linear transformation to show that the differential equation \(my'' = -ky-by'\) can be written in the form

\begin{equation*} \left(mD^2 + bD \right)(y) = -ky\text{.} \end{equation*}

Project Activity 39.4 shows that any solution to the differential equation \(my'' = -ky-by'\) is an eigenvector for the linear transformation \(mD^2 + bD\text{.}\) That is, the solutions to the differential equation \(my'' = -ky-by'\) form the eigenspace \(E_{-k}\) of \(mD^2 + bD\) with eigenvalue \(-k\text{.}\) The eigenvectors for a linear transformation acting on a function space are also called eigenfunctions. To build up to solutions to the second order differential equation we have been considering, we start with solutions to the first order equation.

Project Activity 39.5.

Let \(k\) be a scalar. Show that the solutions of the differential equation \(D(y) = -ky\) form a one-dimensional subspace of \(\F\text{.}\) Find a basis for this subspace. Note that this is the eigenspace of the transformation \(D\) corresponding to the eigenvalue \(-k\text{.}\)

Hint.

To find the solutions to \(y' = -ky\text{,}\) write \(y'\) as \(\frac{dy}{dt}\) and express the equation \(y' = -ky\) in the form \(\frac{dy}{dt} = -ky\text{.}\) Divide by \(y\) and integrate with respect to \(t\text{.}\)
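Carrying out the steps in the hint (for a function \(y\) that is never zero) gives

\begin{equation*} \frac{1}{y}\frac{dy}{dt} = -k, \qquad \ln|y| = -kt + C, \qquad y = Ae^{-kt} \end{equation*}

for an arbitrary constant \(A\text{,}\) which suggests that \(\left\{e^{-kt}\right\}\) is a basis for the eigenspace \(E_{-k}\) of \(D\text{.}\)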

Before considering the general second order differential equation, we start with a simpler example.

Project Activity 39.6.

As a specific example of a second order linear equation, as discussed at the beginning of this section, Hooke's law states that if a mass is hanging from a spring, the force acting on the spring is proportional to the displacement of the spring from equilibrium. If we let \(y\) be the displacement of the object from its equilibrium, and ignore any resistance, the position of the mass-spring system can be represented by the differential equation

\begin{equation*} mD^2(y) = -ky\text{,} \end{equation*}

where \(m\) is the mass of the object and \(k\) is a positive constant that depends on the spring. Assuming that the mass is positive, we can divide both sides by \(m\) and rewrite this differential equation in the form

\begin{equation} D^2(y) = -cy\tag{39.3} \end{equation}

where \(c = \frac{k}{m}\text{.}\) So the solutions to the differential equation (39.3) make up the eigenspace \(E_{-c}\) for \(D^2\) with eigenvalue \(-c\text{.}\)

(a)

Since the second derivatives of \(\sin(t)\) and \(\cos(t)\) are scalar multiples of \(\sin(t)\) and \(\cos(t)\text{,}\) it is reasonable to look among functions of this form for solutions to (39.3). Show that \(y_1 = \cos\left(\sqrt{c}\,t\right)\) and \(y_2 = \sin\left(\sqrt{c}\,t\right)\) are both in \(E_{-c}\text{.}\)

(b)

As functions, the cosine and sine are related in many ways (e.g., the Pythagorean Identity). An important property for this application is the linear independence of the cosine and sine. Show, using the definition of linear independence, that the cosine and sine functions are linearly independent in \(\F\text{.}\)

(c)

Part (a) shows that there are at least two different functions in \(E_{-c}\text{.}\) To solve a differential equation is to find all of the solutions to the differential equation; in other words, we want to completely determine the eigenspace \(E_{-c}\text{.}\) We have already seen that any function \(y\) of the form \(y(t) = c_1 \cos\left(\sqrt{c}\,t\right) + c_2 \sin\left(\sqrt{c}\,t\right)\) is a solution to the differential equation \(mD^2(y) = -ky\text{.}\) The theory of linear differential equations tells us that there is a unique solution to \(mD^2(y) = -ky\) if we specify two initial conditions. This means that to show that any solution \(z\) to this differential equation, with initial values \(z(t_0)\) and \(z'(t_0)\) for some scalar \(t_0\text{,}\) is of the form \(y(t) = c_1 \cos\left(\sqrt{c}\,t\right) + c_2 \sin\left(\sqrt{c}\,t\right)\text{,}\) we need to verify that there are values of \(c_1\) and \(c_2\) such that \(y(t_0) = z(t_0)\) and \(y'(t_0) = z'(t_0)\text{.}\) Here we will use this idea to show that any function in \(E_{-c}\) is a linear combination of \(\cos\left(\sqrt{c}\,t\right)\) and \(\sin\left(\sqrt{c}\,t\right)\text{;}\) that is, that these two functions span \(E_{-c}\text{.}\) Let \(y(t) = c_1 \cos\left(\sqrt{c}\,t\right) + c_2 \sin\left(\sqrt{c}\,t\right)\text{.}\) Show that there are values for \(c_1\) and \(c_2\) such that

\begin{equation*} y(0) = z(0) \ \text{ and } \ y'(0) = z'(0)\text{.} \end{equation*}

This result, along with part (b), shows that \(\left\{\cos\left(\sqrt{c}\,t\right), \sin\left(\sqrt{c}\,t\right)\right\}\) is a basis for \(E_{-c}\text{.}\) (Note: That the solutions to differential equation (39.3) involve sines and cosines models the situation that a mass hanging from a spring will oscillate up and down.)
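The membership claim in part (a) can be double-checked symbolically; a SymPy sketch with a symbolic positive parameter \(c\text{:}\)

import sympy as sp

t = sp.symbols('t')
c = sp.symbols('c', positive=True)

for y in (sp.cos(sp.sqrt(c) * t), sp.sin(sp.sqrt(c) * t)):
    # y is in E_{-c} exactly when D^2(y) + c*y = 0
    print(sp.simplify(sp.diff(y, t, 2) + c * y))  # 0 in both cases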

As we saw in Project Activity 39.5, the eigenspace \(E_{-k}\) of the linear transformation \(D\) is one-dimensional. The key idea in Project Activity 39.6 that allowed us to find a basis for the eigenspace of \(D^2\) with eigenvalue \(-c\) is that \(\cos\left(\sqrt{c}\,t\right)\) and \(\sin\left(\sqrt{c}\,t\right)\) are linearly independent eigenfunctions that span \(E_{-c}\text{.}\) We won't prove this result, but the general theory of linear differential equations states that if \(y_1\text{,}\) \(y_2\text{,}\) \(\ldots\text{,}\) \(y_n\) are linearly independent solutions to the \(n\)th order linear differential equation

\begin{equation*} \left(a_nD^n + a_{n-1}D^{n-1} + \cdots + a_1D\right)(y) = -a_0y\text{,} \end{equation*}

then \(\{y_1, y_2, \ldots, y_n\}\) is a basis for the eigenspace of the linear transformation \(a_nD^n + a_{n-1}D^{n-1} + \cdots + a_1D\) with eigenvalue \(-a_0\text{.}\) Any basis for the solution set to a differential equation is called a fundamental set of solutions for the differential equation. Consequently, it is important to be able to determine when a set of functions is linearly independent. One tool for doing so is the Wronskian, which we study in the next activity.

Project Activity 39.7.

Suppose we have \(n\) functions \(f_1\text{,}\) \(f_2\text{,}\) \(\ldots\text{,}\) \(f_n\text{,}\) each with \(n-1\) derivatives. To determine the independence of the functions we must understand the solutions to the equation

\begin{equation} c_1 f_1(t) + c_2f_2(t) + \cdots + c_nf_n(t) = 0\text{.}\tag{39.4} \end{equation}

We can differentiate both sides of Equation (39.4) to obtain the new equation

\begin{equation*} c_1 f'_1(t) + c_2f'_2(t) + \cdots + c_nf'_n(t) = 0\text{.} \end{equation*}

We can continue to differentiate as long as the functions are differentiable to obtain the system

\begin{align*} {f_1(t)}c_1 \amp {}+{} \amp {f_2(t)}c_2 \amp {}+{} \amp \cdots \amp {}+{} \amp {f_n(t)}c_n \amp = \amp \ 0\amp {}\\ {f'_1(t)}c_1 \amp {}+{} \amp {f'_2(t)}c_2 \amp {}+{} \amp \cdots \amp {}+{} \amp {f'_n(t)}c_n \amp = \amp \ 0\amp {}\\ {f''_1(t)}c_1 \amp {}+{} \amp {f''_2(t)}c_2 \amp {}+{} \amp \cdots \amp {}+{} \amp {f''_n(t)}c_n \amp = \amp \ 0\amp {}\\ {} \amp {} \amp \amp \amp \vdots \amp \amp \amp \amp \amp {}\\ {f^{(n-1)}_1(t)}c_1 \amp {}+{} \amp {f^{(n-1)}_2(t)}c_2 \amp {}+{} \amp \cdots \amp {}+{} \amp {f^{(n-1)}_n(t)}c_n \amp = \amp \ 0\amp \end{align*}
(a)

Write this system in matrix form, with coefficient matrix

\begin{equation*} A = \left[ \begin{array}{cccc} f_1(t) \amp f_2(t)\amp \cdots \amp f_n(t) \\ f'_1(t) \amp f'_2(t)\amp \cdots \amp f'_n(t) \\ \vdots \amp \vdots \amp \amp \vdots \\ f^{(n-1)}_1(t) \amp f^{(n-1)}_2(t)\amp \cdots \amp f^{(n-1)}_n(t) \end{array} \right]\text{.} \end{equation*}
(b)

The matrix in part (a) is called the Wronskian matrix of the system. The scalar

\begin{equation*} W(f_1, f_2, \ldots, f_n) = \det(A) \end{equation*}

is called the Wronskian of \(f_1\text{,}\) \(f_2\text{,}\) \(\ldots\text{,}\) \(f_n\text{.}\) What must be true about the Wronskian for our system to have a unique solution? If the system has a unique solution, what is the solution? What does this result tell us about the functions \(f_1\text{,}\) \(f_2\text{,}\) \(\ldots\text{,}\) \(f_n\text{?}\)

(c)

Use the Wronskian to show that the cosine and sine functions are linearly independent.
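For part (c), the Wronskian can be computed by hand or symbolically; a SymPy sketch:

import sympy as sp

t = sp.symbols('t')
f, g = sp.cos(t), sp.sin(t)

# 2x2 Wronskian matrix: functions in the first row, derivatives in the second
W = sp.Matrix([[f, g], [sp.diff(f, t), sp.diff(g, t)]]).det()
print(sp.simplify(W))  # 1, which is nonzero, so cos and sin are linearly independent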

We can apply the Wronskian to help find bases for the eigenspace of the linear transformation \(mD^2 + bD\) with eigenvalue \(-k\text{.}\)

Project Activity 39.8.

The solution to the Hooke's Law differential equation in Project Activity 39.6 indicates that the spring will continue to oscillate forever. In reality, we know that this does not happen. In the non-ideal case, there is always some force (e.g., friction, air resistance, a physical damper as in a piston) that acts to dampen the motion of the spring causing the oscillations to die off. Damping acts to oppose the motion, and we generally assume that the faster an object moves, the higher the damping. For this reason we assume the damping force is proportional to the velocity. That is, the damping force has the form \(-by'\) for some positive constant \(b\text{.}\) This produces the differential equation

\begin{equation} my'' + by' + ky = 0\tag{39.5} \end{equation}

or \(\left(mD^2+bD\right)(y) = -ky\text{.}\) We will find bases for the eigenspace of the linear transformation \(mD^2+bD\) with eigenvalue \(-k\) in this activity.

(a)

Since derivatives of exponential functions are still exponential functions, it seems reasonable to try an exponential function as a solution to (39.5). Show that if \(y = e^{rt}\) for some constant \(r\) is a solution to (39.5), then \(mr^2+br+k = 0\text{.}\) The equation \(mr^2+br+k = 0\) is the characteristic or auxiliary equation for the differential equation.

(b)

Part (a) shows that the solutions to the differential equation (39.5) are exponential functions of the form \(e^{rt}\text{,}\) where \(r\) is a solution to the auxiliary equation. Recall that if we can find two linearly independent solutions to (39.5), then we have found a basis for the eigenspace \(E_{-k}\) of \(mD^2+bD\) with eigenvalue \(-k\text{.}\) The quadratic formula shows that the roots of the auxiliary equation are

\begin{equation*} \frac{-b \pm \sqrt{b^2-4mk}}{2m}\text{.} \end{equation*}

As we will see, our basis depends on the types of roots the auxiliary equation has.

(i)

Assume that the roots \(r_1\) and \(r_2\) of the auxiliary equation are real and distinct. That means that \(y_1 = e^{r_1t}\) and \(y_2 = e^{r_2t}\) are eigenfunctions in \(E_{-k}\text{.}\) Use the Wronskian to show that \(\{y_1, y_2\}\) is a basis for \(E_{-k}\) in this case. Then describe the behavior of an arbitrary eigenfunction in \(E_{-2}\) if \(mD^2+bD = D^2+3D\) and how it relates to damping. Draw a representative solution to illustrate. (In this case we say that the system is overdamped. Such a system does not oscillate; it can cross equilibrium at most once, and then it quickly damps to \(0\text{.}\))

(ii)

Now suppose that we have a repeated real root \(r\) to the auxiliary equation. Then there is only one exponential function \(y_1 = e^{rt}\) in \(E_{-k}\text{.}\) In this case, show that \(y_2 = te^{rt}\) is also in \(E_{-k}\) and that \(\{y_1, y_2\}\) is a basis for \(E_{-k}\text{.}\) Then describe the behavior of an arbitrary eigenfunction in \(E_{-1}\) if \(mD^2+bD = D^2+2D\) and how it relates to damping. Draw a representative solution to illustrate. (In this case we say that the system is critically damped. These systems behave similarly to overdamped systems in that they do not oscillate. However, if the damping is reduced just a little, the system can oscillate.)

(iii)

The last case is when the auxiliary equation has complex roots \(z_1 = u+vi\) and \(z_2 = u-vi\text{.}\) We want to work with real valued functions, so we need to determine real valued solutions from these complex roots. To resolve this problem, we note that if \(x\) is a real number, then \(e^{ix} = \cos(x) + i \sin(x)\text{.}\) So

\begin{equation*} e^{(u+vi)t} = e^{ut}e^{ivt} = e^{ut} \cos(vt) + e^{ut}\sin(vt)i\text{.} \end{equation*}

Show that \(\{e^{ut} \cos(vt), e^{ut} \sin(vt)\}\) is a basis for \(E_{-k}\) in this case. Then describe the behavior of an arbitrary eigenfunction in \(E_{-5}\) if \(mD^2+bD = D^2+2D\) and how it relates to damping. Draw a representative solution to illustrate. (In this case we say that the system is underdamped. These systems typically exhibit some oscillation.)

Project Activity 39.7 tells us that if \(W(f_1, f_2, \ldots, f_n)\) is not zero, then \(f_1\text{,}\) \(f_2\text{,}\) \(\ldots\text{,}\) \(f_n\) are linearly independent. You might wonder what conclusion we can draw if \(W(f_1, f_2, \ldots, f_n)\) is zero.

Project Activity 39.9.

In this activity we consider the Wronskian of two different pairs of functions.

(a)

Calculate \(W\left(t, 2t\right)\text{.}\) Are \(t\) and \(2t\) linearly independent or dependent? Explain.

(b)

Now let \(f(t) = t|t|\) and \(g(t) = t^2\text{.}\)

(i)

Calculate \(f'(t)\) and \(g'(t)\text{.}\)

Hint.

Recall that \(|x| = \begin{cases}x \amp \text{ if } x \geq 0 \\ -x \amp \text{ if } x \lt 0. \end{cases}\)

(ii)

Calculate \(W\left(f, g \right)\text{.}\) Are \(f\) and \(g\) linearly independent or dependent in \(\F\text{?}\) Explain.

Hint.

Consider the cases when \(t \geq 0\) and \(t \lt 0\text{.}\)
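For a symbolic cross-check of the Wronskians in parts (a) and (b) (a SymPy sketch; here \(t|t|\) is encoded as a Piecewise expression):

import sympy as sp

t = sp.symbols('t', real=True)

# Part (a): W(t, 2t)
W1 = sp.Matrix([[t, 2*t], [1, 2]]).det()
print(sp.simplify(W1))  # 0

# Part (b): f(t) = t|t| as a Piecewise expression, g(t) = t^2
f = sp.Piecewise((t**2, t >= 0), (-t**2, True))
g = t**2
W2 = sp.Matrix([[f, g], [sp.diff(f, t), sp.diff(g, t)]]).det()
print(sp.simplify(sp.piecewise_fold(W2)))  # 0 on both branches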

(iii)

What conclusion can we draw about the functions \(f_1\text{,}\) \(f_2\text{,}\) \(\ldots\text{,}\) \(f_n\) if \(W(f_1,f_2, \ldots, f_n)\) is zero? Explain.