Section 14 Eigenspaces of a Matrix

Subsection Application: Population Dynamics

The study of population dynamics β€” how and why people move from one place to another β€” is important to economists. The movement of people corresponds to the movement of money, and money makes the economy go. As an example, we might consider a simple model of population migration to and from the state of Michigan.

According to the Michigan Department of Technology, Management, and Budget, from 2011 to 2012 approximately 0.05% of the U.S. population outside of Michigan moved to the state of Michigan, while approximately 2% of Michigan's population moved out of Michigan. A reasonable question to ask about this situation is: if these numbers don't change, what is the long-term distribution of the U.S. population inside and outside of Michigan (under the assumption that the total U.S. population doesn't change)? The answer to this question involves eigenvalues and eigenvectors of a matrix. More details can be found later in this section.

Subsection Introduction

Preview Activity 14.1.

Consider the matrix transformation T from R2 to R2 defined by T(x)=Ax, where

A=\begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix}.

We are interested in understanding what this matrix transformation does to vectors in R2. The matrix A has eigenvalues Ξ»1=2 and Ξ»2=4 with corresponding eigenvectors v1=\begin{bmatrix} -1 \\ 1 \end{bmatrix} and v2=\begin{bmatrix} 1 \\ 1 \end{bmatrix}.

(a)

Explain why v1 and v2 are linearly independent.

(b)

Explain why any vector b in R2 can be written uniquely as a linear combination of v1 and v2.

(c)

We now consider the action of the matrix transformation T on a linear combination of v1 and v2. Explain why

(14.1)T(c1v1+c2v2)=2c1v1+4c2v2.

Equation (14.1) illustrates that it would be convenient to view the action of T in the coordinate system where Span{v1} serves as the x-axis and Span{v2} as the y-axis. In this case, we can visualize that when we apply the transformation T to a vector b=c1v1+c2v2 in R2, the result is an output vector that is scaled by a factor of 2 in the v1 direction and by a factor of 4 in the v2 direction. For example, consider the box with vertices at (0,0), v1, v2, and v1+v2 as shown at left in Figure 14.1. The transformation T stretches this box by a factor of 2 in the v1 direction and a factor of 4 in the v2 direction as illustrated at right in Figure 14.1. In this situation, the eigenvalues and eigenvectors provide the most convenient perspective through which to visualize the action of the transformation T. Here, Span{v1} and Span{v2} are the eigenspaces of the matrix A.
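The coordinate-wise scaling in equation (14.1) can also be checked numerically. The following is a brief NumPy sketch using the matrix and eigenvectors given above:

```python
import numpy as np

# The matrix and eigenvectors from Preview Activity 14.1
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
v1 = np.array([-1.0, 1.0])  # eigenvector for eigenvalue 2
v2 = np.array([1.0, 1.0])   # eigenvector for eigenvalue 4

# A scales each eigenvector by its eigenvalue
print(A @ v1)  # equals 2*v1
print(A @ v2)  # equals 4*v2

# A linear combination b = c1 v1 + c2 v2 is scaled coordinate-wise,
# as in equation (14.1)
c1, c2 = 3.0, 5.0
b = c1 * v1 + c2 * v2
print(np.allclose(A @ b, 2 * c1 * v1 + 4 * c2 * v2))  # True
```

The particular weights c1 and c2 are arbitrary choices; any pair gives the same coordinate-wise scaling.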

Figure 14.1. A box and a transformed box.

This geometric perspective illustrates how the eigenspace of each eigenvalue of A tells us something important about A. In this section we explore the idea of eigenvalues and the spaces defined by eigenvectors in more detail.

Subsection Eigenspaces of a Matrix

Recall that the eigenvectors of an nΓ—n matrix A satisfy the equation

Ax=Ξ»x

for some scalar Ξ». Equivalently, the eigenvectors of A with eigenvalue Ξ» satisfy the equation

(Aβˆ’Ξ»In)x=0.

In other words, the eigenvectors for A with eigenvalue Ξ» are the non-zero vectors in Nul (Aβˆ’Ξ»In). Recall that the null space of an nΓ—n matrix is a subspace of Rn. In Preview Activity 14.1 we saw how these subspaces provide a convenient coordinate system through which to view a matrix transformation. These special null spaces are called eigenspaces.

Definition 14.2.

Let A be an nΓ—n matrix with eigenvalue Ξ». The eigenspace for A corresponding to Ξ» is the null space of Aβˆ’Ξ»In.

Activity 14.2.

The matrix A=\begin{bmatrix} 2 & 0 & 1 \\ 0 & 2 & -1 \\ 0 & 0 & 1 \end{bmatrix} has two distinct eigenvalues.

(a)

Find a basis for the eigenspace of A corresponding to the eigenvalue Ξ»1=1. In other words, find a basis for Nul Aβˆ’I3.

(b)

Find a basis for the eigenspace of A corresponding to the eigenvalue Ξ»2=2.

(c)

Is it true that if v1 and v2 are two distinct eigenvectors for A, that v1 and v2 are linearly independent? Explain.

(d)

Is it possible to have two linearly independent eigenvectors corresponding to the same eigenvalue?

(e)

Is it true that if v1 and v2 are two distinct eigenvectors corresponding to different eigenvalues for A, that v1 and v2 are linearly independent? Explain.

If we know an eigenvalue Ξ» of an nΓ—n matrix A, Activity 14.2 shows us how to find a basis for the corresponding eigenspace β€” just row reduce Aβˆ’Ξ»In to find a basis for Nul Aβˆ’Ξ»In. To this point we have always been given eigenvalues for our matrices, and have not seen how to find these eigenvalues. That process will come a bit later. For now, we just want to become more familiar with eigenvalues and eigenvectors. The next activity should help connect eigenvalues to ideas we have discussed earlier.
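As a sketch of this row-reduction idea in code: a basis for Nul (Aβˆ’Ξ»In) can be computed numerically from the singular value decomposition of Aβˆ’Ξ»In, since the right singular vectors belonging to zero singular values span the null space. Here we reuse the matrix from Preview Activity 14.1 with Ξ»=2; the zero tolerance is an assumption of the sketch:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
lam = 2.0
M = A - lam * np.eye(2)

# Right singular vectors with (numerically) zero singular value span Nul M
_, s, Vt = np.linalg.svd(M)
tol = 1e-10  # assumed tolerance for treating a singular value as zero
basis = Vt[s < tol].T  # columns form a basis for the eigenspace

print(basis.shape[1])             # dimension of the eigenspace: 1
print(np.allclose(M @ basis, 0))  # basis vectors lie in Nul(A - 2I): True
```

The basis vector returned is a unit multiple of the eigenvector (βˆ’1,1) found by hand.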

Activity 14.3.

Let A be an nΓ—n matrix with eigenvalue Ξ».

(a)

How many solutions does the equation (Aβˆ’Ξ»In)x=0 have? Explain.

(b)

Can Aβˆ’Ξ»In have a pivot in every column? Why or why not?

(c)

Can Aβˆ’Ξ»In have a pivot in every row? Why or why not?

(d)

Can the columns of Aβˆ’Ξ»In be linearly independent? Why or why not?

Subsection Linearly Independent Eigenvectors

An important question we will want to answer about a matrix is how many linearly independent eigenvectors the matrix has. Activity 14.2 shows that eigenvectors for the same eigenvalue may be linearly dependent or independent, but all of our examples so far seem to indicate that eigenvectors corresponding to different eigenvalues are linearly independent. This turns out to be universally true, as our next theorem demonstrates. The next activity should help prepare us for the proof of this theorem.

Activity 14.4.

Let Ξ»1 and Ξ»2 be distinct eigenvalues of a matrix A with corresponding eigenvectors v1 and v2. The goal of this activity is to demonstrate that v1 and v2 are linearly independent. To prove that v1 and v2 are linearly independent, suppose that

(14.2)x1v1+x2v2=0.
(a)

Multiply both sides of equation (14.2) on the left by the matrix A and show that

(14.3)x1Ξ»1v1+x2Ξ»2v2=0.
(b)

Now multiply both sides of equation (14.2) by the scalar Ξ»1 and show that

(14.4)x1Ξ»1v1+x2Ξ»1v2=0.
(c)

Combine equations (14.3) and (14.4) to obtain the equation

(14.5)x2(Ξ»2βˆ’Ξ»1)v2=0.
(d)

Explain how we can conclude that x2=0. Why does it follow that x1=0? What does this tell us about v1 and v2?

Activity 14.4 contains the basic elements of the proof of the next theorem.

Theorem 14.3.

Let A be an nΓ—n matrix with distinct eigenvalues Ξ»1, Ξ»2, …, Ξ»k and corresponding eigenvectors v1, v2, …, vk. Then v1, v2, …, vk are linearly independent.

Proof.

Let A be a matrix with k distinct eigenvalues Ξ»1, Ξ»2, …, Ξ»k and corresponding eigenvectors v1, v2, …, vk. To understand why v1, v2, …, vk are linearly independent, we will argue by contradiction and suppose that the vectors v1, v2, …, vk are linearly dependent. Note that v1 cannot be the zero vector (why?), so the set S1={v1} is linearly independent. If we include v2 into this set, the set S2={v1,v2} may be linearly independent or dependent. If S2 is linearly independent, then the set S3={v1,v2,v3} may be linearly independent or dependent. We can continue adding additional vectors until we reach the set Sk={v1,v2,v3,…,vk} which we are assuming is linearly dependent. So there must be a smallest integer mβ‰₯2 such that the set Sm is linearly dependent while Smβˆ’1 is linearly independent. Since Sm={v1,v2,v3,…,vm} is linearly dependent, there is a linear combination of v1, v2, …, vm with weights not all 0 that is the zero vector. Let c1, c2, …, cm be such weights, not all zero, so that

(14.6)c1v1+c2v2+β‹―+cmβˆ’1vmβˆ’1+cmvm=0

If we multiply both sides of (14.6) on the left by the matrix A we obtain

A(c1v1+c2v2+β‹―+cmvm)=A0
c1Av1+c2Av2+β‹―+cmAvm=0
(14.7)c1Ξ»1v1+c2Ξ»2v2+β‹―+cmΞ»mvm=0.

If we multiply both sides of (14.6) by Ξ»m we obtain the equation

(14.8)c1Ξ»mv1+c2Ξ»mv2+β‹―+cmΞ»mvm=0.

Subtracting corresponding sides of equation (14.8) from (14.7) gives us

(14.9)c1(Ξ»1βˆ’Ξ»m)v1+c2(Ξ»2βˆ’Ξ»m)v2+β‹―+cmβˆ’1(Ξ»mβˆ’1βˆ’Ξ»m)vmβˆ’1=0.

Recall that Smβˆ’1 is a linearly independent set, so the only way a linear combination of vectors in Smβˆ’1 can be 0 is if all of the weights are 0. Therefore, we must have

c1(Ξ»1βˆ’Ξ»m)=0,  c2(Ξ»2βˆ’Ξ»m)=0,  β€¦,  cmβˆ’1(Ξ»mβˆ’1βˆ’Ξ»m)=0.

Since the eigenvalues are all distinct, this can only happen if

c1=c2=β‹―=cmβˆ’1=0.

But equation (14.6) then implies that cm=0 and so all of the weights c1, c2, …, cm are 0. However, when we assumed that the eigenvectors v1, v2, …, vk were linearly dependent, this led to having at least one of the weights c1, c2, …, cm be nonzero. This cannot happen, so our assumption that the eigenvectors v1, v2, …, vk were linearly dependent must be false and we conclude that the eigenvectors v1, v2, …, vk are linearly independent.
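The result just proved can be illustrated numerically: for a matrix with distinct eigenvalues, the matrix whose columns are the eigenvectors has full rank. This sketch uses an assumed triangular example so the eigenvalues can be read off the diagonal:

```python
import numpy as np

# An assumed example: upper triangular, so the eigenvalues 2, 3, 5
# are the diagonal entries and are distinct
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 5.0]])
vals, vecs = np.linalg.eig(A)  # columns of vecs are eigenvectors

print(sorted(vals))                 # three distinct eigenvalues
print(np.linalg.matrix_rank(vecs))  # 3: the eigenvectors are linearly independent
```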

Subsection Examples

What follows are worked examples that use the concepts from this section.

Example 14.4.

Let A=\begin{bmatrix} 4 & -3 & -3 \\ -3 & 4 & 3 \\ 3 & -3 & -2 \end{bmatrix} and let T be the matrix transformation defined by T(x)=Ax.

(a)

Show that 4 is an eigenvalue for A and find a basis for the corresponding eigenspace of A.

Solution.

Recall that Ξ» is an eigenvalue of A if Aβˆ’Ξ»I3 is not invertible. To show that 4 is an eigenvalue for A we row reduce the matrix

Aβˆ’4I3=\begin{bmatrix} 0 & -3 & -3 \\ -3 & 0 & 3 \\ 3 & -3 & -6 \end{bmatrix}

to \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}. Since the third column of Aβˆ’4I3 is not a pivot column, the matrix Aβˆ’4I3 is not invertible. We conclude that 4 is an eigenvalue of A. The eigenspace of A for the eigenvalue 4 is Nul (Aβˆ’4I3). The reduced row echelon form of Aβˆ’4I3 shows that if x=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} and (Aβˆ’4I3)x=0, then x3 is free, x2=βˆ’x3, and x1=x3. Thus,

x=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}=\begin{bmatrix} x_3 \\ -x_3 \\ x_3 \end{bmatrix}=x_3\begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}.

Therefore, {\begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}} is a basis for the eigenspace of A corresponding to the eigenvalue 4.

(b)

Geometrically describe the eigenspace of A corresponding to the eigenvalue 4. Explain what the transformation T does to this eigenspace.

Solution.

Since the eigenspace of A corresponding to the eigenvalue 4 is the span of the single nonzero vector v=\begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}, this eigenspace is the line in R3 through the origin and the point (1,βˆ’1,1). Any vector in this eigenspace has the form cv for some scalar c. Notice that

T(cv)=Acv=cAv=4cv,

so T expands any vector in this eigenspace by a factor of 4.

(c)

Show that 1 is an eigenvalue for A and find a basis for the corresponding eigenspace of A.

Solution.

To show that 1 is an eigenvalue for A we row reduce the matrix

Aβˆ’I3=\begin{bmatrix} 3 & -3 & -3 \\ -3 & 3 & 3 \\ 3 & -3 & -3 \end{bmatrix}

to \begin{bmatrix} 1 & -1 & -1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. Since the third column of Aβˆ’I3 is not a pivot column, the matrix Aβˆ’I3 is not invertible. We conclude that 1 is an eigenvalue of A. The eigenspace of A for the eigenvalue 1 is Nul (Aβˆ’I3). The reduced row echelon form of Aβˆ’I3 shows that if x=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} and (Aβˆ’I3)x=0, then x2 and x3 are free, and x1=x2+x3. Thus,

x=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}=\begin{bmatrix} x_2+x_3 \\ x_2 \\ x_3 \end{bmatrix}=x_2\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}+x_3\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}.

Therefore, {\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix},\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}} is a basis for the eigenspace of A corresponding to the eigenvalue 1.

(d)

Geometrically describe the eigenspace of A corresponding to the eigenvalue 1. Explain what the transformation T does to this eigenspace.

Solution.

Since the eigenspace of A corresponding to the eigenvalue 1 is the span of the two linearly independent vectors v1=\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} and v2=\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, this eigenspace is the plane in R3 through the origin and the points (1,1,0) and (1,0,1). Any vector in this eigenspace has the form av1+bv2 for some scalars a and b. Notice that

T(av1+bv2)=A(av1+bv2)=aAv1+bAv2=av1+bv2,

so T fixes every vector in this plane.
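The two eigenspace computations in Example 14.4 can be double-checked numerically; a brief NumPy sketch:

```python
import numpy as np

# The matrix from Example 14.4
A = np.array([[ 4.0, -3.0, -3.0],
              [-3.0,  4.0,  3.0],
              [ 3.0, -3.0, -2.0]])

# Eigenvalue 4: vectors on the line through (1, -1, 1) are stretched by 4
v = np.array([1.0, -1.0, 1.0])
print(np.allclose(A @ v, 4 * v))  # True

# Eigenvalue 1: the plane spanned by (1, 1, 0) and (1, 0, 1) is fixed by T
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, 0.0, 1.0])
print(np.allclose(A @ v1, v1))    # True
print(np.allclose(A @ v2, v2))    # True
```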

Example 14.5.

(a)

Let A=\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}. Note that the vector v=\begin{bmatrix} 1 \\ 1 \end{bmatrix} satisfies Av=3v.

(i)

Show that v is an eigenvector of A2. What is the corresponding eigenvalue?

Solution.

We use the fact that v is an eigenvector of the matrix A with eigenvalue 3.

We have that

A2v=A(Av)=A(3v)=3(Av)=3(3v)=9v.

So v is an eigenvector of A2 with eigenvalue 9=32.

(ii)

Show that v is an eigenvector of A3. What is the corresponding eigenvalue?

Solution.

We use the fact that v is an eigenvector of the matrix A with eigenvalue 3.

We have that

A3v=A(A2v)=A(9v)=9(Av)=9(3v)=27v.

So v is an eigenvector of A3 with eigenvalue 27=33.

(iii)

Show that v is an eigenvector of A4. What is the corresponding eigenvalue?

Solution.

We use the fact that v is an eigenvector of the matrix A with eigenvalue 3.

We have that

A4v=A(A3v)=A(27v)=27(Av)=27(3v)=81v.

So v is an eigenvector of A4 with eigenvalue 81=34.

(iv)

If k is a positive integer, do you expect that v is an eigenvector of Ak? If so, what do you think is the corresponding eigenvalue?

Solution.

We use the fact that v is an eigenvector of the matrix A with eigenvalue 3.

The results of the previous parts of this example indicate that Akv=3kv, or that v is an eigenvector of Ak with corresponding eigenvalue 3k.
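The pattern Akv=3kv observed above can be confirmed for several powers at once; a small sketch:

```python
import numpy as np

A = np.array([[1, 2],
              [2, 1]])
v = np.array([1, 1])

# v is an eigenvector of A^k with eigenvalue 3^k
for k in range(1, 6):
    Ak = np.linalg.matrix_power(A, k)
    print(k, Ak @ v, 3**k * v)  # the two vectors agree for every k
```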

(b)

The result of part (a) is true in general. Let M be an nΓ—n matrix with eigenvalue Ξ» and corresponding eigenvector x.

(i)

Show that Ξ»2 is an eigenvalue of M2 with eigenvector x.

Solution.

Let M be an nΓ—n matrix with eigenvalue Ξ» and corresponding eigenvector x.

We have that

M2x=M(Mx)=M(Ξ»x)=Ξ»(Mx)=Ξ»(Ξ»x)=Ξ»2x.

So x is an eigenvector of M2 with eigenvalue Ξ»2.

(ii)

Show that Ξ»3 is an eigenvalue of M3 with eigenvector x.

Solution.

Let M be an nΓ—n matrix with eigenvalue Ξ» and corresponding eigenvector x.

We have that

M3x=M(M2x)=M(Ξ»2x)=Ξ»2(Mx)=Ξ»2(Ξ»x)=Ξ»3x.

So x is an eigenvector of M3 with eigenvalue Ξ»3.

(iii)

Suppose that Ξ»k is an eigenvalue of Mk with eigenvector x for some integer kβ‰₯1. Show then that Ξ»k+1 is an eigenvalue of Mk+1 with eigenvector x. This argument shows that Ξ»k is an eigenvalue of Mk with eigenvector x for any positive integer k.

Solution.

Let M be an nΓ—n matrix with eigenvalue Ξ» and corresponding eigenvector x.

Assume that Mkx=Ξ»kx. Then

Mk+1x=M(Mkx)=M(Ξ»kx)=Ξ»k(Mx)=Ξ»k(Ξ»x)=Ξ»k+1x.

So x is an eigenvector of Mk+1 with eigenvalue Ξ»k+1.

(c)

We now investigate the eigenvalues of a special type of matrix.

(i)

Let B=\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}. Show that B3=0. (A square matrix M is nilpotent if Mk=0 for some positive integer k, so B is an example of a nilpotent matrix.) What are the eigenvalues of B? Explain.

Solution.

Straightforward calculations show that B3=0. Since B is an upper triangular matrix, the eigenvalues of B are the entries on the diagonal. That is, the only eigenvalue of B is 0.

(ii)

Show that the only eigenvalue of a nilpotent matrix is 0.

Solution.

Assume that M is a nilpotent matrix. Suppose that Ξ» is an eigenvalue of M with corresponding eigenvector v. Since M is a nilpotent matrix, there is a positive integer k such that Mk=0. But Ξ»k is an eigenvalue of Mk with eigenvector v. The only eigenvalue of the zero matrix is 0, so Ξ»k=0. This implies that Ξ»=0. We conclude that the only eigenvalue of a nilpotent matrix is 0.
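Both facts, that B3=0 and that 0 is the only eigenvalue of B, can be checked numerically; a sketch:

```python
import numpy as np

# The nilpotent matrix B from part (i)
B = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

# B is nilpotent: B^3 is the zero matrix
print(np.linalg.matrix_power(B, 3))

# Every eigenvalue of B is 0
print(np.linalg.eigvals(B.astype(float)))
```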

Subsection Summary

  • An eigenspace of an nΓ—n matrix A corresponding to an eigenvalue Ξ» of A is the null space of Aβˆ’Ξ»In.

  • To find a basis for an eigenspace of a matrix A corresponding to an eigenvalue Ξ», we row reduce Aβˆ’Ξ»In and find a basis for Nul Aβˆ’Ξ»In.

  • Eigenvectors corresponding to different eigenvalues are always linearly independent.

Exercises Exercises

1.

For each of the following, find a basis for the eigenspace of the indicated matrix corresponding to the given eigenvalue.

(a)

\begin{bmatrix} 10 & 7 \\ -14 & -11 \end{bmatrix} with eigenvalue 3

(b)

\begin{bmatrix} 11 & 18 \\ -3 & -4 \end{bmatrix} with eigenvalue 2

(c)

\begin{bmatrix} 2 & 1 \\ -1 & 0 \end{bmatrix} with eigenvalue 1

(d)

\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 2 \\ 1 & 0 & 2 \end{bmatrix} with eigenvalue 2

(e)

\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 2 \\ 1 & 0 & 2 \end{bmatrix} with eigenvalue 1

(f)

\begin{bmatrix} 2 & 2 & 4 \\ 1 & 1 & 2 \\ 3 & 3 & 6 \end{bmatrix} with eigenvalue 0

2.

Suppose A is an invertible matrix.

(a)

Use the definition of an eigenvalue and an eigenvector to algebraically explain why if Ξ» is an eigenvalue of A, then Ξ»βˆ’1 is an eigenvalue of Aβˆ’1.

(b)

To provide an alternative explanation to the result in the previous part, let v be an eigenvector of A corresponding to Ξ». Consider the matrix transformation TA corresponding to A and TAβˆ’1 corresponding to Aβˆ’1. Considering what happens to v if TA and then TAβˆ’1 are applied, describe why this justifies v is also an eigenvector of Aβˆ’1.

3.

If A=\begin{bmatrix} 0 & 1 \\ a & b \end{bmatrix} has two eigenvalues 4 and 6, what are the values of a and b?

4.

(a)

What are the eigenvalues of the identity matrix I2? Describe each eigenspace.

(b)

Now let n>2 be a positive integer. What are the eigenvalues of the identity matrix In? Describe each eigenspace.

5.

(a)

What are the eigenvalues of the 2Γ—2 zero matrix (the matrix all of whose entries are 0)? Describe each eigenspace.

(b)

Now let n>2 be a positive integer. What are the eigenvalues of the nΓ—n zero matrix? Describe each eigenspace.

6.

Label each of the following statements as True or False. Provide justification for your response.

(a) True/False.

If Av=Ξ»v, then Ξ» is an eigenvalue of A with eigenvector v.

(b) True/False.

The scalar Ξ» is an eigenvalue of a square matrix A if and only if the equation (Aβˆ’Ξ»In)x=0 has a nontrivial solution.

(c) True/False.

If Ξ» is an eigenvalue of a matrix A, then there is only one nonzero vector v with Av=Ξ»v.

(d) True/False.

The eigenspace of an eigenvalue of an nΓ—n matrix A is the same as Nul (Aβˆ’Ξ»In).

(e) True/False.

If v1 and v2 are eigenvectors of a matrix A corresponding to the same eigenvalue Ξ», then v1+v2 is also an eigenvector of A.

(f) True/False.

If v1 and v2 are eigenvectors of a matrix A, then v1+v2 is also an eigenvector of A.

(g) True/False.

If v is an eigenvector of an invertible matrix A, then v is also an eigenvector of Aβˆ’1.

Subsection Project: Modeling Population Migration

As introduced earlier, data from the Michigan Department of Technology, Management, and Budget shows that from 2011 to 2012, approximately 0.05% of the U.S. population outside of Michigan moved to the state of Michigan, while approximately 2% of Michigan's population moved out of Michigan. We are interested in determining the long-term distribution of population in Michigan.

Let xn=\begin{bmatrix} m_n \\ u_n \end{bmatrix} be the 2Γ—1 vector where mn is the population of Michigan and un is the U.S. population outside of Michigan in year n. Assume that we start our analysis at generation 0 with x0=\begin{bmatrix} m_0 \\ u_0 \end{bmatrix}.

Project Activity 14.5.

(a)

Explain how the data above shows that

m1=0.98m0+0.0005u0
u1=0.02m0+0.9995u0
(b)

Identify the matrix A such that x1=Ax0.

Once we have the equation x1=Ax0, we can extend it to subsequent years:

x2=Ax1,    x3=Ax2,    β€¦,    xn+1=Axn

for each nβ‰₯0.
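The recursion xn+1=Axn is easy to iterate in code. The sketch below uses an assumed initial split of a 330-million total population (the exact split is a placeholder); after many years the distribution settles down, no matter how the total is split at the start:

```python
import numpy as np

# Transition matrix: each year 98% of Michigan's population stays (2% leaves),
# and 0.05% of the rest of the U.S. moves in (each column sums to 1)
A = np.array([[0.98,  0.0005],
              [0.02,  0.9995]])

# Assumed initial populations in millions: [Michigan, rest of U.S.]
x = np.array([10.0, 320.0])

for year in range(2000):
    x = A @ x

print(x)                      # the long-run distribution
print(x.sum())                # total population is (essentially) unchanged: ~330
print(np.allclose(A @ x, x))  # x is numerically a steady state: True
```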

This example illustrates the general nature of what is called a Markov process (see Definition 9.4). Recall that the matrix A that provides the link from one generation to the next is called the transition matrix.

In situations like these, we are interested in determining if there is a steady-state vector, that is a vector that satisfies

(14.10)x=Ax.

Such a vector would show us the long-term population of Michigan provided the population dynamics do not change.

Project Activity 14.6.

(a)

Explain why a steady-state solution to (14.10) is an eigenvector of A. What is the corresponding eigenvalue?

(b)

Consider again the transition matrix A from Project Activity 14.5. Recall that the solutions to equation (14.10) are all the vectors in Nul (Aβˆ’I2). In other words, the eigenvectors of A for this eigenvalue are the nonzero vectors in Nul (Aβˆ’I2). Find a basis for the eigenspace of A corresponding to this eigenvalue. Use whatever technology is appropriate.

(c)

Once we know a basis for the eigenspace of the transition matrix A, we can use it to estimate the steady-state population of Michigan (assuming the stated migration trends are valid long-term). According to the US Census Bureau, the resident US population on December 1, 2019 was 330,073,471. Assuming no population growth in the U.S., what would the long-term population of Michigan be? How realistic do you think this is?

census.gov/data/tables/time-series/demo/popest/2010s-national-total.html