Section 13 The Null Space and Column Space of a Matrix
Focus Questions
By the end of this section, you should be able to give precise and thorough answers to the questions listed below. You may want to keep these questions in mind to focus your thoughts as you complete the section.
What is the null space of a matrix?
What is the column space of a matrix?
What important structure do the null space and column space of a matrix have?
What is the kernel of a matrix transformation?
How is the kernel of a matrix transformation \(T\) defined by \(T(\vx) = A\vx\) related to the null space of \(A\text{?}\)
What is the range of a matrix transformation?
How is the range of a matrix transformation \(T\) defined by \(T(\vx) = A\vx\) related to the column space of \(A\text{?}\)
How do we find a basis for \(\Nul A\text{?}\)
How do we find a basis for \(\Col A\text{?}\)
Subsection Application: The Lights Out Game
Lights Out (LO) is a commercial game released by Tiger Toys in 1995 (later bought out by Hasbro). The game consists of a \(5 \times 5\) grid in which each square is either lit or unlit. Pressing a square changes the status of that square and of all the squares directly to its left, right, up, or down. The player's job is to turn all the lights out. You can play a sample game at geogebra.org/m/wcmctahp. Later in this section we will use linear algebra to uncover a method for solving any solvable Lights Out game. Column spaces and null spaces play important roles in this method.
Subsection Introduction
Recall that a subspace of \(\R^n\) is a subset of \(\R^n\) which is a vector space in itself. More specifically, a subset \(W\) of \(\R^n\) is a subspace of \(\R^n\) if
whenever \(\vu\) and \(\vv\) are in \(W\) it is also true that \(\vu + \vv\) is in \(W\) (that is, \(W\) is closed under addition),
whenever \(\vu\) is in \(W\) and \(a\) is a scalar it is also true that \(a\vu\) is in \(W\) (that is, \(W\) is closed under scalar multiplication),
\(\vzero\) is in \(W\text{.}\)
Given a matrix \(A\text{,}\) there are several subspaces that are connected to \(A\text{.}\) Two specific such subspaces are the null space of \(A\) and the column space of \(A\text{.}\) We will see that these subspaces provide answers to the big questions we have been considering since the beginning of the semester, such as “Do the columns of \(A\) span \(\R^m\text{?}\)” “Are the columns of \(A\) linearly independent?” “Is the transformation \(T\) defined by matrix multiplication by \(A\) one-to-one?” “Is the transformation \(T\) onto?”
In this preview activity, we start examining the null space.
Preview Activity 13.1.
(a)
Let \(A = \left[ \begin{array}{ccc} 2\amp 1\amp 3\\ 1\amp 1\amp 4 \end{array} \right]\text{.}\)
(i)
Find the general solution to the homogeneous equation \(A \vx = \vzero\text{.}\) Write your solutions in parametric vector form. (Recall that the parametric vector form expresses the solutions to an equation as linear combinations of vectors with free variables as the weights. An example would be \(x_3 \left[ \begin{array}{r} 1\\0\\-1\\0 \end{array} \right] + x_4 \left[ \begin{array}{r} -2\\1\\0\\1 \end{array} \right]\text{.}\))
(ii)
Find two specific solutions \(\vx_1\) and \(\vx_2\) to the homogeneous equation \(A \vx = \vzero\text{.}\) Is \(\vx_1 + \vx_2\) a solution to \(A \vx = \vzero\text{?}\) Explain.
(iii)
Is \(3\vx_1\) a solution to \(A \vx = \vzero\text{?}\) Explain.
(iv)
Is \(\vzero\) a solution to \(A \vx = \vzero\text{?}\)
(v)
What does the above seem to indicate about the set of solutions to the homogeneous system \(A \vx = \vzero\text{?}\)
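If you would like to check your parametric vector form with software, the following sketch uses SymPy (one possible choice of tool; the text does not prescribe one) to compute a basis for the solution set of \(A\vx = \vzero\text{:}\)

```python
from sympy import Matrix

# The matrix from part (a)
A = Matrix([[2, 1, 3],
            [1, 1, 4]])

# nullspace() returns a list of vectors whose linear combinations
# (with the free variables as weights) give every solution of A x = 0
basis = A.nullspace()

for v in basis:
    assert A * v == Matrix([0, 0])  # each basis vector solves A x = 0
```

With one free variable here, the list contains a single vector, and the general solution consists of all scalar multiples of it.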
(b)
Let \(A\) be an \(m \times n\) matrix. As part (a) suggests, the set of solutions to a homogeneous matrix-vector equation \(A \vx = \vzero\) appears to be a subspace. We give a special name to this set.
Definition 13.1.
The null space of an \(m \times n\) matrix \(A\) is the set of all solutions to \(A \vx = \vzero\text{.}\) We denote the null space of \(A\) by \(\Nul A\text{.}\)
Note that since \(A\vx=\vzero\) corresponds to a homogeneous system of linear equations, \(\Nul A\) also represents the solution set of a homogeneous system. Let \(A = \left[ \begin{array}{cccc} 2\amp 1\amp 3\amp 0\\ 1\amp 1\amp 4\amp 1 \end{array} \right]\text{.}\) Find all vectors in \(\Nul A\text{.}\)
(c)
So far we considered specific examples of null spaces. But what are the properties of a null space in general? Let \(A\) be an arbitrary \(m \times n\) matrix.
(i)
The null space of an \(m \times n\) matrix is a subset of \(\R^k\) for some integer \(k\text{.}\) What is \(k\text{?}\)
(ii)
Now suppose \(\vu\) and \(\vv\) are two vectors in \(\Nul A\text{.}\) By definition, that means \(A\vu=\vzero\text{,}\) \(A\vv=\vzero\text{.}\) Use properties of the matrix-vector product to show that \(\vu + \vv\) is also in \(\Nul A\text{.}\)
(iii)
Now suppose \(\vu\) is a vector in \(\Nul A\) and \(a\) is a scalar. Explain why \(a\vu\) is also in \(\Nul A\text{.}\)
(iv)
Explain why \(\Nul A\) is a subspace of \(\R^n\text{.}\)
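The closure computations in this activity can be mirrored numerically. A small SymPy sketch (using the matrix from part (b); any matrix would illustrate the same point):

```python
from sympy import Matrix

# The matrix from part (b); any matrix would illustrate the same point
A = Matrix([[2, 1, 3, 0],
            [1, 1, 4, 1]])
zero = Matrix([0, 0])

u, v = A.nullspace()               # two basis vectors for Nul A
assert A * u == zero and A * v == zero

# Closed under addition: A(u + v) = Au + Av = 0 + 0 = 0
assert A * (u + v) == zero

# Closed under scalar multiplication: A(a u) = a(A u) = a 0 = 0
assert A * (7 * u) == zero
```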
Subsection The Null Space of a Matrix and the Kernel of a Matrix Transformation
In this section we explore the null space and see how the null space of a matrix is related to the matrix transformation defined by the matrix.
Let \(A\) be an \(m \times n\) matrix. In Preview Activity 13.1 we defined the null space of a matrix \(A\) (see Definition 13.1) as the set of solutions to the matrix equation \(A \vx = \vzero\text{.}\) Note that the null space of an \(m \times n\) matrix is a subset of \(\R^n\text{.}\) We saw that the null space of \(A\) is closed under addition and scalar multiplication — that is, if \(\vu\) and \(\vv\) are in \(\Nul A\) and \(a\) and \(b\) are any scalars, then \(\vu + \vv\) and \(a \vu\) are also in \(\Nul A\text{.}\) Since the zero vector is always in \(\Nul A\text{,}\) we can conclude that the null space of \(A\) is a subspace of \(\R^n\text{.}\)
There is a connection between the null space of a matrix and the matrix transformation it defines. Recall that any \(m \times n\) matrix \(A\) defines a matrix transformation \(T\) from \(\R^n\) to \(\R^m\) by \(T(\vx) = A \vx\text{.}\) The null space of \(A\) is then the collection of vectors \(\vx\) in \(\R^n\) so that \(T(\vx) = \vzero\text{.}\) So if \(T\) is a matrix transformation from \(\R^n\) to \(\R^m\text{,}\) then the set
is a subspace of \(\R^n\) equal to the null space of \(A\text{.}\) This set is given a special name.
Definition 13.2.
Let \(T : \R^n \to \R^m\) be a matrix transformation. The kernel of \(T\) is the set
Activity 13.2.
If \(T\) is a matrix transformation defined by a matrix \(A\text{,}\) then there is a convenient way to determine if \(T\) is one-to-one.
(a)
Let \(T\) be the matrix transformation defined by \(T(\vx) = A\vx\text{,}\) where
Find all of the vectors in \(\Nul A\text{.}\) If \(\Nul A\) contains more than one vector, can \(T\) be one-to-one? Why?
(b)
Let \(T\) be the matrix transformation defined by \(T(\vx) = A\vx\text{,}\) where
Find all of the vectors in \(\Nul A\text{.}\) Is \(T\) one-to-one? Why?
(c)
To find the vectors in the null space of a matrix \(A\) we solve the system \(A \vx = \vzero\text{.}\) Since a homogeneous system is always consistent, there are two possibilities for \(\Nul A\text{:}\) either \(\Nul A = \{\vzero\}\) or \(\Nul A\) contains infinitely many vectors.
(i)
Under what conditions on \(A\) is \(\Nul A = \{\vzero\}\text{?}\) What does that mean about \(T\) being one-to-one or not? Explain.
(ii)
Under what conditions is \(\Nul A\) infinite? What does that mean about \(T\) being one-to-one or not? Explain.
(iii)
Is it possible for \(\Nul A\) to be the whole space \(\R^n\text{?}\) If so, give an example. If not, explain why not.
Recall that for a function to be one-to-one, each output must come from exactly one input. Since a matrix transformation \(T\) defined by \(T(\vx) = A \vx\) always maps the zero vector to the zero vector, for \(T\) to be one-to-one it must be the case that the zero vector is the only vector that \(T\) maps to the zero vector. This means that the null space of \(A\) must be \(\{\vzero\}\text{.}\) Activity 13.2 demonstrates that if the matrix \(A\) that defines the transformation \(T\) has a pivot in every column, then \(T(\vx) = \vb\) will have exactly one solution for each \(\vb\) in the range of \(T\text{.}\) So a trivial null space is enough to characterize a one-to-one matrix transformation.
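This characterization is easy to test computationally: a pivot in every column is the same as the rank of \(A\) equaling its number of columns. A SymPy sketch (the example matrices are my own, not from the text):

```python
from sympy import Matrix

def is_one_to_one(A):
    # T(x) = A x is one-to-one exactly when Nul A = {0}, i.e. when A has
    # a pivot in every column: rank A = number of columns
    return A.rank() == A.cols

# Columns linearly independent: pivot in every column, Nul A = {0}
A = Matrix([[1, 0], [0, 1], [2, 3]])
assert is_one_to_one(A) and A.nullspace() == []

# A non-pivot column forces a free variable, so Nul A is infinite
B = Matrix([[1, 2], [2, 4]])
assert not is_one_to_one(B) and len(B.nullspace()) == 1
```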
Theorem 13.3.
A matrix transformation \(T\) from \(\R^n\) to \(\R^m\) defined by \(T(\vx) = A\vx\) is one-to-one if and only if
Subsection The Column Space of a Matrix and the Range of a Matrix Transformation
Given an \(m \times n\) matrix \(A\text{,}\) we have seen that the matrix-vector product \(A \vx\) is a linear combination of the columns of \(A\) with weights from \(\vx\text{.}\) It follows that the equation \(A \vx = \vb\) has a solution if and only if \(\vb\) is a linear combination of the columns of \(A\text{.}\) So the span of the columns of \(A\) tells us for which vectors the equation \(A \vx = \vb\) is consistent. We give the span of the columns of a matrix \(A\) a special name.
Definition 13.4.
The column space of an \(m \times n\) matrix \(A\) is the span of the columns of \(A\text{.}\)
We denote the column space of \(A\) as \(\Col A\text{.}\) Given that \(A \vx\) is a linear combination of the columns of \(A\text{,}\) we can also write the column space of an \(m \times n\) matrix \(A\) as
For the matrix transformation \(T\) defined by \(T(\vx) = A\vx\text{,}\) the set of all vectors of the form \(A \vx\) is also the range of the transformation \(T\text{.}\) So for a matrix transformation \(T\) with matrix \(A\) we have \(\Range(T) = \Col A\text{.}\)
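One way to test membership in \(\Col A = \Range(T)\) in software is the consistency criterion: \(A \vx = \vb\) is consistent exactly when appending \(\vb\) to \(A\) does not increase the rank. A SymPy sketch with a made-up matrix (the helper name `in_column_space` is my own):

```python
from sympy import Matrix

def in_column_space(A, b):
    # A x = b is consistent exactly when appending b as an extra column
    # does not increase the rank
    return A.row_join(b).rank() == A.rank()

# A made-up 3x2 example: Col A is a plane in R^3
A = Matrix([[1, 0],
            [0, 1],
            [1, 1]])

assert in_column_space(A, Matrix([2, 3, 5]))      # 2*(col 1) + 3*(col 2)
assert not in_column_space(A, Matrix([0, 0, 1]))  # not on that plane
```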
Activity 13.3.
As a span of a set of vectors, we know that \(\Col A\) is a subspace of \(\R^k\) for an appropriate value of \(k\text{.}\)
(a)
Let \(M = \left[ \begin{array}{ccccr} 1\amp 1\amp 1\amp 0\amp 2 \\ 0\amp 1\amp 0\amp 1\amp 1 \\ 1\amp 0\amp 1\amp 1\amp -1 \\ 0\amp 1\amp 0\amp 1\amp 1 \end{array} \right]\text{.}\) The space \(\Col M\) is a subspace of \(\R^k\) for some positive integer \(k\text{.}\) What is \(k\) in this case?
(b)
If \(A\) is an \(m \times n\) matrix, then \(\Col A\) is a subspace of \(\R^k\) for some positive integer \(k\text{.}\) What is \(k\) in this case?
(c)
Recall that a matrix transformation \(T\) given by \(T(\vx)=A\vx\text{,}\) where \(A\) is an \(m\times n\) matrix, is onto if for every \(\vb\) in \(\R^m\) there exists an \(\vx\) in \(\R^n\) for which \(T(\vx)=\vb\text{.}\) How does \(T\) being onto relate to \(\Col A\text{?}\)
As you saw in Activity 13.3, a matrix transformation \(T\) defined by \(T(\vx)=A\vx\) is onto if the column space of \(A\text{,}\) which consists of the image vectors under the transformation \(T\text{,}\) is equal to \(\R^m\text{.}\) In other words, we want \(\Range(T)\) to equal \(\R^m\text{.}\)
Theorem 13.5.
A matrix transformation \(T\) from \(\R^n\) to \(\R^m\) defined by \(T(\vx) = A\vx\) is onto if and only if
Subsection The Row Space of a Matrix
As you might expect, if a matrix has a column space then it also has a row space. The row space is defined just as the column space is: as the span of the rows of the matrix.
Definition 13.6.
The row space of an \(m \times n\) matrix \(A\) is the span of the rows of \(A\text{.}\)
There is really nothing new here, though. Since the rows of \(A\) are the columns of \(A^{\tr}\text{,}\) it follows that \(\Row A = \Col A^{\tr}\text{.}\) So if we want to learn anything about the row space of \(A\text{,}\) we can just translate all of our questions to the column space of \(A^{\tr}\text{.}\)
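The identity \(\Row A = \Col A^{\tr}\) can be sanity-checked in SymPy. Since different methods may return different bases for the same space, the sketch below compares spans rather than basis vectors (the helper `same_span` is my own):

```python
from sympy import Matrix

def same_span(vecs1, vecs2):
    # Two lists of column vectors span the same subspace exactly when
    # stacking all of them together does not increase either rank
    M1 = Matrix.hstack(*vecs1)
    M2 = Matrix.hstack(*vecs2)
    both = Matrix.hstack(M1, M2)
    return M1.rank() == M2.rank() == both.rank()

A = Matrix([[1, 2, 3],
            [4, 5, 6]])

row_basis = [r.T for r in A.rowspace()]         # row space, as column vectors
assert same_span(row_basis, A.T.columnspace())  # Row A = Col A^T
```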
Subsection Bases for \(\Nul A\) and \(\Col A\)
When confronted with a subspace of \(\R^n\text{,}\) we will usually want to find a minimal spanning set — a smallest spanning set — for the space. Recall that a minimal spanning set is also called a basis for the space. So a basis for a space must span that space, and to be a minimal spanning set we have seen that a basis must also be linearly independent. So to prove that a set is a basis for a subspace of \(\R^n\) we need to demonstrate two things: that the set is linearly independent, and that the set spans the subspace.
Activity 13.4.
In this activity we see how to find a basis for \(\Col A\) and \(\Nul A\) for a specific matrix \(A\text{.}\) Let
Assume that the reduced row echelon form of \(A\) is
(a)
First we examine \(\Col A\text{.}\) Recall that to find a minimal spanning set of a set of vectors \(\{\vv_1, \vv_2, \ldots, \vv_k\}\) in \(\R^n\) we just select the pivot columns of the matrix \([\vv_1 \ \vv_2 \ \cdots \ \vv_k]\text{.}\)
(i)
Find a basis for \(\Col A\text{.}\)
(ii)
Does \(\Col A\) equal \(\Col R\text{?}\) Explain.
(b)
Now we look at \(\Nul A\text{.}\)
(i)
Write the general solution to the homogeneous system \(A \vx = \vzero\) in vector form.
(ii)
Find a spanning set for \(\Nul A\text{.}\)
(iii)
Find a basis for \(\Nul A\text{.}\) Explain how you know you have a basis.
You should have noticed that Activity 13.4 (a) provides a process for finding a basis for \(\Col A\) — the pivot columns of \(A\) form a basis for \(\Col A\text{.}\) Similarly, Activity 13.4 (b) shows us that we can find a basis for \(\Nul A\) by writing the general solution to the homogeneous equation \(A\vx = \vzero\) as a linear combination of vectors whose weights are the variables corresponding to the non-pivot columns of \(A\) — these vectors form a basis for \(\Nul A\text{.}\) As we will argue next, these processes always give us bases for \(\Col A\) and \(\Nul A\text{.}\)
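SymPy packages both processes: `rref` reports the indices of the pivot columns (a basis for \(\Col A\) then comes from the corresponding columns of \(A\) itself), and `nullspace` carries out the parametric-vector-form computation. A sketch with a hypothetical matrix, since the activity's matrix is not reproduced here:

```python
from sympy import Matrix

# A hypothetical 3x4 example (not the matrix from the activity)
A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 4],
            [3, 6, 1, 5]])

R, pivots = A.rref()               # pivots: indices of the pivot columns

# Basis for Col A: the pivot columns of A itself (not of R)
col_basis = [A.col(j) for j in pivots]

# Basis for Nul A: one vector per free variable, from the general solution
nul_basis = A.nullspace()

# Pivot columns plus free columns account for all columns of A
assert len(col_basis) + len(nul_basis) == A.cols
for v in nul_basis:
    assert A * v == Matrix([0, 0, 0])
```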
Let \(A\) be an \(m \times n\) matrix, and let \(R\) be the reduced row echelon form of \(A\text{.}\) Suppose \(R\) has \(k\) non-pivot columns and \(n-k\) pivot columns. We can rearrange the columns so that the non-pivot columns of \(R\) are the last \(k\) columns (this just amounts to relabeling the unknowns in the system).
Basis for \(\Nul A\).
Here we argue that the method described following Activity 13.4 to find a spanning set for the null space always yields a basis for the null space. First note that \(\Nul R = \Nul A\text{,}\) since the system \(A\vx = \vzero\) has the same solution set as \(R \vx = \vzero\text{.}\) So it is enough to find a basis for \(\Nul R\text{.}\) If every column of \(R\) is a pivot column, then \(R \vx = \vzero\) has only the trivial solution and the null space of \(R\) is \(\{ \vzero \}\text{.}\) Let us now consider the case where \(R\) contains non-pivot columns. If we let \(\vx = [x_1 \ x_2 \ \ldots \ x_n]^{\tr}\text{,}\) and if \(R\vx = \vzero\text{,}\) then we can write \(x_1\text{,}\) \(x_2\text{,}\) \(\ldots\text{,}\) \(x_{n-k}\) in terms of \(x_{n-k+1}\text{,}\) \(x_{n-k+2}\text{,}\) \(\ldots\text{,}\) and \(x_n\text{.}\) From these equations we can write \(\vx\) as a linear combination of some vectors \(\vv_1\text{,}\) \(\vv_2\text{,}\) \(\ldots\text{,}\) \(\vv_k\) with \(x_{n-k+1}\text{,}\) \(x_{n-k+2}\text{,}\) \(\ldots\text{,}\) \(x_n\) as weights. By construction, each of the vectors \(\vv_1\text{,}\) \(\vv_2\text{,}\) \(\ldots\text{,}\) \(\vv_k\) has a component equal to 1 where every other \(\vv_i\) has a 0. Therefore, the vectors \(\vv_1\text{,}\) \(\vv_2\text{,}\) \(\ldots\text{,}\) \(\vv_k\) are linearly independent and span \(\Nul R\) (and \(\Nul A\)). In other words, the method we have developed to find the general solution to \(A \vx = \vzero\) always produces a basis for \(\Nul A\text{.}\)
Basis for \(\Col A\).
Here we explain why the pivot columns of \(A\) form a basis for \(\Col A\text{.}\) Recall that the product \(A \vx\) expresses a linear combination of the columns of \(A\) with weights from \(\vx\text{,}\) and every such linear combination is matched with a product \(R \vx\) giving a linear combination of the columns of \(R\) using the same weights. So if a set of columns of \(R\) is linearly independent (or dependent), then the set of corresponding columns in \(A\) is linearly independent (or dependent) and vice versa. Since each pivot column of \(R\) is a vector with 1 in one entry (a different entry for different pivot columns) and zeros elsewhere, the pivot columns of \(R\) are clearly linearly independent. It follows that the pivot columns of \(A\) are linearly independent. All that remains is to explain why the pivot columns of \(A\) span \(\Col A\text{.}\) Let \(\vr_1\text{,}\) \(\vr_2\text{,}\) \(\ldots\text{,}\) \(\vr_n\) be the columns of \(R\) so that \(R~=~[\vr_1 \ \ \vr_2 \ \ \cdots \ \ \vr_n]\text{,}\) and let \(\va_1\text{,}\) \(\va_2\text{,}\) \(\ldots\text{,}\) \(\va_n\) be the columns of \(A\) so that \(A~=~[\va_1 \ \ \va_2 \ \ \cdots \ \ \va_n]\text{.}\) Suppose \(\va_i\) is a non-pivot column for \(A\) and \(\vr_i\) the corresponding non-pivot column in \(R\text{.}\) Each pivot column is composed of a single 1 with the rest of its entries 0. Also, if a non-pivot column contains a nonzero entry, then there is a corresponding pivot column that contains a 1 in the corresponding position. So \(\vr_i\) is a linear combination of \(\vr_1\text{,}\) \(\vr_2\text{,}\) \(\ldots\text{,}\) \(\vr_{n-k}\) — the pivot columns of \(R\text{.}\) Thus,
for some scalars \(c_1\text{,}\) \(c_2\text{,}\) \(\ldots\text{,}\) \(c_{n-k}\text{.}\) Let \(\vx = [c_1 \ c_2 \ \cdots \ c_{n-k} \ 0 \ \cdots 0 \ -1 \ 0 \ \cdots \ 0]^{\tr}\text{,}\) where the \(-1\) is in position \(i\text{.}\) Then \(R \vx = \vzero\) and so \(A \vx = \vzero\text{.}\) Thus,
and \(\va_i\) is a linear combination of the pivot columns of \(A\text{.}\) So every non-pivot column of \(A\) is in the span of the pivot columns of \(A\text{,}\) and we conclude that the pivot columns of \(A\) form a basis for \(\Col A\text{.}\)
IMPORTANT POINT.
It is the pivot columns of \(A\) that form a basis for \(\Col A\text{,}\) not the pivot columns of the reduced row echelon form \(R\) of \(A\text{.}\) In general, \(\Col R \neq \Col A\text{.}\)
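A tiny example makes the warning concrete. For the \(2 \times 2\) matrix below, \(\Col A\) is the line through \((1,1)\) while \(\Col R\) is the line through \((1,0)\text{;}\) the sketch verifies this with the rank-based consistency test:

```python
from sympy import Matrix

A = Matrix([[1, 2],
            [1, 2]])          # Col A = span{(1,1)}, a line in R^2

R, _ = A.rref()               # R = [[1, 2], [0, 0]], so Col R = span{(1,0)}
assert R == Matrix([[1, 2], [0, 0]])

# (1,1) is in Col A but not in Col R: row reduction changes the column space
b = Matrix([1, 1])
assert A.row_join(b).rank() == A.rank()   # b is in Col A
assert R.row_join(b).rank() != R.rank()   # b is not in Col R
```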
We can incorporate the ideas of this section to expand the Invertible Matrix Theorem.
Theorem 13.7. The Invertible Matrix Theorem.
Let \(A\) be an \(n \times n\) matrix. The following statements are equivalent.
The matrix \(A\) is an invertible matrix.
The matrix equation \(A \vx = \vzero\) has only the trivial solution.
The matrix \(A\) has \(n\) pivot columns.
Every row of \(A\) contains a pivot.
The columns of \(A\) span \(\R^n\text{.}\)
The matrix \(A\) is row equivalent to the identity matrix \(I_n\text{.}\)
The columns of \(A\) are linearly independent.
The columns of \(A\) form a basis for \(\R^n\text{.}\)
The matrix transformation \(T\) from \(\R^n\) to \(\R^n\) defined by \(T(\vx) = A\vx\) is one-to-one.
The matrix equation \(A \vx = \vb\) has exactly one solution for each vector \(\vb\) in \(\R^n\text{.}\)
The matrix transformation \(T\) from \(\R^n\) to \(\R^n\) defined by \(T(\vx) = A\vx\) is onto.
There is an \(n \times n\) matrix \(C\) so that \(AC = I_n\text{.}\)
There is an \(n \times n\) matrix \(D\) so that \(DA = I_n\text{.}\)
The scalar 0 is not an eigenvalue of \(A\text{.}\)
The matrix \(A^{\tr}\) is invertible.
\(\Nul A = \{\vzero\}\text{.}\)
\(\Col A = \R^n\text{.}\)
Subsection Examples
What follows are worked examples that use the concepts from this section.
Example 13.8.
(a)
Let \(A = \left[ \begin{array}{rrrr} 1\amp 0\amp -2\amp 3 \\ -2\amp -4\amp 0\amp -14 \\ 1\amp 3\amp 1\amp 9 \end{array} \right]\text{.}\)
(i)
Find a basis for \(\Col A\text{.}\)
Solution.
We use \(A = \left[ \begin{array}{rrrr} 1\amp 0\amp -2\amp 3 \\ -2\amp -4\amp 0\amp -14 \\ 1\amp 3\amp 1\amp 9 \end{array} \right]\text{.}\)
Technology shows that the reduced row echelon form of \(A\) is
The first two columns of \(A\) are the pivot columns of \(A\text{.}\) Since the pivot columns of \(A\) form a basis for \(\Col A\text{,}\) a basis for \(\Col A\) is
(ii)
Describe \(\Col A\) geometrically (e.g., as a line, a plane, a union of lines, etc.) in the appropriate larger space.
Solution.
Let \(\vv_1 = \left[ \begin{array}{r} 1\\ -2 \\ 1 \end{array} \right]\) and \(\vv_2 = \left[ \begin{array}{r} 0\\ -4 \\ 3 \end{array} \right]\text{.}\) Since neither \(\vv_1\) nor \(\vv_2\) is a scalar multiple of the other, we see that \(\Col A\) is the span of two linearly independent vectors in \(\R^3\text{.}\) Thus, we conclude that \(\Col A\) is the plane in \(\R^3\) through the origin and the points \((1,-2,1)\) and \((0,-4,3)\text{.}\)
(b)
Let \(B = \left[ \begin{array}{rrr} 0\amp -2\amp 1\\-1\amp 0\amp 1 \\ 6\amp -10\amp -1 \\ 1\amp -4\amp 1 \end{array} \right]\text{.}\)
(i)
Find a basis for \(\Nul B\text{.}\)
Solution.
We use \(B = \left[ \begin{array}{rrr} 0\amp -2\amp 1\\-1\amp 0\amp 1 \\ 6\amp -10\amp -1 \\ 1\amp -4\amp 1 \end{array} \right]\text{.}\)
Technology shows that the reduced row echelon form of \(B\) is
To find a basis for \(\Nul B\text{,}\) we must solve the homogeneous equation \(B \vx = \vzero\text{.}\) If \(\vx = \left[ \begin{array}{c}x_1\\x_2\\x_3 \end{array} \right]\) and \(B \vx = \vzero\text{,}\) the reduced row echelon form of \(B\) shows that \(x_3\) is free, \(x_2 = \frac{1}{2}x_3\text{,}\) and \(x_1 = x_3\text{.}\) So
Thus, a basis for \(\Nul B\) is
(ii)
Describe \(\Nul B\) geometrically (e.g., as a line, a plane, a union of lines, etc.) in the appropriate larger space.
Solution.
We use \(B = \left[ \begin{array}{rrr} 0\amp -2\amp 1\\-1\amp 0\amp 1 \\ 6\amp -10\amp -1 \\ 1\amp -4\amp 1 \end{array} \right]\text{.}\)
Since \(\Nul B = \Span\left\{ \left[ \begin{array}{c} 2\\ 1 \\ 2 \end{array} \right] \right\}\) is the span of one nonzero vector in \(\R^3\text{,}\) we conclude that \(\Nul B\) is the line in \(\R^3\) through the origin and the point \((2,1,2)\text{.}\)
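Both parts of this example can be double-checked in SymPy: `columnspace` returns exactly the pivot columns of \(A\text{,}\) and `nullspace` returns a basis vector proportional to \((2,1,2)\text{:}\)

```python
from sympy import Matrix

A = Matrix([[ 1,  0, -2,   3],
            [-2, -4,  0, -14],
            [ 1,  3,  1,   9]])

# Basis for Col A: the pivot columns of A itself
assert A.columnspace() == [Matrix([1, -2, 1]), Matrix([0, -4, 3])]

B = Matrix([[ 0,  -2,  1],
            [-1,   0,  1],
            [ 6, -10, -1],
            [ 1,  -4,  1]])

# Nul B is a line: one basis vector, a scalar multiple of (2, 1, 2)
(v,) = B.nullspace()
assert 2 * v == Matrix([2, 1, 2])
```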
Example 13.9.
Let \(A = \left[ \begin{array}{rr} 1\amp 3\\-1\amp 2 \\ 0\amp -2\\5\amp 6 \end{array} \right]\text{,}\) and let \(T\) be the matrix transformation defined by \(T(\vx) = A\vx\text{.}\)
(a)
What are the domain and codomain of \(T\text{?}\) Why?
Solution.
Recall that \(A \vx\) is a linear combination of the columns of \(A\) with weights from \(\vx\text{.}\) So \(A \vx\) is defined only when the number of components of \(\vx\) is equal to the number of columns of \(A\text{.}\) This explains why the domain of \(T\) is \(\R^2\text{.}\) Also, since each output of \(T\) is a linear combination of the columns of \(A\text{,}\) the codomain of \(T\) is \(\R^4\text{.}\)
(b)
Find all vectors \(\vx\) such that \(T(\vx) = \vzero\text{.}\) How is this set of vectors related to \(\Nul A\text{?}\) Explain.
Solution.
The set of vectors \(\vx\) such that \(\vzero = T(\vx) = A\vx\) is the same as \(\Nul A\text{.}\) The reduced row echelon form of \(A\) is
Since both columns of \(A\) are pivot columns, the columns of \(A\) are linearly independent. This implies that \(A \vx = \vzero\) has only the trivial solution. Therefore, the only vector \(\vx\) such that \(T(\vx) = \vzero\) is the zero vector in \(\R^2\text{.}\)
(c)
Is \(T\) one-to-one? Explain.
Solution.
The previous part shows that \(\Ker(T) = \{\vzero\}\text{.}\) This means that \(T\) is one-to-one by Theorem 13.3.
(d)
Is \(T\) onto? If yes, explain why. If no, find a basis for the range of \(T\text{.}\)
Solution.
Recall that the range of \(T\) is the same as \(\Col A\text{.}\) The reduced row echelon form of \(A\) has a row of zeros, so \(A \vx = \vb\) is not consistent for every \(\vb\) in \(\R^4\text{.}\) We conclude that \(T\) is not onto. To find a basis for the range of \(T\text{,}\) we just need to find a basis for \(\Col A\text{.}\) The pivot columns of \(A\) form such a basis, so a basis for the range of \(T\) is
Subsection Summary
-
The null space of an \(m \times n\) matrix \(A\) is the set of vectors \(\vx\) in \(\R^n\) so that \(A \vx = \vzero\text{.}\) In set notation
\begin{equation*} \Nul A = \{\vx : A \vx = \vzero\}\text{.} \end{equation*}
-
The column space of a matrix \(A\) is the span of the columns of \(A\text{.}\)
-
A subset \(W\) of \(\R^n\) is a subspace of \(\R^n\) if
\(\vu + \vv\) is in \(W\) whenever \(\vu\) and \(\vv\) are in \(W\) (when this property is satisfied we say that \(W\) is closed under addition),
\(a \vu\) is in \(W\) whenever \(a\) is a scalar and \(\vu\) is in \(W\) (when this property is satisfied we say that \(W\) is closed under multiplication by scalars),
\(\vzero\) is in \(W\text{.}\)
The null space of an \(m \times n\) matrix is a subspace of \(\R^n\) while the column space of \(A\) is a subspace of \(\R^m\text{.}\)
The span of any set of vectors in \(\R^n\) is a subspace of \(\R^n\text{.}\)
-
The kernel of a matrix transformation \(T : \R^n \to \R^m\) is the set
\begin{equation*} \Ker(T) = \{\vx \in \R^n : T(\vx) = \vzero\}\text{.} \end{equation*} The kernel of a matrix transformation \(T\) defined by \(T(\vx) = A\vx\) is the same set as \(\Nul A\text{.}\)
-
The range of a matrix transformation \(T : \R^n \to \R^m\) is the set
\begin{equation*} \Range(T) = \{T(\vx) : \vx \in \R^n\}\text{.} \end{equation*} The range of a matrix transformation \(T\) defined by \(T(\vx) = A\vx\) is the same set as \(\Col A\text{.}\)
A basis for the null space of a matrix \(A\) can be found by writing the general solution to the homogeneous equation \(A \vx = \vzero\) as a linear combination of vectors whose weights are the variables corresponding to the non-pivot columns of \(A\text{.}\) The number of vectors in a basis for \(\Nul A\) is the number of non-pivot columns of \(A\text{.}\)
The pivot columns of a matrix \(A\) form a basis for the column space of \(A\text{.}\)
Exercises Exercises
1.
Find a basis for the null space and column space of the matrix
Of which spaces are the null and column spaces of \(A\) subspaces?
2.
If the column space of \(\left[ \begin{array}{ccr} 1\amp 2\amp -1\\1\amp 1\amp 1\\1\amp 2\amp c \end{array} \right]\) has basis \(\left\{ \left[ \begin{array}{c} 1\\1\\1 \end{array} \right], \left[ \begin{array}{c} 2\\1\\2 \end{array} \right]\right\}\text{,}\) what must \(c\) be?
3.
If the null space of \(\left[ \begin{array}{ccc} 2\amp 1\amp a\\ 1\amp 2\amp b \end{array} \right]\) has basis \(\left\{ \left[ \begin{array}{r} 2\\-1\\1 \end{array} \right]\right\}\text{,}\) what must \(a\) and \(b\) be?
4.
Find a matrix with at least four non-zero and distinct columns for which the column space has basis \(\left\{ \left[ \begin{array}{c} 1\\1\\1 \end{array} \right] , \left[ \begin{array}{c} 2\\2\\3 \end{array} \right] \right\}\text{.}\)
5.
Find a matrix with at least two rows whose null space has basis \(\left\{ \left[ \begin{array}{r} 1\\1\\-1 \end{array} \right] \right\}\text{.}\)
6.
Find a matrix whose column space has basis \(\left\{ \left[ \begin{array}{c} 1\\1\\1 \end{array} \right] , \left[ \begin{array}{c} 2\\2\\3 \end{array} \right] \right\}\) and whose null space has basis \(\left\{ \left[ \begin{array}{r} 2\\1\\-1 \end{array} \right] \right\}\text{.}\)
7.
If possible, find a \(4\times 4\) matrix whose column space does not equal \(\R^4\) but whose null space equals \(\{\vzero\}\text{.}\) Explain your answer. If not possible, explain why not.
8.
Label each of the following statements as True or False. Provide justification for your response.
(a) True/False.
For a \(3\times 4\) matrix, the null space contains vectors other than the zero vector.
(b) True/False.
For a \(4\times 3\) matrix, the null space contains vectors other than the zero vector.
(c) True/False.
If \(\Nul A\) is not the zero subspace, then the transformation \(\vx \mapsto A\vx\) is not one-to-one.
(d) True/False.
If the transformation \(\vx \mapsto A\vx\) is onto, where \(A\) is an \(m\times n\) matrix, then \(\Col A=\R^m\text{.}\)
(e) True/False.
For a \(4\times 3\) matrix \(A\text{,}\) \(\Col A\) cannot equal \(\R^4\text{.}\)
(f) True/False.
For a \(3\times 4\) matrix \(A\text{,}\) \(\Col A\) cannot equal \(\R^3\text{.}\)
(g) True/False.
The null space of the matrix \(\begin{bmatrix}1 \amp 1 \amp 1 \\1 \amp 1 \amp 1 \\ 1 \amp 1 \amp 1 \end{bmatrix}\) consists of the two vectors \([-1 \ 0 \ 1]^{\tr}\) and \([0 \ -1 \ 1]^{\tr}\text{.}\)
(h) True/False.
A basis for the null space of the matrix \(\begin{bmatrix}1 \amp 1 \amp 1 \\1 \amp 1 \amp 1 \\ 1 \amp 1 \amp 1 \end{bmatrix}\) consists of the two vectors \([-1 \ 0 \ 1]^{\tr}\) and \([0 \ -1 \ 1]^{\tr}\text{.}\)
(i) True/False.
There does not exist a matrix whose null space equals its column space.
(j) True/False.
The column space of every \(4\times 4\) matrix is \(\R^4\) and its null space is \(\{\vzero\}\text{.}\)
Subsection Project: Solving the Lights Out Game
The Lights Out game starts with a \(5 \times 5\) grid on which some of the squares are lit (on) and some are not lit (off). We will call such a state a configuration. Pressing a square that is on turns it off and changes the state of all adjacent (vertically and horizontally) squares, and pressing a square that is off turns it on and changes the state of all adjacent (vertically and horizontally) squares. To model this situation, we consider the number system \(\Z_2 = \{0,1\}\) consisting only of 0 and 1, where 0 represents the off state and 1 the on state. We can also think of 1 as the act of pressing a square and 0 as the act of not pressing — that is,
\(0+0 = 0\) (not pressing an off square leaves it off),
\(0+1 = 1 = 1 + 0\) (pressing an off square turns it on or not pressing a lit square leaves it lit),
\(1+1 = 0\) (pressing a lit square turns it off).
The numbers 0 and 1 in \(\Z_2\) will be the only numbers we use when playing the Lights Out game, so all of our matrix entries will be in \(\Z_2\) and all of our calculations are done in \(\Z_2\text{.}\)
There are two ways we can view a Lights Out game.
We can view each configuration as a \(5 \times 5\) matrix. In this situation, we label the entries in the grid as shown in Figure 13.10. Each entry in the grid will be assigned a 0 or 1 according to whether the light in that entry is off or on.
For our purposes a better way to visualize a Lights Out configuration is as a \(25 \times 1\) vector. The components in this vector correspond to the entries in the \(5 \times 5\) grid with the correspondence given by the numbering demonstrated in Figure 13.10 (for the sake of space, this array is shown in a row instead of a column). Again, each component is assigned a 0 or 1 according to whether the light for that entry is off or on. In this view, each configuration is a vector with 25 components in \(\Z_2\text{.}\)
We will take the latter perspective and view the Lights Out game as if it is played on a \(25 \times 1\) board with entries in \(\Z_2\text{.}\) The space of all of these Lights Out configurations is denoted as \(\Z_2^{25}\) (similar to \(\R^{25}\text{,}\) but with entries in \(\Z_2\) rather than \(\R\)). Since \(\Z_2\) is a field, the space \(\Z_2^{25}\) is a vector space just as \(\R^{25}\) is. This is the environment in which we will play the Lights Out game.
If we think about the game as played on a \(25 \times 1\) board, then pressing a square corresponds to selecting one of the 25 components of a configuration vector. Each time we press a square, we make a move that changes the status of that square and of all the squares vertically or horizontally adjacent to it on the \(5 \times 5\) board. Recall that adding 1 to a square has the effect of changing its status (from on to off or off to on), so each move that we make in the game can be represented as a \(25 \times 1\) vector that is added to a configuration. For example, the move of pressing the first square is given by adding the vector
to a configuration vector and the move of pressing the second square is represented by adding the vector
Project Activity 13.5.
Let \(\vm_i\) be the move of pressing the \(i\)th square for \(i\) from 1 to 25.
(a)
Find vector representations for \(\vm_9\) and \(\vm_{22}\text{.}\)
(b)
Let \(M = [m_{ij}] = \left[\vm_1 | \vm_2 | \cdots | \vm_{25} \right]\text{.}\) Explain why \(m_{ij} = m_{ji}\) for each \(i\) and \(j\text{.}\) In other words, explain why \(M^{\tr} = M\text{.}\) (Such a matrix is called a symmetric matrix.)
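A sketch of how one might build \(M\) in code (using 0-based numbering of the squares, row by row, which may differ from the labeling in Figure 13.10) and confirm the symmetry observed in part (b):

```python
import numpy as np

N = 5  # board side; squares numbered 0..24, row by row

def move_vector(i):
    # Pressing square i toggles i and its horizontal/vertical neighbors
    r, c = divmod(i, N)
    m = np.zeros(N * N, dtype=int)
    for dr, dc in [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]:
        rr, cc = r + dr, c + dc
        if 0 <= rr < N and 0 <= cc < N:
            m[rr * N + cc] = 1
    return m

# Columns of M are the 25 move vectors
M = np.column_stack([move_vector(i) for i in range(N * N)])

# Square j is a neighbor of square i exactly when i is a neighbor of j,
# so M is symmetric
assert (M == M.T).all()
```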
The goal of the Lights Out game is to begin with an initial configuration \(\vc\) (a vector in \(\Z_2^{25}\)) and determine if we can apply a sequence of moves to obtain the configuration in which all the entries are 0 (or all the lights are off). The vector in \(\Z_2^{25}\) of all 0s is the zero vector in \(\Z_2^{25}\) and we will denote it as \(\vzero\text{.}\) Some basic algebra of vector addition in \(\Z_2\) (or mod 2) will help us understand the strategy.
Start with a configuration \(\vc\text{.}\) If we press the \(i\)th square, then we obtain the new configuration \(\vc_1 = \vm_i + \vc\) (where each move \(\vm_i\) is also in \(\Z_2^{25}\)).
Project Activity 13.6.
(a)
What happens if we press the \(i\)th square twice in a row? Explain in terms of the action in the game and verify using vector addition.
(b)
Explain why applying move \(\vm_i\) then move \(\vm_j\) is the same as applying move \(\vm_j\text{,}\) then \(\vm_i\text{.}\)
(c)
Explain how the answers to the previous two questions show that to play the game we only need to determine which buttons to press (and only once each) without worrying about the order in which the buttons are pressed.
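The two algebraic facts behind this activity, that a double press cancels and that presses commute, can be verified directly with mod-2 vector addition. The toy vectors below are our own examples, not from the text.

```python
def add_mod2(u, v):
    """Componentwise addition of two 0/1 vectors in Z_2."""
    return [(a + b) % 2 for a, b in zip(u, v)]

c = [1, 0, 1, 1, 0]                           # a toy configuration
m_i, m_j = [1, 1, 0, 0, 0], [0, 1, 1, 0, 1]   # toy move vectors

# Pressing square i twice returns the original configuration.
assert add_mod2(m_i, add_mod2(m_i, c)) == c

# Pressing i then j gives the same result as pressing j then i.
assert add_mod2(m_j, add_mod2(m_i, c)) == add_mod2(m_i, add_mod2(m_j, c))
```

Together these show that only the set of pressed squares matters, not the order or repetition of presses.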
What we have seen is that to play the game we are really looking for scalars \(x_1\text{,}\) \(x_2\text{,}\) \(\ldots\text{,}\) \(x_{25}\) in \(\Z_2\) (in other words, either 0 or 1) so that
\(x_1 \vm_1 + x_2 \vm_2 + \cdots + x_{25} \vm_{25} + \vc = \vzero\text{.}\) (13.1)
Project Activity 13.7.
Explain why (13.1) has the equivalent matrix equation
\(M \vx = \vc\text{,}\) (13.2)
where \(M = \left[\vm_1 | \vm_2 | \cdots | \vm_{25} \right]\text{.}\)
Explicitly identify the vector \(\vx\text{.}\) Also, explain why \(\vc\) is on the right side of this equation.
To solve a Lights Out game now, all we need do is determine a solution, if one exists, to the matrix equation (13.2).
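Solving (13.2) is ordinary Gaussian elimination, except that all arithmetic is done mod 2. A minimal sketch, with our own function name and conventions (the matrix is a list of rows of 0s and 1s):

```python
def solve_mod2(A, b):
    """Solve A x = b over Z_2 by Gaussian elimination on the augmented
    matrix.  Returns one solution (free variables set to 0), or None
    if the system is inconsistent."""
    n, m = len(A), len(A[0])
    rows = [A[i][:] + [b[i]] for i in range(n)]   # augmented matrix
    pivots, r = [], 0
    for col in range(m):
        # Find a row at or below r with a 1 in this column.
        piv = next((k for k in range(r, n) if rows[k][col] == 1), None)
        if piv is None:
            continue                               # free column
        rows[r], rows[piv] = rows[piv], rows[r]
        # Over Z_2, "subtracting" a row is the same as adding it mod 2.
        for k in range(n):
            if k != r and rows[k][col] == 1:
                rows[k] = [(x + y) % 2 for x, y in zip(rows[k], rows[r])]
        pivots.append(col)
        r += 1
    # A row 0 = 1 in the reduced augmented matrix means no solution.
    if any(rows[k][m] == 1 for k in range(r, n)):
        return None
    x = [0] * m
    for idx, col in enumerate(pivots):
        x[col] = rows[idx][m]
    return x

# Tiny check: x1 + x2 = 0, x2 = 1 over Z_2 has solution x = (1, 1).
assert solve_mod2([[1, 1], [0, 1]], [0, 1]) == [1, 1]
```

Applied to the \(25 \times 25\) matrix \(M\) and a configuration \(\vc\text{,}\) this either produces a set of buttons to press or reports that the game is unsolvable.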
Project Activity 13.8.
For this activity you may use the fact that the reduced row echelon form of the matrix \(M\) (using algebra in \(\Z_2\)) is as shown below.
(a)
Find a basis for the column space of \(M\text{.}\)
(b)
Explain why not every Lights Out puzzle can be solved. That is, explain why there are some initial configurations of lights on and off for which it is not possible to turn out all the lights (without turning off the game). Relate this to the column space of \(M\text{.}\)
The reduced row echelon form of the matrix \(M\) (using algebra in \(\Z_2\)):
To find conditions under which a Lights Out game is not solvable, we will demonstrate that if \(A\) is an \(n \times n\) matrix, then the scalar product of any vector in \(\Nul A^{\tr}\) with any column of \(A\) is 0. Let \(A = [a_{ij}]\) be an \(n \times n\) matrix with columns \(\va_1\text{,}\) \(\va_2\text{,}\) \(\ldots\text{,}\) \(\va_n\text{.}\) Represent the entries in the \(i\)th column as \(\va_i = [a_{1i} \ a_{2i} \ \ldots \ a_{ni}]^{\tr}\) for each \(i\) between 1 and \(n\text{.}\) Note that \(\va_i\) is also the \(i\)th row of \(A^{\tr}\text{.}\) Also, let \(\vx = [x_1 \ x_2 \ \ldots \ x_n]^{\tr}\) be a vector in \(\Nul A^{\tr}\text{.}\) Then \(A^{\tr} \vx = \vzero\text{.}\) Using the row-column method of multiplying a matrix by a vector, when we multiply the \(i\)th row of \(A^{\tr}\) with \(\vx\) we obtain
\(a_{1i}x_1 + a_{2i}x_2 + \cdots + a_{ni}x_n = 0\text{.}\) (13.3)
This equation is valid for each \(i\) between 1 and \(n\text{.}\) Recall that the sum in (13.3) is the scalar product of \(\va_i\) and \(\vx\) and is denoted \(\va_i \cdot \vx\text{.}\) That is,
\(\va_i \cdot \vx = a_{1i}x_1 + a_{2i}x_2 + \cdots + a_{ni}x_n = 0\text{.}\)
The fact that \(\vx\) is in \(\Nul A^{\tr}\) means \(\va_i \cdot \vx = 0\) for every \(i\) between 1 and \(n\text{.}\) In other words, the scalar product of any vector in \(\Nul A^{\tr}\) with any column of \(A\) is 0. (When the scalar product of two vectors is 0, we call the vectors orthogonal — a fancy word for “perpendicular”.) Since scalar products are linear, we can extend this result to the following.
Theorem 13.11.
Let \(A\) be an \(n \times n\) matrix. If \(\vx\) is any vector in \(\Col A\) and \(\vy\) is any vector in \(\Nul A^{\tr}\text{,}\) then \(\vx \cdot \vy = 0\text{.}\)
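A quick numerical sanity check of Theorem 13.11 over \(\Z_2\text{,}\) using a small matrix of our own choosing (this is an illustration, not a proof):

```python
# A = [[1,1],[1,1]] over Z_2.  Its columns span Col A.
A = [[1, 1],
     [1, 1]]

# y = (1,1) lies in Nul A^T:  A^T y = (1+1, 1+1) = (0, 0) mod 2.
y = [1, 1]
At_y = [(A[0][j] * y[0] + A[1][j] * y[1]) % 2 for j in range(2)]
assert At_y == [0, 0]

# Every column of A is then orthogonal to y mod 2, as the theorem says.
for j in range(2):
    col = [A[i][j] for i in range(2)]
    assert (col[0] * y[0] + col[1] * y[1]) % 2 == 0
```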
With Theorem 13.11 in mind we can return to our analysis of the Lights Out game, applying this result to the matrix \(M\text{.}\)
Project Activity 13.9.
(a)
Find a basis for the null space of \(M^{\tr}\text{.}\) (Recall that \(M^{\tr} = M\text{,}\) so you can use the reduced row echelon form of \(M\) (using algebra in \(\Z_2\)) given earlier.)
(b)
Use Theorem 13.11 to show that if \(\vc = [c_1 \ c_2 \ \ldots \ c_{25}]^{\tr}\) is an initial Lights Out configuration that is solvable, then \(\vc\) must be orthogonal to each of the vectors in a basis for \(\Nul M^{\tr}\text{.}\) Then show that if \(\vc\) is a solvable initial Lights Out configuration, \(\vc\) must satisfy
\(c_1 + c_3 + c_5 + c_6 + c_8 + c_{10} + c_{16} + c_{18} + c_{20} + c_{21} + c_{23} + c_{25} = 0\)
and
\(c_1 + c_2 + c_4 + c_5 + c_{11} + c_{12} + c_{14} + c_{15} + c_{21} + c_{22} + c_{24} + c_{25} = 0\text{.}\)
Be very specific in your explanation.
Project Activity 13.10.
Now that we know which Lights Out games can be solved, let \(\vc\) be an initial configuration to a solvable Lights Out game. Explain how to find a solution to this game. Will the solution be unique? Explain.
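The whole strategy can be assembled end to end. The sketch below is our own (helper names and the row-by-row numbering are assumptions): build \(M\text{,}\) row reduce over \(\Z_2\) to solve \(M\vx = \vc\text{,}\) then press the squares indicated by \(\vx\text{.}\) Because \(M\) has a nontrivial null space, the solution returned is one of several; any of them turns out the lights.

```python
N = 5

def move_vector(i):
    """25-component move vector for pressing square i (1-based)."""
    r, c = divmod(i - 1, N)
    v = [0] * (N * N)
    for dr, dc in [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]:
        rr, cc = r + dr, c + dc
        if 0 <= rr < N and 0 <= cc < N:
            v[rr * N + cc] = 1
    return v

# Columns of M are the move vectors m_1, ..., m_25.
M = [[move_vector(j + 1)[i] for j in range(N * N)] for i in range(N * N)]

def solve_mod2(A, b):
    """Gaussian elimination over Z_2; returns a solution or None."""
    n, m = len(A), len(A[0])
    rows = [A[i][:] + [b[i]] for i in range(n)]
    pivots, r = [], 0
    for col in range(m):
        piv = next((k for k in range(r, n) if rows[k][col] == 1), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for k in range(n):
            if k != r and rows[k][col] == 1:
                rows[k] = [(x + y) % 2 for x, y in zip(rows[k], rows[r])]
        pivots.append(col)
        r += 1
    if any(rows[k][m] == 1 for k in range(r, n)):
        return None
    x = [0] * m
    for idx, col in enumerate(pivots):
        x[col] = rows[idx][m]
    return x

# A solvable starting configuration: light the board by pressing
# squares 3 and 17 on an unlit board.
c = [(a + b) % 2 for a, b in zip(move_vector(3), move_vector(17))]
x = solve_mod2(M, c)
presses = [i + 1 for i, xi in enumerate(x) if xi == 1]

# Applying the presses prescribed by x turns out all the lights,
# since the final board is c + Mx = c + c = 0 over Z_2.
board = c[:]
for i in presses:
    board = [(a + b) % 2 for a, b in zip(board, move_vector(i))]
assert board == [0] * (N * N)
```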
Now that we have a strategy for solving the Lights Out game, use it to solve random puzzles at geogebra.org/m/wcmctahp, or create your own game to solve.