Section 22 Properties of Determinants

Subsection Introduction

This section is different from others in that it contains mainly proofs of previously stated results and only a little new material. Consequently, there is no application attached to this section.

We have seen that an important property of the determinant is that it provides an easy criterion for the invertibility of a matrix. As a result, we obtained an algebraic method for finding the eigenvalues of a matrix, using the characteristic equation. In this section, we will investigate other properties of the determinant related to how elementary row operations change the determinant. These properties of the determinant will help us evaluate the determinant in a more efficient way than with the cofactor expansion method, which is computationally intensive for large \(n\) because it is recursive. Finally, we will derive a geometric interpretation of the determinant.

Preview Activity 22.1.

(a)

We first consider how the determinant changes if we multiply a row of the matrix by a constant.

(i)

Let \(A = \left[ \begin{array}{cc} 2\amp 3 \\ 1\amp 4 \end{array} \right]\text{.}\) Pick a few different values for the constant \(k\) and compare the determinant of \(A\) and that of \(\left[ \begin{array}{cc} 2k\amp 3k \\ 1\amp 4 \end{array} \right]\text{.}\) What do you conjecture that the effect of multiplying a row by a constant on the determinant is?

(ii)

If we want to make sure our conjecture is valid for any \(2\times 2\) matrix, we need to show that for \(A = \left[ \begin{array}{cc} a\amp b\\c\amp d \end{array} \right]\text{,}\) the relationship between \(\det(A)\) and the determinant of \(\left[ \begin{array}{cc} a\cdot k\amp b\cdot k\\c\amp d \end{array} \right]\) follows our conjecture. We should also check that the relationship between \(\det(A)\) and the determinant of \(\left[ \begin{array}{cc} a\amp b\\c\cdot k\amp d\cdot k \end{array} \right]\) follows our conjecture. Verify this.

(iii)

Make a similar conjecture for what happens to the determinant when a row of a \(3\times 3\) matrix \(A\) is multiplied by a constant \(k\text{,}\) and explain why your conjecture is true using the cofactor expansion definition of the determinant.

(b)

The second type of elementary row operation we consider is row swapping.

(i)

Take a general \(2\times 2\) matrix \(A = \left[ \begin{array}{cc} a\amp b\\c\amp d \end{array} \right]\) and determine how row swapping affects the determinant.

(ii)

Now choose a few different \(3\times 3\) matrices and see how row swapping changes the determinant in these matrices by evaluating the determinant with a calculator or any other appropriate technology.

(iii)

Based on your results so far, conjecture how row swapping changes the determinant in general.

(c)

The last type of elementary row operation is adding a multiple of a row to another. Determine the effect of this operation on a \(2\times 2\) matrix by evaluating the determinant of a general \(2\times 2\) matrix after a multiple of one row is added to the other row.
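One way to organize the \(2\times 2\) computation (a sketch of the verification this part asks for, with \(k\) times the first row added to the second):

```latex
\det\left(\begin{bmatrix} a & b \\ c + ka & d + kb \end{bmatrix}\right)
  = a(d + kb) - b(c + ka)
  = ad + kab - bc - kab
  = ad - bc
  = \det(A)\text{.}
```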

(d)

All of the elementary row operations we discussed above can be achieved by matrix multiplication with elementary matrices. For each of the following elementary matrices, determine what elementary operation it corresponds to by calculating the product \(EA\text{,}\) where \(A = \left[ \begin{array}{ccc} a_{11}\amp a_{12}\amp a_{13}\\a_{21}\amp a_{22}\amp a_{23}\\a_{31}\amp a_{32}\amp a_{33} \end{array} \right]\) is a general \(3\times 3\) matrix.

(i)

\(E = \left[ \begin{array}{ccc} 0\amp 1\amp 0\\ 1\amp 0\amp 0 \\ 0\amp 0\amp 1 \end{array} \right]\)

(ii)

\(E = \left[ \begin{array}{ccc} 1\amp 0\amp 0\\ 0\amp 3\amp 0 \\ 0\amp 0\amp 1 \end{array} \right]\)

(iii)

\(E = \left[ \begin{array}{ccc} 1\amp 0\amp 0\\ 0\amp 1\amp 2 \\ 0\amp 0\amp 1 \end{array} \right]\)

Subsection Elementary Row Operations and Their Effects on the Determinant

In Preview Activity 22.1, we conjectured how elementary row operations affect the determinant of a matrix. In the following activity, we prove how the determinant changes when a row is multiplied by a constant using the cofactor expansion definition of the determinant.

Activity 22.2.

In this activity, assume that the determinant of \(A\) can be determined by a cofactor expansion along any row or column. (We will prove this result independently later in this section.) Consider an arbitrary \(n \times n\) matrix \(A = [a_{ij}]\text{.}\)

(a)

Write the expression for \(\det(A)\) using the cofactor expansion along the second row.

(b)

Let \(B\) be obtained by multiplying the second row of \(A\) by \(k\text{.}\) Write the expression for \(\det(B)\) if the cofactor expansion along the second row is used.

(c)

Use the expressions you found above, to express \(\det(B)\) in terms of \(\det(A)\text{.}\)

(d)

Explain how this method generalizes to prove the relationship between the determinant of a matrix \(A\) and that of the matrix obtained by multiplying a row by a constant \(k\text{.}\)

Your work in Activity 22.2 proves the first part of the following theorem on how elementary row operations change the determinant of a matrix.

In the next section, we will use elementary matrices to prove the last two properties of Theorem 22.1.

Subsection Elementary Matrices

As we saw in Preview Activity 22.1, elementary row operations can be achieved by multiplication by elementary matrices.

Definition 22.2.

An elementary matrix is a matrix obtained by performing a single elementary row operation on an identity matrix.

The following elementary matrices correspond, respectively, to an elementary row operation which swaps rows 2 and 4; an elementary row operation which multiplies the third row by 5; and an elementary row operation which adds four times the third row to the first row on any \(4\times 4\) matrix:

\begin{equation*} E_1 = \left[ \begin{array}{cccc} 1\amp 0\amp 0\amp 0\\0\amp 0\amp 0\amp 1\\0\amp 0\amp 1\amp 0\\ 0\amp 1\amp 0\amp 0 \end{array} \right], \ \ E_2 = \left[ \begin{array}{cccc} 1\amp 0\amp 0\amp 0\\0\amp 1\amp 0\amp 0\\0\amp 0\amp 5\amp 0 \\ 0\amp 0\amp 0\amp 1 \end{array} \right], \ \ \text{ and } \ \ E_3 = \left[ \begin{array}{cccc} 1\amp 0\amp 4\amp 0\\0\amp 1\amp 0\amp 0\\0\amp 0\amp 1\amp 0\\0\amp 0\amp 0\amp 1 \end{array} \right]\,\text{.} \end{equation*}

To obtain an elementary matrix corresponding to an elementary row operation, we simply perform the elementary row operation on the identity matrix. For example, \(E_1\) above is obtained by swapping rows 2 and 4 of the identity matrix.
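These correspondences can be checked numerically. The following Python sketch (plain nested lists, no libraries; the helper names are our own) builds the three elementary matrices above from the identity and confirms that left-multiplying by each one performs the corresponding row operation:

```python
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    # Row-column matrix product of nested-list matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# E1: swap rows 2 and 4 of the identity (list indices 1 and 3).
E1 = identity(4)
E1[1], E1[3] = E1[3], E1[1]

# E2: multiply row 3 of the identity by 5.
E2 = identity(4)
E2[2] = [5 * x for x in E2[2]]

# E3: add 4 times row 3 to row 1.
E3 = identity(4)
E3[0] = [a + 4 * b for a, b in zip(E3[0], E3[2])]

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
assert matmul(E1, A) == [A[0], A[3], A[2], A[1]]           # rows 2 and 4 swapped
assert matmul(E2, A)[2] == [5 * x for x in A[2]]           # row 3 scaled by 5
assert matmul(E3, A)[0] == [a + 4 * b for a, b in zip(A[0], A[2])]  # row 1 + 4(row 3)
```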

With the use of elementary matrices, we can now prove the result about how the determinant is affected by elementary row operations. We first rewrite Theorem 22.1 in terms of elementary matrices:

Notes on Theorem 22.3.

An elementary matrix \(E\) obtained by multiplying a row by \(r\) is a diagonal matrix with one \(r\) along the diagonal and the rest 1s, so \(\det(E) = r\text{.}\) Similarly, an elementary matrix \(E\) obtained by adding a multiple of a row to another is a triangular matrix with 1s along the diagonal, so \(\det(E) = 1\text{.}\) The fact that the determinant of an elementary matrix obtained by swapping two rows is \(-1\) is a bit more complicated and is verified independently later in this section. Also, the proof of Theorem 22.3 depends on the fact that the cofactor expansion of a matrix is the same along any two rows. A proof of this can also be found later in this section.

Proof of Theorem 22.3.

We will prove the result by induction on \(n\text{,}\) the size of the matrix \(A\text{.}\) We verified these results in Preview Activity 22.1 for \(n=2\) using elementary row operations. The elementary matrix versions follow immediately.

Now assume the theorem is true for \(k\times k\) matrices with \(k\geq 2\) and consider an \(n\times n\) matrix \(A\) where \(n=k+1\text{.}\) If \(E\) is an \(n\times n\) elementary matrix, we want to show that \(\det(EA)=\det(E)\det(A)\text{.}\) Let \(EA=B\text{.}\) (Although it is an abuse of language, we will refer to both the elementary matrix and the elementary row operation corresponding to it by \(E\text{.}\))

When finding \(\det(B)=\det(EA)\) we will use a cofactor expansion along a row which is not affected by the elementary row operation \(E\text{.}\) Since \(E\) affects at most two rows and \(A\) has \(n\geq 3\) rows, it is possible to find such a row, say row \(i\text{.}\) The cofactor expansion along row \(i\) of \(B\) is

\begin{equation} b_{i1} (-1)^{i+1} \det(B_{i1}) + b_{i2} (-1)^{i+2} \det(B_{i2}) + \cdots + b_{in} (-1)^{i+n} \det(B_{in}) \,\text{.}\tag{22.1} \end{equation}

Since we chose a row of \(A\) which was not affected by the elementary row operation, it follows that \(b_{ij}=a_{ij}\) for \(1\leq j\leq n\text{.}\) Also, the matrix \(B_{ij}\) obtained by removing row \(i\) and column \(j\) from matrix \(B=EA\) can be obtained from \(A_{ij}\) by an elementary row operation of the same type as \(E\text{.}\) Hence there is an elementary matrix \(E_k\) of the same type as \(E\) with \(B_{ij}=E_k A_{ij}\text{.}\) Therefore, by the induction hypothesis, \(\det(B_{ij})=\det(E_k)\det(A_{ij})\text{,}\) where \(\det(E_k)\) is equal to \(1\text{,}\) \(-1\text{,}\) or \(r\) depending on the type of elementary row operation. If we substitute this information into equation (22.1), we obtain

\begin{equation*} \begin{split} \det(B)\amp = a_{i1} (-1)^{i+1} \det(E_k) \det(A_{i1}) + a_{i2} (-1)^{i+2} \det(E_k) \det(A_{i2}) \\ \amp \qquad + \cdots + a_{in} (-1)^{i+n} \det(E_k) \det(A_{in})\\ \amp = \det(E_k) \det(A) \, . \end{split} \end{equation*}

This equation proves \(\det(EA)=\det(E_k)\det(A)\) for any \(n\times n\) matrix \(A\) where \(E_k\) is the corresponding elementary row operation on the \(k\times k\) matrices obtained in the cofactor expansion.

The proof of the inductive step will be finished if we show that \(\det(E_k)=\det(E)\text{.}\) This equality follows if we let \(A=I_n\) in \(\det(EA)=\det(E_k)\det(A)\text{.}\) Therefore, \(\det(E)\) is equal to \(r\text{,}\) or 1, or \(-1\text{,}\) depending on the type of the elementary row operation \(E\) since the same is true of \(\det(E_k)\) by inductive hypothesis.

Therefore, by the principle of induction, the claim is true for every \(n\geq 2\text{.}\)

As a corollary of this theorem, we can prove the multiplicativity of determinants:

Proof.

If \(A\) is non-invertible, then \(AB\) is also non-invertible and both \(\det(A)\) and \(\det(AB)\) are 0, proving the equality in this case.

Suppose now that \(A\) is invertible. By the Invertible Matrix Theorem, we know that \(A\) is row equivalent to \(I_n\text{.}\) Expressed in terms of elementary matrices, this means that there are elementary matrices \(E_1, E_2, \ldots, E_\ell\) such that

\begin{equation} A= E_1 E_2 \cdots E_\ell I_n = E_1 E_2 \cdots E_\ell \,\text{.}\tag{22.2} \end{equation}

Therefore, repeatedly applying Theorem 22.3, we find that

\begin{equation} \det(A) = \det(E_1) \det(E_2) \cdots \det(E_\ell) \,\text{.}\tag{22.3} \end{equation}

If we multiply equation (22.2) by \(B\) on the right, we obtain

\begin{equation*} AB = E_1 E_2 \cdots E_\ell B \,\text{.} \end{equation*}

Again, by repeatedly applying Theorem 22.3 with this product of matrices, we find

\begin{equation*} \det(AB) = \det(E_1 E_2 \cdots E_\ell B ) = \det(E_1) \det(E_2) \cdots \det(E_\ell) \det(B) \,\text{.} \end{equation*}

From equation (22.3), the product of \(\det(E_i)\)'s equals \(\det(A)\text{,}\) so

\begin{equation*} \det(AB) = \det(A) \det(B) \end{equation*}

which finishes the proof of the theorem.
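As a quick numerical sanity check of the multiplicative property (a sketch, not part of the proof; the helper names are our own), the following Python code computes determinants by cofactor expansion along the first row and verifies \(\det(AB) = \det(A)\det(B)\) for a pair of \(3\times 3\) matrices:

```python
def det(M):
    # Recursive cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1, 3], [1, 4, 5], [2, -1, 2]]
B = [[1, 0, 2], [3, 1, 1], [0, 2, 4]]
assert det(matmul(A, B)) == det(A) * det(B)
```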

We can use the multiplicative property of the determinant and the determinants of elementary matrices to calculate the determinant of a matrix in a more efficient way than using the cofactor expansion. The next activity provides an example.

Activity 22.3.

Let \(A=\left[ \begin{array}{rcc} 1\amp 1\amp 2\\ 2\amp 2\amp 6\\ -1\amp 2\amp 1 \end{array} \right]\text{.}\)

(a)

Use elementary row operations to reduce \(A\) to a row echelon form. Keep track of the elementary row operations you use.

(b)

Taking into account how elementary row operations affect the determinant, use the row echelon form of \(A\) to calculate \(\det(A)\text{.}\)

Your work in Activity 22.3 provides an efficient method for calculating the determinant. If \(A\) is a square matrix, we use row operations given by elementary matrices \(E_1\text{,}\) \(E_2\text{,}\) \(\ldots\text{,}\) \(E_k\) to row reduce \(A\) to row echelon form \(R\text{.}\) That is

\begin{equation*} R = E_kE_{k-1} \cdots E_2E_1A\text{.} \end{equation*}

We know \(\det(E_i)\) for each \(i\text{,}\) and since \(R\) is a triangular matrix we can find its determinant. Then

\begin{equation*} \det(A) = \det(E_1)^{-1}\det(E_2)^{-1} \cdots \det(E_k)^{-1}\det(R)\text{.} \end{equation*}

In other words, if we keep track of how the row operations affect the determinant, we can calculate the determinant of a matrix \(A\) using row operations.
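This procedure can be sketched in code. The following Python function (a sketch using exact rational arithmetic; the function name is our own) row reduces a matrix while tracking the sign changes from row swaps, then reads the determinant off the diagonal of the resulting triangular matrix:

```python
from fractions import Fraction

def det_by_elimination(A):
    """Determinant via row reduction, tracking how each operation
    changes the determinant (swap: factor -1; row replacement: no change)."""
    M = [[Fraction(x) for x in row] for row in A]
    n, sign = len(M), 1
    for c in range(n):
        # Find a pivot in column c at or below row c.
        p = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p is None:
            return Fraction(0)              # no pivot => determinant is 0
        if p != c:
            M[c], M[p] = M[p], M[c]         # a row swap flips the sign
            sign = -sign
        for r in range(c + 1, n):           # eliminate below the pivot;
            f = M[r][c] / M[c][c]           # adding a multiple of one row to
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]  # another leaves det unchanged
    prod = Fraction(1)
    for i in range(n):                      # det of a triangular matrix is the
        prod *= M[i][i]                     # product of its diagonal entries
    return sign * prod

assert det_by_elimination([[2, 1, 3], [1, 4, 5], [2, -1, 2]]) == 7
```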

Activity 22.4.

Theorem 22.3 and Theorem 22.4 can be used to prove the following result (part c of Theorem 17.3): \(A\) is invertible if and only if \(\det(A) \neq 0\text{.}\) We see how in this activity. Let \(A\) be an \(n \times n\) matrix. We can row reduce \(A\) to its reduced row echelon form \(R\) by elementary matrices \(E_1\text{,}\) \(E_2\text{,}\) \(\ldots\text{,}\) \(E_k\) so that

\begin{equation*} R = E_1E_2 \cdots E_kA\text{.} \end{equation*}
(a)

Suppose \(A\) is invertible. What, then, is \(R\text{?}\) What is \(\det(R)\text{?}\) Can the determinant of an elementary matrix ever be \(0\text{?}\) How do we conclude that \(\det(A) \neq 0\text{?}\)

(b)

Now suppose that \(\det(A) \neq 0\text{.}\) What can we conclude about \(\det(R)\text{?}\) What, then, must \(R\) be? How do we conclude that \(A\) is invertible?

Summary.

Let \(A\) be an \(n\times n\) matrix. Suppose we swap rows \(s\) times and divide rows by constants \(k_1, k_2, \ldots, k_r\) while computing a row echelon form \(\text{ REF } (A)\) of \(A\text{.}\) Then \(\det(A)=(-1)^s k_1 k_2\cdots k_r \det(\text{ REF } (A))\text{.}\)

Subsection Geometric Interpretation of the Determinant

Determinants have interesting and useful applications from a geometric perspective. To understand the geometric interpretation of the determinant of an \(n\times n\) matrix \(A\text{,}\) we consider the image of the unit square under the transformation \(T(\vx)=A\vx\) and see how its area changes based on \(A\text{.}\)

Activity 22.5.

(a)

Let \(A = \left[ \begin{array}{cc} 2\amp 0\\0\amp 3 \end{array} \right]\text{.}\) Start with the unit square in \(\R^2\) with corners at the origin and at \((1,1)\text{.}\) In other words, the unit square we are considering consists of all vectors \(\vv=\left[ \begin{array}{c} x\\y \end{array} \right]\) where \(0\leq x\leq 1\) and \(0\leq y\leq 1\text{,}\) visualized as points in the plane.

(i)

Consider the collection of image vectors \(A\vv\) obtained by multiplying \(\vv\)'s by \(A\text{.}\) Sketch the rectangle formed by these image vectors.

(ii)

Explain how the areas of this image rectangle and the unit square are related via \(\det(A)\text{.}\)

(iii)

Does the relationship you found above generalize to an arbitrary \(A = \left[ \begin{array}{cc} a\amp 0\\0\amp b \end{array} \right]\text{?}\) If not, modify the relationship to hold for all diagonal matrices.

(b)

Let \(A=\left[ \begin{array}{cc} 2\amp 1\\ 0\amp 3 \end{array} \right]\text{.}\)

(i)

Sketch the image of the unit square under the transformation \(T(\vv)=A\vv\text{.}\) To make the sketching easier, find the images of the vectors \([0 \ 0]^{\tr}, [1 \ 0]^{\tr}, [0 \ 1]^{\tr}, [1 \ 1]^{\tr}\) as points first and then connect these images to find the image of the unit square.

(ii)

Check that the area of the parallelogram you obtained in the above part is equal to \(\det(A)\text{.}\)

(iii)

Does the relationship between the area and \(\det(A)\) still hold if \(A=\left[ \begin{array}{rc} -2\amp 1\\ 0\amp 3 \end{array} \right]\text{?}\) If not, how will you modify the relationship?

It can be shown that for all \(2\times 2\) matrices a similar relationship holds.

There is a similar geometric interpretation of the determinant of a \(3\times 3\) matrix in terms of volume.

The sign of \(\det(A)\) can be interpreted in terms of the orientation of the column vectors of \(A\text{.}\) See the project in Section 17 for details.
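The experiments in Activity 22.5 can be automated. The following Python sketch (using the shoelace formula for polygon areas; the helper names are our own) applies a \(2\times 2\) matrix to the corners of the unit square and compares the area of the image parallelogram with \(|\det(A)|\text{:}\)

```python
def shoelace_area(pts):
    # Unsigned area of a polygon with vertices listed in order (shoelace formula).
    s = sum(x1 * y2 - x2 * y1
            for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))
    return abs(s) / 2

def apply(A, v):
    # Image of the point v under the transformation T(v) = Av.
    return (A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1])

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # unit square, counterclockwise

# Matrices from Activity 22.5, including one with a negative determinant.
for A in ([[2, 1], [0, 3]], [[-2, 1], [0, 3]]):
    image = [apply(A, v) for v in square]
    det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    assert shoelace_area(image) == abs(det_A)
```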

Subsection An Explicit Formula for the Inverse and Cramer's Rule

In Section 10 we found the inverse \(A^{-1}\) using row reduction of the matrix obtained by augmenting \(A\) with \(I_n\text{.}\) However, in theoretical applications, having an explicit formula for \(A^{-1}\) can be handy. Such an explicit formula provides us with an algebraic expression for \(A^{-1}\) in terms of the entries of \(A\text{.}\) A consequence of the formula we develop is Cramer's Rule, which can be used to provide formulas that give solutions to certain linear systems.

We begin with an interesting connection between a square matrix and the matrix of its cofactors that we explore in the next activity.

Activity 22.6.

Let \(A = \left[ \begin{array}{crc} 2\amp 1\amp 3 \\ 1\amp 4\amp 5 \\ 2\amp -1\amp 2 \end{array} \right]\text{.}\)

(a)

Calculate the \((1,1)\text{,}\) \((1,2)\text{,}\) and \((1,3)\) cofactors of \(A\text{.}\)

(b)

If \(C_{ij}\) represents the \((i,j)\) cofactor of \(A\text{,}\) then the cofactor matrix \(C\) is the matrix \(C = [C_{ij}]\text{.}\) The adjugate matrix of \(A\) is the transpose of the cofactor matrix. In our example, the adjugate matrix of \(A\) is

\begin{equation*} \adj(A) = \left[ \begin{array}{rrr} 13\amp -5\amp -7 \\ 8\amp -2\amp -7 \\ -9\amp 4\amp 7 \end{array} \right]\text{.} \end{equation*}

Check the entries of this adjugate matrix with your calculations from part (a). Then calculate the matrix product

\begin{equation*} A \ \adj(A)\text{.} \end{equation*}
(c)

What do you notice about the product \(A \ \adj(A)\text{?}\) How is this product related to \(\det(A)\text{?}\)

The result of Activity 22.6 is rather surprising, but it is valid in general. That is, if \(A = [a_{ij}]\) is an invertible \(n \times n\) matrix and \(C_{ij}\) is the \((i,j)\) cofactor of \(A\text{,}\) then \(A \ \adj(A)=\det(A) I_n\text{.}\) In other words, \(A \left(\frac{\adj(A)}{\det(A)}\right) = I_n\) and so

\begin{equation*} A^{-1} = \frac{1}{\det(A)} \adj(A)\text{.} \end{equation*}

This gives us another formulation of the inverse of a matrix. To see why \(A \ \adj(A) = \det(A) I_n\text{,}\) we use the row-column version of the matrix product to find the \(ij\)th entry of \(A \ \adj(A)\) as indicated by the shaded row and column

\begin{equation*} \left[ \begin{array}{cccc} a_{11} \amp a_{12} \amp \cdots \amp a_{1n} \\ a_{21} \amp a_{22} \amp \cdots \amp a_{2n} \\ \vdots \amp \vdots \amp \amp \vdots \\ \cellcolor[gray]{.9} a_{i1} \amp \cellcolor[gray]{.9} a_{i2} \amp \cellcolor[gray]{.9} \cdots \amp \cellcolor[gray]{.9} a_{in} \\ \vdots \amp \vdots \amp \amp \vdots \\ a_{n1} \amp a_{n2} \amp \cdots \amp a_{nn} \end{array} \right] \left[ \begin{array}{cccccc} C_{11} \amp C_{21} \amp \cdots \amp \cellcolor[gray]{.9} C_{j1} \amp \cdots \amp C_{n1} \\ C_{12} \amp C_{22} \amp \cdots \amp \cellcolor[gray]{.9} C_{j2} \amp \cdots \amp C_{n2} \\ \vdots \amp \vdots \amp \amp \cellcolor[gray]{.9} \amp \amp \vdots \\ C_{1n} \amp C_{2n} \amp \cdots \amp \cellcolor[gray]{.9} C_{jn} \amp \cdots \amp C_{nn} \end{array} \right]\text{.} \end{equation*}

Thus the \(ij\)th entry of \(A \ \adj(A)\) is

\begin{equation} a_{i1}C_{j1} + a_{i2}C_{j2} + \cdots + a_{in}C_{jn}\text{.}\tag{22.4} \end{equation}

Notice that if \(i=j\text{,}\) then expression (22.4) is the cofactor expansion of \(A\) along the \(i\)th row. So the \(ii\)th entry of \(A \ \adj(A)\) is \(\det(A)\text{.}\) It remains to show that the \(ij\)th entry of \(A \ \adj(A)\) is 0 when \(i \neq j\text{.}\)

When \(i \neq j\text{,}\) the expression (22.4) is the cofactor expansion of the matrix

\begin{equation*} \left[ \begin{array}{cccc} a_{11} \amp a_{12} \amp \cdots \amp a_{1n} \\ a_{21} \amp a_{22} \amp \cdots \amp a_{2n} \\ \vdots \amp \vdots \amp \amp \vdots \\ a_{i1} \amp a_{i2} \amp \cdots \amp a_{in} \\ \vdots \amp \vdots \amp \amp \vdots \\ a_{j-11} \amp a_{j-12} \amp \cdots \amp a_{j-1n} \\ \cellcolor[gray]{.9} a_{i1} \amp \cellcolor[gray]{.9} a_{i2} \amp \cellcolor[gray]{.9} \cdots \amp \cellcolor[gray]{.9} a_{in} \\ a_{j+11} \amp a_{j+12} \amp \cdots \amp a_{j+1n} \\ \vdots \amp \vdots \amp \amp \vdots \\ a_{n1} \amp a_{n2} \amp \cdots \amp a_{nn} \end{array} \right] \end{equation*}

along the \(j\)th row. This matrix is the one obtained by replacing the \(j\)th row of \(A\) with the \(i\)th row of \(A\text{.}\) Since this matrix has two identical rows, it is not row equivalent to the identity matrix and is therefore not invertible. Thus, when \(i \neq j\) expression (22.4) is 0. This makes \(A \ \adj(A) = \det(A) I_n\text{.}\)
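The identity \(A \ \adj(A) = \det(A) I_n\) can be checked computationally. The following Python sketch (cofactor expansion implemented recursively; the helper names are our own) computes the adjugate of the matrix from Activity 22.6 and verifies the product:

```python
def det(M):
    # Recursive cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def adj(A):
    n = len(A)
    def minor(i, j):
        # Sub-matrix of A with row i and column j deleted.
        return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]
    # The adjugate is the transpose of the cofactor matrix: adj(A)[j][i] = C_ij.
    return [[(-1) ** (i + j) * det(minor(i, j)) for i in range(n)]
            for j in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1, 3], [1, 4, 5], [2, -1, 2]]   # the matrix from Activity 22.6
d = det(A)
assert matmul(A, adj(A)) == [[d, 0, 0], [0, d, 0], [0, 0, d]]
```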

One consequence of the formula \(A^{-1} = \frac{1}{\det(A)} \adj(A)\) is Cramer's rule, which describes the solution to the equation \(A \vx = \vb\text{.}\)

Activity 22.7.

Let \(A = \left[ \begin{array}{cc} 3\amp 1 \\ 4\amp 2 \end{array} \right]\text{,}\) and let \(\vb = \left[ \begin{array}{c}2\\6 \end{array} \right]\text{.}\)

(a)

Solve the equation \(A \vx = \vb\) using the inverse of \(A\text{.}\)

(b)

Let \(A_1 = \left[ \begin{array}{cc} 2\amp 1 \\ 6\amp 2 \end{array} \right]\text{,}\) the matrix obtained by replacing the first column of \(A\) with \(\vb\text{.}\) Calculate \(\frac{\det(A_1)}{\det(A)}\) and compare to your solution from part (a). What do you notice?

(c)

Now let \(A_2 = \left[ \begin{array}{cc} 3\amp 2 \\ 4\amp 6 \end{array} \right]\text{,}\) the matrix obtained by replacing the second column of \(A\) with \(\vb\text{.}\) Calculate \(\frac{\det(A_2)}{\det(A)}\) and compare to your solution from part (a). What do you notice?

The result from Activity 22.7 may seem a bit strange, but turns out to be true in general. The result is called Cramer's Rule.

To see why Cramer's Rule works in general, let \(A\) be an \(n \times n\) invertible matrix and \(\vb = [b_1 \ b_2 \ \cdots \ b_n]^{\tr}\text{.}\) The solution to \(A \vx = \vb\) is

\begin{equation*} \vx = A^{-1} \vb = \frac{1}{\det(A)} \adj(A) \vb = \frac{1}{\det(A)}\left[ \begin{array}{cccc} C_{11} \amp C_{21} \amp \cdots \amp C_{n1} \\ C_{12} \amp C_{22} \amp \cdots \amp C_{n2} \\ \vdots \amp \vdots \amp \amp \vdots \\ C_{1n} \amp C_{2n} \amp \cdots \amp C_{nn} \end{array} \right] \left[ \begin{array}{c} b_1 \\ b_2 \\ \vdots \\ b_n \end{array} \right]\text{.} \end{equation*}

Expanding the product gives us

\begin{equation*} \vx = \frac{1}{\det(A)}\left[ \begin{array}{c} b_1C_{11} + b_2C_{21} + \cdots + b_nC_{n1} \\ b_1C_{12} + b_2C_{22} + \cdots + b_nC_{n2} \\ \vdots \\ b_1C_{1n} + b_2C_{2n} + \cdots + b_nC_{nn} \end{array} \right]\text{.} \end{equation*}

The expression

\begin{equation*} b_1C_{1j} + b_2C_{2j} + \cdots + b_nC_{nj} \end{equation*}

is the cofactor expansion of the matrix

\begin{equation*} A_j = \left[ \begin{array}{cccccccc} a_{11} \amp a_{12} \amp \cdots \amp a_{1j-1} \amp \cellcolor[gray]{.9} b_1 \amp a_{1j+1} \amp \cdots \amp a_{1n} \\ a_{21} \amp a_{22} \amp \cdots \amp a_{2j-1} \amp \cellcolor[gray]{.9} b_2 \amp a_{2j+1} \amp \cdots \amp a_{2n} \\ \vdots \amp \vdots \amp \amp \amp \cellcolor[gray]{.9} \amp \amp \amp \vdots \\ a_{n1} \amp a_{n2} \amp \cdots \amp a_{nj-1} \amp \cellcolor[gray]{.9} b_n \amp a_{nj+1} \amp \cdots \amp a_{nn} \end{array} \right] \end{equation*}

along the \(j\)th column, giving us the formula in Cramer's Rule.

Cramer's Rule is not a computationally efficient method. To find a solution to a linear system of \(n\) equations in \(n\) unknowns using Cramer's Rule requires calculating \(n+1\) determinants of \(n \times n\) matrices — quite inefficient when \(n\) is 3 or greater. Our standard method of solving systems using Gaussian elimination is much more efficient. However, Cramer's Rule does provide a formula for the solution to \(A \vx = \vb\) as long as \(A\) is invertible.
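Despite its inefficiency, Cramer's Rule is easy to implement for small systems. The following Python sketch (the helper names are our own) solves the system from Activity 22.7 and verifies the solution:

```python
def det2(M):
    # Determinant of a 2x2 matrix.
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def replace_col(A, j, b):
    # A_j: the matrix A with column j replaced by the vector b.
    return [[b[i] if k == j else A[i][k] for k in range(len(A))]
            for i in range(len(A))]

A = [[3, 1], [4, 2]]      # coefficient matrix from Activity 22.7
b = [2, 6]

# Cramer's Rule: x_j = det(A_j) / det(A).
x = [det2(replace_col(A, j, b)) / det2(A) for j in range(2)]

# Verify that A x = b.
for i in range(2):
    assert sum(A[i][k] * x[k] for k in range(2)) == b[i]
```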

Subsection The Determinant of the Transpose

In this section we establish the fact that the determinant of a square matrix is the same as the determinant of its transpose.

The result is easily verified for \(2 \times 2\) matrices, so we will proceed by induction and assume that the determinant of any \((n-1) \times (n-1)\) matrix is the same as the determinant of its transpose. Suppose \(A = [a_{ij}]\) is an \(n \times n\) matrix. By definition,

\begin{equation*} \det(A) = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13} + \cdots + a_{1n}C_{1n} \end{equation*}

and

\begin{equation*} \det(A^{\tr}) = a_{11}C_{11} + a_{21}C_{21} + a_{31}C_{31} + \cdots + a_{n1}C_{n1}\text{.} \end{equation*}

Note that the only term in either determinant that contains \(a_{11}\) is \(a_{11}C_{11}\text{.}\) This term is the same in both determinants, so we proceed to examine other elements. Let us consider all terms in the cofactor expansion for \(\det(A^{\tr})\) that contain \(a_{i1}a_{1j}\text{.}\) The only summand that contains \(a_{i1}\) is \(a_{i1}C_{i1}\text{.}\) Letting \(A_{ij}\) be the sub-matrix of \(A\) obtained by deleting the \(i\)th row and \(j\)th column, we see that \(a_{i1}C_{i1} = (-1)^{i+1}a_{i1}\det(A_{i1})\text{.}\) Now let's examine the sub-matrix \(A_{i1}\text{:}\)

\begin{equation*} \left[ \begin{array}{ccccccc} \cellcolor[gray]{.9} a_{12} \amp \cellcolor[gray]{.9} a_{13} \amp \cellcolor[gray]{.9} \cdots \amp \cellcolor[gray]{.9} a_{1j} \amp \cellcolor[gray]{.9} \cdots \amp \cellcolor[gray]{.9} a_{1n-1} \amp \cellcolor[gray]{.9} a_{1n} \\ a_{22} \amp a_{23} \amp \cdots \amp \cellcolor[gray]{.9} a_{2j} \amp \cdots \amp a_{2n-1} \amp a_{2n} \\ \vdots \amp \amp \ddots \amp \cellcolor[gray]{.9} \vdots \amp \ddots \amp \amp \\ a_{i-12} \amp a_{i-13} \amp \cdots \amp \cellcolor[gray]{.9} a_{i-1j} \amp \cdots \amp a_{i-1n-1} \amp a_{i-1n} \\ a_{i+12} \amp a_{i+13} \amp \cdots \amp \cellcolor[gray]{.9} a_{i+1j} \amp \cdots \amp a_{i+1n-1} \amp a_{i+1n} \\ a_{n2} \amp a_{n3} \amp \cdots \amp \cellcolor[gray]{.9} a_{nj} \amp \cdots \amp a_{nn-1} \amp a_{nn} \end{array} \right] \end{equation*}

When we expand along the first row to calculate \(\det(A_{i1})\text{,}\) the only term that will involve \(a_{1j}\) is

\begin{equation*} (-1)^{j-1+1}a_{1j}\det(A_{i1, 1j})\text{,} \end{equation*}

where \(A_{ik,jm}\) denotes the sub-matrix of \(A\) obtained by deleting rows \(i\) and \(k\) and columns \(j\) and \(m\) from \(A\text{.}\) So the term that contains \(a_{i1}a_{1j}\) in the cofactor expansion for \(\det(A^{\tr})\) is

\begin{equation} (-1)^{i+1}a_{i1}(-1)^{j}a_{1j}\det(A_{i1, 1j}) = (-1)^{i+j+1} a_{i1}a_{1j}\det(A_{i1, 1j})\text{.}\tag{22.5} \end{equation}

Now we examine the cofactor expansion for \(\det(A)\) to find the terms that contain \(a_{i1}a_{1j}\text{.}\) The quantity \(a_{1j}\) only appears in the cofactor expansion as

\begin{equation*} a_{1j}C_{1j} = (-1)^{1+j}a_{1j}\det(A_{1j})\text{.} \end{equation*}

Now let's examine the sub-matrix \(A_{1j}\text{:}\)

\begin{equation*} \left[ \begin{array}{ccccccc} \cellcolor[gray]{.9} a_{21} \amp a_{22} \amp \cdots \amp a_{2j-1} \amp a_{2j+1} \amp \cdots \amp a_{2n} \\ \cellcolor[gray]{.9} a_{31} \amp a_{32} \amp \cdots \amp a_{3j-1} \amp a_{3j+1} \amp \cdots \amp a_{3n} \\ \cellcolor[gray]{.9} \vdots \amp \amp \ddots \amp \vdots \amp \ddots \amp \amp \\ \cellcolor[gray]{.9} a_{i1} \amp \cellcolor[gray]{.9} a_{i2} \amp \cellcolor[gray]{.9} \cdots \amp \cellcolor[gray]{.9} a_{ij-1} \amp \cellcolor[gray]{.9} a_{ij+1} \amp \cellcolor[gray]{.9} \cdots \amp \cellcolor[gray]{.9} a_{in} \\ \cellcolor[gray]{.9} \vdots \amp \amp \amp \vdots \amp \amp \vdots \amp \\ \cellcolor[gray]{.9} a_{n1} \amp a_{n2} \amp \cdots \amp a_{nj-1} \amp a_{nj+1} \amp \cdots \amp a_{nn} \end{array} \right] \end{equation*}

Here is where we use the induction hypothesis. Since \(A_{1j}\) is an \((n-1) \times (n-1)\) matrix, its determinant can be found with a cofactor expansion down the first column. The only term in this cofactor expansion that will involve \(a_{i1}\) is

\begin{equation*} (-1)^{i-1+1}a_{i1} \det(A_{1i, j1})\text{.} \end{equation*}

So the term that contains \(a_{i1}a_{1j}\) in the cofactor expansion for \(\det(A)\) is

\begin{equation} (-1)^{1+j}a_{1j}(-1)^{i-1+1}a_{i1} \det(A_{1i, j1}) = (-1)^{i+j+1} a_{i1}a_{1j}\det(A_{1i, j1})\text{.}\tag{22.6} \end{equation}

Since the quantities in (22.5) and (22.6) are equal, we conclude that the terms in the two cofactor expansions are the same and

\begin{equation*} \det(A^{\tr}) = \det(A)\text{.} \end{equation*}
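This result is easy to test numerically. The following Python sketch (cofactor expansion along the first row; the helper names are our own) confirms \(\det(A^{\tr}) = \det(A)\) on a sample matrix:

```python
def det(M):
    # Recursive cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def transpose(M):
    return [list(col) for col in zip(*M)]

A = [[1, 1, 2], [2, 2, 6], [-1, 2, 1]]
assert det(transpose(A)) == det(A)
```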

Subsection Row Swaps and Determinants

In this section we determine the effect of row swaps on the determinant. Let \(E_{rs}\) be the elementary matrix that swaps rows \(r\) and \(s\) in the \(n \times n\) matrix \(A=[a_{ij}]\text{.}\) Applying \(E_{12}\) to a \(2 \times 2\) matrix \(A = \left[ \begin{array}{cc} a \amp b \\ c \amp d \end{array} \right]\text{,}\) we see that

\begin{equation*} \det(A) = ad - bc = -(ad-bc) = \det\left(\left[ \begin{array}{cc} c \amp d \\ a \amp b \end{array} \right]\right) = \det(E_{12}A)\text{.} \end{equation*}

So swapping rows in a \(2 \times 2\) matrix multiplies the determinant by \(-1\text{.}\) Suppose that row swapping on any \((n-1) \times (n-1)\) matrix multiplies the determinant by \(-1\) (in other words, we are proving our statement by mathematical induction). Now suppose \(A\) is an \(n \times n\) matrix and let \(B = [b_{ij}] = E_{rs}A\text{.}\) We first consider the case that \(s = r+1\) — that we swap adjacent rows. We consider two cases, \(r > 1\) and \(r = 1\text{.}\) First let us suppose that \(r > 1\text{.}\) Let \(C_{ij}\) be the \((i,j)\) cofactor of \(A\) and \(C'_{ij}\) the \((i,j)\) cofactor of \(B\text{.}\) We have

\begin{equation*} \det(A) = a_{11}C_{11} + a_{12}C_{12} + \cdots + a_{1n}C_{1n} \end{equation*}

and

\begin{equation*} \det(B) = b_{11}C'_{11} + b_{12}C'_{12} + \cdots + b_{1n}C'_{1n}\text{.} \end{equation*}

Since \(r > 1\text{,}\) it follows that \(a_{1j} = b_{1j}\) for every \(j\text{.}\) For each \(j\text{,}\) the sub-matrix \(B_{1j}\) obtained from \(B\) by deleting the first row and \(j\)th column is the matrix obtained from \(A_{1j}\) by swapping the two rows corresponding to rows \(r\) and \(s\text{.}\) So by our induction hypothesis, we have \(C'_{1j} = -C_{1j}\) for each \(j\text{.}\) Then

\begin{align*} \det(B) \amp = b_{11}C'_{11} + b_{12}C'_{12} + \cdots + b_{1n}C'_{1n}\\ \amp = a_{11}(-C_{11}) + a_{12}(-C_{12}) + \cdots + a_{1n}(-C_{1n})\\ \amp = -(a_{11}C_{11} + a_{12}C_{12} + \cdots + a_{1n}C_{1n})\\ \amp = -\det(A)\text{.} \end{align*}

Now we consider the case where \(r=1\text{,}\) where \(B\) is the matrix obtained from \(A\) by swapping the first and second rows. Here we will use the fact that \(\det(A) = \det(A^{\tr})\) which allows us to calculate \(\det(A)\) and \(\det(B)\) with the cofactor expansions down the first column. In this case we have

\begin{equation*} \det(A) = a_{11}C_{11} + a_{21}C_{21} + \cdots + a_{n1}C_{n1} \end{equation*}

and

\begin{align*} \det(B) \amp = b_{11}C'_{11} + b_{21}C'_{21} + \cdots + b_{n1}C'_{n1}\\ \amp = a_{21}C'_{11} + a_{11}C'_{21} + a_{31}C'_{31} + \cdots + a_{n1}C'_{n1}\text{.} \end{align*}

For each \(i \geq 3\text{,}\) the sub-matrix \(B_{i1}\) is just \(A_{i1}\) with rows 1 and 2 swapped. So we have \(C'_{i1} = -C_{i1}\) by our induction hypothesis. Since we swapped rows 1 and 2, we have \(B_{21} = A_{11}\) and \(B_{11} = A_{21}\text{.}\) Thus,

\begin{equation*} b_{11}C'_{11} = (-1)^{1+1}b_{11}\det(A_{21}) = a_{21}\det(A_{21}) = -a_{21}C_{21} \end{equation*}

and

\begin{equation*} b_{21}C'_{21} = (-1)^{2+1}a_{11}\det(A_{11}) = -a_{11}\det(A_{11}) = -a_{11}C_{11}\text{.} \end{equation*}

Putting this all together gives us

\begin{align*} \det(B) \amp = b_{11}C'_{11} + b_{21}C'_{21} + \cdots + b_{n1}C'_{n1}\\ \amp = -a_{21}C_{21} - a_{11}C_{11} + a_{31}(-C_{31}) + \cdots + a_{n1}(-C_{n1})\\ \amp = -\left(a_{11}C_{11} + a_{21}C_{21} + \cdots + a_{n1}C_{n1}\right)\\ \amp = - \det(A)\text{.} \end{align*}

So we have shown that if \(B\) is obtained from \(A\) by interchanging two adjacent rows, then \(\det(B) = -\det(A)\text{.}\) Now we consider the general case. Suppose \(B\) is obtained from \(A\) by interchanging rows \(r\) and \(s\text{,}\) with \(r \lt s\text{.}\) We can perform this single row interchange through a sequence of adjacent row interchanges. First we swap rows \(r\) and \(r+1\text{,}\) then rows \(r+1\) and \(r+2\text{,}\) and continue until we swap rows \(s-1\) and \(s\text{.}\) This places the original row \(r\) into the row \(s\) position, and the process involved \(s-r\) adjacent row interchanges. Each of these interchanges multiplies the determinant by a factor of \(-1\text{.}\) At the end of this sequence of row swaps, the original row \(s\) is now row \(s-1\text{.}\) So it will take one fewer adjacent row interchange to move this row to be row \(r\text{.}\) This sequence of \((s-r)+(s-r-1) = 2(s-r)-1\) row interchanges produces the matrix \(B\text{.}\) Thus,

\begin{equation*} \det(B) = (-1)^{2(s-r)-1}\det(A) = -\det(A)\text{,} \end{equation*}

and interchanging any two rows multiplies the determinant by \(-1\text{.}\)
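The row-swap property just proved is easy to sanity check numerically. The following sketch (an illustration, not part of the text; the matrix entries are arbitrary) computes determinants by cofactor expansion along the first row and confirms that interchanging any two rows of a sample matrix changes the sign of the determinant.

```python
def det(M):
    """Determinant via cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def swap_rows(M, r, s):
    """Return a copy of M with rows r and s interchanged."""
    N = [row[:] for row in M]
    N[r], N[s] = N[s], N[r]
    return N

A = [[2, 3, 1], [1, 4, 2], [0, 5, 7]]  # arbitrary example matrix
for r in range(3):
    for s in range(r + 1, 3):
        assert det(swap_rows(A, r, s)) == -det(A)
```

A numerical check like this does not replace the induction argument, but it is a quick way to catch sign errors when working examples by hand.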

Subsection Cofactor Expansions

We have stated that the determinant of a matrix can be calculated by using a cofactor expansion along any row or column. In this section we use the result that swapping rows introduces a factor of \(-1\) in the determinant to verify that claim. Note that in proving that \(\det(A^{\tr}) = \det(A)\text{,}\) we have already shown that the cofactor expansion along the first column is the same as the cofactor expansion along the first row. If we can prove that the cofactor expansion along any row gives the same value, then the fact that \(\det(A^{\tr}) = \det(A)\) will imply that the cofactor expansion along any column gives the same value as well.

Now we demonstrate that the cofactor expansions along the first row and the \(i\)th row are the same. Let \(A = [a_{ij}]\) be an \(n \times n\) matrix. The cofactor expansion of \(A\) along the first row is

\begin{equation*} a_{11}C_{11} + a_{12}C_{12} + \cdots + a_{1n}C_{1n} \end{equation*}

and the cofactor expansion along the \(i\)th row is

\begin{equation*} a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in}\text{.} \end{equation*}

Let \(B\) be the matrix obtained by swapping row \(i\) with previous rows so that row \(i\) becomes the first row and the order of the remaining rows is preserved.

\begin{equation*} B = \left[ \begin{array}{cccccc} \cellcolor[gray]{.9} a_{i1} \amp \cellcolor[gray]{.9} a_{i2} \amp \cellcolor[gray]{.9} \cdots \amp \cellcolor[gray]{.9} a_{ij} \amp \cellcolor[gray]{.9} \cdots \amp \cellcolor[gray]{.9} a_{in} \\ \cellcolor[gray]{.9} a_{11} \amp a_{12} \amp \cdots \amp a_{1j} \amp \cdots \amp a_{1n} \\ \cellcolor[gray]{.9} \vdots \amp \vdots \amp \amp \vdots \amp \amp \vdots \\ \cellcolor[gray]{.9} a_{(i-1)1} \amp a_{(i-1)2} \amp \cdots \amp a_{(i-1)j} \amp \cdots \amp a_{(i-1)n} \\ \cellcolor[gray]{.9} a_{(i+1)1} \amp a_{(i+1)2} \amp \cdots \amp a_{(i+1)j} \amp \cdots \amp a_{(i+1)n} \\ \cellcolor[gray]{.9} \vdots \amp \vdots \amp \amp \vdots \amp \amp \vdots \\ \cellcolor[gray]{.9} a_{n1} \amp a_{n2} \amp \cdots \amp a_{nj} \amp \cdots \amp a_{nn} \end{array} \right] \end{equation*}

Then

\begin{equation*} \det(B) = (-1)^{i-1} \det(A)\text{.} \end{equation*}

So, letting \(C'_{ij}\) be the \((i,j)\) cofactor of \(B\) we have

\begin{equation*} \det(A) = (-1)^{i-1} \det(B) = (-1)^{i-1}\left(a_{i1}C'_{11} + a_{i2}C'_{12} + \cdots + a_{in}C'_{1n}\right)\text{.} \end{equation*}

Notice that for each \(j\) we have \(B_{1j} = A_{ij}\text{.}\) So

\begin{align*} \det(A) \amp = (-1)^{i-1}\big(a_{i1}C'_{11} + a_{i2}C'_{12} + \cdots + a_{in}C'_{1n}\big)\\ \amp = (-1)^{i-1}\Big(a_{i1}(-1)^{1+1}\det(B_{11}) + a_{i2}(-1)^{1+2}\det(B_{12})\\ \amp \qquad + \cdots + a_{in}(-1)^{1+n}\det(B_{1n})\Big)\\ \amp = (-1)^{i-1}\Big(a_{i1}(-1)^{1+1}\det(A_{i1}) + a_{i2}(-1)^{1+2}\det(A_{i2})\\ \amp \qquad + \cdots + a_{in}(-1)^{1+n}\det(A_{in})\Big)\\ \amp = a_{i1}(-1)^{i+1}\det(A_{i1}) + a_{i2}(-1)^{i+2}\det(A_{i2})\\ \amp \qquad + \cdots + a_{in}(-1)^{i+n}\det(A_{in})\\ \amp = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in}\text{.} \end{align*}
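The equality of the row expansions can also be checked computationally. The sketch below (an illustration, with an arbitrary test matrix) expands the determinant along each row in turn and confirms that every row gives the same value.

```python
def det_first_row(M):
    """Determinant via cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det_first_row([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def expand_along_row(M, i):
    """Cofactor expansion of det(M) along row i."""
    n = len(M)
    total = 0
    for j in range(n):
        # Minor: delete row i and column j.
        minor = [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]
        total += (-1) ** (i + j) * M[i][j] * det_first_row(minor)
    return total

A = [[1, 2, 0, 3], [4, 1, 1, 0], [0, 2, 5, 1], [3, 0, 1, 2]]  # arbitrary
values = [expand_along_row(A, i) for i in range(4)]
assert all(v == values[0] for v in values)
```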

Subsection The LU Factorization of a Matrix

There are many instances where we have a number of systems to solve of the form \(A \vx = \vb\text{,}\) all with the same coefficient matrix. The system may evolve over time so that we do not know the constant vectors \(\vb\) in the system all at once, but only determine them as time progresses. Each time we obtain a new vector \(\vb\text{,}\) we have to apply the same row operations to reduce the coefficient matrix to solve the new system. This is repetitive and time consuming. Instead, we can keep track of the row operations in one row reduction and save ourselves a significant amount of time. One way of doing this is the \(LU\)-factorization (or decomposition).

To illustrate, suppose we can write the matrix \(A\) as a product \(A = LU\text{,}\) where

\begin{equation*} L = \left[ \begin{array}{rccc} 1\amp 0\amp 0\amp 0\\-1\amp 1\amp 0\amp 0 \\0\amp 1\amp 1\amp 0\\1\amp 0\amp 0\amp 1 \end{array} \right] \ \ \text{ and } \ \ U = \left[ \begin{array}{ccrr} 1\amp 0\amp 1\amp 0\\0\amp 1\amp 3\amp -2 \\0\amp 0\amp 0\amp 3\\0\amp 0\amp 0\amp 0 \end{array} \right]\text{.} \end{equation*}

Let \(\vb = [3 \ 1 \ 1 \ 3]^{\tr}\) and \(\vx = [x_1 \ x_2 \ x_3 \ x_4]^{\tr}\text{,}\) and consider the linear system \(A \vx = \vb\text{.}\) If \(A \vx = \vb\text{,}\) then \(LU \vx = \vb\text{.}\) We can solve this system without applying row operations as follows. Let \(U\vx = \vz\text{,}\) where \(\vz = [z_1 \ z_2 \ z_3 \ z_4]^{\tr}\text{.}\) We can solve \(L\vz = \vb\) by using forward substitution.

The equation \(L \vz = \vb\) is equivalent to the system

\begin{align*} z_1 \amp = 3\\ -z_1 + z_2 \amp = 1\\ z_2 + z_3 \amp = 1\\ z_1 + z_4 \amp = 3\text{.} \end{align*}

The first equation shows that \(z_1=3\text{.}\) Substituting into the second equation gives us \(z_2 = 4\text{.}\) Using this information in the third equation yields \(z_3 = -3\text{,}\) and then the fourth equation shows that \(z_4 = 0\text{.}\) To return to the original system, since \(U\vx = \vz\text{,}\) we now solve this system to find the solution vector \(\vx\text{.}\) In this case, since \(U\) is upper triangular, we use back substitution. The equation \(U\vx = \vz\) is equivalent to the system

\begin{align*} x_1 + x_3 \amp = 3\\ x_2 + 3x_3 - 2x_4 \amp = 4\\ 3x_4 \amp = -3\text{.} \end{align*}

Note that the third column of \(U\) is not a pivot column, so \(x_3\) is a free variable. The last equation shows that \(x_4=-1\text{.}\) Substituting into the second equation and solving for \(x_2\) yields \(x_2 = 2-3x_3\text{.}\) The first equation then gives us \(x_1 = 3-x_3\text{.}\) So the general solution

\begin{equation*} \vx = \left[ \begin{array}{r} 3\\2\\0\\-1 \end{array} \right] + \left[ \begin{array}{r} -1\\-3\\1\\0 \end{array} \right]x_3 \end{equation*}

to \(A \vx = \vb\) can be found through \(L\) and \(U\) via forward and backward substitution. If we can find a factorization of a matrix \(A\) into a lower triangular matrix \(L\) and an upper triangular matrix \(U\text{,}\) then \(A= LU\) is called an \(LU\)-factorization or \(LU\)-decomposition.
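As a check on the computation above, the following sketch (pure Python, using the \(L\text{,}\) \(U\text{,}\) and \(\vb\) from this discussion) multiplies \(L\) and \(U\) back together and verifies that the particular part of the general solution satisfies \(A\vx = \vb\) and that the free-variable direction is in the null space of \(A\text{.}\)

```python
def mat_mul(P, Q):
    """Matrix product of P and Q (matrices as lists of rows)."""
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def mat_vec(M, v):
    """Matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

L = [[1, 0, 0, 0], [-1, 1, 0, 0], [0, 1, 1, 0], [1, 0, 0, 1]]
U = [[1, 0, 1, 0], [0, 1, 3, -2], [0, 0, 0, 3], [0, 0, 0, 0]]
A = mat_mul(L, U)
b = [3, 1, 1, 3]

x_part = [3, 2, 0, -1]   # particular solution from the text
x_hom = [-1, -3, 1, 0]   # direction attached to the free variable x_3

assert mat_vec(A, x_part) == b
assert mat_vec(A, x_hom) == [0, 0, 0, 0]
```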

We can use elementary matrices to obtain a factorization of certain matrices into products of lower triangular (the “L” in LU) and upper triangular (the “U” in LU) matrices. We illustrate with an example. Let

\begin{equation*} A = \left[ \begin{array}{rccr} 1\amp 0\amp 1\amp 0\\-1\amp 1\amp 2\amp -2 \\0\amp 1\amp 3\amp 1\\1\amp 0\amp 1\amp 0 \end{array} \right]\text{.} \end{equation*}

Our goal is to find an upper triangular matrix \(U\) and a lower triangular matrix \(L\) so that \(A = LU\text{.}\) We begin by row reducing \(A\) to an upper triangular matrix, keeping track of the elementary matrices used to perform the row operations. We start by replacing the entries below the \((1,1)\) entry in \(A\) with zeros. The elementary matrices that perform these operations are

\begin{equation*} E_1 = \left[ \begin{array}{cccc} 1\amp 0\amp 0\amp 0\\1\amp 1\amp 0\amp 0 \\0\amp 0\amp 1\amp 0\\0\amp 0\amp 0\amp 1 \end{array} \right] \ \ \text{ and } \ \ E_2 = \left[ \begin{array}{rccc} 1\amp 0\amp 0\amp 0\\0\amp 1\amp 0\amp 0 \\0\amp 0\amp 1\amp 0\\-1\amp 0\amp 0\amp 1 \end{array} \right]\text{,} \end{equation*}

and

\begin{equation*} E_2E_1A = \left[ \begin{array}{ccrr} 1\amp 0\amp 1\amp 0\\0\amp 1\amp 3\amp -2 \\0\amp 1\amp 3\amp 1\\0\amp 0\amp 0\amp 0 \end{array} \right]\text{.} \end{equation*}

We next zero out the entries below the \((2,2)\) entry as

\begin{equation*} E_3E_2E_1A = \left[ \begin{array}{ccrr} 1\amp 0\amp 1\amp 0\\0\amp 1\amp 3\amp -2 \\0\amp 0\amp 0\amp 3\\0\amp 0\amp 0\amp 0 \end{array} \right]\text{,} \end{equation*}

where

\begin{equation*} E_3 = \left[ \begin{array}{crcc} 1\amp 0\amp 0\amp 0\\0\amp 1\amp 0\amp 0 \\0\amp -1\amp 1\amp 0\\0\amp 0\amp 0\amp 1 \end{array} \right]\text{.} \end{equation*}

The product \(E_3E_2E_1A\) is an upper triangular matrix \(U\text{.}\) So we have

\begin{equation*} E_3E_2E_1A = U \end{equation*}

and

\begin{equation*} A = E_1^{-1}E_2^{-1}E_3^{-1}U\text{,} \end{equation*}

where

\begin{equation*} E_1^{-1}E_2^{-1}E_3^{-1} = \left[ \begin{array}{rccc} 1\amp 0\amp 0\amp 0\\-1\amp 1\amp 0\amp 0 \\0\amp 1\amp 1\amp 0\\1\amp 0\amp 0\amp 1 \end{array} \right] \end{equation*}

is a lower triangular matrix \(L\text{.}\) So we have decomposed the matrix \(A\) into a product \(A = LU\text{,}\) where \(L\) is lower triangular and \(U\) is upper triangular. Since every matrix is row equivalent to a matrix in row echelon form, we can always find an upper triangular matrix \(U\) in this way. However, we may not always obtain a corresponding lower triangular matrix, as the next example illustrates.
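The elementary-matrix bookkeeping above can be replayed numerically. The sketch below (an illustration using the matrices from this example) forms \(U = E_3E_2E_1A\) and \(L = E_1^{-1}E_2^{-1}E_3^{-1}\) and confirms that \(A = LU\) with \(U\) upper triangular and \(L\) lower triangular.

```python
def mat_mul(P, Q):
    """Matrix product of P and Q (matrices as lists of rows)."""
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

A = [[1, 0, 1, 0], [-1, 1, 2, -2], [0, 1, 3, 1], [1, 0, 1, 0]]

E1 = [[1, 0, 0, 0], [1, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
E2 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [-1, 0, 0, 1]]
E3 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, -1, 1, 0], [0, 0, 0, 1]]

# The inverse of an elementary matrix just undoes the row operation.
E1i = [[1, 0, 0, 0], [-1, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
E2i = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 1]]
E3i = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 1, 1, 0], [0, 0, 0, 1]]

U = mat_mul(E3, mat_mul(E2, mat_mul(E1, A)))
L = mat_mul(E1i, mat_mul(E2i, E3i))

assert mat_mul(L, U) == A
assert all(U[i][j] == 0 for i in range(4) for j in range(i))          # U upper triangular
assert all(L[i][j] == 0 for i in range(4) for j in range(i + 1, 4))   # L lower triangular
```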

Suppose we change the problem slightly and consider the matrix

\begin{equation*} B = \left[ \begin{array}{rccr} 1\amp 0\amp 1\amp 0\\-1\amp 1\amp 2\amp -2 \\0\amp 1\amp 3\amp 1\\1\amp 0\amp 0\amp 1 \end{array} \right]\text{.} \end{equation*}

Using the same elementary matrices \(E_1\text{,}\) \(E_2\text{,}\) and \(E_3\) as earlier, we have

\begin{equation*} E_3E_2E_1B = \left[ \begin{array}{ccrr} 1\amp 0\amp 1\amp 0\\0\amp 1\amp 3\amp -2 \\0\amp 0\amp 0\amp 3\\0\amp 0\amp -1\amp 1 \end{array} \right]\text{.} \end{equation*}

To reduce \(B\) to row-echelon form now requires a row interchange. Letting

\begin{equation*} E_4 = \left[ \begin{array}{crcc} 1\amp 0\amp 0\amp 0\\0\amp 1\amp 0\amp 0 \\0\amp 0\amp 0\amp 1\\0\amp 0\amp 1\amp 0 \end{array} \right] \end{equation*}

brings us to

\begin{equation*} E_4E_3E_2E_1B = \left[ \begin{array}{ccrr} 1\amp 0\amp 1\amp 0\\0\amp 1\amp 3\amp -2 \\0\amp 0\amp -1\amp 1\\0\amp 0\amp 0\amp 3 \end{array} \right]\text{.} \end{equation*}

So in this case we have \(U = E_4E_3E_2E_1B\text{,}\) but

\begin{equation*} E_1^{-1}E_2^{-1}E_3^{-1}E_4^{-1} = \left[ \begin{array}{rccc} 1\amp 0\amp 0\amp 0\\-1\amp 1\amp 0\amp 0 \\0\amp 1\amp 0\amp 1\\1\amp 0\amp 1\amp 0 \end{array} \right] \end{equation*}

is not lower triangular. The difference in this latter example is that we needed a row swap to obtain the upper triangular form.

Subsection Examples

What follows are worked examples that use the concepts from this section.

Example 22.8.

(a)

If \(A\) and \(B\) are \(n\times n\) matrices with \(\det(A)=3\) and \(\det(B)=2\text{,}\) evaluate each of the following determinants. Briefly justify.

(i)

\(\det(A^{-1})\)

Solution.

Assume that \(\det(A)=3\) and \(\det(B)=2\text{.}\)

Since \(\det(A) \neq 0\text{,}\) we know that \(A\) is invertible. Since \(1 = \det(I_n) = \det(AA^{-1}) = \det(A) \det(A^{-1})\text{,}\) it follows that \(\det(A^{-1}) = \frac{1}{\det(A)} = \frac{1}{3}\text{.}\)

(ii)

\(\det(ABA^{\mathsf{T}})\)

Solution.

Assume that \(\det(A)=3\) and \(\det(B)=2\text{.}\)

We know that \(\det(A^{\tr}) = \det(A)\text{,}\) so

\begin{align*} \det(ABA^\tr) \amp = \det(A) \det(B) \det(A^{\tr})\\ \amp = \det(A) \det(B) \det(A)\\ \amp = (3)(2)(3)\\ \amp = 18\text{.} \end{align*}
(iii)

\(\det(A^3(BA)^{-1}(AB)^2)\)

Solution.

Assume that \(\det(A)=3\) and \(\det(B)=2\text{.}\)

Using properties of determinants gives us

\begin{align*} \det(A^3(BA)^{-1}(AB)^2) \amp = \det(A^3)\det((BA)^{-1}) \det((AB)^2)\\ \amp = (\det(A))^3 \left(\frac{1}{\det(AB)}\right) (\det(AB))^2\\ \amp = 27 \left(\frac{1}{\det(A) \det(B)} \right) (\det(A)\det(B))^2\\ \amp = \frac{(27)(6^2)}{6}\\ \amp = 162\text{.} \end{align*}
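The determinant identities used in part (a) can be illustrated on concrete matrices. The sketch below is only an illustration: the \(2\times 2\) matrices are arbitrary choices with \(\det(A)=3\) and \(\det(B)=2\) to match the example, and exact fractions are used for the inverse.

```python
from fractions import Fraction as F

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def mul2(P, Q):
    """Product of two 2x2 matrices."""
    return [[P[i][0] * Q[0][j] + P[i][1] * Q[1][j] for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    d = F(det2(M))
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

A = [[2, 1], [1, 2]]   # det(A) = 3
B = [[1, 1], [0, 2]]   # det(B) = 2
At = [[A[j][i] for j in range(2)] for i in range(2)]  # transpose of A

assert det2(mul2(A, B)) == det2(A) * det2(B)      # product rule
assert det2(inv2(A)) == F(1, det2(A))             # inverse rule
assert det2(At) == det2(A)                        # transpose rule
assert det2(mul2(mul2(A, B), At)) == 18           # det(A B A^T) as in (ii)
```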
(b)

If the determinant of \(\left[ \begin{array}{ccc} a\amp b\amp c\\d\amp e\amp f\\g\amp h\amp i \end{array} \right]\) is \(m\text{,}\) find the determinant of each of the following matrices.

(i)

\(\left[ \begin{array}{ccc} a\amp b\amp c\\2d\amp 2e\amp 2f\\g\amp h\amp i \end{array} \right]\)

Solution.

Assume that \(\det\left(\left[ \begin{array}{ccc} a\amp b\amp c\\d\amp e\amp f\\g\amp h\amp i \end{array} \right] \right) = m\text{.}\)

Multiplying a row by a scalar multiplies the determinant by that scalar, so

\begin{equation*} \det\left( \left[ \begin{array}{ccc} a\amp b\amp c\\2d\amp 2e\amp 2f\\g\amp h\amp i \end{array} \right] \right) = 2m\text{.} \end{equation*}
(ii)

\(\left[ \begin{array}{ccc} d\amp e\amp f\\g\amp h\amp i\\a\amp b\amp c \end{array} \right]\)

Solution.

Assume that \(\det\left(\left[ \begin{array}{ccc} a\amp b\amp c\\d\amp e\amp f\\g\amp h\amp i \end{array} \right] \right) = m\text{.}\)

Interchanging two rows multiplies the determinant by \(-1\text{.}\) It takes two row swaps in the original matrix to obtain this one, so

\begin{equation*} \det\left( \left[ \begin{array}{ccc} d\amp e\amp f\\g\amp h\amp i\\a\amp b\amp c \end{array} \right] \right) = (-1)^2m = m\text{.} \end{equation*}
(iii)

\(\left[ \begin{array}{ccc} a\amp b\amp c\\g-2d\amp h-2e\amp i-2f\\a+d\amp b+e\amp c+f \end{array} \right]\)

Solution.

Assume that \(\det\left(\left[ \begin{array}{ccc} a\amp b\amp c\\d\amp e\amp f\\g\amp h\amp i \end{array} \right] \right) = m\text{.}\)

Adding a multiple of one row to another does not change the determinant of the matrix. This matrix can be obtained from the original by swapping rows 2 and 3 and then adding multiples of rows to other rows, so

\begin{equation*} \det\left(\left[ \begin{array}{ccc} a\amp b\amp c\\g-2d\amp h-2e\amp i-2f\\a+d\amp b+e\amp c+f \end{array} \right]\right) = -m\text{.} \end{equation*}
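The three effects used in part (b) can be double-checked on a concrete matrix. The sketch below (an illustration; the numeric entries are arbitrary) verifies the answers \(2m\text{,}\) \(m\text{,}\) and \(-m\) for a sample \(3\times 3\) matrix.

```python
def det3(M):
    """3x3 determinant by direct expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

a, b, c = 1, 2, 3
d, e, f = 4, 5, 6
g, h, i = 7, 8, 10  # arbitrary entries with nonzero determinant
m = det3([[a, b, c], [d, e, f], [g, h, i]])

# (i) doubling row 2 doubles the determinant
assert det3([[a, b, c], [2 * d, 2 * e, 2 * f], [g, h, i]]) == 2 * m
# (ii) a cyclic shift of the rows is two swaps, so the determinant is unchanged
assert det3([[d, e, f], [g, h, i], [a, b, c]]) == m
# (iii) one swap plus row replacements negates the determinant
assert det3([[a, b, c],
             [g - 2 * d, h - 2 * e, i - 2 * f],
             [a + d, b + e, c + f]]) == -m
```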

Example 22.9.

Let \(A = \left[ \begin{array}{ccr} 2\amp 8\amp 0\\2\amp 2\amp -3\\1\amp 2\amp 7 \end{array} \right]\text{.}\)

(a)

Find an LU factorization for \(A\text{.}\)

Solution.

We row reduce \(A\) to an upper triangular matrix by applying elementary matrices. First notice that if \(E_1 = \left[ \begin{array}{rcc} 1\amp 0\amp 0 \\-1\amp 1\amp 0 \\ 0\amp 0\amp 1 \end{array} \right]\text{,}\) then

\begin{equation*} E_1 A = \left[ \begin{array}{crr} 2\amp 8\amp 0 \\ 0\amp -6\amp -3 \\ 1\amp 2\amp 7 \end{array} \right]\text{.} \end{equation*}

Letting \(E_2 = \left[ \begin{array}{rcc} 1\amp 0\amp 0 \\0\amp 1\amp 0 \\ -\frac{1}{2}\amp 0\amp 1 \end{array} \right]\) gives us

\begin{equation*} E_2E_1A = \left[ \begin{array}{crr} 2\amp 8\amp 0 \\ 0\amp -6\amp -3 \\ 0\amp -2\amp 7 \end{array} \right]\text{.} \end{equation*}

Finally, when \(E_3 = \left[ \begin{array}{crc} 1\amp 0\amp 0 \\0\amp 1\amp 0 \\ 0\amp -\frac{1}{3}\amp 1 \end{array} \right]\) we have

\begin{equation*} U=E_3E_2E_1A = \left[ \begin{array}{crr} 2\amp 8\amp 0 \\ 0\amp -6\amp -3 \\ 0\amp 0\amp 8 \end{array} \right]\text{.} \end{equation*}

This gives us \(E_3E_2E_1A = U\text{,}\) so we can take

\begin{equation*} L = E_1^{-1}E_2^{-1}E_3^{-1} = \left[ \begin{array}{ccc} 1\amp 0\amp 0 \\1\amp 1\amp 0 \\ \frac{1}{2}\amp \frac{1}{3}\amp 1 \end{array} \right]\text{.} \end{equation*}
(b)

Use the LU factorization with forward substitution and back substitution to solve the system \(A \vx = [18 \ 3 \ 12]^{\tr}\text{.}\)

Solution.

To solve the system \(A \vx = \vb\text{,}\) where \(\vb = [18 \ 3 \ 12]^{\tr}\text{,}\) we use the LU factorization of \(A\) and solve \(LU \vx = \vb\text{.}\) Let \(\vx = [x_1 \ x_2 \ x_3]^{\tr}\) and let \(\vz = [z_1 \ z_2 \ z_3]^{\tr}\) with \(U \vx = \vz\) so that \(L \vz = L(U\vx) = A\vx = \vb\text{.}\) First we solve \(L \vz = [18 \ 3 \ 12]^{\tr}\) to find \(\vz\) using forward substitution. The first row of \(L\) shows that \(z_1 = 18\) and the second row that \(z_1 + z_2 = 3\text{.}\) So \(z_2 = -15\text{.}\) The third row of \(L\) gives us \(\frac{1}{2}z_1 + \frac{1}{3}z_2 + z_3 = 12\text{,}\) so \(z_3 = 12 - 9 + 5 = 8\text{.}\) Now to find \(\vx\) we solve \(U \vx = \vz\) using back substitution. The third row of \(U\) tells us that \(8x_3 = 8\) or that \(x_3 = 1\text{.}\) The second row of \(U\) shows that \(-6x_2-3x_3 = -15\) or \(x_2 =2\text{.}\) Finally, the first row of \(U\) gives us \(2x_1+8x_2 = 18\text{,}\) or \(x_1 = 1\text{.}\) So the solution to \(A \vx = \vb\) is \(\vx = [1 \ 2 \ 1]^{\tr}\text{.}\)
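The forward and back substitution in this solution can be sketched in code. Below is an illustrative implementation (not from the text) using the \(L\text{,}\) \(U\text{,}\) and \(\vb\) of this example; exact fractions keep the arithmetic free of rounding error.

```python
from fractions import Fraction as F

def forward_sub(L, b):
    """Solve L z = b for lower triangular L with nonzero diagonal."""
    z = []
    for i in range(len(b)):
        z.append((b[i] - sum(L[i][j] * z[j] for j in range(i))) / L[i][i])
    return z

def back_sub(U, z):
    """Solve U x = z for upper triangular U with nonzero diagonal."""
    n = len(z)
    x = [F(0)] * n
    for i in reversed(range(n)):
        x[i] = (z[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L = [[F(1), F(0), F(0)], [F(1), F(1), F(0)], [F(1, 2), F(1, 3), F(1)]]
U = [[F(2), F(8), F(0)], [F(0), F(-6), F(-3)], [F(0), F(0), F(8)]]
b = [F(18), F(3), F(12)]

z = forward_sub(L, b)   # expect z = [18, -15, 8]
x = back_sub(U, z)      # expect x = [1, 2, 1]
assert z == [F(18), F(-15), F(8)]
assert x == [F(1), F(2), F(1)]
```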

Subsection Summary

  • The elementary row operations have the following effects on the determinant:

    • If we multiply a row of a matrix by a constant \(k\text{,}\) then the determinant is multiplied by \(k\text{.}\)

    • If we swap two rows of a matrix, then the determinant changes sign.

    • If we add a multiple of a row of a matrix to another, the determinant does not change.

  • Each of the elementary row operations can be achieved by multiplication by elementary matrices. To obtain the elementary matrix corresponding to an elementary row operation, we perform the operation on the identity matrix.

  • Let \(A\) be an \(n\times n\) invertible matrix. For any \(\vb\) in \(\R^n\text{,}\) the solution \(\vx\) of \(A\vx=\vb\) has entries

    \begin{equation*} x_i =\frac{\det(A_i(\vb))}{\det(A)} \end{equation*}

    where \(A_i(\vb)\) represents the matrix formed by replacing \(i\)th column of \(A\) with \(\vb\text{.}\)

  • Let \(A\) be an invertible \(n\times n\) matrix. Then

    \begin{equation*} A^{-1} = \frac{1}{\det(A)} \text{ adj } A \end{equation*}

    where the \(\text{ adj } A\) matrix, the adjugate of \(A\), is defined as the matrix whose \(ij\)-th entry is \(C_{ji}\text{,}\) the \(ji\)-th cofactor of \(A\text{.}\)

  • For a \(2\times 2\) matrix \(A\text{,}\) the area of the image of the unit square under the transformation \(T(\vx)=A\vx\) is equal to \(|\det(A)|\text{,}\) which is also equal to the area of the parallelogram defined by the columns of \(A\text{.}\)

  • For a \(3\times 3\) matrix \(A\text{,}\) the volume of the image of the unit cube under the transformation \(T(\vx)=A\vx\) is equal to \(|\det(A)|\text{,}\) which is also equal to the volume of the parallelepiped defined by the columns of \(A\text{.}\)

  • An \(LU\) factorization of a square matrix \(A\) consists of a lower triangular matrix \(L\) and an upper triangular matrix \(U\) so that \(A = LU\text{.}\)

  • A square matrix \(A\) has an \(LU\) factorization if we can use row operations without row interchanges to row reduce \(A\) to an upper triangular matrix \(U\text{.}\) In this situation the elementary matrices that perform the row operations produce a lower triangular matrix \(L\) so that \(A = LU\text{.}\) If \(A\) cannot be reduced to an upper triangular matrix \(U\) without row interchanges, then we can factor \(A\) in the form \(PLU\text{,}\) where \(L\) is a lower triangular matrix, \(U\) is an upper triangular matrix, and \(P\) is obtained from the identity matrix by appropriate row interchanges.

  • There are many instances where we have a number of systems to solve of the form \(A \vx = \vb\text{,}\) all with the same coefficient matrix but where the vectors \(\vb\) can change. With an \(LU\) factorization, we can keep track of the row operations in one row reduction and save ourselves a significant amount of time when solving these systems.
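The Cramer's rule formula stated in the summary is easy to illustrate directly. The sketch below (an arbitrary \(2\times 2\) example, using exact fractions) builds each \(A_i(\vb)\) by replacing a column of \(A\) with \(\vb\) and checks that the resulting \(\vx\) actually solves \(A\vx = \vb\text{.}\)

```python
from fractions import Fraction as F

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cramer_2x2(A, b):
    """Solve A x = b via Cramer's rule: x_i = det(A_i(b)) / det(A)."""
    d = det2(A)
    x = []
    for i in range(2):
        Ai = [row[:] for row in A]
        for r in range(2):
            Ai[r][i] = b[r]        # replace column i of A with b
        x.append(F(det2(Ai), d))
    return x

A = [[2, 1], [1, 3]]   # arbitrary invertible matrix, det(A) = 5
b = [3, 5]
x = cramer_2x2(A, b)

# x solves the system exactly
assert [A[i][0] * x[0] + A[i][1] * x[1] for i in range(2)] == b
```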

Exercises Exercises

1.

Find a formula for \(\det(rA)\) in terms of \(r\) and \(\det(A)\text{,}\) where \(A\) is an \(n\times n\) matrix and \(r\) is a scalar. Explain why your formula is valid.

2.

Find \(\det(A)\) by hand using elementary row operations where

\begin{equation*} A= \left[ \begin{array}{rrrr} 1\amp 2\amp -1\amp 3\\ -1\amp -2\amp 3\amp -1\\ -2\amp -1\amp 2\amp -3\\ 1\amp 8\amp -3\amp 8 \end{array} \right] \,\text{.} \end{equation*}

3.

Consider the matrix \(A= \left[ \begin{array}{rrrr} 4\amp -1\amp -1\amp -1 \\ -1\amp 4\amp -1\amp -1\\ -1\amp -1\amp 4\amp -1\\ -1\amp -1\amp -1\amp 4 \end{array} \right]\text{.}\) We will find \(\det(A)\) using elementary row operations. (This matrix arises in graph theory, and its determinant gives the number of spanning trees in the complete graph with 5 vertices. This number is also equal to the number of labeled trees with 5 vertices.)

(a)

Add rows \(R_2\text{,}\) \(R_3\) and \(R_4\) to the first row in that order.

(b)

Then add the new \(R_1\) to rows \(R_2\text{,}\) \(R_3\) and \(R_4\) to get a triangular matrix \(B\text{.}\)

(c)

Find the determinant of \(B\text{.}\) Then use \(\det(B)\) and properties of how elementary row operations affect determinants to find \(\det(A)\text{.}\)

(d)

Generalize your work to find the determinant of the \(n\times n\) matrix

\begin{equation*} A= \left[ \begin{array}{rrrrrr} n\amp -1\amp -1\amp \cdots \amp -1\amp -1 \\ -1\amp n\amp -1\amp \cdots \amp -1\amp -1\\ \vdots \amp \vdots \amp \vdots \amp \cdots \amp \vdots \amp \vdots\\ -1\amp -1\amp -1\amp \cdots\amp -1\amp n \end{array} \right] \,\text{.} \end{equation*}

4.

For which matrices \(A\text{,}\) if any, is \(\det(A)=-\det(-A)\text{?}\) Justify your answer.

5.

Find the inverse \(A^{-1}\) of \(A=\left[ \begin{array}{ccc} 1\amp 0\amp 1 \\ 0\amp 1\amp 0\\2\amp 0\amp 1 \end{array} \right]\) using the adjugate matrix.

6.

For an invertible \(n\times n\) matrix \(A\text{,}\) what is the relationship between \(\det(A)\) and \(\det(\text{ adj } A)\text{?}\) Justify your result.

7.

Let \(A = \left[ \begin{array}{ccc} a\amp b\amp 1\\c\amp d\amp 2\\e\amp f\amp 3 \end{array} \right]\text{,}\) and assume that \(\det(A) = 2\text{.}\) Determine the determinants of each of the following.

(a)

\(B= \left[ \begin{array}{ccc} a\amp b\amp 1\\3c\amp 3d\amp 6\\e+a\amp f+b\amp 4 \end{array} \right]\)

(b)

\(C = \left[ \begin{array}{ccr} 2e\amp 2f\amp 6 \\2c-2e\amp 2d-2f\amp -2\\2a\amp 2b\amp 2 \end{array} \right]\)

8.

Find the area of the parallelogram with one vertex at the origin and adjacent vertices at \((1,2)\) and \((a,b)\text{.}\) For which \((a,b)\) is the area 0? When does this happen geometrically?

9.

Find the volume of the parallelepiped with one vertex at the origin and three adjacent vertices at \((3,2,0)\text{,}\) \((1,1,1)\) and \((1,3,c)\) where \(c\) is unknown. For which \(c\) is the volume 0? When does this happen geometrically?

10.

Find an \(LU\) factorization of each of the following matrices \(A\text{.}\) Use the \(LU\) factorization to solve the system \(A \vx = \vb\) for the given vector \(\vb\text{.}\)

(a)

\(A = \left[ \begin{array}{rcc} 2\amp 1\amp 3 \\ -1\amp 0\amp 1 \\ 2\amp 1\amp 5 \end{array} \right]\text{,}\) \(\vb = \left[ \begin{array}{c} 1 \\ 2 \\ 1 \end{array} \right]\)

(b)

\(A = \left[ \begin{array}{rcc} 1\amp 1\amp 0 \\ -1\amp 1\amp 1 \\ 0\amp 2\amp 1 \end{array} \right]\text{,}\) \(\vb = \left[ \begin{array}{c} 1 \\ 2 \\ 1 \end{array} \right]\)

(c)

\(A = \left[ \begin{array}{rcrc} 1\amp 1\amp 1\amp 0 \\ 1\amp 0\amp 1\amp 1 \\ -1\amp 1\amp 0\amp 0 \\ 0\amp 1\amp -1\amp 0 \end{array} \right]\text{,}\) \(\vb = \left[ \begin{array}{c} 3 \\ 0 \\ 2 \\ 0 \end{array} \right]\)

11.

Let \(A = \left[ \begin{array}{cc} 1\amp 2\\2\amp 1 \end{array} \right]\text{.}\)

(a)

Find an \(LU\) decomposition of \(A\text{.}\)

(b)

Find a different factorization of \(A\) into a product \(L'U'\) where \(L'\) is a lower triangular matrix different from \(L\) and \(U'\) is an upper triangular matrix different from \(U\text{.}\) Conclude that the \(LU\) decomposition of a matrix is not unique.

12.

Let \(A = \left[ \begin{array}{rcc} 1\amp 2\amp 3\\-1\amp 2\amp 0 \\ 2\amp 0\amp 0 \end{array} \right]\text{.}\)

(a)

Find an \(LU\) decomposition of \(A\text{.}\)

(b)

Find an \(LU\) decomposition of \(A\) in which the diagonal entries of \(U\) are all 1.

Hint.

Continue row reducing.

(c)

Find a lower triangular matrix \(L\) whose diagonal entries are all 1, an upper triangular matrix \(U\) whose diagonal entries are all 1, and a diagonal matrix \(D\) such that \(A = LDU\text{.}\)

13.

Label each of the following statements as True or False. Provide justification for your response.

(a) True/False.

If two rows are equal in \(A\text{,}\) then \(\det(A)=0\text{.}\)

(b) True/False.

If \(A\) is a square matrix and \(R\) is a row echelon form of \(A\text{,}\) then \(\det(A) = \det(R)\text{.}\)

(c) True/False.

If a matrix \(A\) is invertible, then 0 is not an eigenvalue of \(A\text{.}\)

(d) True/False.

If \(A\) is a \(2\times 2\) matrix for which the image of the unit square under the transformation \(T(\vx)=A\vx\) has zero area, then \(A\) is non-invertible.

(e) True/False.

Row operations do not change the determinant of a square matrix.

(f) True/False.

If \(A_{ij}\) is the matrix obtained from a square matrix \(A = [a_{ij}]\) by deleting the \(i\)th row and \(j\)th column of \(A\text{,}\) then

\begin{align*} a_{i1}(-1)^{i+1}\det(A_{i1}) \amp + a_{i2}(-1)^{i+2}\det(A_{i2}) + \cdots + a_{in}(-1)^{i+n}\det(A_{in})\\ \amp = a_{1j}(-1)^{j+1}\det(A_{1j}) + a_{2j}(-1)^{j+2}\det(A_{2j}) + \cdots\\ \amp \qquad + a_{nj}(-1)^{j+n}\det(A_{nj}) \end{align*}

for any \(i\) and \(j\) between \(1\) and \(n\text{.}\)

(g) True/False.

If \(A\) is an invertible matrix, then \(\det\left(A^{\tr}A\right) > 0\text{.}\)