Section 23 The Dot Product in Rn

Subsection Application: Hidden Figures in Computer Graphics

In video games, the speed at which a computer can render changing graphics views is vitally important. To increase a computer's ability to render a scene, programs often try to identify those parts of the images a viewer could see and those parts the viewer could not see. For example, in a scene involving buildings, the viewer could not see any images blocked by a solid building. In the mathematical world, this can be visualized by graphing surfaces. In Figure 23.1 we see a crude image of a house made up of small polygons (this is how programs generally represent surfaces). On the left in Figure 23.1 we see all of the polygons that are needed to construct the entire surface, even those polygons that lie behind others which we could not see if the surface was solid. On the right in Figure 23.1 we have hidden the parts of the polygons that we cannot see from our view.

Figure 23.1. Images of a house.

We also see this idea in mathematics when we graph surfaces. Figure 23.2 shows the graph of the surface defined by f(x,y) = 4 − x² that is made up of polygons. At left we see all of the polygons and at right only those parts that would be visible from our viewing perspective.

Figure 23.2. Graphs of f(x,y) = 4 − x².

By eliminating the parts of the polygons we cannot see from our viewing perspective, the computer program can more quickly render the viewing image. Later in this section we will explore one method for how programs remove the hidden portions of images. This process involves the dot product of vectors.

Subsection Introduction

Orthogonality, a concept which generalizes the idea of perpendicularity, is an important concept in linear algebra. We use the dot product to define orthogonality and more generally angles between vectors in Rn for any dimension n. The dot product has many applications, e.g., finding components of forces acting in different directions in physics and engineering. The dot product is also an example of a larger concept, inner products, that we will discuss later. We introduce and investigate dot products in this section.

We will illustrate the dot product in R2, but the process we go through will translate to any dimension. Recall that we can represent the vector v = [v1 v2]T as the directed line segment (or arrow) from the origin to the point (v1, v2) in R2, as illustrated in Figure 23.3. Using the Pythagorean Theorem we can then define the length (or magnitude or norm) of the vector v in R2 as

||v|| = √(v1² + v2²).

We can also write this norm as

√(v1v1 + v2v2).

The expression under the square root is an important one and we extend it and give it a special name.

Figure 23.3. A vector in R2 from the origin to a point.

If u = [u1 u2]T and v = [v1 v2]T are vectors in R2, then we call the expression u1v1 + u2v2 the dot product of u and v, and denote it as u⋅v. With this idea in mind, we can rewrite the norm of the vector v as

||v|| = √(v⋅v).

The definition of the dot product translates naturally to Rn (see Exercise 5 in Section 5).

Definition 23.4.

Let u = [u1 u2 ⋯ un]T and v = [v1 v2 ⋯ vn]T be vectors in Rn. The dot product (or scalar product) of u and v is the scalar

u⋅v = u1v1 + u2v2 + ⋯ + unvn = ∑_{i=1}^{n} uivi.

The dot product then allows us to define the norm (or magnitude or length) of any vector in Rn.

Definition 23.5.

The norm ||v|| of the vector v∈Rn is

||v|| = √(v⋅v).

We also use the words magnitude or length as alternatives for the word norm. We can equivalently write the norm of the vector v = [v1 v2 ⋯ vn]T as

||v|| = √(v1² + v2² + ⋯ + vn²).
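Definitions 23.4 and 23.5 translate directly into code as a componentwise sum and a square root. A minimal Python sketch (the function names here are our own, not from the text):

```python
import math

def dot(u, v):
    """Dot product u.v = u1*v1 + u2*v2 + ... + un*vn (Definition 23.4)."""
    if len(u) != len(v):
        raise ValueError("the dot product requires vectors with the same number of components")
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    """Norm of v as the square root of the dot product v.v (Definition 23.5)."""
    return math.sqrt(dot(v, v))
```

For example, dot([1, 2], [3, 4]) is 11, and norm([3, 4]) is 5.0.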

We can also realize the dot product as a matrix product. If u = [u1 u2 ⋯ un]T and v = [v1 v2 ⋯ vn]T, then

u⋅v = uTv. 39

IMPORTANT NOTE.

The dot product is only defined between two vectors with the same number of components.

Preview Activity 23.1.

(a)

Find u⋅v if u = [2 3 −1 4]T and v = [4 6 7 −5]T in R4.

(b)

The dot product satisfies some useful properties as given in the next theorem.

Verification of some of these properties is left to the exercises. Let u and v be vectors in R5 with u⋅v = −1, ||u|| = 2 and ||v|| = 3.

(iii)

Use whichever properties of Theorem 23.6 are needed to determine the value of (2u + 4v)⋅(u − 7v).

(c)

At times we will want to find vectors in the direction of a given vector that have a certain magnitude. Let u=[2 2 1]T in R3.

(iii)

Vectors with magnitude 1 are important and are given a special name.

Definition 23.7.

A vector v in Rn is a unit vector if ||v||=1.

We can use unit vectors to find vectors of a given length in the direction of a given vector. Let c be a positive scalar and v a vector in Rn. Use properties from Theorem 23.6 to show that the magnitude of the vector (c/||v||)v is c.
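This rescaling is easy to check numerically. A short sketch (with hypothetical function names) that normalizes a vector and then stretches it to a prescribed magnitude c:

```python
import math

def norm(v):
    """Norm of v: the square root of the dot product v.v."""
    return math.sqrt(sum(vi * vi for vi in v))

def scale_to_magnitude(v, c):
    """Return c * (v / ||v||), the vector in the direction of v with magnitude c."""
    length = norm(v)
    if length == 0:
        raise ValueError("the zero vector has no direction")
    return [c * vi / length for vi in v]
```

With u = [2 2 1]T from the activity, norm(u) is 3, so scale_to_magnitude([2, 2, 1], 5) returns [10/3, 10/3, 5/3], whose norm is 5.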

Subsection The Distance Between Vectors

Finding optimal solutions to systems is an important problem in applied mathematics. It is often the case that we cannot find an exact solution that satisfies certain constraints, so we look instead for the “best” solution that satisfies the constraints. An example of this is fitting a least squares line to a set of data. As we will see, the dot product will allow us to find “best” solutions to certain types of problems, where we measure accuracy using the notion of a distance between vectors. Geometrically, we can represent a vector u as a directed line segment from the origin to the point defined by u. If we have two vectors u and v, we can think of the length of the difference u − v as a measure of how far apart the two vectors are from each other. It is natural, then, to define the distance between vectors as follows.

Definition 23.8.

Let u and v be vectors in Rn. The distance between u and v is the length of the difference u − v or

||u − v||.

As Figure 23.9 illustrates, if vectors u and v emanate from the same initial point, and P and Q are the terminal points of u and v, respectively, then ||u − v|| is the standard Euclidean distance between the points P and Q.

Figure 23.9. ||u − v||.
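Definition 23.8 gives a one-line computation. A minimal sketch (the function name is ours):

```python
import math

def distance(u, v):
    """Distance between u and v: the norm of the difference u - v (Definition 23.8)."""
    return math.sqrt(sum((ui - vi) ** 2 for ui, vi in zip(u, v)))
```

For example, distance([1, 2], [4, 6]) is 5.0, since the difference is [−3 −4]T, which has length 5.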

Subsection The Angle Between Two Vectors

Determining a “best” solution to a problem often involves finding a solution that minimizes a distance. We generally accomplish a minimization through orthogonality, which depends on the angle between vectors. Given two vectors u and v in Rn, we position the vectors so that they emanate from the same initial point. If the vectors are nonzero, then they determine a plane in Rn. In that plane there are two angles that these vectors create. We will define the angle between the vectors to be the smaller of these two angles. The dot product will tell us how to find the angle between vectors. Let u and v be vectors in Rn and θ the angle between them as illustrated in Figure 23.10.

Figure 23.10. The angle between u and v

Using the Law of Cosines, we have

||u − v||² = ||u||² + ||v||² − 2||u|| ||v|| cos(θ).

Rearranging, we obtain

||u|| ||v|| cos(θ) = (1/2)(||u||² + ||v||² − ||u − v||²)
= (1/2)(||u||² + ||v||² − (u − v)⋅(u − v))
= (1/2)(||u||² + ||v||² − u⋅u + 2u⋅v − v⋅v)
= u⋅v.

So the angle ΞΈ between two nonzero vectors u and v in Rn satisfies the equation

(23.1)  cos(θ) = u⋅v / (||u|| ||v||).

Of particular interest to us will be the situation where vectors u and v are orthogonal (perpendicular). 40  Intuitively, we think of two vectors as orthogonal if the angle between them is 90°.

Activity 23.2.

(a)

The vectors e1 = [1 0]T and e2 = [0 1]T are perpendicular in R2. What is e1⋅e2?

(b)

Now let u and v be any vectors in Rn.

(i)

Suppose the angle between nonzero vectors u and v is 90°. What does Equation (23.1) tell us about u⋅v?

(ii)

Now suppose that u⋅v = 0. What does Equation (23.1) tell us about the angle between u and v? Why?

(iii)

Explain why the following definition makes sense.

Definition 23.11.

Two vectors u and v in Rn are orthogonal if u⋅v = 0.

(iv)

According to Definition 23.11, to which vectors is 0 orthogonal? Does this make sense to you intuitively? Explain.

Activity 23.3.

(a)

Find the angle between the two vectors u = [1 3 −2 5]T and v = [5 2 3 −1]T.

(b)

Find, if possible, two non-parallel vectors orthogonal to u = [0 3 −2 1]T.

Subsection Orthogonal Projections

When running a sprint, the racers may be aided or slowed by the wind. The wind assistance is a measure of the wind speed that is helping push the runners down the track. It is much easier to run a very fast race if the wind is blowing hard in the direction of the race. So that world records aren't dependent on the weather conditions, times are only recorded as record times if the wind aiding the runners is less than or equal to 2 meters per second. Wind speed for a race is recorded by a wind gauge that is set up close to the track. It is important to note, however, that weather is not always as cooperative as we might like. The wind does not always blow exactly in the direction of the track, so the gauge must account for the angle the wind makes with the track. If the wind is blowing in the direction of the vector u in Figure 23.12 and the track is in the direction of the vector v in Figure 23.12, then only part of the total wind vector is actually working to help the runners. This part is called the orthogonal projection of the vector u onto the vector v and is denoted projvu. The next activity shows how to find this projection.

Figure 23.12. The orthogonal projection of u onto v.

Activity 23.4.

Since the orthogonal projection projvu is in the direction of v, there exists a constant c such that projvu=cv. If we determine the value of c, we can find projvu.

(a)

The wind component that acts perpendicular to the direction of v is called the projection of u orthogonal to v and is denoted proj⊥vu as shown in Figure 23.12. Write an equation that involves projvu, proj⊥vu, and u. Then solve that equation for proj⊥vu.

(b)

Given that v and proj⊥vu are orthogonal, what does that tell us about v⋅proj⊥vu? Combine this fact with the result of part (a) and that projvu = cv to obtain an equation involving v, u, and c.

(c)

Solve for c using the equation you found in the previous step.

(d)

Use your value of c to identify projvu.

To summarize:

Definition 23.13.

Let u and v be vectors in Rn with v≠0.

  1. The orthogonal projection of u onto v is the vector

    (23.2)  projvu = (u⋅v / ||v||²) v.
  2. The projection of u orthogonal to v is the vector

    proj⊥vu = u − projvu.
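Definition 23.13 can be sketched in a few lines; the function names below are our own. By construction, proj_perp(u, v) should be orthogonal to v, which is easy to verify with a dot product:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def proj(u, v):
    """Orthogonal projection of u onto v: (u.v / ||v||^2) v, Equation (23.2); v must be nonzero."""
    c = dot(u, v) / dot(v, v)
    return [c * vi for vi in v]

def proj_perp(u, v):
    """Projection of u orthogonal to v: u - proj_v(u)."""
    return [ui - pi for ui, pi in zip(u, proj(u, v))]
```

For instance, proj([1, 1], [1, 0]) is [1.0, 0.0] and proj_perp([1, 1], [1, 0]) is [0.0, 1.0].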

Activity 23.5.

Let u = [5 8]T and v = [6 −10]T. Find projvu and proj⊥vu and draw a picture to illustrate.

The orthogonal projection of a vector u onto a vector v is really a projection of the vector u onto the space Span{v}. The vector projvu is the best approximation to u of all the vectors in Span{v}, in the sense that projvu is the vector in Span{v} closest to u, as we will prove later.

Subsection Orthogonal Complements

In Activity 23.2 we defined two vectors u and v in Rn to be orthogonal (or perpendicular) if u⋅v = 0. One use of orthogonality in geometry is to define a plane. A plane through the origin in R3 is a two-dimensional subspace of R3, and such a plane is defined as the set of all vectors in R3 that are orthogonal to a given vector (called a normal vector). For example, to find the equation of the plane through the origin in R3 orthogonal to the normal vector n = [1 2 −1]T, we seek all the vectors v = [x y z]T such that

v⋅n = 0.

This gives us the equation

x + 2y − z = 0

as the equation of this plane. The collection of all vectors that are orthogonal to every vector in a given subspace is called the orthogonal complement of that subspace in Rn.

Definition 23.14.

Let W be a subspace of Rn for some n ≥ 1. The orthogonal complement of W is the set

W⊥ = {x ∈ Rn : x⋅w = 0 for all w ∈ W}.

Preview Activity 23.6.

Let W = Span{[1 −1]T} in R2. Completely describe all vectors in W⊥ both algebraically and geometrically.

There is a more general idea here, as illustrated by Preview Activity 23.6. If we have a set S of vectors in Rn, we let S⊥ (read as “S perp”, and called the orthogonal complement of S) be the set of all vectors in Rn that are orthogonal to every vector in S. In our plane example, the set S is {n} and S⊥ is the plane with equation x + 2y − z = 0.
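Membership in an orthogonal complement reduces to a dot-product check. A minimal sketch for the plane example above, with the normal vector n = [1 2 −1]T from the text (the function name is hypothetical):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Normal vector of the plane x + 2y - z = 0 from the example above.
n = [1, 2, -1]

def in_n_perp(v):
    """True exactly when v is orthogonal to n, i.e. v lies in the plane x + 2y - z = 0."""
    return dot(v, n) == 0
```

For example, in_n_perp([1, 0, 1]) is True since 1 + 0 − 1 = 0, while in_n_perp([1, 1, 1]) is False since 1 + 2 − 1 = 2.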

Activity 23.7.

We have seen another example of orthogonal complements. Let A be an m×n matrix with rows r1, r2, …, rm in order. Consider the three spaces Nul A, Row A, and Col A related to A, where Row A = Span{r1, r2, …, rm} (that is, Row A is the span of the rows of A). Let x be a vector in Row A.

(a)

What does it mean for x to be in Row A?

(b)

Now let y be a vector in Nul A. Use the result of part (a) and the fact that Ay = 0 to explain why x⋅y = 0. Explain how this verifies (Row A)⊥ = Nul A.

Hint.

Calculate Ay using scalar products of rows of A with y.

(c)

Use AT in place of A in the result of the previous part to show (Col A)⊥ = Nul AT.

The activity proves the following theorem.

Theorem 23.15. Let A be an m×n matrix. Then (Row A)⊥ = Nul A and (Col A)⊥ = Nul AT.

To show that a vector is in the orthogonal complement of a subspace, it is not necessary to demonstrate that the vector is orthogonal to every vector in the subspace. If we have a basis for the subspace, it suffices to show that the vector is orthogonal to every vector in that basis for the subspace, as the next theorem demonstrates.

Theorem 23.16. Let B = {w1, w2, …, wm} be a basis for a subspace W of Rn. A vector v in Rn is orthogonal to every vector in W if and only if v is orthogonal to every vector in B.

Proof.

Let B = {w1, w2, …, wm} be a basis for a subspace W of Rn and let v be a vector in Rn. Our theorem is a biconditional, so we need to prove both implications. Since B ⊂ W, it follows that if v is orthogonal to every vector in W, then v is orthogonal to every vector in B. This proves the forward implication. Now we assume that v is orthogonal to every vector in B and show that v is orthogonal to every vector in W. Let x be a vector in W. Then

x = x1w1 + x2w2 + ⋯ + xmwm

for some scalars x1, x2, …, xm. Then

v⋅x = v⋅(x1w1 + x2w2 + ⋯ + xmwm) = x1(v⋅w1) + x2(v⋅w2) + ⋯ + xm(v⋅wm) = 0.

Thus, v is orthogonal to x and v is orthogonal to every vector in W.

Activity 23.8.

Let W = Span{[1 1 0]T, [0 0 1]T}. Find all vectors in W⊥.

We will work more closely with projections and orthogonal complements in later sections.

Subsection Examples

What follows are worked examples that use the concepts from this section.

Example 23.17.

Let ℓ be the line defined by the equation ax + by + c = 0 in R2 and let P = (x0, y0) be a point in the plane. In this example we will learn how to find the distance from P to ℓ.

(a)

Show that n = [a b]T is orthogonal to the line ℓ. That is, n is orthogonal to any vector on the line ℓ.

Solution.

Any vector on the line ℓ is a vector between two points on the line. Let Q = (x1, y1) and R = (x2, y2) be points on the line ℓ. Then u = QR→ = [x2−x1 y2−y1]T is a vector on line ℓ. Since Q and R are on the line, we know that ax1 + by1 + c = 0 and ax2 + by2 + c = 0. So −c = ax1 + by1 = ax2 + by2 and

0 = a(x2 − x1) + b(y2 − y1) = n⋅u.

Thus, n = [a b]T is orthogonal to every vector on the line ℓ.

(b)

Let Q = (x1, y1) be any point on line ℓ. Draw a representative picture of P, n with its initial point at P, along with Q and ℓ. Explain how to use a projection to determine the distance from P to ℓ.

Solution.

A picture of the situation is shown in Figure 23.18. If v = PQ→, then the distance from point P to line ℓ is given by ||projnv||.

Figure 23.18. Distance from a point to a line.

(c)

Use the idea from part (b) to show that the distance d from P to ℓ satisfies

(23.3)  d = |ax0 + by0 + c| / √(a² + b²).
Solution.

Recall that n = [a b]T and v = PQ→ = [x1−x0 y1−y0]T. Since ax1 + by1 + c = 0, we have

projnv = (v⋅n / ||n||²) n
= ((a(x1 − x0) + b(y1 − y0)) / (a² + b²)) [a b]T
= ((ax1 + by1 − ax0 − by0) / (a² + b²)) [a b]T
= ((−c − ax0 − by0) / (a² + b²)) [a b]T.

So

||projnv|| = (|ax0 + by0 + c| / (a² + b²)) √(a² + b²) = |ax0 + by0 + c| / √(a² + b²).
(d)

Use Equation (23.3) to find the distance from the point (3,4) to the line y=2x+1.

Solution.

Here we have P = (3,4), and the equation of our line is 2x − y + 1 = 0. So a = 2, b = −1, and c = 1. Thus, the distance from P to the line is

|ax0 + by0 + c| / √(a² + b²) = |2(3) − (4) + 1| / √(4 + 1) = 3/√5.
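Equation (23.3) is a one-line function, which we can use to confirm the computation above. A sketch (the function name is ours):

```python
import math

def distance_point_to_line(a, b, c, x0, y0):
    """Distance from the point (x0, y0) to the line ax + by + c = 0, via Equation (23.3)."""
    return abs(a * x0 + b * y0 + c) / math.sqrt(a * a + b * b)
```

Here distance_point_to_line(2, -1, 1, 3, 4) returns 3/√5, about 1.342, matching the worked example.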

Example 23.19.

Let a, b, and c be scalars with a ≠ 0, and let

W = {[x y z]T ∈ R3 : ax + by + cz = 0}.
(a)

Find two vectors that span W, showing that W is a subspace of R3. (In fact, W is a plane through the origin in R3.)

Solution.

The coefficient matrix for the equation ax + by + cz = 0 is [a b c]. The first column is a pivot column and the others are not. So y and z are free variables and

[x y z]T = [−(b/a)y − (c/a)z  y  z]T = y[−b/a 1 0]T + z[−c/a 0 1]T.

So W = Span{[−b/a 1 0]T, [−c/a 0 1]T}.

(b)

Find a vector n that is orthogonal to the two vectors you found in part (a).

Solution.

If we let n=[a b c]T, then

n⋅[−b/a 1 0]T = −b + b = 0 and n⋅[−c/a 0 1]T = −c + c = 0.

Thus, [a b c]T is orthogonal to both [−b/a 1 0]T and [−c/a 0 1]T.

(c)

Explain why {n} is a basis for W⊥.

Solution.

Let u = [−b/a 1 0]T and v = [−c/a 0 1]T. Every vector in W has the form xu + yv for some scalars x and y, and

n⋅(xu + yv) = x(n⋅u) + y(n⋅v) = 0.

So n ∈ W⊥. Now we need to verify that {n} spans W⊥. Let w = [w1 w2 w3]T be in W⊥. Then w⋅z = 0 for every z ∈ W. In particular, w⋅u = 0, or −(b/a)w1 + w2 = 0, and w⋅v = 0, or −(c/a)w1 + w3 = 0. Equivalently, we have w2 = (b/a)w1 and w3 = (c/a)w1. So

w = [w1 w2 w3]T = [w1 (b/a)w1 (c/a)w1]T = (w1/a)[a b c]T = (w1/a) n.

So every vector in W⊥ is a multiple of n, and {n} spans W⊥. We conclude that {n} is a basis for W⊥. Thus, the vector [a b c]T is a normal vector to the plane ax + by + cz = 0 if a ≠ 0. The same reasoning works if at least one of a, b, or c is nonzero, so we can say in every case that [a b c]T is a normal vector to the plane ax + by + cz = 0.

Subsection Summary

  • The dot product of vectors u = [u1 u2 ⋯ un]T and v = [v1 v2 ⋯ vn]T in Rn is the scalar

    u⋅v = u1v1 + u2v2 + ⋯ + unvn = ∑_{i=1}^{n} uivi.
  • The angle ΞΈ between two nonzero vectors u and v in Rn satisfies the equation

    cos(θ) = u⋅v / (||u|| ||v||)

    and 0° ≤ θ ≤ 180°.

  • Two vectors u and v are orthogonal if u⋅v = 0.

  • The length, or norm, of the vector u can be found as ||u|| = √(u⋅u).

  • The distance between the vectors u and v in Rn is ||u − v||, which is the length of the difference u − v.

  • Let u and v be vectors in Rn.

    • The orthogonal projection of u onto v is the vector

      projvu = (u⋅v / ||v||²) v.
    • The projection of u perpendicular to v is the vector

      proj⊥vu = u − projvu.
  • The orthogonal complement of the subspace W of Rn is the set

    W⊥ = {x ∈ Rn : x⋅w = 0 for all w ∈ W}.

Exercises

1.

For each of the following pairs of vectors, find u⋅v, calculate the angle between u and v, determine if u and v are orthogonal, find ||u|| and ||v||, calculate the distance between u and v, and determine the orthogonal projection of u onto v.

(a)

u = [1 2]T, v = [−2 1]T

(b)

u = [2 −2]T, v = [1 −1]T

(c)

u = [2 −1]T, v = [1 3]T

(d)

u = [1 2 0]T, v = [−2 1 1]T

(e)

u=[0 0 1]T, v=[1 1 1]T

2.

Given u = [2 1 2]T, find a vector v so that the angle between u and v is 60° and the orthogonal projection of v onto u has length 2.

3.

For which value(s) of h is the angle between [1 1 h]T and [1 2 1]T equal to 60°?

4.

Let A = [aij] be a k×m matrix with rows r1, r2, …, rk, and let B = [b1 b2 ⋯ bn] be an m×n matrix with columns b1, b2, …, bn. Show that we can write the matrix product AB in a shorthand way as AB = [ri⋅bj].
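This shorthand is easy to check numerically: entry (i, j) of AB is the dot product of row i of A with column j of B. A hedged sketch with matrices represented as lists of rows (the function name is ours):

```python
def matmul(A, B):
    """Product AB, where entry (i, j) is the dot product of row i of A with column j of B."""
    inner = len(B)        # number of rows of B = number of columns of A
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(len(B[0]))]
            for i in range(len(A))]
```

For example, matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) returns [[19, 22], [43, 50]], and each entry is the corresponding row-column dot product.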

5.

Let A be an m×n matrix, u a vector in Rn and v a vector in Rm. Show that

Au⋅v = u⋅ATv.

6.

Let u, v, and w be vectors in Rn. Show that

(a)

(u + v)⋅w = (u⋅w) + (v⋅w) (the dot product distributes over vector addition)

(b)

If c is an arbitrary constant, then (cu)⋅v = u⋅(cv) = c(u⋅v)

7.

The Pythagorean Theorem states that if a and b are the lengths of the legs of a right triangle whose hypotenuse has length c, then a² + b² = c². If we think of the legs as defining vectors u and v, then the hypotenuse is the vector u + v and we can restate the Pythagorean Theorem as

||u + v||² = ||u||² + ||v||².

In this exercise we show that this result holds in any dimension.

(a)

Let u and v be orthogonal vectors in Rn. Show that ||u + v||² = ||u||² + ||v||².

Hint.

Rewrite ||u + v||² using the dot product.

(b)

Must it be true that if u and v are vectors in Rn with ||u + v||² = ||u||² + ||v||², then u and v are orthogonal? If not, provide a counterexample. If true, verify the statement.

8.

The Cauchy-Schwarz inequality,

(23.4)  |u⋅v| ≤ ||u|| ||v||

for any vectors u and v in Rn, is considered one of the most important inequalities in mathematics. We verify the Cauchy-Schwarz inequality in this exercise. Let u and v be vectors in Rn.

(a)

Explain why the inequality (23.4) is true if either u or v is the zero vector. As a consequence, we assume that u and v are nonzero vectors for the remainder of this exercise.

(b)

Let w = projvu = (u⋅v / ||v||²) v and let z = u − w. We know that w⋅z = 0. Use Exercise 7 of this section to show that

||u||² ≥ ||w||².

(c)

Now show that ||w||² = |u⋅v|² / ||v||².

(d)

Combine parts (b) and (c) to explain why equation (23.4) is true.

9.

Let u and v be vectors in Rn. Then u, v and u+v form a triangle. We should then expect that the length of any one side of the triangle is smaller than the sum of the lengths of the other sides (since the straight line distance is the shortest distance between two points). In other words, we expect that

(23.5)  ||u + v|| ≤ ||u|| + ||v||.

Equation (23.5) is called the Triangle Inequality. Use the Cauchy-Schwarz inequality (Exercise 8) to prove the triangle inequality.

10.

Let W be a subspace of Rn for some n. Show that W⊥ is also a subspace of Rn.

11.

Let W be a subspace of Rn. Show that W is a subspace of (W⊥)⊥.

12.

If W is a subspace of Rn for some n, what is W ∩ W⊥? Verify your answer.

13.

Suppose W1 ⊆ W2 are two subspaces of Rn. Show that W2⊥ ⊆ W1⊥.

14.

What are (Rn)⊥ and {0}⊥ in Rn? Justify your answers.

15.

Label each of the following statements as True or False. Provide justification for your response.

(a) True/False.

The dot product is defined between any two vectors.

(b) True/False.

If u and v are vectors in Rn, then u⋅v is another vector in Rn.

(c) True/False.

If u and v are vectors in Rn, then u⋅v is always non-negative.

(d) True/False.

If v is a vector in Rn, then v⋅v is never negative.

(e) True/False.

If u and v are vectors in Rn and u⋅v = 0, then u = v = 0.

(f) True/False.

If v is a vector in Rn and v⋅v = 0, then v = 0.

(g) True/False.

The norm of the sum of vectors is the sum of the norms of the vectors.

(h) True/False.

If u and v are vectors in Rn, then projvu is a vector in the same direction as u.

(i) True/False.

The only subspace W of Rn for which W⊥ = {0} is W = Rn.

(j) True/False.

If a vector u is orthogonal to v1 and v2, then u is also orthogonal to v1+v2.

(k) True/False.

If a vector u is orthogonal to v1 and v2, then u is also orthogonal to all linear combinations of v1 and v2.

(l) True/False.

If u≠0 and v are parallel, then the orthogonal projection of v onto u equals v.

(m) True/False.

If u≠0 and v are orthogonal, then the orthogonal projection of v onto u equals v.

(n) True/False.

For any vector v and uβ‰ 0, ||projuv||≀||v||.

(o) True/False.

Given an mΓ—n matrix, dim⁑(Row A)+dim⁑(Row A)βŠ₯=n.

(p) True/False.

If A is a square matrix, then the columns of A are orthogonal to the vectors in Nul A.

(q) True/False.

The vectors in the null space of an m×n matrix A are orthogonal to the vectors in the row space of A.

Subsection Project: Back-Face Culling

To identify hidden polygons in a surface, we will utilize a technique called back-face culling. This involves identifying which polygons are back-facing and which are front-facing relative to the viewer's perspective. The first step is to assign a direction to each polygon in a surface.

Project Activity 23.9.

Consider the polygon ABCD in Figure 23.20. Since a polygon is flat, every vector in the polygon is perpendicular to a fixed vector (which we call a normal vector to the polygon). A normal vector n for the polygon ABCD in Figure 23.20 is shown. In this activity we learn how to find a normal vector to a polygon.

Let x = [x1 x2 x3]T and y = [y1 y2 y3]T be two vectors in R3. If x and y are linearly independent, then x and y determine a polygon as shown in Figure 23.20. Our goal is to find a vector n that is orthogonal to both x and y. Let w = [w1 w2 w3]T be another vector in R3 and let C be the matrix whose rows are w, x, and y, in that order. Let Cij be the ijth cofactor of C, that is, Cij is (−1)^(i+j) times the determinant of the submatrix of C obtained by deleting the ith row and jth column of C. Now define the vector x×y as follows:

x×y = C11e1 + C12e2 + C13e3.

The vector x×y is called the cross product of the vectors x and y. (Note that the cross product is only defined for vectors in R3.) We will show that x×y is orthogonal to both x and y, making x×y a normal vector to the polygon defined by x and y.

Figure 23.20. Normal vector to a polygon.
(a)

Show that

x×y = [x2y3 − x3y2  x3y1 − x1y3  x1y2 − x2y1]T.
(b)

Use a cofactor expansion of C along the first row and properties of the dot product to show that

det(C) = w⋅(x×y).
(c)

Use the result of part (b) and properties of the determinant to calculate x⋅(x×y) and y⋅(x×y). Explain why x×y is orthogonal to both x and y and is therefore a normal vector to the polygon determined by x and y.
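The component formula from part (a) can be sketched directly, and the orthogonality from part (c) checked with dot products (function names are ours):

```python
def cross(x, y):
    """Cross product of 3-vectors, per the component formula in part (a)."""
    return [x[1] * y[2] - x[2] * y[1],
            x[2] * y[0] - x[0] * y[2],
            x[0] * y[1] - x[1] * y[0]]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))
```

For example, cross([1, 0, 0], [0, 1, 0]) is [0, 0, 1], and for any x and y the dot products dot(cross(x, y), x) and dot(cross(x, y), y) are 0.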

Project Activity 23.9 shows how we can find a normal vector to a parallelogram: take two vectors x and y between the vertices of the parallelogram and calculate their cross product. Such a normal vector can define a direction for the parallelogram. There is still a problem, however.

Project Activity 23.10.

Let x=[x1 x2 x3]T and y=[y1 y2 y3]T be any vectors in R3. There is a relationship between xΓ—y and yΓ—x. Find and verify this relationship.

Project Activity 23.10 shows that the cross product is anticommutative, so we get different directions if we switch the order in which we calculate the cross product. To fix a direction, we establish the convention that we always label the vertices of our parallelogram in the counterclockwise direction as shown in Figure 23.20. This way we always use x as the vector from vertex A to vertex B rather than the reverse. With this convention established, we can now define the direction of a parallelogram as the direction of its normal vector.

Once we have a normal vector for each polygon, we can determine which polygons are back-facing and which are front-facing. Figure 23.21 at left provides the gist of the idea, where we represent the polygons with line segments to illustrate. If the viewer's eye is at point P, the normal vectors of the visible polygons point in a direction toward the viewer (front-facing) and the normal vectors of the polygons hidden from the viewer point away from the viewer (back-facing). What remains is to determine an effective computational way to identify the front-facing and back-facing polygons.

Figure 23.21. Left: Hidden faces. Right: Back face culling.

Project Activity 23.11.

Consider the situation as depicted at right in Figure 23.21. Assume that AB and RS are polygons (rendered one-dimensionally here) with normal vectors n at their centers as shown. The viewer's eye is at point P and the viewer's lines of vision to the centers CAB and CRS are indicated by the vectors v. Each vector v makes an angle θ with the normal to its polygon.

(a)

What can be said about the angle θ for a front-facing polygon? What must be true about v⋅n for a front-facing polygon? Why?

(b)

What can be said about the angle θ for a back-facing polygon? What must be true about v⋅n for a back-facing polygon? Why?

(c)

The dot product then provides us with a simple computational tool for identifying back-facing polygons (assuming we have already calculated all of the normal vectors). We can then create an algorithm to cull the back-facing polygons. Assuming that we know the viewpoint P and the coordinates of the polygons of the surface, complete the pseudo-code for a back-face culling algorithm:

for all polygons on the surface do
  calculate the normal vector n using the ______ product for the current polygon
  calculate the center C of the current polygon
  calculate the viewing vector ______
  if ______ then
    render the current polygon
  end if
end for

As a final comment, back-face culling generally reduces the number of polygons to be rendered by half. This algorithm is not perfect and does not always do what we want it to do (e.g., it may not remove all parts of a polygon that we don't see), so there are other algorithms to use in concert with back-face culling to correctly render objects.

39  Technically, uTv is a 1×1 matrix and not a scalar, but we usually think of 1×1 matrices as scalars.
40  We use the term orthogonal instead of perpendicular because we will be able to extend this idea to situations where we normally don't think of objects as being perpendicular.