
Section 15 Bases and Dimension

Subsection Application: Lattice Based Cryptography

When you use your credit card, you expect that the information that is transmitted is protected so others can't use your card. Similarly, when you create a password for your computer or other devices, you do so with the intention that it will be difficult for others to decipher.

Cryptology is the study of methods to maintain secure communication in the presence of other parties (cryptography), along with the study of breaking codes (cryptanalysis). In essence, cryptology is the art of keeping and breaking secrets. The creation of secure codes (cryptography) can provide confidentiality (ensure that information is available only to the intended recipients), data integrity (prevent data from being altered between the sender and recipient), and authentication (making sure that the information is from the correct source).

Modern cryptology uses mathematical theory that can be implemented with computer hardware and algorithms. The security of public key systems is largely based on mathematical problems that are very difficult to solve. For example, the security of the RSA system relies on the fact that it is computationally difficult to find the prime factors of very large numbers, and elliptic curve cryptography relies on the difficulty of the discrete logarithm problem for elliptic curves. However, the continual increase in the power of computers threatens the security of these systems, and so cryptographic systems have to keep adapting to the newest technology. For example, Shor's Algorithm (which could run on a quantum computer) can break public key cryptographic systems that rely on the integer factorization problem or the discrete logarithm problem. So if a working quantum computer were ever developed, it would threaten the existing cryptographic systems. Lattice-based cryptography is a potential source of systems that may remain secure even in such an environment. The security of these systems depends on the fact that certain lattice problems are computationally hard even in the average case, whereas the problems that underpin current cryptosystems are only known to be hard in the worst case. As we will see later in this section, lattices are built on bases for subspaces of Rn.

Subsection Introduction

A basis provides a system in which we can uniquely represent every vector in the space we are considering. More specifically, every vector in the space can be expressed as a linear combination of the vectors in the basis in a unique way. In order to be able to cover every point in the space, the basis has to span the space. In order to be able to provide a unique coordinate for each point, there should not be any extra vectors in the basis, which is achieved by linear independence of the vectors. For practical reasons, a basis simplifies many problems because we only need to solve the problem for each of the basis vectors. Solutions of the other cases usually follow because every vector in the space can be expressed as a unique linear combination of the basis vectors.

Recall that a basis for a subspace W of Rn is a set of vectors which are linearly independent and which span W.

Preview Activity 15.1.

(a)

For each of the following sets of vectors, determine whether the vectors form a basis of R3. Use any appropriate technology for your computations.

(iii)

{[1 0 1]T, [1 1 1]T, [0 3 3]T, [−1 2 1]T}
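One way to carry out these checks with technology is sketched below, assuming Python with NumPy is available (any computer algebra system would work equally well; the layout here is only illustrative). It builds the matrix whose columns are the candidate vectors and uses the rank to test both spanning and linear independence.

    import numpy as np

    # Columns of A are the four candidate vectors from part (iii).
    A = np.array([[ 1,  1,  0, -1],
                  [ 0,  1,  3,  2],
                  [ 1,  1,  3,  1]])

    rank = np.linalg.matrix_rank(A)
    n_vectors = A.shape[1]

    print("rank =", rank)                      # dimension of the span of the columns
    print("spans R^3:", rank == 3)             # need rank 3 to span R^3
    print("linearly independent:", rank == n_vectors)
    print("basis for R^3:", rank == 3 and rank == n_vectors)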

(b)

In part (a) we should have noticed that a space can have more than one basis, but that any two bases contain the same number of elements. This is a critically important idea that we investigate in more detail in this problem in one specific case. Assume that W is a subspace of Rn that has a basis B={v1,v2} with two basis vectors. We want to see if any other basis for W can have a different number of elements. Let us now consider a set U={u1,u2,u3} of three vectors in W. Our goal is to determine if U can be a basis for W. Since B is a basis for W, any vector in W can be written as a linear combination of the vectors in B. So we can write

(15.1) u1=a11v1+a21v2
(15.2) u2=a12v1+a22v2
(15.3) u3=a13v1+a23v2

for some scalars aij. If U were to be a basis for W, then U would have to be a linearly independent set. To determine the independence or dependence of U we consider the vector equation

(15.4)x1u1+x2u2+x3u3=0

for scalars x1, x2, and x3.

(i)

Substitute for u1, u2, and u3 from (15.1), (15.2), and (15.3) into (15.4) and perform some vector algebra to show that

(15.5)0=(a11x1+a12x2+a13x3)v1+(a21x1+a22x2+a23x3)v2.
(ii)

Recall that B={v1,v2} is a basis. What does that tell us about the weights in the linear combination (15.5)? Explain why Ax=0, where A=[aij] and x=[x1 x2 x3]T.

(iii)

With A as in part (b), how many solutions does the system Ax=0 have? Explain. What does this tell us about the independence or dependence of the set U? Why?

Hint.

Consider the number of rows and columns of A.

(iv)

Can U be a basis for W? Explain.

Subsection The Dimension of a Subspace of Rn

In Preview Activity 15.1 we saw that a subspace of Rn can have more than one basis. This prompts the question of how, if at all, any two bases for a given space are related. More specifically, is it possible for two bases of a given subspace of Rn to contain different numbers of vectors? As we will see, the answer is no, which will lead us to the concept of dimension.

Let W be a subspace of Rn that has a basis B={v1,v2,…,vk} of k vectors. Since we have been calling bases minimal spanning sets, we should expect that any two bases for the same subspace have the same number of elements (otherwise one of the two bases would not be minimal). Our goal in this section is to prove that result: any other basis of W contains exactly k vectors. The approach will be the same as was used in Preview Activity 15.1. We will let U={u1,u2,…,um} be a set of vectors in W with m>k and demonstrate that U is a linearly dependent set. To argue linear dependence, let x1, x2, …, xm be scalars so that

(15.6) x1u1+x2u2+⋯+xmum=0.

For each i there exist scalars aij so that

ui=a1iv1+a2iv2+⋯+akivk.

Substituting into (15.6) yields

0 = x1u1+x2u2+⋯+xmum
  = x1(a11v1+a21v2+⋯+ak1vk) + x2(a12v1+a22v2+⋯+ak2vk) + ⋯ + xm(a1mv1+a2mv2+⋯+akmvk)
  = (x1a11+x2a12+⋯+xma1m)v1 + (x1a21+x2a22+⋯+xma2m)v2
(15.7)      + ⋯ + (x1ak1+x2ak2+⋯+xmakm)vk.

Since B is a basis, the vectors v1, v2, …, vk are linearly independent. So each coefficient in (15.7) is 0 and x=[x1 x2 ⋯ xm]T is a solution to the homogeneous system Ax=0, where A=[aij]. Now A is a k×m matrix with m>k, so not every column of A is a pivot column. This means that Ax=0 has a nontrivial solution. It follows that the vector equation (15.6) has a nontrivial solution and so the m vectors u1, u2, …, um are linearly dependent. We summarize this in the following theorem.

Theorem 15.1.

If a subspace W of Rn has a basis containing k vectors, then any set of more than k vectors in W is linearly dependent.

One consequence of Theorem 15.1 is that, in addition to being a minimal spanning set, a basis is also a maximal linearly independent set.

Activity 15.2.

Now let's return to the question of the number of elements in a basis for a subspace of Rn. Recall that we are assuming that W has a basis B={v1,v2,…,vk} of k vectors in Rn. Suppose that Bβ€² is another basis for W containing m vectors.

(a)

Given the fact that B is a basis for W, what does Theorem 15.1 tell us about the relationship between m and k?

(b)

Given the fact that Bβ€² is a basis for W, what does Theorem 15.1 tell us about the relationship between m and k?

(c)

What do the results of (a) and (b) tell us about the relationship between m and k? What can we conclude about any basis for W?

The result of Activity 15.2 is summarized in the following theorem. Recall that the trivial space is the single element set {0}.

This last theorem states that the number of vectors in a basis for a subspace is a well-defined number. In other words, the number of vectors in a basis is an invariant of the subspace. This important number is given a name.

Definition 15.3.

The dimension of a subspace W of Rn is the number of vectors in a basis for W. The dimension of the trivial subspace {0} of Rn is defined to be 0.

We denote the dimension of a subspace W of Rn by dim(W). As we will see later, any two vector spaces of the same dimension are basically the same vector space. So the dimension of a vector space is an important number that essentially tells us all we need to know about the structure of the space.

Activity 15.3.

Find the dimensions of each of the indicated subspaces of Rn for the appropriate n. Explain your method.

Subsection Conditions for a Basis of a Subspace of Rn

There are two items we need to confirm before we can state that a subset B of a subspace W of Rn is a basis for W: the set B must be linearly independent and span W. However, if we have the right number (namely, the dimension) of vectors in our set B, then either one of these conditions will imply the other.

Activity 15.4.

Let W be a subspace of Rn with dim(W)=k. We know that every basis of W contains exactly k vectors.

(a)

Suppose that S is a subset of W that contains k vectors and is linearly independent. In this part of the activity we will show that S must span W.

(i)

Suppose that S does not span W. Explain why this implies that W contains a set of k+1 linearly independent vectors.

(ii)

Explain why the result of part i. tells us that S is a basis for W.

(b)

Now suppose that S is a subset of W with k vectors that spans W. In this part of the activity we will show that S must be linearly independent.

(i)

Suppose that S is not linearly independent. Explain why we can then find a proper subset of S that is linearly independent but has the same span as S.

(ii)

Explain why the result of part i. tells us that S is a basis for W.

The result of Activity 15.4 is the following important theorem.

Subsection Finding a Basis for a Subspace

Since every vector in a subspace of Rn can be written uniquely as a linear combination of vectors in a basis for the subspace, a basis provides us with the most efficient and convenient way to represent vectors in the subspace. Until now we have been given a set of vectors and have been asked to find a basis from that set, so an important question to address is how we can find a basis for a subspace W of Rn starting from scratch. Here is one way. If W={0}, then the dimension of W is 0 and W has no basis. So suppose dim(W)>0. Start by choosing any nonzero vector w1 in W. Let B1={w1}. If B1 spans W, then B1 is a basis for W. If not, there is a vector w2 in W that is not in Span(B1). Then B2={w1,w2} is a linearly independent set. If Span(B2)=W, then B2 is a basis for W and we are done. If not, repeat the process. Since any basis for W can contain at most n=dim(Rn) vectors, we know the process must stop at some point. This process also allows us to construct a basis for a vector space that contains a given nonzero vector.
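A rough sketch of this process, assuming Python with NumPy, is below. The function name extend_to_basis and the starting vector [1, 0, 2]T are hypothetical choices for illustration (deliberately not the vector from the activity that follows); the sketch appends standard basis vectors and keeps only those that increase the rank, stopping once n vectors have been collected.

    import numpy as np

    def extend_to_basis(vectors, n):
        """Greedily extend a list of linearly independent vectors in R^n
        to a basis of R^n by appending standard basis vectors that
        increase the rank (i.e., that lie outside the current span)."""
        basis = [np.asarray(v, dtype=float) for v in vectors]
        for i in range(n):
            e_i = np.zeros(n)
            e_i[i] = 1.0
            candidate = basis + [e_i]
            if np.linalg.matrix_rank(np.column_stack(candidate)) > len(basis):
                basis.append(e_i)
            if len(basis) == n:   # a basis of R^n has exactly n vectors, so stop
                break
        return basis

    # Example with a hypothetical starting vector.
    for b in extend_to_basis([[1, 0, 2]], 3):
        print(b)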

Activity 15.5.

Find a basis for R3 that contains the vector [1 2 −1]T. When constructing your basis, how do you know when to stop?

Subsection Rank of a Matrix

In this section, we define the rank of a matrix and review conditions to add to our Invertible Matrix Theorem.

Activity 15.6.

Let

A = [ 1 2 −1 0 0
      0 0 1 0 −1
      0 0 0 1 1 ].

(a)

Without performing any calculations, find dim(Nul A). Explain.

(b)

Without performing any calculations, find dim(Col A). Explain.

(c)

There is a connection between dim(Nul A), dim(Col A) and the size of A. Find this connection and explain it.

As Activity 15.6 illustrates, the number of vectors in a basis for Nul A is the number of non-pivot columns of A, and the number of vectors in a basis for Col A is the number of pivot columns of A. We define the rank of a matrix A (denoted rank(A)) to be the dimension of Col A, and the nullity of A (denoted nullity(A)) to be the dimension of Nul A. Using this terminology we have the Rank-Nullity Theorem: if A is an m×n matrix, then rank(A)+nullity(A)=n.

There is also a row space of a matrix A, which we define to be the span of the rows of A. We can find the row space of A by finding the column space of AT, so the row space is really nothing new. As it turns out, the dimension of the row space of A is always equal to the dimension of the column space of A, and justification for this statement is in the exercises.
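As a quick numerical check of these ideas, here is a sketch assuming Python with NumPy (an assumed tool choice). It computes the rank and nullity of the matrix from Activity 15.6 and confirms both the Rank-Nullity Theorem and the equality of the row space and column space dimensions for this example.

    import numpy as np

    # The matrix from Activity 15.6 (already in row echelon form).
    A = np.array([[1, 2, -1, 0,  0],
                  [0, 0,  1, 0, -1],
                  [0, 0,  0, 1,  1]])

    m, n = A.shape
    rank = np.linalg.matrix_rank(A)   # dim(Col A): number of pivot columns
    nullity = n - rank                # dim(Nul A): number of non-pivot columns

    print("rank =", rank, " nullity =", nullity)
    print("rank + nullity == n:", rank + nullity == n)            # Rank-Nullity
    print("dim(Row A) == dim(Col A):", np.linalg.matrix_rank(A.T) == rank)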

The Rank-Nullity Theorem allows us to add extra conditions to the Invertible Matrix Theorem.

Subsection Examples

What follows are worked examples that use the concepts from this section.

Example 15.7.

Let W={[ r+s+u  r+3s+2t−u  −s−t+u  s+t−u ]T : r,s,t,u∈R}.

(a)

Explain why W is a subspace of R4.

Solution.

We can write any vector in W in the form

[ r+s+u  r+3s+2t−u  −s−t+u  s+t−u ]T = r[1 1 0 0]T + s[1 3 −1 1]T + t[0 2 −1 1]T + u[1 −1 1 −1]T,

so

W=Span{[1 1 0 0]T,[1 3 −1 1]T,[0 2 −1 1]T,[1 −1 1 −1]T}.

As a span of a set of vectors in R4, W is a subspace of R4.

(b)

Find a basis for W and determine the dimension of W.

Solution.

Let

A = [ 1 1 0 1
      1 3 2 −1
      0 −1 −1 1
      0 1 1 −1 ].

To find a basis for W, we note that the reduced row echelon form of A is

[ 1 0 −1 2
  0 1 1 −1
  0 0 0 0
  0 0 0 0 ].

Since the pivot columns of A form a basis for Col A=W, we conclude that

{[1 1 0 0]T,[1 3 −1 1]T}

is a basis for W. Therefore, dim(W)=2.
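If a computer algebra system is available, this computation can be checked directly. The sketch below assumes Python with SymPy, whose rref method returns both the reduced row echelon form and the indices of the pivot columns.

    from sympy import Matrix

    # The matrix whose columns span W (Example 15.7).
    A = Matrix([[1,  1,  0,  1],
                [1,  3,  2, -1],
                [0, -1, -1,  1],
                [0,  1,  1, -1]])

    R, pivot_cols = A.rref()    # reduced row echelon form and pivot column indices
    print(R)
    print("pivot columns:", pivot_cols)        # expected: (0, 1)
    basis = [A.col(j) for j in pivot_cols]     # pivot columns of A form a basis of W
    print("dim(W) =", len(basis))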

Example 15.8.

Find a basis and the dimension of the solution set to the system

r + s − t + 2u = 0
3r − s + 2t − u = 0
r − 3s + 4t − 5u = 0
5r − 3s + 5t − 4u = 0.

Solution.

The coefficient matrix of this system is

A = [ 1 1 −1 2
      3 −1 2 −1
      1 −3 4 −5
      5 −3 5 −4 ],

and the solution set to the system is Nul A. To find a basis for Nul A we row reduce A to

[ 1 0 1/4 1/4
  0 1 −5/4 7/4
  0 0 0 0
  0 0 0 0 ].

The general solution to the system has the form

[r s t u]T = [ −(1/4)t−(1/4)u  (5/4)t−(7/4)u  t  u ]T = t[ −1/4 5/4 1 0 ]T + u[ −1/4 −7/4 0 1 ]T,

so {[ −1/4 5/4 1 0 ]T, [ −1/4 −7/4 0 1 ]T} is a basis for Nul A and dim(Nul A)=2.
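This answer can also be checked with a computer algebra system. The sketch below assumes Python with SymPy, whose nullspace method returns a basis for Nul A.

    from sympy import Matrix

    # Coefficient matrix of the homogeneous system in Example 15.8.
    A = Matrix([[1,  1, -1,  2],
                [3, -1,  2, -1],
                [1, -3,  4, -5],
                [5, -3,  5, -4]])

    basis = A.nullspace()                # a basis for Nul A
    for v in basis:
        print(v.T)
    print("dim(Nul A) =", len(basis))    # expected: 2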

Subsection Summary

The key idea in this section is the dimension of a vector space.

  • Any two bases for a vector space must contain the same number of vectors. Therefore, we can define the dimension of a vector space W to be the number of vectors in any basis for W.

  • If W is a subspace of Rn with dimension k and S is any linearly independent subset of W with k vectors, then S is a basis for W.

  • If W is a subspace of Rn with dimension k and S is any subset of W with k vectors that spans W, then S is a basis for W.

  • The rank of a matrix is the dimension of its column space.

  • The Rank-Nullity Theorem states that if A is an m×n matrix, then rank(A)+nullity(A)=n.

Exercises Exercises

1.

Let

A = [ 1 3 1 2 0
      0 0 0 0 1
      2 6 0 0 1
      1 3 2 4 1
      3 9 1 2 −1
      3 9 3 6 1 ].

(a)

Find a basis for Col A. What is the dimension of Col A? What, then, is the dimension of Nul A?

(b)

Find a basis for Nul A and verify the dimension you found in part (a).

2.

Let

A = [ 2 −1 1
      1 0 1
      1 −1 2 ].

The eigenvalues of A are 1 and 2. Find the dimension of each eigenspace of A.

3.

Let

A = [ 1 2 −1 −1
      −2 −4 2 2
      1 2 −1 −1 ].

(a)

Find a basis for Col A. What is the rank of A?

(b)

Find a basis for Nul A. What is the nullity of A?

(c)

Verify the Rank-Nullity Theorem for A.

(d)

The row space of A is the span of the rows of A. Find a basis for the row space of A and the dimension of the row space of A.

4.

Let A be an m×n matrix with r pivots, where r is less than or equal to both m and n. Fill in the blanks.

(a)

The null space of A is a subspace of .

(b)

The column space of A is a subspace of .

(c)

Suppose r=m. Then there is a pivot in every and Col A= .

(d)

Suppose r=n. Then there is a pivot in every and Nul A=.

(e)

If A has 3 pivots, then the rank of A is .

(f)

If A has 3 pivots, then the number of free variables in the system Ax=0 is .

(g)

The dimension of Col A is equal to the number of , i.e. dim Col A=.

(h)

The dimension of Nul A is equal to the number of , i.e. dim Nul A=.

(i)

dim(Nul A)+dim(Col A)= .

(j)

Suppose the columns of A span Rm. Then rank A is .

(k)

Suppose the columns of A are linearly independent. Then r= and the dimension of Nul A is .

5.

Prove the remaining parts of the Invertible Matrix Theorem (Theorem 15.6). Let A be an n×n matrix.

(a)

Prove that A is invertible if and only if dim(Nul A)=0.

Hint.

What are the solutions to Ax=0?

(b)

Prove that A is invertible if and only if dim(Col A)=n.

Hint.

Use part (a) and the Rank-Nullity Theorem.

6.

We can convert the language of the Rank-Nullity Theorem to matrix transformation language, as we show in this exercise. Let T be the matrix transformation defined by the matrix A.

(a)

How is the kernel of T related to A?

(b)

How is the range of T related to A?

(c)

How is the domain of T related to A?

(d)

Explain why the Rank-Nullity Theorem says that dim(Ker(T))+dim(Range(T))=dim(Domain(T)).

7.

Let W be a subspace of R4. What are possible values for the dimension of W? Explain. What are the geometric descriptions of W in each case?

8.

Is it possible to find two subspaces W1 and W2 in R3 such that W1∩W2={0} and dim W1=dim W2=2? If possible, give an example and justify that they satisfy the conditions. If not possible, explain why not.

Hint.

Dimension two leads to two linearly independent vectors in each of Wi.

9.

Determine the dimensions of the column space and null space of

[ 1 2 4 3 2
  1 0 2 1 4
  1 1 3 1 2
  1 0 2 2 5 ].

10.

If possible, find a 3×4 matrix whose column space has dimension 3 and null space has dimension 1. Explain how you found the matrix in addition to explaining why your answer works. If not possible, explain why it is not possible to find such a matrix.

11.

(a)

If possible, find a 5×5 matrix whose column space has the same dimension as its null space. Explain how you found the matrix in addition to explaining why your answer works. If not possible, explain why it is not possible to find such a matrix.

(b)

If possible, find a matrix A so that Col A=Nul A. Explain how you found the matrix in addition to explaining why your answer works. If not possible, explain why it is not possible to find such a matrix.

12.

In this exercise we examine why the dimension of a row space of a matrix is the same as the dimension of the column space of the matrix. Let A be an m×n matrix.

(a)

Explain why row operations do not change the row space of a matrix. Then explain why if R is the reduced row echelon form of A, then Row R=Row A, where Row M is the row space of the matrix M.

(b)

Explain why the rows of R that contain pivots form a basis for Row R, and also of Row A.

(c)

Explain why rank(A) is the number of pivots in the matrix A. Then explain why dim(Row A)=dim(Col A).

13.

Label each of the following statements as True or False. Provide justification for your response.

(a) True/False.

The dimension of the column space of a 3×2 matrix can be three.

(b) True/False.

There exists a 3×3 matrix whose column space has the same dimension as its null space.

(c) True/False.

If a set of vectors spans a subspace, then that set is a basis of this subspace.

(d) True/False.

If a linearly independent set of vectors spans a subspace, then that set is a basis of this subspace.

(e) True/False.

The dimension of a space is the minimum number of vectors needed to span that space.

(f) True/False.

The dimension of the null space of a 3×2 matrix can at most be 2.

(g) True/False.

Any basis of R4 contains 4 vectors.

(h) True/False.

If n vectors span Rn, then these vectors form a basis of Rn.

(i) True/False.

Every line in Rn is a one-dimensional subspace of Rn.

(j) True/False.

Every plane through the origin in Rn is a two-dimensional subspace of Rn.

(k) True/False.

In Rn any n linearly independent vectors form a basis.

Subsection Project: The GGH Cryptosystem

A cryptographic system (or cryptosystem) allows for secure communication between two or more parties. These systems take messages (called plaintext) and encrypt them in some way to produce what is called ciphertext. This is the scrambled information that is transmitted to the receiver, from which it should not be possible for someone who does not have the proper key to recover the original message. When the message is received by the intended recipient, it must be unscrambled or decrypted. Decryption is the process of converting ciphertext back to plaintext.

The Goldreich-Goldwasser-Halevi (GGH) public key cryptosystem 30  uses lattices to encrypt plaintext. The security of the system depends on the fact that the Closest Vector Problem (CVP) is, in general, a very hard problem. To begin to understand these cryptosystems, we begin with lattices.

Lattices are closely related to spans of sets of vectors in Rn. If we start with a linearly independent set S={b1,b2,…,bm} in Rn, we can create the span of S, the set of all linear combinations

c1b1+c2b2+⋯+cmbm,

where c1, c2, …, cm are real numbers. This span creates a subspace of Rn. If we restrict the set from which we choose the coefficients, we can create different types of structures. An important one is a lattice. The lattice L(S) defined by the linearly independent set S={b1,b2,…,bm} is the set of linear combinations

c1b1+c2b2+⋯+cmbm,

where c1, c2, …, cm are integers. If the vectors in S have integer components, then every point in L(S) will have integer entries. In these cases, L(S) is a subset of Zn, as illustrated in Figure 15.9. Also, if m=n we say that the lattice L(S) is full-rank. We will restrict ourselves to full-rank lattices in this project. A basis for a lattice is any set of linearly independent vectors that generates the lattice. There is a little special notation that is often used with lattices. If B={b1,b2,…,bn} is a basis for Rn, we associate to B the matrix B=[b1 b2 b3 ⋯ bn]. We then use the notation L(B) to also refer to the lattice defined by B.
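To see concretely how a lattice differs from a span, the sketch below (assuming Python with NumPy; the helper name lattice_points is made up for illustration) lists the lattice vectors c1b1+⋯+cnbn for small integer coefficients.

    import numpy as np
    from itertools import product

    def lattice_points(basis_matrix, coeff_range):
        """Return the lattice points c1*b1 + ... + cn*bn for all integer
        coefficient vectors with entries taken from coeff_range."""
        B = np.asarray(basis_matrix)
        n = B.shape[1]
        return [B @ np.array(c) for c in product(coeff_range, repeat=n)]

    # The basis S1 = {[1 1]T, [-1 1]T} from Project Activity 15.7, as columns of B.
    B = np.array([[1, -1],
                  [1,  1]])
    for p in lattice_points(B, range(-2, 3)):
        print(p)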

Project Activity 15.7.

We explore lattices in more detail in this activity.

(a)

Let S1={[1 1]T,[−1 1]T}.

(i)

Find five distinct vectors in L(S1).

(ii)

Is the vector [1 0]T in L(S1)? Justify your answer.

(iii)

We can draw pictures of lattices by plotting the terminal points of the lattice vectors. Draw all of the lattice points in L(S1) on the square with vertices (−4,−4), (4,−4), (4,4), and (−4,4).

(b)

Now let S2={[3 5]T,[1 2]T}. A picture of L(S2) is shown in Figure 15.9 with the basis vectors highlighted. As we have seen, L(S1) is not the entire space Z2. Is L(S2)=Z2? Justify your answer.

Figure 15.9. The lattice L(S2).

Project Activity 15.7 shows that even if B is a basis for Rn, it does not follow that L(B) is all of Zn. So lattices can be complicated, and problems in lattice theory can be very difficult.

The GGH cryptosystem relies on the fact that we can convert "good" bases for lattices into "bad" bases. We will not delve into the details of what separates a "good" basis from a "bad" one, but suffice it to say that a good basis is one in which the basis vectors are close to being perpendicular 31  and are all short (that is, they have small norms), while any other basis is a bad basis. An example of a good basis is the basis S1 for R2 in Project Activity 15.7, and we will see later that {[−2 8]T,[−1 3]T} is a bad basis for the same lattice. You should draw a picture of the vectors [−2 8]T and [−1 3]T to convince yourself that this is a bad basis for its lattice.

The GGH cryptosystem works with two keys: a public key and a private key. The keys are based on lattices. The general process of the GGH cryptosystem is as follows. Begin with a good basis B={b1,b2,…,bn} of Rn of vectors with integer components. Let B=[b1 b2 ⋯ bn] be the matrix associated with B. Let B′={b1′,b2′,…,bn′} be a bad basis for which L(B′)=L(B). Let B′=[b1′ b2′ ⋯ bn′] be the matrix associated to the basis B′. The bad basis can be shared with anyone (the public key), but the good basis is kept secret (the private key). Start with a message m=[m1 m2 ⋯ mn]T with integer entries to send.

First we encrypt the message, which can be done by anyone who has the public key B′.

  • Create the message vector

    m′=m1b1′+m2b2′+⋯+mnbn′=B′m

    that is in the lattice using the bad basis B′.

  • Choose a small error e to add to m′ to move m′ off the lattice (small enough that m′ is still the lattice point closest to m′+e). This is an important step that will make the message difficult to decrypt without the key. Let c=m′+e=B′m+e. The vector c is the ciphertext that is to be transmitted to the receiver. A short sketch of this encryption step appears after this list.
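A minimal sketch of the encryption step, assuming Python with NumPy, is below. The function name ggh_encrypt and the small public basis, message, and error are hypothetical values chosen only to exercise the computation; they are not the keys from the project activities.

    import numpy as np

    def ggh_encrypt(B_bad, m, e):
        """GGH encryption sketch: map the integer message vector m into the
        lattice using the public (bad) basis, then perturb it with a small
        error vector e.  B_bad has the bad basis vectors as its columns."""
        return B_bad @ m + e

    # Hypothetical small example (not the keys from Project Activity 15.10).
    B_bad = np.array([[7, 5],
                      [4, 3]])        # a made-up public basis
    m = np.array([2, -1])             # message vector
    e = np.array([1,  0])             # small error vector
    c = ggh_encrypt(B_bad, m, e)
    print("ciphertext:", c)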

Only someone who knows the basis B can decode the ciphertext. This is done as follows.

  • Find the vector a=a1b1+a2b2+⋯+anbn in the good basis B that is closest to c.

  • We interpret the vector a as the encoded lattice vector without the error. So to recreate the original message vector we need to undo the encryption done with the bad basis B′. That is, we need to find the weights y1, y2, …, yn such that

    a=y1b1′+y2b2′+⋯+ynbn′=B′[y1 y2 ⋯ yn]T.

    We can do this as [y1 y2 ⋯ yn]T=B′−1a.

There are several items to address before we can implement this algorithm. One is how we create a bad basis B′ from B that produces the same lattice. Another is how we find the vector in the lattice closest to a given vector. The latter problem is called the Closest Vector Problem (CVP) and is, in general, a very difficult problem. This is what makes lattice-based cryptosystems secure. We address the first of these items in the next activity, and the second a bit later.

Project Activity 15.8.

Consider again the basis S1={[1 1]T,[−1 1]T} from Project Activity 15.7, and let

B = [ 1 −1
      1 1 ]

be the matrix whose columns are the vectors in S1.

(a)

Let T be the triangle with vertices (0,0), (1,1), and (−1,1). Show that this triangle is a right triangle, and conclude that the vectors v1=[1 1]T and v2=[−1 1]T are perpendicular.

(b)

Let

U = [ 3 1
      5 2 ].

Let S3 be the set whose vectors are the columns of the matrix BU. Show that L(S1)=L(S3).

The two bases S1={[1 1]T,[−1 1]T} and S3={[−2 8]T,[−1 3]T} from Project Activity 15.8 are shown in Figure 15.10. This figure illustrates how the matrix U transforms the basis S1, in which the vectors are perpendicular and short, to one in which the vectors are nearly parallel and significantly longer. So the matrix U converts the "good" basis S1 into a "bad" basis S3, keeping the lattice intact. This is a key idea in the GGH cryptosystem. What makes this work is the fact that both U and U−1 have integer entries. The reason for this is that, for a 2×2 matrix

U = [ a b
      c d ],

we know that

U−1 = (1/(ad−bc)) [ d −b
                    −c a ].

If U has integer entries and ad−bc=±1, then U−1 will also have integer entries. The number ad−bc is called the determinant of U, and matrices with determinant 1 or −1 are called unimodular. The next activity shows that what happened in Project Activity 15.8 also happens in the general case.

Figure 15.10. The lattices L(S1) and L(S3).
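The computation in Project Activity 15.8 can also be checked numerically. The sketch below, assuming Python with NumPy, verifies that U is unimodular, forms the bad basis B′=BU, and confirms that U−1 has integer entries.

    import numpy as np

    B = np.array([[1, -1],
                  [1,  1]])            # good basis S1 (as columns)
    U = np.array([[3, 1],
                  [5, 2]])             # candidate unimodular matrix

    det_U = int(round(np.linalg.det(U)))
    print("det(U) =", det_U)           # +1 or -1 means U is unimodular
    B_bad = B @ U                      # columns give the bad basis S3
    print("bad basis B' =\n", B_bad)
    print("U inverse =\n", np.linalg.inv(U))   # integer entries, so B = B'U^{-1}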

Project Activity 15.9.

We will restrict ourselves to 2×2 matrices in this activity, but the results generalize to n×n matrices. Let B={b1,b2} and B′={b1′,b2′} be bases for R2 with integer entries, and let B=[b1 b2] and B′=[b1′ b2′] be the matrices associated to these bases. Show that if B′=BU for some unimodular matrix U with integer entries, then L(B)=L(B′).

Project Activity 15.9 is the part we need for our lattice-based cryptosystem. Although we won't show it here, the converse of the statement in Project Activity 15.9 is also true. That is, if B and B′ generate the same lattice, then B′=BU for some unimodular matrix U with integer entries.

There is one more item to address before we implement the GGH cryptosystem. That item is how to solve the Closest Vector Problem. There are some algorithms for approximating the closest vector in a basis. One is Babai's Closest Vector algorithm. This algorithm works in the following way. Consider a lattice with basis {b1,b2,…,bn}. To approximate the closest vector in the lattice to a vector w, find the weights c1, c2, …, cn in R such that w=c1b1+c2b2+⋯+cnbn. Then round the coefficients to the nearest integer. This algorithm works well for a good basis, but is unlikely to return a lattice point that is close to w if the basis is a bad one.
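A minimal sketch of Babai's rounding step, and of how it can be used to finish the decryption, is below. It assumes Python with NumPy, and the key, message, and error values are made up for illustration (they are not the ones in Project Activity 15.10).

    import numpy as np

    def babai_closest_vector(B_good, c):
        """Babai's rounding: write c in the good basis, round the coordinates
        to integers, and return the corresponding lattice vector."""
        coords = np.linalg.solve(B_good, c)   # real coordinates of c in the good basis
        return B_good @ np.rint(coords)       # (approximately) nearest lattice vector

    def ggh_decrypt(B_good, B_bad, c):
        """Recover the message: find the lattice vector closest to c, then
        express that lattice vector in the public (bad) basis."""
        a = babai_closest_vector(B_good, c)
        return np.rint(np.linalg.solve(B_bad, a)).astype(int)

    # Hypothetical keys and message, just to exercise the functions.
    B_good = np.array([[4, 0],
                       [0, 5]])
    U      = np.array([[2, 1],
                       [1, 1]])               # unimodular: determinant 1
    B_bad  = B_good @ U
    m      = np.array([1, -3])
    c      = B_bad @ m + np.array([1, -1])    # encrypt with a small error
    print("recovered message:", ggh_decrypt(B_good, B_bad, c))   # expect [ 1 -3]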

Now we put this all together to illustrate the GGH algorithm.

Figure 15.11. Decrypting an encrypted message.

Project Activity 15.10.

Let B={[5 0]T,[0 3]T} be the private key, and let

B = [ 5 0
      0 3 ]

be the matrix whose columns are the vectors in B. Let U be the unimodular matrix

U = [ 2 3
      3 5 ].

Let m=[3 −2]T be our message and let e=[1 −1]T be our error vector.

(a)

Use the unimodular matrix U to create the bad basis B′.

(b)

Determine the ciphertext message c.

(c)

A picture of the message vector m and the ciphertext vector c are shown in Figure 15.11. Although the closest vector in the lattice to c can be determined by the figure, actual messages are constructed in high dimensional spaces where a visual approach is not practical. Use Babai's algorithm to find the vector in L(B) that is closest to c and compare to Figure 15.11.

(d)

The final step in the GGH scheme is to recover the original message. Complete the GGH algorithm to find this message.

(e)

The GGH cryptosystem works because the CVP can be reasonably solved using a good basis. That is, Babai's algorithm works if our basis is a good basis. To illustrate that a bad basis will not allow us to reproduce the original message vector, show that Babai's algorithm does not return the closest vector to c using the bad basis B′.

Published in 1997 by Oded Goldreich, Shafi Goldwasser, and Shai Halevi.
This is also a good property in vector spaces. We will see in a later section that perpendicular basis vectors make calculations in vector spaces relatively easy. A similar thing is true in lattices, where we are able to solve certain variants of the closest vector problem very efficiently.