
Section 10 The Inverse of a Matrix

Subsection Application: Modeling an Arms Race

Lewis Fry Richardson was a Quaker by conviction who was deeply troubled by the major wars that had been fought in his lifetime. Richardson's training as a physicist led him to believe that the causes of war were phenomena that could be quantified, studied, explained, and thus controlled. He collected considerable data on wars and constructed a model to represent an arms race. The equations in his model caused him concern about the future as indicated by the following statement:

But it worried him that the equations also showed that the unilateral disarmament of Germany after 1918, enforced by the Allied Powers, combined with the persistent level of armaments of the victor countries would lead to the level of Germany's armaments growing again. In other words, the post-1918 situation was not stable. From the model he concluded that great statesmanship would be needed to prevent an unstable situation from developing, which could only be prevented by a change of policies.  20 

Analyzing Richardson's arms race model utilizes matrix operations, including matrix inverses. We explore the basic ideas in Richardson's model later in this section.

Subsection Introduction

To this point we have solved systems of linear equations with matrix forms Ax=b by row reducing the augmented matrices [A | b]. These linear matrix-vector equations should remind us of linear algebraic equations of the form ax=b, where a and b are real numbers. Recall that we solved an equation of the form ax=b by dividing both sides by a (provided a≠0), giving the solution x=b/a, or equivalently x=a^{-1}b. The important property that the number a^{-1} has that allows us to solve a linear equation in this way is that a^{-1}a=1, so that a^{-1} is the multiplicative inverse of a. We can solve certain types of matrix equations Ax=b in the same way, provided we can find a matrix A^{-1} with similar properties. We investigate this situation in this section.

Preview Activity 10.1.

(a)

Before we define the inverse matrix, recall that the identity matrix In (with 1's along the diagonal and 0's everywhere else) is a multiplicative identity in the set of n×n matrices (just like the real number 1 is the multiplicative identity in the set of real numbers). In particular, InA=AIn=A for any n×n matrix A. Now we can generalize the inverse operation to matrices. For an n×n matrix A, we define A^{-1} to be the matrix which, when multiplied by A, gives us the identity matrix. In other words, AA^{-1}=A^{-1}A=In. We can find the inverse of a matrix on a calculator by using the x^{-1} button.

For each of the following matrices, determine if the inverse exists using your calculator or other appropriate technology. If the inverse does exist, write down the inverse and check that it satisfies the defining property of the inverse matrix, that is, AA^{-1}=A^{-1}A=In. If the inverse doesn't exist, write down any error you received from the technology. Can you guess why the inverse does not exist for these matrices?
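If you are using Python with NumPy as your technology, the following minimal sketch shows one way to carry out this check. The 2×2 matrix below is only a placeholder (the activity's own matrices are not reproduced here); for a singular matrix, NumPy raises an error instead of returning an inverse.

import numpy as np

# Placeholder 2x2 matrix; substitute each matrix from the activity here.
A = np.array([[1.0, 2.0],
              [1.0, 3.0]])

try:
    A_inv = np.linalg.inv(A)              # raises LinAlgError if A is singular
    print(A_inv)
    # Check the defining property A A^{-1} = A^{-1} A = I (up to rounding).
    print(np.allclose(A @ A_inv, np.eye(2)), np.allclose(A_inv @ A, np.eye(2)))
except np.linalg.LinAlgError as err:
    print("No inverse:", err)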

(b)

Now we turn to the question of how to find the inverse of a matrix in general. With this approach, we will be able to determine which matrices have inverses as well. We will consider the 2×2 case to make the calculations easier. Suppose A is a 2×2 matrix. Our goal is to find a matrix B so that AB=I2 and BA=I2. If such a matrix exists, we will call B the inverse, A^{-1}, of A.

(i)

What does the equation AB=I2 tell us about the size of the matrix B?

(ii)

Now let A=[1 2; 1 3]. We want to find a matrix B so that AB=I2. Suppose B has columns b1 and b2, i.e. B=[b1 b2]. Our definition of matrix multiplication shows that

AB=[Ab1 Ab2].
(A)

If AB=I2, what must Ab1 and Ab2 equal?

(B)

Use the result from part (A) to set up two matrix equations to solve in order to find b1 and b2. Then find b1 and b2. As a result, find the matrix B.

(C)

By solving these two systems we have found a matrix B so that AB=I2. Is this enough to say that B is the inverse of A? If not, what else do we need to know to verify that B is in fact A^{-1}? Verify that B is A^{-1}.

(iii)

A matrix inverse is extremely useful in solving matrix equations and can help us in solving systems of equations. Suppose that A is an invertible matrix, i.e., there exists A^{-1} such that AA^{-1}=A^{-1}A=In.

(A)

Consider the system Ax=b. Use the inverse of A to show that this system has a solution for every b and find an expression for this solution in terms of b and A^{-1}. (Note that since matrix multiplication is not commutative, we have to pay attention to the order in which we multiply matrices. For example, A^{-1}AB=B while we cannot simplify ABA^{-1} to B unless A and B commute.)

(B)

If A, B, and C are matrices and A+C=B+C, then we can subtract the matrix C from both sides to see that A=B. We saw in Section 8 that there is no corresponding general cancellation property for matrix multiplication when we found that AB=AC could hold while B≠C. However, we can cancel A from this equation in certain circumstances. Suppose that AB=AC and that A is an invertible matrix. Show that we can cancel A in this case and conclude that B=C. (Note: When simplifying the product of matrices, again keep in mind that matrix multiplication is not commutative.)

Subsection Invertible Matrices

We now have an algebra of matrices in that we can add, subtract, and multiply matrices of the correct sizes. But what about division? In our early mathematics education we learned about multiplicative inverses (or reciprocals) of real numbers. The multiplicative inverse of a number a is the real number which when multiplied by a produces 1, the multiplicative identity of real numbers. This inverse is denoted a^{-1}. For example, the multiplicative inverse of 2 is 2^{-1}=1/2 because

2⋅(1/2)=1=(1/2)⋅2.

Of course, we didn't have to write both products because multiplication of real numbers is a commutative operation. There are a couple of important things to note about multiplicative inverses: we can use the inverse of the number a to solve the simple linear equation ax+b=c for x (x=a^{-1}(c-b)), and not every real number has an inverse. The latter means that the inverse is not defined on the entire set of real numbers. We can extend the idea of inverses to matrices, although we will see that there are many more matrices than just the zero matrix that do not have inverses.

To define matrix inverses 21  we make an analogy with the property of inverses in the real numbers: x⋅x^{-1}=1=x^{-1}⋅x.

Definition 10.1.

Let A be an n×n matrix.

  1. A is invertible if there is an n×n matrix B so that AB=BA=In.

  2. If A is invertible, an inverse of A is a matrix B such that AB=BA=In.

If an n×n matrix A is invertible, its inverse will be unique (see Exercise 1), and we denote the inverse of A as A^{-1}. We also call an invertible matrix a non-singular matrix (with singular meaning non-invertible).

Activity 10.2.

(a)

Let A=[1 0; 0 0]. Calculate AB where B=[a b; c d]. Using your result, explain why it is not possible to have AB=I2, showing that A is non-invertible.

(b)

Calculate AB where A=[1 2; 2 4] and B=[a b; c d]. Using your result, explain why the inverse of A doesn't exist.

We saw in Activity 10.2 why the inverse does not exist for two specific matrices. We will find in the next section an easy criterion for determining when a matrix has an inverse. In short, when the RREF of the matrix has a pivot in every column and row, then the matrix will be invertible. We know that this condition relates to quite a few other linear algebra concepts we have seen so far, such as linear independence of columns and the columns spanning Rn. We will put these criteria together in one big theorem in the next section.

Activity 10.3.

Suppose that A is an invertible n×n matrix. Hence we have an inverse matrix A^{-1} for which AA^{-1}=A^{-1}A=In. We will see how the inverse is useful in solving matrix equations involving A.

(a)

Explain why the matrix expressions

 A^{-1}(AB), A^{-1}(A(BA)A^{-1})  and  BA^{-1}BAA^{-1}B^{-1}A

can all be simplified to B.

Hint.

Use the associative property of matrix multiplication.

(b)

Suppose the system Ax=b has a solution. Explain why then A^{-1}(Ax)=A^{-1}b. What does this equation simplify to?

(c)

Since we found a single expression for the solution x of the equation Ax=b, this implies that the equation has a unique solution. What does this imply about the matrix A?

As we saw in Activity 10.3, if the n×n matrix A is invertible, then the equation Ax=b is consistent for every b in Rn and has the unique solution x=A^{-1}b. This means that A has a pivot in every row and column, which is equivalent to the criterion that A reduces to In, as we noted above.

Even though x=A^{-1}b is an explicit expression for the solution of the system Ax=b, using the inverse of a matrix is usually not a computationally efficient way to solve a matrix equation. Row reducing the augmented matrix [A | b] generally takes fewer operations than computing A^{-1} and then the product A^{-1}b.
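As an illustration (assuming Python with NumPy as the technology), the sketch below solves Ax=b both ways for the 3×3 matrix that reappears in Example 10.3; np.linalg.solve works from a factorization of A (an organized form of row reduction) rather than forming A^{-1} explicitly.

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [1.0, -1.0, -1.0],
              [1.0, 0.0, 1.0]])
b = np.array([5.0, 4.0, 1.0])

# Preferred: solve the system directly.
x_solve = np.linalg.solve(A, b)

# Also correct, but generally more work and less numerically stable.
x_inv = np.linalg.inv(A) @ b

print(x_solve)                      # [ 6.  7. -5.]
print(np.allclose(x_solve, x_inv))  # True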

Subsection Finding the Inverse of a Matrix

The next questions for us to address are how to tell when a matrix is invertible and how to find the inverse of an invertible matrix. Consider a 2×2 matrix A. To find the inverse matrix B=[b1 b2] of A, we have to solve the two matrix-vector equations Ab1=[1; 0] and Ab2=[0; 1] to find the columns of B. Since A is the coefficient matrix for both systems, we apply the same row operations on both systems to reduce A to RREF. Thus, instead of solving the two matrix-vector equations separately, we could simply have found the RREF of

[A | 1 0; 0 1]

and done all of the work in one pass. Note that the right hand side of the augmented matrix is now I2. So we row reduce [A | I2], and if the systems are consistent, the reduced row echelon form of [A | I2] must be [I2 | A^{-1}]. You should be able to see that this same process works in any dimension.

How to find the inverse of an nΓ—n matrix A.

  • Augment A with the identity matrix In.

  • Apply row operations to reduce the augmented matrix [A | In]. If the system is consistent, then the reduced row echelon form of [A | In] will have the form [In | B] (by Activity 10.2 (d)). If the reduced row echelon form of A is not In, then this step fails and A is not invertible.

  • If A is row equivalent to In, then the matrix B in the second step has the property that AB=In. We will show later that the matrix B also satisfies BA=In and so B is the inverse of A.
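A minimal computational sketch of this procedure, assuming Python with SymPy and using the matrix A=[1 2; 1 3] from the earlier activities: augment A with the identity, row reduce, and read off A^{-1} when the left block reduces to the identity.

import sympy as sp

A = sp.Matrix([[1, 2],
               [1, 3]])
n = A.shape[0]

# Row reduce [A | I_n].
rref_matrix, pivot_cols = A.row_join(sp.eye(n)).rref()

if list(pivot_cols) == list(range(n)):
    # The left block reduced to I_n, so the right block is A^{-1}.
    print(rref_matrix[:, n:])   # Matrix([[3, -2], [-1, 1]])
else:
    print("A is not invertible")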

Activity 10.4.

Find the inverse of each matrix using the method above, if it exists. Compare the result with the inverse that you get from using appropriate technology to directly calculate the inverse.

We can use this method of finding the inverse of a matrix to derive a concrete formula for the inverse of a 2×2 matrix:

(10.1)   [a b; c d]^{-1} = (1/(ad-bc)) [d -b; -c a],

provided that ad-bc≠0 (see Exercise 2). Hence, any 2×2 matrix [a b; c d] has an inverse if and only if ad-bc≠0. We call this quantity the determinant of A, denoted det(A). We will see that the determinant of a general n×n matrix is essential in determining invertibility of the matrix.
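For example, applying formula (10.1) to the matrix A=[1 2; 1 3] from the earlier activities gives ad-bc = (1)(3)-(2)(1) = 1 ≠ 0, so A is invertible and A^{-1} = [3 -2; -1 1]; a quick multiplication confirms that AA^{-1}=A^{-1}A=I2.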

Subsection Properties of the Matrix Inverse

As we have done with every new operation, we ask what properties the inverse of a matrix has.

Activity 10.5.

Consider the following questions about matrix inverses. If two n×n matrices A and B are invertible, is the product AB invertible? If so, what is the inverse of AB? We answer these questions in this activity.

(a)

Let

A=[1 2; 1 3]   and   B=[2 3; -1 2].

(i)

Use formula (10.1) to find A^{-1} and B^{-1}.

(ii)

Find the matrix product AB. Is AB invertible? If so, use formula (10.1) to find the inverse of AB.

(iii)

Calculate the products A^{-1}B^{-1} and B^{-1}A^{-1}. What do you notice?

(b)

In part (a) we saw that the matrix product B^{-1}A^{-1} was the inverse of the matrix product AB. Now we address the question of whether this is true in general. Suppose now that C and D are invertible n×n matrices so that the matrix inverses C^{-1} and D^{-1} exist.

(i)

Use matrix algebra to simplify the matrix product (CD)(D^{-1}C^{-1}).

Hint.

What do you know about DD^{-1} and CC^{-1}?

(ii)

Simplify the matrix product (D^{-1}C^{-1})(CD) in a manner similar to part i.

(iii)

What conclusion can we draw from parts i and ii? Explain. What property of matrix multiplication requires us to reverse the order of the product when we create the inverse of CD?

Activity 10.5 gives us one important property of matrix inverses. The other properties given in the next theorem can be verified similarly.
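In symbols, the computation in Activity 10.5 amounts to (AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AInA^{-1} = AA^{-1} = In, and similarly (B^{-1}A^{-1})(AB) = In, so that AB is invertible with (AB)^{-1} = B^{-1}A^{-1}.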

Subsection Examples

What follows are worked examples that use the concepts from this section.

Example 10.3.

For each of the following matrices A,

  • Use appropriate technology to find the reduced row echelon form of [A | I3].

  • Based on the result of part (a), is A invertible? If yes, what is Aβˆ’1? If no, explain why.

Let x=[x1; x2; x3] and b=[5; 4; 1]. If A is invertible, solve the matrix equation Ax=b using the inverse of A. If A is not invertible, find all solutions, if any, to the equation Ax=b using whatever method you choose.

(a)

A=[1 2 3; 1 -1 -1; 1 0 1]

Solution.

With A=[1 2 3; 1 -1 -1; 1 0 1], we have the following.

  • The reduced row echelon form of [A | I3] is

    [1 0 0 | 1/2 1 -1/2; 0 1 0 | 1 1 -2; 0 0 1 | -1/2 -1 3/2].
  • Since A is row equivalent to I3, we conclude that A is invertible. The reduced row echelon form of [A | I3] tells us that

    A^{-1} = (1/2)[1 2 -1; 2 2 -4; -1 -2 3].
  • The solution to Ax=b is given by

    x = A^{-1}b = (1/2)[1 2 -1; 2 2 -4; -1 -2 3][5; 4; 1] = [6; 7; -5].
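One possible technology check of the work in part (a), assuming Python with SymPy (which keeps the fractions exact):

import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [1, -1, -1],
               [1, 0, 1]])
b = sp.Matrix([5, 4, 1])

A_inv = A.inv()
print(A_inv)        # Matrix([[1/2, 1, -1/2], [1, 1, -2], [-1/2, -1, 3/2]])
print(A_inv * b)    # Matrix([[6], [7], [-5]])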
(b)

A=[1 2 5; 1 -1 -1; 1 0 1]

Solution.

With A=[1 2 5; 1 -1 -1; 1 0 1], we have the following.

  • The reduced row echelon form of [A | I3] is

    [1 0 1 | 0 0 1; 0 1 2 | 0 -1 1; 0 0 0 | 1 2 -3].
  • Since A is not row equivalent to I3, we conclude that A is not invertible.

  • The reduced row echelon form of [A | b] is

    [1 0 1 | 0; 0 1 2 | 0; 0 0 0 | 1].

    The fact that the augmented column is a pivot column means that the equation Ax=b has no solutions.

Example 10.4.

(a)

Let A=[0 1 0; 0 0 1; 0 0 0].

(i)

Show that A^2≠0 but A^3=0.

Solution.

Let A=[0 1 0; 0 0 1; 0 0 0].

Using technology to calculate A^2 and A^3 we find that A^3=0 while A^2=[0 0 1; 0 0 0; 0 0 0].

(ii)

Show that I-A is invertible and find its inverse. Compare the inverse of I-A to I+A+A^2.

Solution.

Let A=[0 1 0; 0 0 1; 0 0 0].

For this matrix A we have I-A=[1 -1 0; 0 1 -1; 0 0 1]. The reduced row echelon form of [I-A | I3] is

[1 0 0 | 1 1 1; 0 1 0 | 0 1 1; 0 0 1 | 0 0 1],

so I-A is invertible and (I-A)^{-1}=[1 1 1; 0 1 1; 0 0 1]. A straightforward matrix calculation also shows that

(I-A)^{-1}=I+A+A^2.
(b)

Let M be an arbitrary square matrix such that M^3=0. Show that I-M is invertible and find an inverse for I-M.

Solution.

We can try to emulate the result of part (a) here. Expanding using matrix operations gives us

(I-M)(I+M+M^2)=(I+M+M^2)-(M+M^2+M^3)=(I+M+M^2)-(M+M^2+0)=I

and

(I+M+M^2)(I-M)=(I+M+M^2)-(M+M^2+M^3)=(I+M+M^2)-(M+M^2+0)=I.

So I-M is invertible and (I-M)^{-1}=I+M+M^2. This argument can be generalized to show that if M is a square matrix and M^n=0 for some positive integer n, then I-M is invertible and

(I-M)^{-1}=I+M+M^2+⋯+M^{n-1}.
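A quick numerical check of this identity for the matrix A from part (a), assuming Python with NumPy:

import numpy as np

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])
I = np.eye(3)

# A is nilpotent (A^3 = 0), so (I - A)^{-1} should equal I + A + A^2.
print(np.allclose(np.linalg.inv(I - A), I + A + A @ A))   # True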

Subsection Summary

  • If A is an n×n matrix, then A is invertible if there is a matrix B so that AB=BA=In. The matrix B is called the inverse of A and is denoted A^{-1}.

  • An n×n matrix A is invertible if and only if the reduced row echelon form of A is the n×n identity matrix In.

  • To find the inverse of an invertible n×n matrix A, augment A with the identity and row reduce. If [A | In]∼[In | B], then B=A^{-1}.

  • If A and B are invertible n×n matrices, then (AB)^{-1}=B^{-1}A^{-1}. Since the inverse of AB exists, the product of two invertible matrices is an invertible matrix.

  • We can use the algebraic tools we have developed for matrix operations to solve equations much like we solve equations with real variables. We must be careful, though, to only multiply by inverses of invertible matrices, and remember that matrix multiplication is not commutative.

Exercises Exercises

1.

Let A be an invertible n×n matrix. In this exercise we will prove that the inverse of A is unique. To do so, we assume that both B and C are inverses of A, that is, AB=BA=In and AC=CA=In. By considering the product BAC simplified in two different ways, show that B=C, implying that the inverse of A is unique.

2.

Let A=[a b; c d] be an arbitrary 2×2 matrix.

(a)

If A is invertible, perform row operations to determine a row echelon form of A.

Hint.

You may need to consider different cases, e.g., when a=0 and when a≠0.

(b)

Under certain conditions, we can row reduce [A | I2] to [I2 | B], where

B = (1/(ad-bc)) [d -b; -c a].

Use the row echelon form of A from part (a) to find conditions under which the 2×2 matrix A is invertible. Then derive the formula for the inverse B of A.

3.

(a)

For a few different k values, find the inverse of A=[1 k; 0 1]. From these results, make a conjecture as to what A^{-1} is in general.

(b)

Prove your conjecture using the definition of inverse matrix.

(c)

Find the inverse of A=[1 k ℓ; 0 1 m; 0 0 1].

(Note: You can combine the first two parts above by applying the inverse finding algorithm directly on A=[1 k; 0 1].)

4.

Solve for the matrix A in terms of the others in the following equation:

P^{-1}(D+CA)P=B

If you need to use an inverse, assume it exists.

5.

For which c is the matrix A=[1 2 -1; 2 1 1; 1 5 c] invertible?

6.

For which c is the matrix A=[c 2; 3 c] invertible?

7.

Let A and B be invertible n×n matrices. Verify the remaining properties of Theorem 10.2. That is, show that

(b)

The matrix A^T is invertible and (A^T)^{-1}=(A^{-1})^T.

8.

Label each of the following statements as True or False. Provide justification for your response.

(a) True/False.

If A is an invertible matrix, then for any two matrices B,C, AB=AC implies B=C.

(b) True/False.

If A is invertible, then so is AB for any matrix B.

(c) True/False.

If A and B are invertible n×n matrices, then so is AB.

(d) True/False.

If A is an invertible n×n matrix, then the equation Ax=b is consistent for any b in Rn.

(e) True/False.

If A is an invertible n×n matrix, then the equation Ax=b has a unique solution when it is consistent.

(f) True/False.

If A is invertible, then so is A^2.

(g) True/False.

If A is invertible, then it reduces to the identity matrix.

(h) True/False.

If a matrix is invertible, then so is its transpose.

(i) True/False.

If A and B are invertible n×n matrices, then A+B is invertible.

(j) True/False.

If A^2=0, then I+A is invertible.

Subsection Project: The Richardson Arms Race Model

How and why a nation arms itself for defense depends on many factors. Among these factors are the offensive military capabilities a nation deems its enemies have, the resources available for creating military forces and equipment, and many others. To begin to analyze such a situation, we will need some notation and background. In this section we will consider a two nation scenario, but the methods can be extended to any number of nations. In fact, after World War I, Richardson collected data and created a model for the countries Czechoslovakia, China, France, Germany, England, Italy, Japan, Poland, the USA, and the USSR. 22 

Let N1 and N2 represent 2 different nations. Each nation has some military capability (we will call this the armament of the nation) at time n (think of n as representing the year). Let a1(n) represent the armament of nation N1 at time n, and a2(n) the armament of nation N2 at time n. We could measure ai(n) in weaponry or dollars or whatever units make sense for armaments. The Richardson arms race model provides connections between the armaments of the two nations.

Project Activity 10.6.

We continue to analyze a two nation scenario. Let us suppose that our two nations are Iran (nation N1) and Iraq (nation N2). In 1980, Iraq invaded Iran, resulting in a long and brutal 8-year war. Richardson was interested in analyzing data to see if such wars could be predicted by the changes in armaments of each nation. We construct the two nation model in this activity.

During each time period every nation adds or subtracts from its armaments. In our model, we will consider three main effects on the changes in armaments: the defense effect, fatigue effect and the grievance effect. In this activity we will discuss each effect in turn and then create a model to represent a two nation arms race.

  • We first consider the defense effect. In a two nation scenario, each nation may react to the potential threat implied by an arms buildup of the other nation. For example, if nation N1 feels threatened by nation N2 (think of South and North Korea, or Ukraine and Russia, for example), then nation N2's level of armament might cause nation N1 to increase its armament in response. We will let δ12 represent this effect of nation N2's armament on the armament of nation N1. Nation N1 will then increase (or decrease) its armament in time period n by the amount δ12a2(n-1) based on the armament of nation N2 in time period n-1. We will call δ12 a defense coefficient. 23 

  • Next we discuss the fatigue effect. Keeping a strong defense is an expensive and taxing enterprise, often exacting a heavy toll on the resources of a nation. For example, consider the fatigue that the U.S. experienced fighting wars in Iraq and Afghanistan, losing much hardware and manpower in these conflicts. Let δii represent this fatigue factor on nation i. Think of δii as a measure of how much the nation has to replace each year, so a positive fatigue factor means that the nation is adding to its armament. The fatigue factor produces an effect of δiiai(n-1) on the armament of nation i at time t=n; that is, an effect based on the armament at time t=n-1.

  • The last factor we consider is what we will call a grievance factor. This can be thought of as the set of ambitions and/or grievances against other nations (such as the acquisition or reacquisition of territory currently belonging to another country). As an example, Argentina and Great Britain both claim the Falkland Islands as territory. In 1982 Argentina invaded the disputed Falkland Islands, which resulted in a two-month long undeclared Falkland Islands war that returned control to the British. It seems reasonable that one nation might want to have sufficient armament in place to support its claim if force becomes necessary. Assuming that these grievances and ambitions have a constant impact on the armament of a nation from year to year, let gi be this "grievance" constant for nation i. 24  The effect a grievance factor gi would have on the armament of nation i in year n would be to add gi directly to ai(n-1), since the factor gi is constant from year to year (paying for arms and soldiers' wages, for example) and does not depend on the amount of existing armament.

(a)

Taking the three effects discussed above into consideration, explain why

a1(n) = δ11a1(n-1) + δ12a2(n-1) + a1(n-1) + g1.

Then explain why

(10.2)   a1(n) = (δ11+1)a1(n-1) + δ12a2(n-1) + g1.
(b)

Write an equation similar to equation (10.2) that describes a2(n) in terms of the three effects.

(c)

Let an=[a1(n); a2(n)]. Explain why

an = (D+I2)an-1 + g,

where D=[δ11 δ12; δ21 δ22] and g=[g1 g2]^T.

Table 10.5. Military Expenditures of Iran and Iraq 1966-1975
Year Iran Iraq
1966 662 391
1967 903 378
1968 1090 495
1969 1320 615
1970 1470 600
1971 1970 618
1972 2500 589
1973 2970 785
1974 5970 2990
1975 7100 1690

Project Activity 10.7.

In order to analyze a specific arms race between nations, we need some data to determine values of the δij and the gi. Table 10.5 shows the military expenditures of Iran and Iraq in the years leading up to their war. (The data is in millions of US dollars, adjusted for inflation, and is taken from "World Military Expenditures and Arms Transfers 1966-1975" by the U.S. Arms Control and Disarmament Agency.) We can perform regression (we will see how in a later section) on this data to obtain the following linear approximations:

(10.3)   a1(n) = 2.0780a1(n-1) - 1.7081a2(n-1) - 126.9954
(10.4)   a2(n) = 0.9419a1(n-1) - 1.3283a2(n-1) - 101.2980

(Of course, the data does not restrict itself to only factors between the two countries, so our model will not be as precise as we might like. However, it is a reasonable place to start.) Use the regression equations (10.3) and (10.4) to explain why

D = [1.0780 -1.7081; 0.9419 -2.3283]   and   g = [-126.9954 -101.2980]^T

for our Iran-Iraq arms race.

Project Activity 10.6 and Project Activity 10.7 provide the basics to describe the general arms race model due to Richardson. If we have an m nation arms race with D=[δij] and g=[gi], then

(10.5)   an = (D+Im)an-1 + g.
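As a sketch of how equation (10.5) can be iterated for a two nation model, assuming Python with NumPy; the coefficients and starting armaments below are made-up illustrative values, not fitted data.

import numpy as np

# Hypothetical defense/fatigue coefficients and grievance constants.
D = np.array([[-0.2, 0.5],
              [0.4, -0.3]])
g = np.array([10.0, 8.0])
a = np.array([100.0, 80.0])   # armaments a_0 for the two nations

# Iterate a_n = (D + I) a_{n-1} + g for a few years.
for year in range(1, 6):
    a = (D + np.eye(2)) @ a + g
    print(year, a)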

Project Activity 10.8.

The idea of an arms race, theoretically, is to reach a point at which all parties feel secure and no additional money needs to be spent on armament. If such a situation ever arises, then the armament of all nations is stable, or in equilibrium. If we have an equilibrium solution, then for large values of n we will have an=an-1. So to find an equilibrium solution, if it exists, we need to find a vector aE so that

(10.6)   aE = (D+I)aE + g

where I is the appropriate size identity matrix. If aE exists, we call aE an equilibrium state.

We can apply matrix algebra to find the equilibrium state vector aE under certain conditions.

(a)

Use matrix algebra to rewrite equation (10.6) and show that it is equivalent to the equation DaE = -g.

(b)

Under what conditions can we be assured that there will always be a unique equilibrium state aE? Explain. Under these conditions, how can we find this unique equilibrium state? Write this equilibrium state vector aE as a matrix-vector product.

(c)

Does the arms race model for Iran and Iraq have an equilibrium solution? If so, find it. If not, explain why not. Use technology as appropriate.

(d)

Assuming an equilibrium exists and that both nations behave in a way that supports the equilibrium, explain what the appropriate entry of the equilibrium state vector aE suggests about what Iran and Iraq's policies should be. What does this model say about why there might have been war between these two nations?

Nature 135, 830-831 (18 May 1935), "Mathematical Psychology of War" (3420).
We usually refer to a multiplicative inverse as just an inverse. Since every matrix has an additive inverse, there is no need to consider the existence of additive inverses.
The Union of Soviet Socialist Republics (USSR), headed by Russia, was a confederation of socialist republics in Eurasia. The USSR disbanded in 1991. Czechoslovakia was a sovereign state in central Europe that peacefully split into the Czech Republic and Slovakia in 1993.
Of course, there are many other factors that have not been taken into account in the analysis. A nation may have heavily armed allies (like the U.S.) which may provide enough perceived security that this analysis is not relevant. Also, a nation might be a neutral state, such as Switzerland, and this analysis might not apply to such nations.
It might be possible for gi to be negative if, for example, a nation feels that such disputes can and should only be settled by negotiation.