Semi-inverting a Non-diagonalizable Matrix

Donald R. Burleson, Ph.D.

© Copyright June 2017 by Donald R. Burleson. All rights reserved.


            In an article titled “Semi-inversion of a Diagonalizable Nonsingular Matrix,” found at www.blackmesapress.com/Diagonalizable.htm , I showed that a diagonalizable (and invertible) matrix A can rather easily be semi-inverted, i.e. transformed, by exponentiating the eigenvalues by way of either the canonical diagonalization or the spectral (principal idempotent) decomposition, to a matrix A^i which, if similarly transformed again, yields A^(-1). I also mentioned that while for nonsingular matrices diagonalizability is a sufficient condition for semi-invertibility, it is not in general a necessary condition, as one may find examples of non-diagonalizable matrices that are still semi-invertible. The example I gave there was a simple 2×2 matrix. For larger non-diagonalizable matrices the process of semi-inversion is often more laborious, but I wish here to suggest an approach that can yield the desired results, at least in reasonably straightforward circumstances.

          Basically this approach consists of taking a non-diagonalizable (and nonsingular) matrix A and examining its powers A^2, A^3, etc. to find a pattern, in terms of functions of n, that one may generalize (and affirm by mathematical induction) in order to write A^n; one then extends the matrix form to the condition n = i to form the semi-inverse A^i. Since a matrix-valued semi-inversion operator A → A^i essentially has to have the property (A^i)^i = A^(-1), one then (to verify the result) repeats the process on A^i, examining powers (A^i)^2, (A^i)^3, etc. to find a pattern by which one may write the matrix (A^i)^n in general as an array of functions of n, and then puts n = i to formally produce (A^i)^i and checks that this actually yields the inverse matrix, which of course one has computed beforehand for reference.
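
          As a quick computational cross-check of this defining property (quite apart from the pattern method itself), one may compute A^i directly as exp(i log A), using the principal matrix logarithm. The following Python sketch does exactly that for an illustrative non-diagonalizable matrix of its own choosing (a single Jordan block, used here purely as a stand-in and not taken from any example in this article), and confirms numerically that semi-inverting twice returns the ordinary inverse:

    # Cross-check of (A^i)^i = A^(-1) via the principal matrix logarithm,
    # A^i = exp(i * log A).  The matrix below is only an illustrative stand-in.
    import numpy as np
    from scipy.linalg import expm, logm

    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])      # non-diagonalizable: a single Jordan block

    def semi_invert(M):
        """Return M^i = exp(i * log M), using the principal matrix logarithm."""
        return expm(1j * logm(M))

    Ai = semi_invert(A)                  # the semi-inverse A^i
    Aii = semi_invert(Ai)                # semi-inverting a second time
    print(np.allclose(Aii, np.linalg.inv(A)))   # expected: True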


EXAMPLE 1:


Let A = ole3.gif with inverse ole4.gif

and with the single eigenvalue 1, of algebraic multiplicity 3.

A is not diagonalizable, since the triple eigenvalue 1 generates only eigenvectors of the form (x, 0, 0)^T, an eigenspace that is only one-dimensional, whereas the dimensions of the eigenspaces of A would have to add up to 3 for the 3×3 matrix to be diagonalizable. (A is of course nonsingular since it has no zero eigenvalue.)
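
          (For a matrix of this kind, the failure of diagonalizability is easy to confirm symbolically. The short Python/SymPy sketch below does so for a stand-in matrix with the same behavior, namely a triple eigenvalue 1 whose eigenspace is spanned by (1, 0, 0)^T; the matrix shown in the code is assumed purely for illustration and is not necessarily the matrix A above.)

    # Symbolic check that a triple eigenvalue with a one-dimensional eigenspace
    # rules out diagonalizability.  The matrix is an illustrative stand-in only.
    import sympy as sp

    A = sp.Matrix([[1, 1, 0],
                   [0, 1, 1],
                   [0, 0, 1]])

    print(A.eigenvals())              # {1: 3}: algebraic multiplicity 3
    print(A.eigenvects()[0][2])       # one basis eigenvector, of the form (x, 0, 0)^T
    print(A.is_diagonalizable())      # False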

          We examine a few powers of A:

ole6.gif

ole7.gif    etc., where one discerns (and easily proves by mathematical induction) that the general pattern gives ole8.gif

so that the natural identification of the desired semi-inverse would formally be

ole9.gif  

Then, to verify that the result, when the semi-inversion operator is applied again, does indeed produce the ordinary inverse of A, we examine powers of A^i, where by hindsight the elements in the first row, second column of each matrix will be written in a style that shows the emerging pattern:

ole10.gif

ole11.gif

so that the pattern gives

ole12.gif


The substitution of i for n produces the expected corresponding elements in the matrix A^(-1), so that altogether (A^i)^i = A^(-1) as required.
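
          The same verification can be carried out numerically. The sketch below uses a stand-in matrix whose general power is known in closed form (again a single Jordan block, with A^n having the entries 1, n, and n(n-1)/2 above the diagonal; this is an assumed illustration, not necessarily the matrix treated above), and checks both that the formal substitution n = i agrees with the principal-branch matrix power exp(i log A) and that substituting again reproduces A^(-1):

    # For a stand-in matrix whose general power is known in closed form,
    #   A^n = [[1, n, n(n-1)/2], [0, 1, n], [0, 0, 1]],
    # compare the formal substitution n = i with the principal-branch matrix
    # power exp(i * log A).  Because this pattern satisfies A^a A^b = A^(a+b),
    # the pattern for (A^i)^n is the same function evaluated at i*n, so putting
    # n = i a second time amounts to evaluating the pattern at i*i = -1.
    import numpy as np
    from scipy.linalg import expm, logm

    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])

    def A_pow(n):
        """The conjectured general power A^n, evaluated at an arbitrary (complex) n."""
        return np.array([[1, n, n * (n - 1) / 2],
                         [0, 1, n],
                         [0, 0, 1]], dtype=complex)

    print(np.allclose(A_pow(1j), expm(1j * logm(A))))        # formal A^i matches exp(i log A)
    print(np.allclose(A_pow(1j * 1j), np.linalg.inv(A)))     # (A^i)^i = A^(-1)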


          Clearly this example works rather smoothly because the matrix powers here do not produce any functions of n that are difficult to discern, but with many other non-diagonalizable matrices the patterns would be less transparent. Still, this approach does in principle allow us to compute semi-inverses for many non-diagonalizable matrices, by directly verifying that the operator that maps A to the result of formally replacing n with i in the general power A^n does in fact lead to (A^i)^i = A^(-1). More precisely, if the matrix power A^n is made up of functions of n that one may denote f_{u,v}(n) as the matrix elements, as in the above example, then one defines the image of the semi-inversion operator functionally as the matrix whose u,v-th element is f_{u,v}(i), provided that for each such function of n it is possible to evaluate the function at n = i. In the same notation, when one applies the operator a second time in succession, one produces in each elemental position

the value at n = i of the corresponding entry function of (A^i)^n, which is the u,v-th element of A^(-1). Thus in practice a non-diagonalizable matrix can be semi-inverted so long as one can write each of the elements of A^n as a particular function of n for which it is possible to evaluate that function in all cases at n = i.
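
          The entrywise bookkeeping just described can also be carried out symbolically. The SymPy sketch below again uses, purely as an assumed stand-in family, entry functions f_{u,v}(n) collected into a matrix F(n) (the same illustrative Jordan-block pattern as before, not any specific example from this article), checks the exponent law F(a)F(b) = F(a+b) that justifies reading (F(i))^n as F(in), and then evaluates at n = i twice:

    # Entrywise formalism: the entries of A^n are functions f_{u,v}(n), collected
    # here in a stand-in matrix family F(n) used purely for illustration.
    import sympy as sp

    a, b = sp.symbols('a b')

    def F(z):
        """Matrix of the entry functions f_{u,v} evaluated at z (a stand-in for A^z)."""
        return sp.Matrix([[1, z, z * (z - 1) / 2],
                          [0, 1, z],
                          [0, 0, 1]])

    # Exponent law F(a)*F(b) = F(a+b); it lets us read (F(i))^n as F(i*n).
    print((F(a) * F(b) - F(a + b)).applyfunc(sp.expand))   # the zero matrix

    print(F(sp.I))                       # the semi-inverse: each f_{u,v} evaluated at n = i
    print(F(sp.I * sp.I) == F(1).inv())  # True: evaluating at n = i twice gives A^(-1)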

          In the case of a diagonalizable matrix, while the semi-inverse can be computed by diagonalization or principal idempotent decomposition, the “matrix powers pattern identification” method illustrated here in the above example will in principle work as well, yielding the same result.


EXAMPLE 2:


Let

        A  =  [ 1    0    2 ]
              [ 0   -1    0 ]
              [ 0    0    2 ]

with eigenvalues 1, -1, and 2,

and respective eigenvectors (1, 0, 0)^T, (0, 1, 0)^T, and (2, 0, 1)^T, which as columns form a modal matrix M, where M and its inverse are then

        M  =  [ 1   0   2 ]          M^(-1)  =  [ 1   0  -2 ]
              [ 0   1   0 ]                     [ 0   1   0 ]
              [ 0   0   1 ]                     [ 0   0   1 ]

so that the canonical diagonalization of A is

        A  =  M D M^(-1)  =  [ 1   0   2 ] [ 1   0   0 ] [ 1   0  -2 ]
                             [ 0   1   0 ] [ 0  -1   0 ] [ 0   1   0 ]
                             [ 0   0   1 ] [ 0   0   2 ] [ 0   0   1 ]

and replacing the diagonal eigenvalues with their exponentiations 1^i = 1, (-1)^i = e^(-π), 2^i = 2^i, and multiplying the expression out, gives the semi-inverse as

        A^i  =  [ 1      0      2(2^i - 1) ]
                [ 0   e^(-π)        0      ]
                [ 0      0         2^i     ]

But even though the matrix is diagonalizable, we could alternatively have employed the “pattern identification” approach. Observing the first few powers of A we see:

        A^2  =  [ 1   0    6 ]      A^3  =  [ 1    0   14 ]      A^4  =  [ 1   0   30 ]
                [ 0   1    0 ]              [ 0   -1    0 ]              [ 0   1    0 ]
                [ 0   0    4 ]              [ 0    0    8 ]              [ 0   0   16 ]

where one readily discerns the elemental patterns as functions of n to be:

        A^n  =  [ 1      0      2(2^n - 1) ]
                [ 0   (-1)^n        0      ]
                [ 0      0         2^n     ]

and formally putting n = i gives a semi-inverse exactly corresponding to the one obtained by diagonalization. But although the patterns here are easy to determine, they would be more subtle in most cases, and when the matrix is diagonalizable, the easier way by far is to semi-invert it by way of diagonalization and operations on the eigenvalues.
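
          The agreement between the two approaches is easy to confirm numerically. The Python sketch below uses the matrix determined by the stated eigenvalues and eigenvectors, semi-inverts it by exponentiating the eigenvalues, compares the result with the n = i substitution in the A^n pattern, and then exponentiates the eigenvalues by i a second time to recover A^(-1):

    # Numerical check of Example 2: semi-inversion by diagonalization versus the
    # pattern-identification substitution n = i, and the double application.
    # (A is the matrix determined by the stated eigenvalues and eigenvectors.)
    import numpy as np

    M = np.array([[1, 0, 2],
                  [0, 1, 0],
                  [0, 0, 1]], dtype=complex)       # modal matrix (eigenvectors as columns)
    Minv = np.linalg.inv(M)
    A = M @ np.diag([1, -1, 2]) @ Minv              # A = M D M^(-1)

    Ai  = M @ np.diag([1, np.exp(-np.pi), 2**1j]) @ Minv            # 1^i, (-1)^i = e^(-pi), 2^i
    Aii = M @ np.diag([1, np.exp(-np.pi)**1j, (2**1j)**1j]) @ Minv  # exponentiate by i again

    Ai_pattern = np.array([[1, 0, 2 * (2**1j - 1)],
                           [0, np.exp(-np.pi), 0],
                           [0, 0, 2**1j]])          # the A^n pattern with n = i substituted
    print(np.allclose(Ai, Ai_pattern))              # the two methods agree
    print(np.allclose(Aii, np.linalg.inv(A)))       # (A^i)^i = A^(-1)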

          A case in point is the following.


EXAMPLE 3:


Consider the matrix A = ole28.gif which is apparently simple in form and only slightly different from a matrix previously examined, and is in fact diagonalizable. In the earlier article at www.blackmesapress.com/Diagonalizable.htm this matrix was easily shown by diagonalization to have a semi-inverse equal to

ole29.gif    The “pattern identification” method ultimately works on this matrix but requires identifying functions of n that are a bit more troublesome than in the previous example. The first few powers of the matrix A are:

ole30.gif

ole31.gif

and while some of the patterns for writing A^n are obvious, e.g. (A^n)_{2,2} = 2^n and (A^n)_{3,3} = (-1)^n, others are not quite so immediate. For the position row 1, column 2, with successive values 2, 4, 10, 20, 42, 84, ... we need to note that these may be written respectively 2^1, 2^2, 2^1 + 2^3, 2^2 + 2^4, 2^1 + 2^3 + 2^5, 2^2 + 2^4 + 2^6, etc., or

2^2 + 2^4 + ... + 2^n = (4/3)(2^n - 1) when n is even, and 2^1 + 2^3 + ... + 2^n = (2/3)(2^(n+1) - 1) when n is odd, these expressions having been derived using the familiar algebraic identity

x^m - 1 = (x - 1)(x^(m-1) + x^(m-2) + ... + x + 1).

We need a single expression for both these cases (even, odd), in order to have a single function of n where we may later put n = i, and to resolve this parity-generated difference we may write, for both cases together, the common expression (2^(n+2) - 3 - (-1)^n)/3, and when n = i the expression matches the corresponding element in A^i as determined by diagonalization; in this case that result is already known and serves as a check.

          Similarly, for the elements in position row 3, column 2, with successive values 1, 3, 5, 11, 21, 43, etc., we may write in general (2^(n+1) + (-1)^n)/3, and when n = i this matches the corresponding element in A^i. Ironically, these patterns as functions of n are suggested by the results produced by diagonalization, and one need only check that they generate the successive sets of values observed in the “pattern identification” approach; but of course when a matrix is not diagonalizable to start with, this is not an available option.
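
          The two single expressions just given are easily checked against the listed values, for instance with a few lines of Python:

    # Check that the single expressions above reproduce the listed entries of A^n
    # in positions (1,2) and (3,2) for n = 1, ..., 6.
    def entry_12(n):
        return (2**(n + 2) - 3 - (-1)**n) // 3     # one formula covering even and odd n

    def entry_32(n):
        return (2**(n + 1) + (-1)**n) // 3

    print([entry_12(n) for n in range(1, 7)])      # [2, 4, 10, 20, 42, 84]
    print([entry_32(n) for n in range(1, 7)])      # [1, 3, 5, 11, 21, 43]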

           The matrix just examined, which fortunately is diagonalizable, is much more readily semi-inverted by diagonalization and exponentiation of its eigenvalues. Non-diagonalizable matrices present a less felicitous computational prospect, and one is encouraged by the fact that, with regard to n×n matrices, in the space C^(n×n) with the standard topology the set of all non-diagonalizable matrices has Lebesgue measure zero.