Saturday, 14 August 2021

The Group Inverse

In this post, I will discuss another type of generalized inverse called the Group inverse. Unlike the Moore-Penrose inverse, this pseudoinverse is defined only for square matrices, not for rectangular ones; hence we restrict ourselves to square matrices.

Consider the matrix $A=\begin{pmatrix}1 &0 &1\\ 0 &1 &1\\ 1 &0 &1\end{pmatrix}$; its spectrum is $\sigma(A)=\{0,1,2\}$. Further, $A^{\dagger}=\begin{pmatrix}1/3 &-1/3 &1/3 \\ -1/6 &2/3 &-1/6\\ 1/6 &1/3 &1/6 \end{pmatrix}$ and $\sigma(A^{\dagger})=\{0,2/3,1/2\}.$ So $1\in \sigma(A)$, but $1^{-1}=1\notin \sigma(A^{\dagger}).$
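This example can be verified numerically; the following quick check (a sketch using NumPy, not part of the original computation) recovers both spectra:

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 0., 1.]])

# Spectrum of A is {0, 1, 2}
spec_A = np.sort(np.linalg.eigvals(A).real)

# Moore-Penrose inverse and its spectrum {0, 1/2, 2/3}
A_pinv = np.linalg.pinv(A)
spec_pinv = np.sort(np.linalg.eigvals(A_pinv).real)

print(spec_A)      # approximately [0, 1, 2]
print(spec_pinv)   # approximately [0, 0.5, 0.667]

# 1 is an eigenvalue of A, yet 1 is not an eigenvalue of A^dagger
print(np.any(np.isclose(spec_pinv, 1.0)))   # False
```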

In our previous discussion of the Moore-Penrose inverse, we remarked that it does not retain many properties that the inverse of a nonsingular matrix has. For example, if $A\in \mathbb{R}^{n\times n}$ is singular and $\lambda \in \sigma(A)$, it may happen that $\lambda^{\dagger} \notin \sigma(A^{\dagger})$, as the example above shows. For a nonsingular matrix $A$, however, $\lambda \in \sigma(A)$ always implies $\lambda^{-1} \in \sigma(A^{-1})$, and the same holds for the corresponding eigenvectors of $A$ and $A^{-1}.$

Similarly, $B=X^{-1}AX$ does not imply that $B^{\dagger}=X^{-1}A^{\dagger}X$, although the analogous property does hold for the inverse when $A$ is nonsingular. So the spectral properties of $A$ are not preserved when we pass to $A^{\dagger}$. This problem is resolved by introducing another interesting generalized inverse called the Group inverse.

Definition: Let $A\in \mathbb{R}^{n\times n}$. A matrix $X$ (if it exists) satisfying the following three matrix equations:

(i) $AXA=A$

(ii) $XAX=X$

(iii) $AX=XA.$

is called the Group inverse of the matrix $A.$ It exists precisely for those matrices $A$ whose index is 1, i.e., $rank(A)=rank(A^2)$, or equivalently, $R(A)\cap N(A)=\{0\}$. The Group inverse of $A$ is denoted by $A^{\#}$, and it is unique whenever it exists.

Proof of uniqueness: Let $X$ and $Y$ both satisfy the above three equations. Using $A=AYA$, $A=AXA$ and the commutation relations $AX=XA$, $AY=YA$, we get

$AX=(AYA)X=(AY)(AX)$

and

$AY=YA=Y(AXA)=(YA)(XA)=(AY)(AX),$

so $AX=AY$. Hence

$X=XAX=X(AY)=(XA)Y=(AX)Y=(AY)Y=Y(AY)=YAY=Y.$

The name `Group' is given because the set $\{\cdots,(A^{\#})^{2},A^{\#},AA^{\#},A,A^{2},\cdots\}$ together with usual matrix multiplication forms an abelian group. It is not difficult to see that the identity element of this group is $AA^{\#}.$

Example: Consider the nilpotent matrix $A=\begin{pmatrix}0 &1\\0 &0 \end{pmatrix}$, then $A^{2}=\begin{pmatrix}0 &0\\0 &0 \end{pmatrix}$. Clearly, $1=rank(A)\neq rank(A^2)=0$. Alternatively, we can see that $(1,0)^T\in R(A)\cap N(A)$.
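The index condition in this example can be checked directly with a rank computation; a small NumPy sketch (my addition, not in the original post):

```python
import numpy as np

N = np.array([[0., 1.],
              [0., 0.]])

# rank(N) = 1 but rank(N^2) = 0, so N has index 2 and no Group inverse
print(np.linalg.matrix_rank(N))       # 1
print(np.linalg.matrix_rank(N @ N))   # 0
```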

From the above example, it is easy to see that the Group inverse of a nonzero nilpotent matrix cannot exist, because the range and null space of a nonzero nilpotent matrix are never complementary. From our elementary knowledge of linear algebra, we know that a nonzero nilpotent matrix is not diagonalizable. Below we first investigate the Group inverse of those matrices which are diagonalizable.

Suppose $A$ is a diagonalizable matrix of index 1. Then there exist a nonsingular matrix $P$ and a diagonal matrix $D=diag(\lambda_1,\lambda_2,\cdots,\lambda_n)$ ($n$ being the order of $A$) such that $A=PDP^{-1}$, and one can check that $A^{\#}=PD^{\dagger}P^{-1}$.

Recall that $D^{\dagger}=diag(\lambda_1^{\dagger},\lambda_2^{\dagger},\cdots,\lambda_n^{\dagger})$, where $\lambda^{\dagger}=\lambda^{-1}$ for $\lambda\neq 0$ and $0^{\dagger}=0.$
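This recipe can be checked on the matrix $A$ from the opening example, which is diagonalizable with distinct eigenvalues $0,1,2$ (a NumPy sketch; the tolerance $10^{-12}$ for detecting zero eigenvalues is my own choice):

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 0., 1.]])

# Diagonalize: A = P D P^{-1}
eigvals, P = np.linalg.eig(A)

# D^dagger inverts the nonzero eigenvalues and keeps the zeros
D_pinv = np.diag([0.0 if abs(l) < 1e-12 else 1.0 / l for l in eigvals])

A_group = P @ D_pinv @ np.linalg.inv(P)

# A_group satisfies the three defining equations of the Group inverse
print(np.allclose(A @ A_group @ A, A))              # True
print(np.allclose(A_group @ A @ A_group, A_group))  # True
print(np.allclose(A @ A_group, A_group @ A))        # True
```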

An explicit formula for computing the Group inverse of a matrix $A$ is obtained as follows:

Step 1: Find a full-rank factorization of $A$, say $A=FG$, where $F$ has full column rank and $G$ has full row rank.

Step 2: Check whether $(GF)^{-1}$ exists. If $GF$ is singular, then the Group inverse of $A$ does not exist.

Step 3: Finally, $A^{\#}=F(GF)^{-2}G.$
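The three steps above can be sketched in NumPy. Here the full-rank factorization is built from the SVD, and the singularity test in Step 2 uses a crude determinant threshold (both of these choices are mine, not prescribed by the formula):

```python
import numpy as np

def group_inverse(A, tol=1e-12):
    """Group inverse A^# = F (GF)^{-2} G from a full-rank factorization A = FG.

    The factorization is built from the SVD (one possible choice);
    returns None when GF is singular, i.e. when A^# does not exist.
    """
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol))            # rank of A
    F = U[:, :r] * s[:r]                # n x r, full column rank
    G = Vt[:r, :]                       # r x n, full row rank; A = F @ G
    GF = G @ F                          # Step 2: must be nonsingular
    if abs(np.linalg.det(GF)) < tol:
        return None
    GF_inv = np.linalg.inv(GF)
    return F @ GF_inv @ GF_inv @ G      # Step 3

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 0., 1.]])
Ag = group_inverse(A)
print(np.allclose(A @ Ag @ A, A))                     # True
print(np.allclose(A @ Ag, Ag @ A))                    # True

# The nilpotent matrix from the earlier example has no Group inverse
print(group_inverse(np.array([[0., 1.], [0., 0.]])))  # None
```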

Properties of the Group inverse:

(i) Whenever $A$ is nonsingular, $A^{\#}=A^{-1}$

(ii) $A^{\# \#}=A$

(iii) $(A^T)^{\#}=(A^{\#})^{T}$, where $T$ stands for the transpose

(iv) If $\lambda(\neq 0)\in \sigma(A)$, then $\lambda^{-1}\in \sigma(A^{\#})$

(v) If $x$ is an eigenvector corresponding to a nonzero eigenvalue $\lambda$ of $A$, then $x$ is also an eigenvector of $A^{\#}$ corresponding to the eigenvalue $\lambda^{-1}.$
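Properties (iv) and (v) can be checked numerically on the matrix $A$ from the opening example, whose Group inverse is computed here via its diagonalization (a NumPy sketch with an ad hoc zero tolerance, added for illustration):

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 0., 1.]])

# Group inverse via the diagonalization A = P D P^{-1}
eigvals, P = np.linalg.eig(A)
Ag = P @ np.diag([0.0 if abs(l) < 1e-12 else 1.0 / l for l in eigvals]) @ np.linalg.inv(P)

# (iv): sigma(A) = {0, 1, 2}, so sigma(A^#) = {0, 1, 1/2}
print(np.sort(np.linalg.eigvals(Ag).real))   # approximately [0, 0.5, 1]

# (v): an eigenvector of A for lambda = 2 is an eigenvector of A^# for 1/2
x = P[:, np.argmin(np.abs(eigvals - 2.0))]
print(np.allclose(Ag @ x, x / 2.0))          # True
```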


Theorem 1. Let $K$ be a square matrix of index 1, and let $L$ be such that $R(KL)\subset R(L)$. Then, $R(KL)=R(K)\cap R(L)$.


