# Secondary k-Generalized Inverse of s-k-Normal Matrices

DOI : 10.17577/IJERTV2IS90555


S. Krishnamoorthy 1 and G. Bhuvaneswari 2

1 Head & Professor of Mathematics, Ramanujan Research Center, Govt. Arts College (Autonomous), Kumbakonam, Tamilnadu 612001, India.

2 Research Scholar, Lecturer in Mathematics, Ramanujan Research Center, Govt. Arts College (Autonomous), Kumbakonam, Tamilnadu 612001, India.

Abstract:

The secondary k-generalized inverse of a square matrix is defined and its characterizations are given. Secondary k-generalized inverses of s-k normal matrices are discussed.

AMS Classification: 15A09, 15A57.

Keywords: s-k normal, s-k unitary, nilpotent, s-k hermitian matrices.

1. Introduction:

Ann Lee initiated the study of secondary symmetric matrices in [1]. The concept of secondary k-normal matrices was introduced in [3], and some equivalent conditions on secondary k-normal matrices are given in [4]. In this paper we describe the secondary k-generalized inverse of a square matrix as the unique solution of a certain set of equations. This secondary k-generalized inverse exists for a particular kind of square matrix. Let C^{n×n} denote the space of n×n complex matrices; here V denotes the permutation matrix of the secondary diagonal and K the permutation matrix of a fixed involution k. We deal with secondary k-generalized inverses of s-k normal matrices. Throughout this paper, if A ∈ C^{n×n}, we assume that A ≠ 0 implies A(KVA*VK) ≠ 0, i.e.,

A(KVA*VK) = 0 ⇒ A = 0 (1)

It is clear that the conjugate secondary k-transpose satisfies the following properties:

KV(A + B)*VK = (KVA*VK) + (KVB*VK)

KV(λA)*VK = λ̄(KVA*VK)

KV(BA)*VK = (KVA*VK)(KVB*VK)
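These three properties can be checked numerically. The sketch below (assuming numpy is available) implements the conjugate secondary k-transpose as a helper `theta`, taking V as the secondary-diagonal permutation and, purely for illustration, the involution k = (1,3)(2,4) used in Example 1.2.

```python
import numpy as np

n = 4
V = np.fliplr(np.eye(n))         # secondary-diagonal (reversal) permutation
K = np.eye(n)[[2, 3, 0, 1]]      # permutation matrix of k = (1,3)(2,4)

def theta(M):
    """Conjugate secondary k-transpose: K V M* V K."""
    return K @ V @ M.conj().T @ V @ K

rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
lam = 2.0 - 3.0j

assert np.allclose(theta(A + B), theta(A) + theta(B))        # additivity
assert np.allclose(theta(lam * A), np.conj(lam) * theta(A))  # conjugate homogeneity
assert np.allclose(theta(B @ A), theta(A) @ theta(B))        # product reversal
```

The product-reversal property holds because K and V are involutory permutation matrices, so the inner factors VK · KV cancel.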

Now if BA(KVA*VK) = CA(KVA*VK), then

BA(KVA*VK) − CA(KVA*VK) = 0

(BA(KVA*VK) − CA(KVA*VK))(KV(B − C)*VK) = 0

(BA − CA)(KV(BA − CA)*VK) = 0, by the product-reversal property,

so that BA − CA = 0 by (1), i.e., BA = CA.

Therefore BA(KVA*VK) = CA(KVA*VK) ⇒ BA = CA (2)

Similarly,

B(KVA*VK)A = C(KVA*VK)A ⇒ B(KVA*VK) = C(KVA*VK) (3)

Definition 1.1: [3]

A matrix A ∈ C^{n×n} is said to be secondary k-normal (s-k normal) if

A(KVA*VK) = (KVA*VK)A

Example 1.2:

A =
[ i 2 3 4 ]
[ 4 i 2 3 ]
[ 3 4 i 2 ]
[ 2 3 4 i ]

is an s-k normal matrix for k = (1,3)(2,4), the permutation matrices being

K =
[ 0 0 1 0 ]
[ 0 0 0 1 ]
[ 1 0 0 0 ]
[ 0 1 0 0 ]

and

V =
[ 0 0 0 1 ]
[ 0 0 1 0 ]
[ 0 1 0 0 ]
[ 1 0 0 0 ]
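This example can be verified numerically; the sketch below (assuming numpy) uses the K and V displayed above.

```python
import numpy as np

A = np.array([[1j, 2, 3, 4],
              [4, 1j, 2, 3],
              [3, 4, 1j, 2],
              [2, 3, 4, 1j]])
V = np.fliplr(np.eye(4))         # secondary-diagonal permutation
K = np.eye(4)[[2, 3, 0, 1]]      # permutation matrix of k = (1,3)(2,4)

theta_A = K @ V @ A.conj().T @ V @ K   # conjugate secondary k-transpose of A
assert np.allclose(A @ theta_A, theta_A @ A)   # A is s-k normal
```

Here A is a circulant and KVA*VK turns out to be circulant as well, which is why the two commute.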

Definition 1.3:

A matrix A ∈ C^{n×n} is said to be secondary k-unitary (s-k unitary) if

A(KVA*VK) = (KVA*VK)A = I

Example 1.4:

A =
[ i 1 1 0 ]
[ 1 i 0 1 ]
[ 1 0 i 1 ]
[ 0 1 1 i ]

is an s-k unitary matrix.

Section 2: Secondary k-Generalized Inverses of a Matrix

Theorem 2.1: For any A ∈ C^{n×n}, the four equations

AXA = A (4)

XAX = X (5)

KV(AX)*VK = AX (6)

KV(XA)*VK = XA (7)

have a unique solution X.

Proof: First, we shall show that equations (5) and (6) are equivalent to the single equation

X(KV(AX)*VK) = X (8)

From (5) and (6), (8) follows, since it is merely (6) substituted in (5). Conversely, (8) implies

AX(KV(AX)*VK) = AX

Since the left-hand side is s-k hermitian, (6) follows. By substituting (6) in (8), we get XAX = X, which is (5). Therefore (5) and (6) are equivalent to (8). Similarly, (4) and (7) are equivalent to the equation

XA(KVA*VK) = KVA*VK (9)

Thus, to find a solution of the given set of equations, it is enough to find an X satisfying (8) and (9). Now the powers (KVA*VK)A, ((KVA*VK)A)^2, ((KVA*VK)A)^3, … cannot all be linearly independent, i.e., there exists a relation

λ_1((KVA*VK)A) + λ_2((KVA*VK)A)^2 + … + λ_m((KVA*VK)A)^m = 0 (10)

where λ_1, λ_2, …, λ_m are not all zero. Let λ_r be the first nonzero λ, i.e., λ_1 = λ_2 = … = λ_{r−1} = 0 and λ_r ≠ 0. Therefore (10) implies that

λ_r((KVA*VK)A)^r = −λ_{r+1}((KVA*VK)A)^{r+1} − … − λ_m((KVA*VK)A)^m

If we take

B = −λ_r^{−1}[λ_{r+1}I + λ_{r+2}((KVA*VK)A) + … + λ_m((KVA*VK)A)^{m−r−1}]

then

B((KVA*VK)A)^{r+1} = −λ_r^{−1}[λ_{r+1}((KVA*VK)A)^{r+1} + … + λ_m((KVA*VK)A)^m] = ((KVA*VK)A)^r

By using (2) and (3) repeatedly, we get

B(KVA*VK)A(KVA*VK) = KVA*VK (11)

Now if we take X = B(KVA*VK), then (11) shows that this X satisfies (9), and hence also (7). Using (7) in (9), we have

(KV(XA)*VK)(KVA*VK) = KVA*VK

B(KV(XA)*VK)(KVA*VK) = B(KVA*VK)

Since KV(XA)*VK = (KVA*VK)(KVX*VK), the left-hand side equals B(KVA*VK)(KVX*VK)(KVA*VK) = X(KV(AX)*VK), while the right-hand side is X. Therefore X = B(KVA*VK) satisfies (8), and thus it is a solution of the given set of equations.

Now let us prove that this X is unique. Suppose that X and Y both satisfy (8) and (9), and hence (4)-(7). Then by substituting (7) in (5) and (6) in (4), we obtain

(KV(XA)*VK)X = X and (KV(AX)*VK)A = A

Also,

Y = (KV(YA)*VK)Y and KVA*VK = (KVA*VK)AY

Now

X = X(KVX*VK)(KVA*VK)

= X(KVX*VK)(KVA*VK)AY

= X(KV(AX)*VK)AY

= XAY

= XA(KV(YA)*VK)Y

= XA(KVA*VK)(KVY*VK)Y

= (KVA*VK)(KVY*VK)Y

= (KV(YA)*VK)Y

= Y

Therefore X = Y, and X is unique.

Definition 2.2: Let A ∈ C^{n×n}. The unique solution of (4), (5), (6) and (7) is called the secondary k-generalized inverse of A and is written as A^sk.

Example 2.3:

If A =
[ 1 1 1 ]
[ 1 1 1 ]
[ 1 1 1 ]

then A^sk =
[ 1/9 1/9 1/9 ]
[ 1/9 1/9 1/9 ]
[ 1/9 1/9 1/9 ]
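The claimed inverse can be checked against the four defining equations. The text does not specify k for this example, so the sketch below (assuming numpy) takes k = (1,3), whose permutation matrix happens to coincide with V in the 3×3 case.

```python
import numpy as np

A = np.ones((3, 3))
X = np.ones((3, 3)) / 9          # the claimed A^sk
V = np.fliplr(np.eye(3))         # secondary-diagonal permutation
K = np.eye(3)[[2, 1, 0]]         # permutation matrix of the assumed k = (1,3)

def theta(M):
    """Conjugate secondary k-transpose: K V M* V K."""
    return K @ V @ M.conj().T @ V @ K

assert np.allclose(A @ X @ A, A)           # (4)
assert np.allclose(X @ A @ X, X)           # (5)
assert np.allclose(theta(A @ X), A @ X)    # (6)
assert np.allclose(theta(X @ A), X @ A)    # (7)
```

Since the all-ones matrix is invariant under any permutation conjugation, (6) and (7) hold for any choice of k.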

Note 2.4: By using (7) in (5) and (6) in (4), and from (8) and (9), we obtain

A^sk(KV(A^sk)*VK)(KVA*VK) = A^sk = (KVA*VK)(KV(A^sk)*VK)A^sk (12)

A^sk A(KVA*VK) = KVA*VK = (KVA*VK)A A^sk

If λ is a scalar, then λ^sk means λ^{−1} when λ ≠ 0 and zero when λ = 0.

Section 3: Secondary k-Generalized Inverses of s-k Normal Matrices

In this section, characterizations of the secondary k-generalized (s-k-g) inverse of a matrix are obtained, s-k-g inverses of s-k normal matrices are discussed, s-k hermitian matrices are defined, and the condition for an s-k normal matrix to be diagonable is investigated.

Theorem 3.1: For A ∈ C^{n×n}:

1. (A^sk)^sk = A

2. (KVA*VK)^sk = KV(A^sk)*VK

3. If A is non-singular, then A^sk = A^{−1}

4. (λA)^sk = λ^sk A^sk

5. ((KVA*VK)A)^sk = A^sk(KV(A^sk)*VK)

Proof: Let A ∈ C^{n×n}.

1. By the definition of the s-k-g inverse, we have

A A^sk A = A and A^sk A A^sk = A^sk

Since the defining equations (4)-(7) are symmetric in A and A^sk, these imply that A is the s-k-g inverse of A^sk, i.e.,

(A^sk)^sk = A

2. From the definition of A^sk, we have A A^sk A = A. Taking the conjugate secondary k-transpose,

(KVA*VK)(KV(A^sk)*VK)(KVA*VK) = KVA*VK

Also, by definition,

(KVA*VK)((KVA*VK)^sk)(KVA*VK) = KVA*VK

The remaining defining equations are verified in the same way, so from these two equations and the uniqueness of the s-k-g inverse we have

(KVA*VK)^sk = KV(A^sk)*VK

3. Since A is non-singular, A^{−1} exists. Now A A^sk A = A (by the definition of A^sk). Pre-multiplying and post-multiplying by A^{−1}, we have

A^sk = A^{−1}

4. The equations A A^sk A = A and (λA)(λ^sk A^sk)(λA) = λA imply that

(λA)^sk = λ^sk A^sk

where λ^sk = λ^{−1} for λ ≠ 0 and λ^sk = 0 for λ = 0.

5. From (12) we have

A^sk(KV(A^sk)*VK)(KVA*VK) = A^sk

Also A A^sk A = A, and therefore

A^sk(KV(A^sk)*VK)(KVA*VK)A = A^sk A

Using these relations, one verifies that X = A^sk(KV(A^sk)*VK) satisfies the defining relations (4)-(7) for the matrix (KVA*VK)A, and hence, by uniqueness,

((KVA*VK)A)^sk = A^sk(KV(A^sk)*VK)

Theorem 3.2: A necessary and sufficient condition for the equation AXB = D to have a solution is

A A^sk D B^sk B = D

in which case the general solution is

X = A^sk D B^sk + Y − A^sk A Y B B^sk

where Y is arbitrary.

Proof: Assume that X satisfies AXB = D. Then, by the definition of A^sk and B^sk,

D = AXB = (A A^sk A)X(B B^sk B) = A A^sk (AXB) B^sk B = A A^sk D B^sk B

Conversely, if D = A A^sk D B^sk B, then X = A^sk D B^sk is a particular solution of AXB = D, since AXB = A A^sk D B^sk B = D.

If Y ∈ C^{n×n}, then any expression of the form X = A^sk D B^sk + Y − A^sk A Y B B^sk is a solution of AXB = D, since

AXB = A A^sk D B^sk B + AYB − (A A^sk A)Y(B B^sk B) = D + AYB − AYB = D

Conversely, if X is a solution of AXB = D, then X itself is of this form with Y = X, since

A^sk D B^sk + X − A^sk (AXB) B^sk = A^sk D B^sk + X − A^sk D B^sk = X

Hence the theorem.
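Theorem 3.2 can be illustrated with the matrices of Example 2.3. In the sketch below (assuming numpy), A = B is the 3×3 all-ones matrix with A^sk = A/9, and D = 3A is chosen so that the consistency condition holds.

```python
import numpy as np

J = np.ones((3, 3))
A = B = J
A_sk = B_sk = J / 9              # s-k-g inverse from Example 2.3

D = 3 * J
assert np.allclose(A @ A_sk @ D @ B_sk @ B, D)   # consistency condition holds

rng = np.random.default_rng(1)
Y = rng.standard_normal((3, 3))                       # arbitrary Y
X = A_sk @ D @ B_sk + Y - A_sk @ A @ Y @ B @ B_sk    # general solution
assert np.allclose(A @ X @ B, D)
```

The terms AYB contributed by Y cancel exactly as in the proof, so every such X solves AXB = D.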

Theorem 3.3: The matrix equations AX = B and XD = E have a common solution if and only if each equation has a solution and AE = BD.

Proof: It is easy to see that the conditions are necessary. Conversely, A^sk B and E D^sk are solutions of AX = B and XD = E respectively, and hence A A^sk B = B and E D^sk D = E. Also AE = BD. By using these facts it can be proved that

X = A^sk B + E D^sk − A^sk A E D^sk

is a common solution of the given equations.
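For a concrete illustration, one may specialize to the case where the conjugate secondary k-transpose reduces to the ordinary conjugate transpose, so that the s-k-g inverse becomes the Moore-Penrose inverse (`np.linalg.pinv`). The candidate common solution used below, A⁺B + ED⁺ − A⁺AED⁺, is the standard construction for this theorem and is stated here as an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
Xs = rng.standard_normal((4, 4))   # a hidden common solution
A = rng.standard_normal((4, 4))
D = rng.standard_normal((4, 4))
B, E = A @ Xs, Xs @ D              # AX = B and XD = E are then both solvable
assert np.allclose(A @ E, B @ D)   # the compatibility condition AE = BD

Ap, Dp = np.linalg.pinv(A), np.linalg.pinv(D)
X = Ap @ B + E @ Dp - Ap @ A @ E @ Dp   # candidate common solution
assert np.allclose(A @ X, B)
assert np.allclose(X @ D, E)
```

Verifying XD = E uses exactly the two facts in the proof: ED⁺D = E and A⁺AE = A⁺BD.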

Definition 3.4: A matrix E ∈ C^{n×n} is said to be secondary k-hermitian idempotent (s-k.h.i.) if E(KVE*VK) = E, i.e., E = KVE*VK and E^2 = E.

Theorem 3.5:

(i) A^sk A, A A^sk, I − A^sk A and I − A A^sk are all s-k hermitian idempotent.

(ii) J is idempotent if and only if there exist s-k hermitian idempotents E and F such that J = (FE)^sk, in which case J = EJF.

Proof: The proof of (i) is obvious. Suppose J is idempotent, so that J^2 = J. Take E = J J^sk and F = J^sk J; by (i) they are s-k hermitian idempotent, and

FE = J^sk J J J^sk = J^sk J J^sk = J^sk

so by (1) of Theorem 3.1, (FE)^sk = (J^sk)^sk = J, which satisfies our requirements.

Conversely, if J = (FE)^sk, then by (12), J = EFPEF, where

P = (KV((FE)^sk)*VK)(FE)^sk(KV((FE)^sk)*VK)

Since E^2 = E and F^2 = F, it follows that EJF = EFPEF = J, and hence

J^2 = (EJF)(EJF) = E(FE)^sk FE(FE)^sk F = E(FE)^sk F = EJF = J

Hence J is idempotent.
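Part (ii) can be illustrated in the Moore-Penrose special case (the s-k-g inverse taken as `np.linalg.pinv`, an assumption for illustration): for an idempotent J, the matrices E = JJ^sk and F = J^sk J are hermitian idempotent and (FE)^sk recovers J.

```python
import numpy as np

J = np.array([[1.0, 1.0],
              [0.0, 0.0]])        # an oblique (non-hermitian) projector
assert np.allclose(J @ J, J)

Jp = np.linalg.pinv(J)
E, F = J @ Jp, Jp @ J
for P in (E, F):                  # hermitian idempotent, as in part (i)
    assert np.allclose(P, P.conj().T)
    assert np.allclose(P @ P, P)

assert np.allclose(np.linalg.pinv(F @ E), J)   # J = (F E)^sk
assert np.allclose(E @ J @ F, J)               # J = E J F
```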

Note 3.6: (i) s-k hermitian idempotent matrices are s-k normal.

(ii) The s-k-g inverse of an s-k hermitian idempotent matrix is also s-k hermitian idempotent.

Definition 3.7: For any square matrix A there exists a unique set of matrices J_λ, defined for each complex number λ, such that

J_λ J_μ = δ_{λμ} J_λ (13)

Σ_λ J_λ = I (14)

A J_λ = J_λ A (15)

(A − λI)J_λ is nilpotent (16)

The nonzero J_λ's are called the principal idempotent elements of A.

Theorem 3.8: If

E_λ = I − ((A − λI)^n)^sk (A − λI)^n and F_λ = I − (A − λI)^n ((A − λI)^n)^sk,

where n is sufficiently large, then the principal idempotent elements of A are given by J_λ = (F_λ E_λ)^sk, and n can be taken as unity if and only if A is diagonable.

Proof: Assume that A is diagonable. Let

E_λ = I − (A − λI)^sk (A − λI) and F_λ = I − (A − λI)(A − λI)^sk

Then by (i) of Theorem 3.5, E_λ and F_λ are s-k hermitian idempotent matrices. If λ is not an eigenvalue of A, then A − λI is non-singular and F_λ and E_λ are zero by (3) of Theorem 3.1. Clearly,

(A − λI)E_λ = 0 and F_λ(A − λI) = 0 (17)

Therefore F_λ A E_μ = λ F_λ E_μ = μ F_λ E_μ, and hence

F_λ E_μ = 0 if λ ≠ μ (18)

Now if we take J_λ = (F_λ E_λ)^sk, then by (ii) of Theorem 3.5,

J_λ = E_λ (F_λ E_λ)^sk F_λ (19)

Now (18) and (19) imply J_λ J_μ = δ_{λμ} J_λ. Also, since (F_λ E_λ)(F_λ E_λ)^sk(F_λ E_λ) = F_λ E_λ,

F_λ J_λ E_λ = F_λ E_λ (20)

If Z is an eigenvector of A corresponding to the eigenvalue λ, then E_λ Z = Z. As A is diagonable, any column vector X conformable with A is expressible as a sum of eigenvectors, i.e., in the form X = Σ_λ E_λ X_λ, a finite sum over all complex λ. Similarly, if Y* is conformable with A, it is expressible as Y* = Σ_λ Y_λ* F_λ. Now by (18), (19) and (20),

Y*(Σ_λ J_λ)X = Σ_λ Y_λ* F_λ J_λ E_λ X_λ = Σ_λ Y_λ* F_λ E_λ X_λ = Y*X

Hence Y*(Σ_λ J_λ − I)X = 0 for all such X and Y, so

Σ_λ J_λ = I

Also, (17) and (19) lead to

A J_λ = λ J_λ = J_λ A (21)

This implies (A − λI)J_λ = 0, which is nilpotent, so (15) and (16) are satisfied. Moreover,

A = Σ_λ λ J_λ (22)

Conversely, if n can be taken as unity, then Σ_λ J_λ = I gives X = Σ_λ J_λ X, expressing any X as a sum of eigenvectors of A, since (21) was derived without assuming the diagonability of A; hence A is diagonable.

If A is not diagonable, it seems more convenient simply to prove that, for any set of J_λ's satisfying (13), (14), (15) and (16), each J_λ = (F_λ E_λ)^sk, where F_λ and E_λ are defined as in the theorem. If the J_λ's satisfy (13)-(16), then Σ_λ J_λ = I and

(A − λI)^n J_λ = 0 = J_λ (A − λI)^n (23)

which comes by using the fact that (A − λI)J_λ is nilpotent, where n is sufficiently large. From (23) and the definitions of E_λ and F_λ, we have

E_λ J_λ = J_λ = J_λ F_λ (24)

By using Euclid's algorithm, there exist P and Q which are polynomials in A such that

I = (A − λI)^n P + Q(A − μI)^n if λ ≠ μ

Now F_λ(A − λI)^n = 0 = (A − μI)^n E_μ; hence F_λ E_μ = 0 if λ ≠ μ. From (24), F_λ J_μ = 0 = J_λ E_μ if λ ≠ μ. Since Σ_λ J_λ = I, we get

F_λ J_λ = F_λ and J_λ E_λ = E_λ (25)

Using (24) and (25), it is easy to see that (F_λ E_λ)^sk = J_λ. Hence the theorem.
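For a diagonable matrix the principal idempotents are the eigenprojectors, and for a 2×2 matrix with simple spectrum they can be written down by Lagrange interpolation on the eigenvalues. The sketch below (assuming numpy) checks properties (13)-(16) and the spectral decomposition.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # eigenvalues 2 and 3, hence diagonable
I = np.eye(2)
J2 = (A - 3 * I) / (2 - 3)        # principal idempotent for lambda = 2
J3 = (A - 2 * I) / (3 - 2)        # principal idempotent for lambda = 3

assert np.allclose(J2 @ J2, J2) and np.allclose(J3 @ J3, J3)
assert np.allclose(J2 @ J3, 0)    # (13): J_2 J_3 = 0
assert np.allclose(J2 + J3, I)    # (14)
assert np.allclose(A @ J2, J2 @ A)        # (15)
assert np.allclose((A - 2 * I) @ J2, 0)   # (16): nilpotent (here zero, n = 1)
assert np.allclose(2 * J2 + 3 * J3, A)    # spectral decomposition of A
```

That n = 1 suffices here reflects the diagonability criterion of the theorem.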

Theorem 3.9: If A is s-k normal, then it is diagonable and its principal idempotent elements are s-k hermitian.

Proof: If A is s-k normal, then A − λI is s-k normal, and its s-k-g inverse commutes with it, so that in the definitions of E_λ and F_λ of Theorem 3.8 (with n = 1) we obtain

E_λ = I − (A − λI)^sk(A − λI) = I − (A − λI)(A − λI)^sk = F_λ

Hence A is diagonable, and since E_λ = F_λ, J_λ = E_λ is s-k hermitian.

References:

1. Ann Lee, Secondary symmetric, secondary skew symmetric, secondary orthogonal matrices, Period. Math. Hungar. 7 (1976), 63-76.

2. Adi Ben-Israel and Thomas N. E. Greville, Generalized Inverses: Theory and Applications, Wiley-Interscience, New York, 1974.

3. S. Krishnamoorthy and G. Bhuvaneswari, Secondary k-normal matrices, International Journal of Recent Scientific Research, Vol. 4, Issue 5, pp. 576-578, May 2013.

4. S. Krishnamoorthy and G. Bhuvaneswari, Some characteristics on secondary k-normal matrices, Open Journal of Mathematical Modeling, 1(1):80-84, July 2013.

5. Wedderburn, J. H. M., Lectures on Matrices, Colloq. Publ. Amer. Math. Soc., No. 17, 1934.