Theory of System of Linear Differential Equations on Time Scales

DOI : 10.17577/IJERTV2IS90018



Prof. K. Rajendra Prasad, G. Venkata Vijaya Lakshmi and P. Murali

Department of Engineering Mathematics, Andhra University, Visakhapatnam, 530003, Andhra Pradesh, India.

Abstract

This paper presents a criterion to construct fundamental matrices for the system of linear differential equations with constant coefficients on time scales. We develop a procedure to compute fundamental matrices for vector differential equations on time scales.

Key words: Time scale, dynamical equation, fundamental matrix, eigenvalues, eigenvectors.

AMS Subject Classification: 34B99, 39A99

  1. Introduction

The study of solutions of linear differential equations on time scales has gained momentum because it provides a unified approach to differential and difference systems. The theory of linear differential equations provides a broad mathematical basis for an understanding of continuous time dynamic processes. Many results on continuous time dynamical systems are also needed in the discrete time context. In the recent past a new theory has emerged which unifies the results not only on continuous and discrete time dynamical systems but also on discrete time dynamical systems with arbitrary jumps. The theory was first introduced by B. Aulbach et al [2]. By a time scale we mean a nonempty closed subset of $\mathbb{R}$. For the time scale calculus and the notation for delta differentiation, as well as concepts for dynamic equations on time scales, we refer to the introductory book on time scales by M. Bohner et al [3]. It provides a new direction of research in dynamical processes with time scales.

In this paper, for the development of the theory, we construct the fundamental matrices for the system of linear differential equations on time scales. If all the eigenvalues of the coefficient matrix are real and distinct, then we can construct a solution of the system without any difficulty. But if some of the eigenvalues of the coefficient matrix are repeated, care is needed, since in general the $n$th delta derivative of a polynomial cannot be evaluated without assumptions.

Three conditions that we assume throughout are as follows:

(A) Every point $t$ in $T$ is neither simultaneously left dense and right scattered nor simultaneously left scattered and right dense.

(B) The jump is uniform at all scattered points of $T$.

(C) The eigenvalues of $A$ are regressive on $T$.

This paper is organized as follows. In Section 2, we briefly describe some salient features of time scales, functions defined on time scales, and operations with these functions. In Section 3, we construct the fundamental matrices for the system of linear differential equations on a time scale for real and distinct eigenvalues and, as an application, we also give some examples to demonstrate our results. In Section 4, we first introduce the algebraic concepts needed for the main result; then, invoking our assumptions, along with the direct sum of solution spaces, we prove a lemma to compute the $m$th delta derivative of $t^n$, in order to obtain fundamental matrices of the system of linear differential equations in the general case.

  2. Preliminaries

We denote the time scale by the symbol $T$. By an interval we mean the intersection of a real interval with a given time scale. A time scale $T$ may be connected or disconnected. To overcome this topological difficulty, the concept of jump operators is introduced in the following way. The operators $\sigma$ and $\rho$ from $T$ to $T$, defined by
$$\sigma(t) = \inf\{s \in T : s > t\} \quad \text{and} \quad \rho(t) = \sup\{s \in T : s < t\},$$
are called jump operators. If $T$ is bounded above and below, then we define $\sigma(\max T) = \max T$ and $\rho(\min T) = \min T$.

These operators allow us to classify the points of a time scale $T$. A point $t \in T$ is said to be right-dense if $\sigma(t) = t$, left-dense if $\rho(t) = t$, right-scattered if $\sigma(t) > t$, left-scattered if $\rho(t) < t$, isolated if $\rho(t) < t < \sigma(t)$, and dense if $\rho(t) = t = \sigma(t)$. The set $T^k$, which is derived from the time scale $T$, is defined as follows:
$$T^k = \begin{cases} T \setminus (\rho(\sup T), \sup T] & \text{if } \sup T < \infty, \\ T & \text{if } \sup T = \infty. \end{cases}$$
Finally, if $f : T \to \mathbb{R}$ is a function, then we define the function $f^\sigma : T \to \mathbb{R}$ by $f^\sigma(t) = f(\sigma(t))$ for all $t \in T$.

Definition 2.1 Let $T$ be a time scale, $\mathbb{R}$ the real line, and $f : T \to \mathbb{R}$. We say that $f$ is delta differentiable at a point $s \in T^k$ if there exists an $a \in \mathbb{R}$ such that for any $\varepsilon > 0$ there exists a neighborhood $U$ of $s$ such that
$$|f(\sigma(s)) - f(t) - (\sigma(s) - t)a| \le \varepsilon|\sigma(s) - t| \quad \text{for all } t \in U;$$
or, more specifically, $f$ is delta differentiable at $s$ if the limit
$$\lim_{\substack{t \to s \\ t \ne \sigma(s)}} \frac{f(t) - f(\sigma(s))}{t - \sigma(s)}$$
exists; this value is denoted by $f^\Delta(s)$.

If $f$ is delta differentiable at every $t \in T^k$, we say that $f : T^k \to \mathbb{R}$ is delta differentiable on $T$. If $f$ and $g$ are two functions delta differentiable at $s$, then $fg$ is delta differentiable at $s$ and
$$(fg)^\Delta(s) = f^\Delta(s)g(s) + f^\sigma(s)g^\Delta(s) = f(s)g^\Delta(s) + f^\Delta(s)g^\sigma(s).$$
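As a quick numerical check, the following minimal Python sketch (ours, assuming the concrete time scale $T = \mathbb{Z}$, where $\sigma(t) = t + 1$, $\mu(t) = 1$, and the delta derivative is the forward difference) verifies the product rule:

    # Check the time-scale product rule on T = Z, where sigma(t) = t + 1
    # and the delta derivative reduces to the forward difference.
    def sigma(t):
        return t + 1

    def delta(f, t):
        return f(sigma(t)) - f(t)   # mu(t) = 1 on Z

    f = lambda t: t ** 2
    g = lambda t: 3 * t + 1

    for t in range(5):
        lhs = delta(lambda s: f(s) * g(s), t)
        rhs = delta(f, t) * g(t) + f(sigma(t)) * delta(g, t)
        assert lhs == rhs   # (fg)^Delta = f^Delta g + f^sigma g^Delta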

Definition 2.2 A function $g : T^k \to \mathbb{R}$ is rd-continuous if it is continuous at every right-dense point $t \in T^k$ and if $\lim_{s \to t^-} g(s)$ exists for each left-dense $t \in T^k$.

We say that a function $p : T^k \to \mathbb{R}$ is regressive provided $1 + \mu(t)p(t) \ne 0$ for all $t \in T^k$, where for $s \in T$ we define the graininess function $\mu : T \to [0, \infty)$ by $\mu(s) = \sigma(s) - s$.

Definition 2.3 For $h > 0$ we define the Hilger complex numbers by $\mathbb{C}_h = \{z \in \mathbb{C} : z \ne -\tfrac{1}{h}\}$. For $h = 0$, let $\mathbb{C}_0 = \mathbb{C}$.

Definition 2.4 For $h > 0$, we define the cylinder transformation $\xi_h : \mathbb{C}_h \to \mathbb{Z}_h$ by $\xi_h(z) = \tfrac{1}{h}\mathrm{Log}(1 + zh)$, where Log is the principal logarithm function. For $h = 0$, we define $\xi_0(z) = z$ for all $z \in \mathbb{C}$.

Definition 2.5 If $p$ is regressive, then we define the exponential function by
$$e_p(t, s) = \exp\left(\int_s^t \xi_{\mu(\tau)}(p(\tau))\,\Delta\tau\right) \quad \text{for } s, t \in T,$$
where $\xi_{\mu(\tau)}$ is the cylinder transformation.
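On the standard time scales used in the examples below, this exponential reduces to familiar closed forms: $e_p(t, t_0) = e^{p(t-t_0)}$ on $T = \mathbb{R}$, $(1+p)^{t-t_0}$ on $T = \mathbb{Z}$, and $(1+ph)^{(t-t_0)/h}$ on $T = h\mathbb{Z}$. A minimal Python sketch (ours, assuming a constant regressive $p$) evaluates these cases:

    import math

    def exp_ts(p, t, t0, h):
        """Time-scale exponential e_p(t, t0) for constant p on T = R (h = 0)
        or on T = hZ (h > 0), where it equals (1 + p*h)**((t - t0)/h)."""
        if h == 0:                      # T = R: classical exponential
            return math.exp(p * (t - t0))
        assert 1 + p * h != 0           # regressivity of p
        return (1 + p * h) ** ((t - t0) / h)

    print(exp_ts(3, 2, 0, 0))    # e^6 on R
    print(exp_ts(3, 2, 0, 1))    # 4^2 = 16 on Z
    print(exp_ts(3, 1, 0, 0.5))  # 2.5^2 = 6.25 on 0.5*Z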

Definition 2.6 Let $p : T^k \to \mathbb{R}$ be regressive and rd-continuous; then the mapping $e_p : T \to \mathbb{R}$ is said to be a solution of the linear homogeneous dynamic equation $y^\Delta = p(t)y$ if $e_p^\Delta(t, t_0) = p(t)e_p(t, t_0)$ for $t \in T^k$ and a fixed $t_0 \in T^k$.

Definition 2.7 Any set of $n$ linearly independent solutions of $y^\Delta = Ay$ is a fundamental set of solutions of the equation. The matrix with these particular solutions as columns is a fundamental matrix for the given equation.

Definition 2.8 Let $y_1, y_2, \ldots, y_n$ be a fundamental set of solutions of the equation $y^\Delta = Ay$ and let $Y = (y_1, y_2, \ldots, y_n)$ be the corresponding fundamental matrix. For any constant $n$-vector $c$, $Yc$ is a solution of $y^\Delta = Ay$.

  3. Real Distinct Eigenvalues

In this section, we consider the system of differential equations
$$y^\Delta = Ay \qquad (1)$$
on a time scale $T^k$, where $A$ is an $n \times n$ constant matrix and $y$ is an $n \times 1$ vector; we assume that the eigenvalues of $A$ are regressive on $T^k$. By using a non-singular transformation
$$y = Sx, \qquad (2)$$
where $S$ is an $n \times n$ non-singular constant matrix and $x$ is an $n \times 1$ vector, equation (1) can be transformed into
$$x^\Delta = Dx, \quad \text{where } D = S^{-1}AS. \qquad (3)$$
$D$ takes different forms depending on the eigenvalues of $A$. The present case is treated to provide an introduction to the general case.

Theorem 3.1 Assume that equation (1) satisfies the above assumptions on $A$, and that the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ of the matrix $A$ are real and distinct. Then a fundamental matrix $Y$ for (1) has the form
$$Y(t) = [s_1, s_2, \ldots, s_n]E(t),$$
where $s_j$ is an $n \times 1$ eigenvector of $A$ corresponding to the eigenvalue $\lambda_j$, $E(t) = (\delta_{ij}\,e_{\lambda_j}(t, t_0))$, $i, j = 1, 2, \ldots, n$, and $t_0 \in T^k$ is fixed.

Proof: The canonical form of $A$ is the diagonal matrix $D = (\delta_{ij}\lambda_j)$. Let the matrix $S$ be $S = [s_1, s_2, \ldots, s_n]$, where the $j$th column is the eigenvector $s_j$. It follows that $AS = SD$ and, since $S$ is non-singular, that $D = S^{-1}AS$. If
$$x^\Delta = Dx \qquad (4)$$
is written in scalar form, each scalar equation has a solution of the form
$$x_j = e_{\lambda_j}(t, t_0)d_j, \quad j = 1, 2, \ldots, n,$$
where $d_j$ is a real constant and $t_0 \in T^k$ is fixed. The matrix $E = (\delta_{ij}\,e_{\lambda_j}(t, t_0))$ is a fundamental matrix for equation (4). It follows that a fundamental matrix $Y$ for equation (1) is $Y = SE$.

Example

The following example illustrates the above result. Find a fundamental matrix for the equation
$$y^\Delta = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 2 & 1 \\ 0 & 0 & 3 \end{pmatrix} y.$$
The eigenvalues of the coefficient matrix $A$ are $\lambda_1 = 1$, $\lambda_2 = 2$, $\lambda_3 = 3$, and the corresponding eigenvectors are $s_1 = [1, 0, 0]^T$, $s_2 = [1, 1, 0]^T$, $s_3 = [1, 1, 1]^T$, where $T$ denotes transpose. Hence a fundamental set of solutions is given by
$$y_1 = e_1(t, t_0)s_1, \quad y_2 = e_2(t, t_0)s_2, \quad y_3 = e_3(t, t_0)s_3.$$
The matrix
$$Y = SE = \begin{pmatrix} e_1(t, t_0) & e_2(t, t_0) & e_3(t, t_0) \\ 0 & e_2(t, t_0) & e_3(t, t_0) \\ 0 & 0 & e_3(t, t_0) \end{pmatrix}$$
is a fundamental matrix for the equation.

1. If $T = \mathbb{R}$, then
$$Y = \begin{pmatrix} e^{(t-t_0)} & e^{2(t-t_0)} & e^{3(t-t_0)} \\ 0 & e^{2(t-t_0)} & e^{3(t-t_0)} \\ 0 & 0 & e^{3(t-t_0)} \end{pmatrix}.$$

2. If $T = \mathbb{Z}$, then
$$Y = \begin{pmatrix} 2^{(t-t_0)} & 3^{(t-t_0)} & 4^{(t-t_0)} \\ 0 & 3^{(t-t_0)} & 4^{(t-t_0)} \\ 0 & 0 & 4^{(t-t_0)} \end{pmatrix}.$$

3. If $T = h\mathbb{Z}$, $h > 0$, then
$$Y = \begin{pmatrix} (1+h)^{\frac{t-t_0}{h}} & (1+2h)^{\frac{t-t_0}{h}} & (1+3h)^{\frac{t-t_0}{h}} \\ 0 & (1+2h)^{\frac{t-t_0}{h}} & (1+3h)^{\frac{t-t_0}{h}} \\ 0 & 0 & (1+3h)^{\frac{t-t_0}{h}} \end{pmatrix}.$$
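On $T = \mathbb{Z}$ the system $y^\Delta = Ay$ is the recurrence $y(t+1) = (I + A)y(t)$, so the second matrix above can be checked numerically. A minimal Python sketch (ours, assuming numpy and $t_0 = 0$):

    import numpy as np

    A = np.array([[1.0, 1, 1],
                  [0, 2, 1],
                  [0, 0, 3]])
    S = np.array([[1.0, 1, 1],      # eigenvectors s1, s2, s3 as columns
                  [0, 1, 1],
                  [0, 0, 1]])
    lams = np.array([1.0, 2, 3])

    def Y(t):
        # Y(t) = [s1, s2, s3] * diag(e_{lam_j}(t, 0)); on Z the
        # exponential is e_lam(t, 0) = (1 + lam)**t.
        return S @ np.diag((1 + lams) ** t)

    for t in range(6):
        # Delta Y(t) = Y(t+1) - Y(t) must equal A Y(t)
        assert np.allclose(Y(t + 1) - Y(t), A @ Y(t))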

  4. General Case

In this section, we state and prove the main results of this paper. We need the following algebraic concepts and theorems.

The direct sum of $r$ vector spaces can be used advantageously in this section. Given $Y_1, Y_2, \ldots, Y_r$ as $r$ finite dimensional vector spaces, their direct sum $Y_1 \oplus Y_2 \oplus \cdots \oplus Y_r$ is the set of all ordered $r$-tuples $(a_1, a_2, \ldots, a_r)$ where $a_i \in Y_i$, $i = 1, 2, \ldots, r$. It may be established, if addition and scalar multiplication are appropriately defined, that this set is a vector space and that its dimension is the sum of the dimensions of $Y_1, Y_2, \ldots, Y_r$. It is of significance that the subspace of the direct sum consisting of all ordered $r$-tuples of the form $(a_1, 0, \ldots, 0)$ is isomorphic to $Y_1$, the subspace containing all $r$-tuples of the form $(0, a_2, \ldots, 0)$ is isomorphic to $Y_2$, and similarly the subspace of all tuples of the form $(0, 0, \ldots, a_i, \ldots, 0, 0)$ is isomorphic to $Y_i$, $i = 1, 2, \ldots, r$.

The properties of a direct sum in this case evolve from the properties of matrix multiplication. Let $A_1, A_2, \ldots, A_r$ be $r$ square matrices of orders $n_1, n_2, \ldots, n_r$, respectively, and let the vector space $Y_i$ be the solution space of
$$y^\Delta = A_i y, \quad i = 1, 2, \ldots, r. \qquad (5)$$
If $Y_i$ is a fundamental matrix for equation (5), then $y_i \in Y_i$ if and only if
$$y_i = Y_i c_i \qquad (6)$$
for some vector $c_i$ in $V_{n_i}(\mathbb{R})$.

We may represent an element in the direct sum of $Y_1, Y_2, \ldots, Y_r$ by
$$[y_1, y_2, \ldots, y_r]^T. \qquad (7)$$
It may be observed here that an ordered $r$-tuple is an ordered $r$-tuple whether it is written in horizontal or vertical form. The vertical form is preferred here because the solutions of vector equations are usually written as column vectors. Because of its vertical form, the ordered $r$-tuple (7) may be thought of as a partitioned column vector of dimension $n_1 + n_2 + \cdots + n_r$. Hence, we write
$$\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_r \end{pmatrix} = \begin{pmatrix} Y_1 & 0 & \cdots & 0 \\ 0 & Y_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & Y_r \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_r \end{pmatrix}$$

where $c_1, c_2, \ldots, c_r$ are the vectors appearing in formula (6). It is clear from this relation that
$$\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_r \end{pmatrix} \in Y_1 \oplus Y_2 \oplus \cdots \oplus Y_r \quad \text{if and only if} \quad \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_r \end{pmatrix} \in V_{n_1 + n_2 + \cdots + n_r}.$$
This establishes the fact that $Y_1 \oplus Y_2 \oplus \cdots \oplus Y_r$ is a vector space of dimension $n_1 + n_2 + \cdots + n_r$. It is equally clear that the elements of the form $[y_1, 0, 0, \ldots, 0]^T$, $[0, y_2, 0, \ldots, 0]^T$, ..., and $[0, 0, \ldots, 0, y_r]^T$ form, respectively, subspaces of the direct sum. The first of these subspaces is isomorphic to $Y_1$, the second to $Y_2$, ..., and finally the $r$th subspace to $Y_r$.

Our understanding of the formation of a direct sum and its properties can now be applied to establish the following theorem. The notation that was introduced above is used in the theorem.

Theorem 4.1 If $\{A_i : i = 1, 2, \ldots, r\}$ is a set of constant square matrices, then the solution space of
$$y^\Delta = \begin{pmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & A_r \end{pmatrix} y \qquad (8)$$
is the direct sum of the solution spaces of the equations in the set $\{y^\Delta = A_i y : i = 1, 2, \ldots, r\}$. Moreover, a fundamental matrix for (8) is
$$\begin{pmatrix} Y_1 & 0 & \cdots & 0 \\ 0 & Y_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & Y_r \end{pmatrix}$$
where $Y_i$ is a fundamental matrix for $y^\Delta = A_i y$, $i = 1, 2, \ldots, r$.
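For $T = \mathbb{R}$, where the fundamental matrix of $y' = Ay$ normalized at $t_0 = 0$ is the matrix exponential, Theorem 4.1 can be illustrated numerically. A minimal Python sketch (ours, assuming scipy is available) checks that the exponential of a block diagonal matrix is the block diagonal matrix of the exponentials:

    import numpy as np
    from scipy.linalg import expm, block_diag

    A1 = np.array([[1.0, 2], [0, 3]])
    A2 = np.array([[-1.0]])
    t = 0.7

    # Fundamental matrices (normalized at t0 = 0) of the two subsystems
    Y1, Y2 = expm(A1 * t), expm(A2 * t)

    # Fundamental matrix of the block system y' = diag(A1, A2) y
    Y = expm(block_diag(A1, A2) * t)

    assert np.allclose(Y, block_diag(Y1, Y2))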

Lemma 4.2 Let $n \in \mathbb{N}$ and define a function $f : T \to \mathbb{R}$ by $f(t) = t^n$. If we assume that the conditions (A) and (B) are satisfied, then
$$f^{\Delta^m}(t) = \frac{n!}{(n-m)!}\sum_{r=0}^{n-m}\;\sum_{\substack{n_1 + n_2 + \cdots + n_m = r \\ n_1, n_2, \ldots, n_m \ge 0}} t^{n-m-r}\prod_{i=1}^{m}\big(\mu^i(t)\big)^{n_i} \qquad (9)$$
holds for all $t \in T^k$ and $m \le n \in \mathbb{N}$, where the inner sum runs over the set of all distinct combinations of $\{n_1, n_2, \ldots, n_m\}$ in $\mathbb{N} \cup \{0\}$ whose sum equals the given $r$, and where, by condition (B), each graininess factor $\mu^i(t)$ coincides with the common graininess $\mu(t)$.

Proof: We will show equation (9) by induction. First, if $m = 1$, then
$$f^\Delta(t) = n\sum_{r=0}^{n-1}\;\sum_{n_1 = r} t^{n-1-r}(\mu(t))^{n_1},$$
i.e.
$$f^\Delta(t) = n\big(t^{n-1} + t^{n-2}\mu(t) + t^{n-3}(\mu(t))^2 + \cdots + t(\mu(t))^{n-2} + (\mu(t))^{n-1}\big).$$
Therefore equation (9) is true for $m = 1$. Next, we assume that equation (9) is true for $m = s \in \mathbb{N}$; then, by using the properties of delta derivatives and defining $n_0 = 0$, we obtain
$$f^{\Delta^{s+1}}(t) = \left[\frac{n!}{(n-s)!}\sum_{r=0}^{n-s}\;\sum_{\substack{n_1 + \cdots + n_s = r \\ n_1, \ldots, n_s \ge 0}} t^{n-s-r}\prod_{i=1}^{s}\big(\mu^i(t)\big)^{n_i}\right]^\Delta.$$
Each term is differentiated by the product rule; the delta derivative of each power of $t$ is expanded as in the case $m = 1$, and every occurrence of $\sigma(t)$ is replaced by $t + \mu(t)$, which, by condition (B), introduces only powers of the common graininess. Now we collect the coefficients of $t^{n-s-1}, t^{n-s-2}, \ldots$ from the resulting expression, and we have
$$f^{\Delta^{s+1}}(t) = \frac{n!}{(n-s-1)!}\sum_{r=0}^{n-s-1}\;\sum_{\substack{n_1 + \cdots + n_{s+1} = r \\ n_1, \ldots, n_{s+1} \ge 0}} t^{n-s-1-r}\prod_{i=1}^{s+1}\big(\mu^i(t)\big)^{n_i},$$
so that equation (9) holds for $m = s + 1$. By the principle of mathematical induction, (9) holds for all $m \le n \in \mathbb{N}$. (Here $\sigma^0(t) = t$, and for $l > s$ the indices $n_l$ are taken to be zero.)
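Iterated delta derivatives of $t^n$ can also be generated symbolically on a uniform-jump time scale, which gives a direct way to inspect the expansions appearing in Lemma 4.2. A minimal Python sketch (ours, assuming sympy is available and $T = h\mathbb{Z}$, so that condition (B) holds with $\mu(t) \equiv h$):

    import sympy as sp

    t, h = sp.symbols('t h', positive=True)

    def delta(expr):
        # Delta derivative on T = hZ: forward difference with step h = mu(t)
        return sp.expand((expr.subs(t, t + h) - expr) / h)

    f = t ** 4
    for m in range(1, 5):
        f = delta(f)
        print(m, sp.collect(f, t))
    # m = 1: 4*t**3 + 6*h*t**2 + 4*h**2*t + h**3
    # m = 2: 12*t**2 + 24*h*t + 14*h**2
    # m = 3, 4: the remaining iterates, ending in the constant 24 = 4!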

We assume that the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_r$ of $A$ are real and distinct, with multiplicities $n_1, n_2, \ldots, n_r$ respectively, so that $n_1 + n_2 + \cdots + n_r = n$; that for each eigenvalue $\lambda_i$ there exists only one linearly independent eigenvector; and that the eigenvalues are regressive on $T^k$. As we discussed in Section 3, by using the non-singular transformation (2), equation (1) can be transformed into (3). Thus $D$ has $r$ block matrices, i.e.
$$D = \begin{pmatrix} D_1 & 0 & \cdots & 0 \\ 0 & D_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & D_r \end{pmatrix} \qquad (10)$$
where $D_i$ is a square submatrix of order $n_i$, $i = 1, 2, \ldots, r$, given by $D_i = \lambda_i I + J$, and $J$ is defined by

$$J = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & 0 & \cdots & 0 & 0 \end{pmatrix}_{n_i \times n_i}$$

Suppose that for each eigenvalue $\lambda_i$ there exist $m_i \le n_i$ linearly independent eigenvectors and the remaining generalized eigenvectors are computed from the last linearly independent eigenvector; then $D_i$ has the form
$$D_i = \begin{pmatrix} \lambda_i I_{(m_i-1)\times(m_i-1)} & 0 \\ 0 & \lambda_i I_{(n_i-m_i+1)\times(n_i-m_i+1)} + J \end{pmatrix}_{n_i \times n_i} \qquad (11)$$
By using this technique we can establish a fundamental matrix for equation (1) when, for each eigenvalue $\lambda_i$, there exists only one linearly independent eigenvector, as stated in the following theorem.
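The block structure (10)-(11) is exactly what a Jordan decomposition produces. As an illustration (ours, assuming sympy, whose jordan_form returns $S$ and $D$ with $A = SDS^{-1}$), the coefficient matrix of Example 1 below decomposes as:

    import sympy as sp

    A = sp.Matrix([[3, 1, 1],
                   [0, 3, 1],
                   [0, 0, 3]])
    S, D = A.jordan_form()   # A = S * D * S**(-1), D in Jordan form
    print(D)                 # Matrix([[3, 1, 0], [0, 3, 1], [0, 0, 3]])
    assert S * D * S.inv() == A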

Theorem 4.3 If $D$ is defined by relation (10), then a fundamental matrix for
$$x^\Delta = Dx \qquad (12)$$
is given by
$$X = \begin{pmatrix} X_1 & 0 & \cdots & 0 \\ 0 & X_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & X_r \end{pmatrix} \qquad (13)$$
where $X_i$ is a fundamental matrix for
$$x^\Delta = D_i x, \quad i = 1, 2, \ldots, r. \qquad (14)$$
The matrix $X_i$ is given by
$$X_i = e_{\lambda_i}(t, t_0)\,W(e_{n_i}(t)), \qquad (15)$$
where $W(e_{n_i}(t))$ is the Wronskian matrix of
$$e_{n_i}(t) = (1, t, t^2, \ldots, t^{n_i - 1}) \qquad (16)$$
on the time scale $T^k$.

Proof: The matrix $D_i$, of order $n_i$, was defined by $D_i = \lambda_i I + J$. It is clear that $\lambda_i$ is the only eigenvalue of $D_i$ and that its multiplicity is $n_i$. A corresponding eigenvector is $d_1 = (1, 0, \ldots, 0)^T$, and it may be noted, incidentally, that $e_{\lambda_i}(t, t_0)d_1$ is a solution of equation (14). In order to find the other solutions, we note that any vector $x$ can be expressed as $x = e_{\lambda_i}(t, t_0)h$. If $x$, in this form, is substituted into equation (14), we get
$$(e_{\lambda_i}(t, t_0))^\Delta h + e_{\lambda_i}(\sigma(t), t_0)h^\Delta = \lambda_i I\, e_{\lambda_i}(t, t_0)h + J e_{\lambda_i}(t, t_0)h,$$
$$e_{\lambda_i}(\sigma(t), t_0)h^\Delta = J e_{\lambda_i}(t, t_0)h,$$
$$h^\Delta = e_{\lambda_i}^{-1}(\sigma(t), t_0)\,J\,e_{\lambda_i}(t, t_0)h = J e_{\lambda_i}(t, \sigma(t))h.$$
Since
$$e_{\lambda_i}(t, \sigma(t)) = \exp\Big\{\int_{\sigma(t)}^{t} \xi_{\mu(s)}(\lambda_i)\,\Delta s\Big\} = \Big[\exp\Big\{\int_{t}^{\sigma(t)} \xi_{\mu(s)}(\lambda_i)\,\Delta s\Big\}\Big]^{-1}$$
and since
$$\exp\Big\{\int_{t}^{\sigma(t)} \xi_{\mu(s)}(\lambda_i)\,\Delta s\Big\} = \exp\big\{\mu(t)\,\xi_{\mu(t)}(\lambda_i)\big\} = 1 + \lambda_i\mu(t),$$
therefore
$$e_{\lambda_i}(t, \sigma(t)) = [1 + \lambda_i\mu(t)]^{-1}.$$
Hence, we get
$$h^\Delta = J[1 + \lambda_i\mu(t)]^{-1}h. \qquad (17)$$
Since the $\lambda_i$ are regressive, $1 + \lambda_i\mu(t) \ne 0$. Hence $x$ is a solution of (14) if and only if $h$ satisfies (17). The latter equation is the companion vector equation associated with the $n_i$th order scalar equation $\frac{u^{[n_i]}}{1 + \lambda_i\mu(t)} = 0$, i.e. $u^{[n_i]} = 0$; the vector $e_{n_i}(t)$, defined by (16), is a fundamental vector for this equation. Hence $W(e_{n_i}(t))$ is a fundamental matrix for equation (17). It follows that $X_i$, defined by (15), is a fundamental matrix for equation (14). By using Theorem 4.1, we may conclude that the matrix $X$ defined by (13) is a fundamental matrix for equation (12). This proves the theorem.
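On $T = \mathbb{R}$, where $e_{\lambda}(t, t_0) = e^{\lambda(t - t_0)}$ and $W(e_{n_i}(t))$ is the classical Wronskian matrix of $(1, t, t^2)$, the construction (15) can be checked symbolically. A minimal Python sketch (ours, assuming sympy, $t_0 = 0$, and a single Jordan block with $\lambda = 3$) verifies $X' = D_i X$:

    import sympy as sp

    t = sp.symbols('t')
    lam = 3
    D_i = sp.Matrix([[lam, 1, 0],
                     [0, lam, 1],
                     [0, 0, lam]])        # D_i = lam*I + J

    e = sp.exp(lam * t)                   # e_lam(t, 0) on T = R
    W = sp.Matrix([[1, t, t ** 2],        # rows: e_{n_i}(t) = (1, t, t^2)
                   [0, 1, 2 * t],         # and its successive derivatives
                   [0, 0, 2]])

    X = e * W                             # X_i as in (15)
    assert sp.simplify(X.diff(t) - D_i * X) == sp.zeros(3, 3)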

The main result of this section is now stated in the following theorem.

Theorem 4.4 A fundamental matrix for (1) is given by $Y = SX$. The matrix $X$ is defined by the relations (13), (15) and (16). The matrix $S$ is such that $S^{-1}AS = D$, where $D$ is the Jordan canonical form of $A$.

Proof: The validity of the theorem is obvious, since the result follows as a direct consequence of the preliminary discussion of (1), (2) and (3) in Section 3.

When there is more than one linearly independent eigenvector corresponding to an eigenvalue, the following theorem gives the fundamental matrix.

Theorem 4.5 If $D$ is defined by relation (10), then a fundamental matrix for (12) is given by (13), where $X_i$ is a fundamental matrix for (14) and $D_i$ is defined by (11). The matrix $X_i$ is given by
$$X_i = \begin{pmatrix} e_{\lambda_i}(t, t_0) I_{(m_i-1)\times(m_i-1)} & 0 \\ 0 & e_{\lambda_i}(t, t_0)\,W(e_{n_i - m_i + 1}(t)) \end{pmatrix}$$
where $W(e_{n_i - m_i + 1}(t))$ is defined by (16) on the time scale $T^k$.

Example 1

An example that illustrates the case of repeated eigenvalues. Find a fundamental matrix for the following equation:
$$y^\Delta = \begin{pmatrix} 3 & 1 & 1 \\ 0 & 3 & 1 \\ 0 & 0 & 3 \end{pmatrix} y.$$
The eigenvalues of the coefficient matrix are $\lambda_1 = \lambda_2 = \lambda_3 = 3$; the corresponding generalized eigenvectors are
$$s_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad s_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \quad s_3 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}.$$
Let
$$S = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
It may be verified that $S^{-1}AS = D$, where
$$D = \begin{pmatrix} 3 & 1 & 0 \\ 0 & 3 & 1 \\ 0 & 0 & 3 \end{pmatrix}.$$
The matrix $D$ is in Jordan canonical form. A fundamental matrix $Y$ for the given equation is
$$Y = S\begin{pmatrix} e_3(t, t_0) & t\,e_3(t, t_0) & t^2 e_3(t, t_0) \\ 0 & e_3(t, t_0) & (t + \sigma(t))\,e_3(t, t_0) \\ 0 & 0 & 2\,e_3(t, t_0) \end{pmatrix}.$$

1. If $T = \mathbb{R}$, then
$$Y = S\begin{pmatrix} e^{3(t-t_0)} & t\,e^{3(t-t_0)} & t^2 e^{3(t-t_0)} \\ 0 & e^{3(t-t_0)} & 2t\,e^{3(t-t_0)} \\ 0 & 0 & 2\,e^{3(t-t_0)} \end{pmatrix}.$$

2. If $T = \mathbb{Z}$, then
$$Y = S\begin{pmatrix} 4^{(t-t_0)} & t\,4^{(t-t_0)} & t^2\,4^{(t-t_0)} \\ 0 & 4^{(t-t_0)} & (2t+1)\,4^{(t-t_0)} \\ 0 & 0 & 2\cdot 4^{(t-t_0)} \end{pmatrix}.$$

3. If $T = h\mathbb{Z}$, $h > 0$, then
$$Y = S\begin{pmatrix} (1+3h)^{\frac{t-t_0}{h}} & t(1+3h)^{\frac{t-t_0}{h}} & t^2(1+3h)^{\frac{t-t_0}{h}} \\ 0 & (1+3h)^{\frac{t-t_0}{h}} & (2t+h)(1+3h)^{\frac{t-t_0}{h}} \\ 0 & 0 & 2(1+3h)^{\frac{t-t_0}{h}} \end{pmatrix}.$$
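For the $T = \mathbb{R}$ case this fundamental matrix can be verified symbolically. A minimal Python sketch (ours, assuming sympy and $t_0 = 0$) checks that $Y' = AY$ and that $Y$ is non-singular:

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[3, 1, 1], [0, 3, 1], [0, 0, 3]])
    S = sp.Matrix([[1, 1, 1], [0, 1, 0], [0, 0, 1]])

    e = sp.exp(3 * t)                       # e_3(t, 0) on T = R
    X = sp.Matrix([[e, t * e, t ** 2 * e],
                   [0, e, 2 * t * e],
                   [0, 0, 2 * e]])
    Y = S * X

    assert sp.simplify(Y.diff(t) - A * Y) == sp.zeros(3, 3)
    assert sp.simplify(Y.det()) != 0        # columns are independent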

Example 2

Finally, an example that illustrates the case where some eigenvalues are repeated and some are distinct. Find a fundamental matrix for the following equation:
$$y^\Delta = \begin{pmatrix} 3 & 2 & 0 \\ -1 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix} y.$$
The eigenvalues of the coefficient matrix are $\lambda_1 = 2$, $\lambda_2 = 1$, $\lambda_3 = 1$; the corresponding (generalized) eigenvectors are
$$s_1 = \begin{pmatrix} 2 \\ -1 \\ 0 \end{pmatrix}, \quad s_2 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \quad s_3 = \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}.$$
Let
$$S = \begin{pmatrix} 2 & 0 & -1 \\ -1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}.$$
It may be verified that $S^{-1}AS = D$, where
$$D = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$
The matrix $D$ is in Jordan canonical form. A fundamental matrix $Y$ for the given equation is
$$Y = S\begin{pmatrix} e_2(t, t_0) & 0 & 0 \\ 0 & e_1(t, t_0) & t\,e_1(t, t_0) \\ 0 & 0 & e_1(t, t_0) \end{pmatrix}.$$

1. If $T = \mathbb{R}$, then
$$Y = S\begin{pmatrix} e^{2(t-t_0)} & 0 & 0 \\ 0 & e^{(t-t_0)} & t\,e^{(t-t_0)} \\ 0 & 0 & e^{(t-t_0)} \end{pmatrix}.$$

2. If $T = \mathbb{Z}$, then
$$Y = S\begin{pmatrix} 3^{(t-t_0)} & 0 & 0 \\ 0 & 2^{(t-t_0)} & t\,2^{(t-t_0)} \\ 0 & 0 & 2^{(t-t_0)} \end{pmatrix}.$$

3. If $T = h\mathbb{Z}$, $h > 0$, then
$$Y = S\begin{pmatrix} (1+2h)^{\frac{t-t_0}{h}} & 0 & 0 \\ 0 & (1+h)^{\frac{t-t_0}{h}} & t(1+h)^{\frac{t-t_0}{h}} \\ 0 & 0 & (1+h)^{\frac{t-t_0}{h}} \end{pmatrix}.$$
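As with Example 1, the $T = \mathbb{R}$ case can be verified symbolically. A minimal Python sketch (ours, assuming sympy, $t_0 = 0$, and the sign conventions of the matrices as written above) confirms the Jordan decomposition and $Y' = AY$:

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[3, 2, 0], [-1, 0, 0], [1, 2, 1]])
    S = sp.Matrix([[2, 0, -1], [-1, 0, 1], [0, 1, 1]])

    assert S.inv() * A * S == sp.Matrix([[2, 0, 0],
                                         [0, 1, 1],
                                         [0, 0, 1]])   # Jordan form D

    X = sp.Matrix([[sp.exp(2 * t), 0, 0],
                   [0, sp.exp(t), t * sp.exp(t)],
                   [0, 0, sp.exp(t)]])
    Y = S * X
    assert sp.simplify(Y.diff(t) - A * Y) == sp.zeros(3, 3)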

References

1. B. Aulbach, Continuous and Discrete Dynamics Near Manifolds of Equilibria, Lecture Notes in Mathematics, Vol. 1058, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1984.

2. B. Aulbach and S. Hilger, A unified approach to continuous and discrete dynamics, in Qualitative Theory of Differential Equations (Szeged, 1988), Vol. 53 of Colloq. Math. Soc. János Bolyai, pp. 37-56, North-Holland, Amsterdam, 1990.

3. M. Bohner and A. C. Peterson, Dynamic Equations on Time Scales: An Introduction with Applications, Birkhäuser, Boston, MA, 2001.

4. R. H. Cole, Theory of Ordinary Differential Equations, Appleton-Century-Crofts, 1968.

5. K. Deimling, Nonlinear Functional Analysis, Springer, New York, 1985.

6. F. A. Ficken, Some uses of linear spaces in analysis, American Mathematical Monthly, 66 (1959), 259-275.

7. M. Golomb, An algebraic method in differential equations, American Mathematical Monthly, 72 (1965), 1107-1110.

8. E. D. Nering, Linear Algebra and Matrix Theory, John Wiley and Sons, Inc., New York, 1963.

9. E. J. Putzer, Avoiding the Jordan canonical form in the discussion of linear systems with constant coefficients, American Mathematical Monthly, 73 (1966), 2-7.
