ESIRC - Emporia State University

AN ABSTRACT OF THE THESIS WRITTEN BY

Salem Hussein Chaaban
for the Master of Science in Mathematics
presented on May 26, 1988

Title: Introduction to Lie Algebras

Abstract approved:
The purpose of this thesis is to introduce the basic ideas of Lie
algebras to the reader with some basic knowledge of abstract algebra
and elementary linear algebra.

In this study, Lie algebras are considered from a purely algebraic
point of view, without reference to Lie groups and differential
geometry. Such a viewpoint has the advantage of going immediately
into the discussion of Lie algebras without first establishing the
topological machinery needed to define Lie groups, from which Lie
algebras are usually introduced.
In Chapter I we summarize rather quickly, for the reader's
convenience, some of the basic concepts of linear algebra with which
the reader is assumed to be familiar. In Chapter II we introduce the
language of algebras in a form designed for the material developed in
the later chapters.

Chapters III and IV are devoted to the study of Lie algebras and the
Lie algebra of derivations. Some definitions, basic properties, and
several examples are given. In Chapter III we also study the Lie
algebra of antisymmetric operators, and ideals and homomorphisms. In
Chapter IV we introduce a Lie algebra structure on DerF(A) and study
the link between the group of automorphisms of A and the Lie algebra
of derivations DerF(A).
INTRODUCTION TO LIE ALGEBRAS
A thesis
submitted to
The Division of Mathematical and Physical Sciences
Emporia State University
In Partial Fulfillment
of the Requirements for the Degree
Master of Science
By
Salem Hussein Chaaban
August, 1988
ACKNOWLEDGMENTS

The author wishes to express his sincere thanks to his thesis
advisor, Dr. Essam Abotteen, for his assistance and constructive
criticism and for his continuing advice and encouragement through
the last year and a half. Without his decisive influence the writing
of the thesis would never have been started.

The author would also like to express his deepest thanks to his
thesis co-advisor, Dr. Stefanos Gialamas, for his helpful
suggestions.

Dr. John Carlson, the Chairman of the Department, played an
important part with his encouragement, belief in the author, and
morale boosting. For this, the author would like to sincerely thank
him.

The completion of this thesis would not have been possible without
the contributions of the above three individuals.

Finally, the author would also like to express sincere gratitude to
his wife, children, parents, brothers, and sisters for their
patience, support, and encouragement. My wife Amy compiled the table
of contents and the bibliography. Her patience with my writing,
often into the late hours of the evening over the years, is most
commendable. My daughters Mona and Rasha did some proofreading,
Salima did some scratching, and the twins Hana and Rana did some
watching and screaming.
Salem Hussein Chaaban
[Signature page: Approved for the Major Department and by the
committee members, including Stefanos Gialamas.]
TABLE OF CONTENTS

                                                             Page
CHAPTER ONE  "Introduction" ................................... 1
   I.I     Fundamental Concepts Of Vector Spaces .............. 1
   I.II    Linear Independence And Bases ...................... 4
   I.III   Linear Transformations ............................. 6
   I.IV    The Matrix Of A Linear Transformation .............. 7
   I.V     Trace And Transpose Of A Matrix .................... 10
   I.VI    Series Of Matrices ................................. 12
CHAPTER TWO  "Algebras" ....................................... 16
CHAPTER THREE  "Lie Algebras" ................................. 35
   III.I   Basic Definitions And Examples ..................... 35
   III.II  Structure Constants ................................ 39
   III.III The Lie Algebra Of Antisymmetric Operators ......... 46
   III.IV  Ideals And Homomorphisms ........................... 55
CHAPTER FOUR  "Lie Algebra Of Derivations" .................... 64
   IV.I    Derivation Algebra ................................. 64
   IV.II   Inner Derivations Of Associative And Lie Algebras .. 73
CHAPTER FIVE  "Summary And Conclusion" ........................ 81
BIBLIOGRAPHY .................................................. 84
CHAPTER ONE

INTRODUCTION

I.I  FUNDAMENTAL CONCEPTS OF VECTOR SPACES

The object of this short introductory chapter is to provide an
account of the foundations of Linear Algebra that we need in later
chapters. Various results are given without proofs and others are
given with only sketchy arguments.
DEFINITIONS AND EXAMPLES

DEFINITION 1: Let F be a field. A vector space over F is a set V
whose elements are called vectors, together with two operations.
The first operation is called vector addition; it assigns to each
pair of vectors v, w ∈ V a vector, denoted by v + w ∈ V. The second
operation, called scalar multiplication, assigns to each scalar
α ∈ F and vector v ∈ V a vector denoted by αv ∈ V, such that the
following conditions are satisfied:
1.  (u+v) + w = u + (v+w), for all u, v, w ∈ V.
    (i.e. addition is associative).
2.  There is an element of V, denoted by 0 and called the zero
    vector, such that 0 + u = u + 0 = u, for all u ∈ V.
3.  For each vector u ∈ V, there exists an element, denoted by
    -u ∈ V, such that u + (-u) = 0.
4.  u + v = v + u, for all u, v ∈ V.
    (i.e. addition is commutative).
5.  α(u+v) = αu + αv, for all u, v ∈ V and α ∈ F.
    (i.e. scalar multiplication is distributive with respect to
    vector addition).
6.  (α+β)u = αu + βu, for all u ∈ V and α, β ∈ F.
    (i.e. scalar multiplication is distributive with respect to
    scalar addition).
7.  (αβ)u = α(βu), for all u ∈ V and α, β ∈ F.
8.  1u = u, for all u ∈ V.
    (1 here is the multiplicative identity of F).
REMARK 1: In the sequel F will be either the field of real numbers
or the field of complex numbers.

REMARK 2: Conditions 1-4 in the definition of a vector space are
equivalent to saying that, with respect to vector addition, V is an
abelian group.

REMARK 3: When no confusion is to be feared, a vector space over F
will be simply called a vector space.
EXAMPLE 1: Let F be any field. Let
    V = {(α_1, ..., α_n) : α_i ∈ F, 1 <= i <= n}.
Define a vector addition in V by:
    (α_1, ..., α_n) + (β_1, ..., β_n) = (α_1 + β_1, ..., α_n + β_n).
Define a scalar multiplication by:
    γ(α_1, ..., α_n) = (γα_1, ..., γα_n) for all γ ∈ F.
Then with respect to these operations, V becomes a vector space over
F; we denote this vector space by F^n. In particular R^n is a vector
space over R and C^n is a vector space over C.
EXAMPLE 2: Let F be any field. Let Mat_{m×n}(F) be the set of all
m×n matrices with entries in F. Let A = (a_ij) and B = (b_ij) be
elements in Mat_{m×n}(F). We define their sum A + B to be the matrix
C = (c_ij) where c_ij = a_ij + b_ij for i = 1, 2, ..., m and
j = 1, 2, ..., n. Then one can immediately verify that Mat_{m×n}(F)
is an abelian group under this addition. Let α ∈ F. We define αA,
the scalar multiple of A by α, to be the matrix C = (c_ij), where
c_ij = αa_ij. Then it can be verified that Mat_{m×n}(F) is a vector
space over F.
DEFINITION 2: Let V be a vector space over F. A subset W of V is
called a subspace of V if W is itself a vector space over F under
the same operations of vector addition and scalar multiplication
of V.

THEOREM 1: A nonempty subset W of a vector space V is a subspace of
V if and only if for all w_1, w_2 ∈ W and α ∈ F, we have
w_1 + w_2 ∈ W and αw_1 ∈ W.

I.II  LINEAR INDEPENDENCE AND BASES
DEFINITION 1: If v_1, v_2, ..., v_n is a set of vectors in a vector
space V over F, an expression of the form
    α_1 v_1 + α_2 v_2 + ... + α_n v_n,
where α_i ∈ F, is called a Linear Combination of the vectors
v_1, v_2, ..., v_n.
THEOREM 1: Let S be any subset (finite or infinite) of vectors of a
vector space V; then the set L(S) of all linear combinations of
vectors from S is a subspace of V.

REMARK 1: The subspace L(S) of all linear combinations of vectors
from S is called the subspace spanned or generated by the set S.

REMARK 2: S ⊆ L(S).

REMARK 3: L(S) is the smallest subspace of V that contains S.
DEFINITION 2: Let V be a vector space over F. A subset S of vectors
in V is said to be linearly dependent if there are distinct vectors
v_1, ..., v_n in S and scalars α_1, ..., α_n, not all zero, such
that
    α_1 v_1 + ... + α_n v_n = 0.
A set S of vectors in V is called linearly independent if S is not
linearly dependent.

DEFINITION 3: Let V be a vector space. A basis for V is a subset S
(finite or infinite) of V that is linearly independent and that
spans V, that is V = L(S). A vector space V over F is said to be
finite dimensional if it has a finite basis.
In spite of the fact that there is no unique choice of basis for a
vector space, there is something common to all of these choices. It
is a property that is intrinsic to the space itself.

THEOREM 2: The number of elements in any basis of a
finite-dimensional vector space V is the same as in any other basis.

DEFINITION 4: The dimension of a finite-dimensional vector space V
is the number of elements in a basis of V.
I.III  LINEAR TRANSFORMATIONS

DEFINITION 1: Let V and W be vector spaces over the same field F. A
mapping T : V ---> W is called a linear transformation if:
1.  T(v_1 + v_2) = T(v_1) + T(v_2), for all v_1, v_2 ∈ V.
2.  T(αv_1) = αT(v_1), for all v_1 ∈ V and α ∈ F.

REMARK 1: A linear transformation T : V ---> W is also called an
F-homomorphism or F-linear mapping.

REMARK 2: If T is a one-to-one and onto linear transformation, then
it is called an isomorphism.

DEFINITION 2: If T : V ---> W is a linear transformation, then the
kernel of T, denoted by ker(T), is defined by:
    ker(T) = {v ∈ V : T(v) = 0}.

THEOREM 1: Let T : V ---> W be a linear transformation; then,
1.  ker(T) is a subspace of V.
2.  T(V) is a subspace of W.
Let V and W be vector spaces over the same field F. Let Hom_F(V,W)
be the set of all linear transformations of V into W. We shall now
proceed to introduce operations in Hom_F(V,W) in such a way as to
make Hom_F(V,W) a vector space over F.

For T_1, T_2 ∈ Hom_F(V,W), we define their sum T_1 + T_2 by:
    (T_1 + T_2)(v) = T_1(v) + T_2(v) for all v ∈ V.
For α ∈ F and T ∈ Hom_F(V,W), we define a map αT : V ---> W by
    (αT)(v) = α(T(v)) for all v ∈ V.
Then it is easy to verify that T_1 + T_2 ∈ Hom_F(V,W) and
αT ∈ Hom_F(V,W). Also it can be checked that these operations make
Hom_F(V,W) a vector space over F. Thus we have:

THEOREM 2: Hom_F(V,W) is a vector space. Moreover, if V and W are
finite-dimensional vector spaces of dimensions m and n respectively,
then Hom_F(V,W) is finite-dimensional with dimension mn.

If in particular W = V, we denote Hom_F(V,V) by End_F(V), and in
this case an element T ∈ End_F(V) is called an endomorphism of V.
I.IV  THE MATRIX OF A LINEAR TRANSFORMATION

Suppose now that V and W are finite-dimensional vector spaces over
the same field F, and let dim V = m and dim W = n. Let T : V ---> W
be a linear transformation. Let B = {v_1, ..., v_m} and
B' = {w_1, ..., w_n} be bases for V and W respectively. For each
i = 1, 2, ..., m,
    T(v_i) = Σ_{j=1}^{n} a_ji w_j,    a_ji ∈ F;
then the n×m matrix

                B'    [ a_11  a_12  ...  a_1m ]
             [A]   =  [ a_21  a_22  ...  a_2m ]
                B     [  .     .           .  ]
                      [ a_n1  a_n2  ...  a_nm ]

of elements of F (whose i-th column holds the B'-coordinates of
T(v_i)) is called the matrix associated with the linear
transformation T with respect to the basis B for V and the basis B'
for W.
Of course the matrix [A] depends on our choice of bases for V and W,
in the sense that any change in the bases B and B' would result in a
different matrix for T. If these bases are held fixed, then each T
determines a unique matrix, and conversely to each n×m matrix (a_ji)
over F there corresponds a unique linear transformation T determined
by:
    T(v_i) = Σ_{j=1}^{n} a_ji w_j.
We summarize formally:

THEOREM 1: Let V and W be two finite dimensional vector spaces over
the same field F, and let dim V = m and dim W = n. Let B and B' be
bases for V and W respectively. Then for each linear transformation
T : V ---> W there is an n×m matrix A with entries in F such that
    [Tv]_{B'} = A[v]_B , for all v ∈ V,
where [v]_B and [Tv]_{B'} are the coordinate matrices of v and Tv
relative to the bases B and B' respectively.

Furthermore, T |---> A is an isomorphism between Hom(V,W), the space
of all linear transformations from V into W, and the space of all
n×m matrices over the field F.
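As a concrete sketch of Theorem 1, consider the differentiation map
D : P_2 ---> P_1 on polynomial spaces (this example and all names in
it are illustrative, not taken from the text). Building the matrix A
column by column from the images of the basis vectors reproduces
[Dp]_{B'} = A[p]_B:

```python
# Hypothetical example: D differentiates p(x) = c0 + c1*x + c2*x^2.
# B = {1, x, x^2} is a basis for P_2; B' = {1, x} is a basis for P_1.

def D(coeffs):
    """Return the B'-coordinates of the derivative of p."""
    c0, c1, c2 = coeffs
    return [c1, 2 * c2]               # p'(x) = c1 + 2*c2*x

# Column i of A holds the B'-coordinates of D(v_i), as in Theorem 1.
basis_images = [D([1, 0, 0]), D([0, 1, 0]), D([0, 0, 1])]
A = [[basis_images[i][j] for i in range(3)] for j in range(2)]  # n x m = 2 x 3

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

p = [5, 3, 4]                         # p(x) = 5 + 3x + 4x^2
print(matvec(A, p))                   # [Dp]_{B'} via the matrix: [3, 8]
print(D(p))                           # direct differentiation:   [3, 8]
```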
In particular we shall be interested in representation by matrices
of endomorphisms, that is, of linear transformations of a vector
space V into itself. Let B = {v_1, ..., v_n} be a basis for V, let T
be an endomorphism of V, and let A = (a_ij) be the matrix of T
relative to the basis B. If a change of basis is made in V from B to
a new basis B', what is the matrix of T relative to this new basis?
The following theorem gives the answer.
THEOREM 2: Let V be an n-dimensional vector space over the field F,
and let B = {v_1, ..., v_n} and B' = {v'_1, ..., v'_n} be bases for
V. If A is the matrix of T relative to B, then there exists a
nonsingular matrix P with columns P_j = [v'_j]_B such that
    A' = P⁻¹AP,
where [v'_j]_B is the coordinate matrix of v'_j relative to the
basis B, and A' is the matrix of T relative to B'.
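The similarity formula A' = P⁻¹AP can be spot-checked on a small
example (the matrices below are arbitrary illustrative choices, not
from the text):

```python
# B is the standard basis of R^2; B' = {(1,1), (0,1)}.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2, 1], [0, 3]]        # matrix of T relative to B
P = [[1, 0], [1, 1]]        # columns are the B-coordinates of B'

A_prime = matmul(inv2(P), matmul(A, P))

# Direct check: column j of A' must give the B'-coordinates of T(v'_j):
# T(1,1) = (3,3) = 3*(1,1) + 0*(0,1); T(0,1) = (1,3) = 1*(1,1) + 2*(0,1).
print(A_prime)              # [[3.0, 1.0], [0.0, 2.0]]
```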
I.V  TRACE AND TRANSPOSE OF A MATRIX

Our aim in this short section is to develop the concepts of trace
and transpose of a matrix and describe some of their properties that
we need in later chapters.

DEFINITION 1: Let A be an n×n matrix over the field F. The trace of
A is the sum of the elements of the main diagonal of A. We denote
the trace of A by tr A; if A = (a_ij), then
    tr A = Σ_{i=1}^{n} a_ii.
The fundamental properties of the trace are contained in the
following theorem:

THEOREM 1: For any n×n matrices A and B over the field F and λ ∈ F,
we have
1.  tr(A+B) = tr A + tr B
2.  tr(λA) = λ tr A
3.  tr(AB) = tr(BA)

REMARK 1: Properties 1 and 2 assert that the trace is a linear
transformation of the vector space of n×n matrices over F to the
one-dimensional vector space F.

REMARK 2: If A is invertible, property 3 implies that
tr(ABA⁻¹) = tr B for any n×n matrix B.
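The three trace properties lend themselves to a quick numerical spot
check; in the following Python sketch the matrices A and B are
arbitrary illustrative choices:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]

# tr(A+B) = tr A + tr B
assert trace([[A[i][j] + B[i][j] for j in range(2)]
              for i in range(2)]) == trace(A) + trace(B)
# tr(lambda*A) = lambda*tr A
assert trace([[3 * x for x in row] for row in A]) == 3 * trace(A)
# tr(AB) = tr(BA), even though AB != BA in general
assert trace(matmul(A, B)) == trace(matmul(B, A))
```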
DEFINITION 2: Let A = (a_ij) be an m×n matrix over F. The n×m matrix
B = (b_ij) such that b_ij = a_ji for each i, j is called the
transpose of A. We denote the transpose of A by A^t. In other words,
the transpose of A is the matrix obtained by interchanging the rows
and columns of A.

THEOREM 2: Let A and B be m×n matrices and let C be an n×n matrix.
Then
1.  (A^t)^t = A
2.  (A+B)^t = A^t + B^t
3.  (αA)^t = αA^t for any α ∈ F
4.  (AC)^t = C^t A^t

DEFINITION 3: A matrix A is said to be a symmetric matrix if
A^t = A, and A is called a skew symmetric matrix if A^t = -A.

REMARK 3: The concepts of trace and transpose of a matrix can be
extended in the obvious way to any linear transformation.
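Property 4 of the transpose, with its reversal of factors, and the
skew-symmetry condition can both be checked on sample matrices
(illustrative entries, not from the text):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(M):
    return [list(col) for col in zip(*M)]

A = [[1, 2, 3], [4, 5, 6]]        # 2x3
C = [[1, 0], [0, 2], [3, 1]]      # 3x2

# (AC)^t = C^t A^t  (note the reversed order of the factors)
assert transpose(matmul(A, C)) == matmul(transpose(C), transpose(A))

S = [[0, 2], [-2, 0]]             # a skew-symmetric matrix: S^t = -S
assert transpose(S) == [[-x for x in row] for row in S]
```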
I.VI  SERIES OF MATRICES

This section is concerned with the notion of series of matrices;
particularly, in Chapter 4 we need the concept of the exponential
function e^A where A is a square matrix over the field R of real
numbers.

DEFINITION 1: Let A = (a_ij) be an m×n matrix over R or C. The norm
of A, denoted by ||A||, is defined as
    ||A|| = max |a_ij|,   1 <= i <= m, 1 <= j <= n.

The norm has the following properties:
1.  ||A|| >= 0 for any matrix A; also ||A|| = 0 if and only if
    A = 0.
2.  ||αA|| = |α| ||A|| for any matrix A and any scalar α.
3.  ||A+B|| <= ||A|| + ||B|| for any A and B.

REMARK 1: These properties of the norm imply that the vector space
of m×n matrices over R or C is a normed vector space with respect to
||.||.
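The max-entry norm of Definition 1 is a one-liner, and its three
properties can be spot-checked on sample matrices (arbitrary
illustrative entries):

```python
def norm(M):
    """Max absolute value of the entries, as in Definition 1."""
    return max(abs(x) for row in M for x in row)

A = [[1, -7], [3, 2]]
B = [[4, 0], [-1, 5]]

assert norm(A) == 7 and norm([[0, 0], [0, 0]]) == 0        # property 1
assert norm([[-3 * x for x in row] for row in A]) == 3 * norm(A)  # property 2
A_plus_B = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
assert norm(A_plus_B) <= norm(A) + norm(B)                 # property 3
```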
DEFINITION 2: Let A_1, A_2, ... be an infinite sequence of m×n
matrices over R. This sequence is said to converge if there exists
an m×n matrix A over R such that:
    lim_{n-->∞} ||A_n - A|| = 0.
A is called the limit of the sequence and we write
    A = lim_{n-->∞} A_n.

In order to define convergence of an infinite series of m×n matrices
    Σ_{n=1}^{∞} A_n,
first we construct the sequence S_1, S_2, ... of partial sums,
S_1 = A_1, S_2 = A_1 + A_2, ..., S_k = A_1 + A_2 + ... + A_k. We say
that the series converges to the m×n matrix S if the sequence of
partial sums converges to S, i.e. if lim_{k-->∞} S_k = S. In this
case we write
    Σ_{k=1}^{∞} A_k = S.

Using the notion of the norm of a matrix, we can formulate the
following test for convergence:

THEOREM 1: Let A_1, A_2, ... be m×n matrices. If the series of
numbers
    Σ_{n=1}^{∞} ||A_n||
converges, then the series of matrices Σ_{n=1}^{∞} A_n converges.
We now define the exponential of a square matrix. Let A be an n×n
matrix. First observe that
    ||A^2|| <= n ||A||^2
    ||A^3|| <= n ||A^2|| ||A|| <= n^2 ||A||^3
    ...
    ||A^k|| <= n ||A^{k-1}|| ||A|| <= n^{k-1} ||A||^k.
Since the series of numbers
    Σ_{k=0}^{∞} n^{k-1} ||A||^k / k!
converges (as can be shown by the ratio test), the series of
matrices
    Σ_{k=0}^{∞} A^k / k!
converges by the previous theorem; note that A^0 denotes the
identity matrix I. Now we define e^A to be the sum, i.e.
    e^A = Σ_{k=0}^{∞} (1/k!) A^k
for any square matrix A.

Now we are going to consider a very important function, the
exponential matrix function e^{At}, where A is a square matrix over
R and t is a real variable. This is defined by the formula
    e^{At} = I + tA + (1/2!) t^2 A^2 + ... + (1/k!) t^k A^k + ...
THEOREM 2: For any matrix A and any real number t, e^{At} is
nonsingular.

To prove this theorem one can look at the eigenvalues: if λ is an
eigenvalue of A then e^{λt} is an eigenvalue of e^{At}, and since
e^{λt} ≠ 0 for any real number t, e^{At} is nonsingular.
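A truncated partial sum of the defining series gives a simple
numerical sketch of e^{At} (this is only the textbook series, not a
production algorithm; the nilpotent test matrix N is an illustrative
choice for which the series terminates exactly):

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def expm(A, t=1.0, terms=30):
    """Partial sum I + tA + (tA)^2/2! + ... of the exponential series."""
    n = len(A)
    tA = [[t * x for x in row] for row in A]
    S = [[float(i == j) for j in range(n)] for i in range(n)]  # (tA)^0 = I
    P = [row[:] for row in S]
    for k in range(1, terms):
        P = matmul(P, tA)                                      # P = (tA)^k
        S = [[S[i][j] + P[i][j] / math.factorial(k) for j in range(n)]
             for i in range(n)]
    return S

# For the nilpotent matrix N = [[0,1],[0,0]] the series terminates:
# e^{Nt} = I + tN, with determinant 1, hence nonsingular for every t
# as Theorem 2 asserts.
E = expm([[0, 1], [0, 0]], t=5.0)
print(E)                                        # [[1.0, 5.0], [0.0, 1.0]]
print(E[0][0] * E[1][1] - E[0][1] * E[1][0])    # det = 1.0
```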
CHAPTER TWO

ALGEBRAS

In this chapter, we introduce the language of algebras in a form
designed for material developed in the later chapters.
We begin with some basic definitions, examples, and
basic properties of algebras.
DEFINITION 1: Let F be a field. By an F-algebra (or an algebra over
F) we mean a vector space V over F together with a binary operation
in V, called multiplication in V; the image of an element
(x,y) ∈ V×V under the multiplication is denoted by xy and is called
the product of x and y in V. And this multiplication satisfies the
following conditions:
1.  (x + y)z = xz + yz
2.  x(y + z) = xy + xz
3.  α(xy) = (αx)y = x(αy)
for all x, y, and z ∈ V and α ∈ F.
REMARK 1: In the sequel F will be the field of real numbers R or the
field of complex numbers C.

REMARK 2: Conditions 1-3 in this definition are equivalent to saying
that the multiplication of the algebra V is an F-bilinear mapping of
V×V into V.

REMARK 3: When no confusion is to be feared an F-algebra will be
simply called an algebra.

REMARK 4: It should be noted that we do not require that
multiplication in V be associative, nor that it should have a unit
element.
DEFINITION 2:
1.  An F-algebra V is called an associative algebra if
    multiplication in V is associative, i.e. when
        (xy)z = x(yz) for all x, y, and z ∈ V.
2.  When multiplication in V admits a unit element, i.e. there is an
    element e ∈ V such that ev = ve = v for all v ∈ V, then V is
    called an algebra with unity or a unital algebra. Clearly if an
    F-algebra V has a unit element, then it is unique.
3.  An F-algebra V is called a commutative algebra if multiplication
    in V is commutative, i.e. when xy = yx for all x, y ∈ V.
EXAMPLE 1: Every vector space V can be considered as an (associative
and commutative) algebra by defining multiplication on V by xy = 0
for every x, y ∈ V.

EXAMPLE 2: The set Mat_{n×n}(F) of all n×n matrices, with entries
from a field F, is an associative algebra over the ground field F.
The vector space structure on Mat_{n×n}(F) is defined by the
ordinary matrix addition and scalar multiplication of a matrix by a
scalar. The algebra multiplication is defined by the ordinary matrix
multiplication. Note that this algebra is not commutative; it is
called the Matrix algebra over F. (Proofs can be found in linear
algebra books.)

EXAMPLE 3: Let R^3 be the three-dimensional real Euclidean space.
The cross product of vectors makes R^3 into a non-associative and
non-commutative algebra over R. (Proof can be found in calculus or
linear algebra books.)
EXAMPLE 4: Let V be an algebra over F. We can define two algebra
structures on V by defining new multiplications on V by
    x * y = xy + yx
and the bracket multiplication
    [x,y] = xy - yx,
with the same underlying vector space structure as the algebra V.
These multiplications are in general not associative; the first
multiplication * is always commutative. The bracket multiplication
will play an important role in our study of Lie algebras. In
particular, by defining the bracket multiplication on the matrix
algebra Mat_{n×n}(F), we obtain a new algebra; this is called the
General Linear Algebra of degree n over F, and we denote this
algebra by gl(n,F).

Now we show that the bracket multiplication defines a multiplication
in V.
1.  Show [x+y,z] = [x,z] + [y,z]:
        [x+y,z] = (x+y)z - z(x+y)
                = xz + yz - zx - zy
                = (xz - zx) + (yz - zy)
                = [x,z] + [y,z]
2.  Similarly we can show [x,y+z] = [x,y] + [x,z].
3.  The third condition is to show α[x,y] = [αx,y] = [x,αy].
    Let us show α[x,y] = [αx,y]:
        α[x,y] = α(xy - yx)
               = α(xy) - α(yx)
               = (αx)y - y(αx)
               = [αx,y]
    Similarly we can show α[x,y] = [x,αy].
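The bilinearity identities just proved can be spot-checked for the
bracket on gl(2,R); the matrices below are arbitrary illustrative
choices:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X))] for i in range(len(X))]

def bracket(X, Y):
    """[X,Y] = XY - YX on square matrices."""
    XY, YX = matmul(X, Y), matmul(Y, X)
    return [[XY[i][j] - YX[i][j] for j in range(len(X))] for i in range(len(X))]

x = [[1, 2], [0, 1]]
y = [[0, 1], [1, 0]]
z = [[2, 0], [3, 1]]

# [x+y, z] = [x,z] + [y,z]
assert bracket(add(x, y), z) == add(bracket(x, z), bracket(y, z))
# a[x,y] = [ax, y]  (here a = 4)
a = 4
assert bracket([[a * t for t in r] for r in x], y) \
       == [[a * t for t in r] for r in bracket(x, y)]
```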
In this study we shall be interested mainly in the case of algebras
over fields which are finite-dimensional as vector spaces. For such
an algebra we have a basis e_1, e_2, ..., e_n, and we can write
    e_i e_j = Σ_{k=1}^{n} γ^k_ij e_k,
where the γ^k_ij are in F. The n^3 elements γ^k_ij are called the
constants of multiplication (or structural constants) of the algebra
(relative to the chosen basis). They give the values of every
product e_i e_j for i,j = 1, 2, ..., n. Moreover, by extending this
linearly, these products determine every product in V. For, if x and
y are any elements of V,
    x = Σ_{i=1}^{n} α_i e_i   and   y = Σ_{j=1}^{n} β_j e_j,
where α_i, β_j ∈ F, then
    xy = (Σ_i α_i e_i)(Σ_j β_j e_j)
       = Σ_{i,j} (α_i e_i)(β_j e_j)
       = Σ_{i,j} α_i β_j (e_i e_j),
and this is determined by the e_i e_j.
Thus any finite-dimensional vector space V can be given the
structure of an algebra over a field F by first selecting a basis
e_1, e_2, ..., e_n in V. Then for every pair (i,j) we define e_i e_j
in any way we please as an element in V, say e_i e_j = v_ij, and
extend this linearly to a product in V; that is, if
    x = Σ_{i=1}^{n} α_i e_i   and   y = Σ_{j=1}^{n} β_j e_j,
we define
    xy = Σ_{i,j=1}^{n} α_i β_j (e_i e_j)
       = Σ_{i,j=1}^{n} α_i β_j v_ij.
One can check immediately that this multiplication is bilinear in
the sense that conditions (1-3) in Definition 1 are valid. Letting
    e_i e_j = v_ij = Σ_{k=1}^{n} γ^k_ij e_k,
we obtain elements (γ^k_ij) of F which completely determine the
product xy; that is to say, the choice of the e_i e_j is equivalent
to the choice of the elements γ^k_ij in F. Thus, the set of algebras
with underlying vector space V over F can be identified with
F^{n^3}.
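The cross-product algebra on R^3 (Example 3 above) makes this
concrete: its structure constants relative to the standard basis can
be read off from the basis products, and an arbitrary product is
then recovered bilinearly from them, as the following Python sketch
shows:

```python
def cross(u, v):
    """Cross product on R^3, the multiplication of Example 3."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

E = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]          # standard basis e1, e2, e3
# c[i][j][k] is the structure constant: e_i e_j = sum_k c[i][j][k] e_k
c = [[cross(E[i], E[j]) for j in range(3)] for i in range(3)]

print(c[0][1])   # e1 x e2 =  e3  ->  [0, 0, 1]
print(c[1][0])   # e2 x e1 = -e3  ->  [0, 0, -1]

# Recover an arbitrary product bilinearly from the structure constants:
x, y = [2, 0, 1], [1, 3, 0]
xy = [sum(x[i] * y[j] * c[i][j][k] for i in range(3) for j in range(3))
      for k in range(3)]
assert xy == cross(x, y)
```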
DEFINITION 3: Let V be an F-algebra. By an F-subalgebra of V, we
mean a vector subspace W of V which is itself an F-algebra relative
to the multiplication on V.

DEFINITION 4: A subset W of an algebra V is called a left ideal
(respectively right ideal) of V when W is a subalgebra of V and for
each w ∈ W, v ∈ V we have vw ∈ W (respectively wv ∈ W). If W is
both a left and right ideal of V, then W is called a two-sided ideal
(or ideal, for short) of V.
If W is an ideal of an algebra V, then the quotient space
V/W = {v + W : v ∈ V} is an algebra with respect to the following
multiplication:
    (v_1 + W)(v_2 + W) = v_1 v_2 + W.
It can be easily verified that this definition of the product on V/W
is well-defined and is bilinear. The algebra V/W, with this
structure, is called the quotient algebra of the algebra V by the
ideal W.
EXAMPLES OF SUB-ALGEBRAS

We shall consider now some important sub-algebras of the general
linear algebra of degree n, gl(n,F).

EXAMPLE 1: The subalgebra sl(n,F) = {A ∈ gl(n,F) : tr(A) = 0},
where tr(A) = Σ_{i=1}^{n} a_ii. This algebra is called the special
linear algebra of degree n.

In order to show that sl(n,F) is a sub-algebra of gl(n,F), we have
to show sl(n,F) is closed under addition, closed under scalar
multiplication, and closed under the bracket multiplication.
1.  Show it is closed under addition.
    Let A, B ∈ sl(n,F); we need to show that A+B ∈ sl(n,F).
    We know tr(A) = 0 and tr(B) = 0, so
        tr(A+B) = tr(A) + tr(B) = 0 + 0 = 0;
    therefore A+B ∈ sl(n,F).
2.  Show it is closed under scalar multiplication.
    Let A ∈ sl(n,F) and α ∈ F; we need to show that αA ∈ sl(n,F).
        tr(αA) = α tr(A) = α(0) = 0.
3.  Show it is closed under the bracket multiplication.
    Let A, B ∈ sl(n,F); we need to show that [A,B] ∈ sl(n,F).
        tr[A,B] = tr(AB - BA)
                = tr(AB) - tr(BA)
                = tr(AB) - tr(AB)
                = 0;
    therefore [A,B] ∈ sl(n,F).
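The closure of sl(n,F) under the bracket can be spot-checked
numerically; the two traceless matrices below are arbitrary
illustrative choices:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, -1]]     # tr A = 0, so A is in sl(2,R)
B = [[0, 1], [0, 0]]      # tr B = 0, so B is in sl(2,R)
AB, BA = matmul(A, B), matmul(B, A)
bracket = [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

assert trace(A) == trace(B) == 0
assert trace(bracket) == 0          # [A,B] is again in sl(2,R)
```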
EXAMPLE 2: The sub-algebra of skew-symmetric matrices,
    so(n,F) = {A ∈ gl(n,F) : A^t = -A},
where A^t is the transpose of A.

As in Example 1 we need to show so(n,F) is closed under addition,
closed under scalar multiplication, and closed under the bracket
multiplication.
1.  Show it is closed under addition.
    Let A, B ∈ so(n,F); we need to show (A+B)^t = -(A+B).
    We know A^t = -A and B^t = -B, so
        (A+B)^t = A^t + B^t = (-A) + (-B) = -(A+B);
    therefore A+B ∈ so(n,F).
2.  Show it is closed under scalar multiplication.
    Let A ∈ so(n,F) and α ∈ F; we need to show αA ∈ so(n,F).
        (αA)^t = α(A^t) = α(-A) = -(αA).
3.  Show it is closed under the bracket multiplication.
    Let A, B ∈ so(n,F); we need to show [A,B] ∈ so(n,F).
        [A,B]^t = (AB - BA)^t
                = (AB)^t - (BA)^t
                = B^t A^t - A^t B^t
                = (-B)(-A) - (-A)(-B)
                = BA - AB
                = -(AB - BA)
                = -[A,B]
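A numerical spot check of the same closure for so(3,R), with two
arbitrarily chosen skew-symmetric matrices:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(M):
    return [list(col) for col in zip(*M)]

A = [[0, 1, 0], [-1, 0, 0], [0, 0, 0]]   # A^t = -A
B = [[0, 0, 2], [0, 0, 0], [-2, 0, 0]]   # B^t = -B
AB, BA = matmul(A, B), matmul(B, A)
C = [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]   # [A,B]

# [A,B] is again skew-symmetric: C^t = -C
assert transpose(C) == [[-x for x in row] for row in C]
```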
EXAMPLE 3: For n = 2m, the symplectic sub-algebra sp(n,F) is formed
by the matrices A ∈ gl(n,F) such that A^t J + JA = 0, where J has
the form:
        [  0     I_m ]
    J = [ -I_m    0  ]
where I_m is the identity matrix of order m, and 0 is the zero
matrix of order m.

To show that sp(n,F) is a sub-algebra, we need to show it is closed
under addition, closed under scalar multiplication, and closed under
the bracket multiplication.
1.  Show it is closed under addition.
    Let A, B ∈ sp(n,F); we need to show that A+B ∈ sp(n,F).
    We have A^t J + JA = 0 and B^t J + JB = 0.
    Let C = A + B; then
        C^t J + JC = (A+B)^t J + J(A+B)
                   = (A^t + B^t)J + JA + JB
                   = A^t J + B^t J + JA + JB
                   = (A^t J + JA) + (B^t J + JB) = 0.
2.  Show it is closed under scalar multiplication.
    Let A ∈ sp(n,F) and α ∈ F; we need to show that αA ∈ sp(n,F).
    We have A^t J + JA = 0, so
        (αA)^t J + J(αA) = α(A^t J) + α(JA)
                         = α(A^t J + JA)
                         = α(0)
                         = 0.
3.  Show it is closed under the bracket multiplication.
    Let A, B ∈ sp(n,F); we need to show that [A,B]^t J + J[A,B] = 0.
        [A,B]^t J + J[A,B]
          = (AB - BA)^t J + J(AB - BA)
          = (AB)^t J - (BA)^t J + J(AB) - J(BA)
          = B^t A^t J - A^t B^t J + J(AB) - J(BA)
            + B^t JA - B^t JA + A^t JB - A^t JB
          = (B^t A^t J + B^t JA) - (A^t B^t J + A^t JB)
            + (JAB + A^t JB) - (JBA + B^t JA)
          = B^t (A^t J + JA) - A^t (B^t J + JB)
            + (JA + A^t J)B - (JB + B^t J)A
          = B^t (0) - A^t (0) + (0)B - (0)A
          = 0.

EXAMPLE 4: The sub-algebra of upper triangular matrices,
    ut(n,F) = {A ∈ gl(n,F) : a_ij = 0 for i > j}.
1.  Show it is closed under addition.
    Let A = [a_ij] and B = [b_ij] be in ut(n,F), and let C = A + B;
    we need to show that A + B ∈ ut(n,F).
    We know that c_ij = a_ij + b_ij. For i > j, we have a_ij = 0 and
    b_ij = 0, therefore c_ij = 0; therefore it is closed under
    addition.
2.  Show it is closed under scalar multiplication.
    Let A ∈ ut(n,F) and α ∈ F; we need to show that αA ∈ ut(n,F).
    Let C = αA; then c_ij = αa_ij. For i > j, we have a_ij = 0, and
    hence c_ij = α(a_ij) = α(0) = 0; therefore it is closed under
    scalar multiplication.
3.  Show it is closed under the bracket multiplication.
    Let A = [a_ij] and B = [b_ij] be in ut(n,F), and let
    AB = [c_ij]; first we are going to show AB ∈ ut(n,F).
        c_ij = Σ_{r=1}^{n} a_ir b_rj.
    Now, a_ir = 0 for i > r and b_rj = 0 for r > j; hence each term
    a_ir b_rj vanishes unless i <= r <= j. Thus if i > j, no such r
    exists, so
        c_ij = Σ_{r=1}^{n} a_ir b_rj = 0.
    Hence AB ∈ ut(n,F). Similarly it can be shown that
    BA ∈ ut(n,F). Thus [A,B] ∈ ut(n,F); therefore it is closed under
    the bracket multiplication.
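The key step, that products of upper triangular matrices stay upper
triangular, can be spot-checked numerically (illustrative entries):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def is_upper(M):
    """True when every entry below the main diagonal is zero."""
    return all(M[i][j] == 0 for i in range(len(M)) for j in range(i))

A = [[1, 2, 3], [0, 4, 5], [0, 0, 6]]
B = [[7, 8, 9], [0, 1, 2], [0, 0, 3]]
AB = matmul(A, B)

assert is_upper(A) and is_upper(B)
assert is_upper(AB) and is_upper(matmul(B, A))   # hence [A,B] is too
```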
DEFINITION 5: Let V and V' be algebras over the same field F. By an
algebra homomorphism of V into V' we mean a mapping f : V ---> V'
which is F-linear and has the property that
f(v_1 v_2) = f(v_1)f(v_2) for every v_1, v_2 ∈ V. If f is also
one-to-one and onto, then it is called an isomorphism.
THEOREM 1: Let V and V' be two F-algebras and f : V ---> V' an
algebra homomorphism. The image f(V) is a sub-algebra of V' and the
kernel of f, ker f = f⁻¹(0) (the inverse image of 0), is an ideal
in V.

The fundamental homomorphism theorems of group and ring theory have
their counterparts for algebras; we cite:
THEOREM 2: If f : V ---> V' is an algebra homomorphism then V/ker f
is isomorphic to Im(f). If I is any ideal of V included in ker f,
there exists a unique homomorphism g : V/I ---> V' making the
following diagram commute:

                f
          V ---------> V'
          |            ^
        π |           / g
          v          /
         V/I -------

where the mapping π : V ---> V/I is the natural homomorphism
v |---> v + I.
THEOREM 3: If I and J are ideals of an algebra V such that I ⊆ J,
then J/I is an ideal of V/I and (V/I)/(J/I) is isomorphic to V/J.
THEOREM 4: If W is a sub-algebra of an algebra V and if I is an
ideal in V, then W + I is a sub-algebra of V, W ∩ I is an ideal in
W, and there is a unique isomorphism
    Ψ : W/(W ∩ I) ---> (W + I)/I
such that the following diagram commutes:

                 i
          W -----------> W + I
          |                |
          |                |
          v        Ψ       v
      W/(W∩I) -------> (W + I)/I

where the mapping i of W into W + I is the inclusion mapping.

(The proofs of the above four theorems can be found in [1].)
EXAMPLE 5: Consider the sub-algebra sl(n,F) of gl(n,F); it can be
easily shown that sl(n,F) is in fact an ideal in gl(n,F). For let
A ∈ sl(n,F) and let B ∈ gl(n,F); we need to show that
[A,B] ∈ sl(n,F):
    [A,B] = AB - BA
    tr[A,B] = tr(AB - BA)
            = tr(AB) - tr(BA)
            = tr(AB) - tr(AB)
            = 0;
therefore sl(n,F) is an ideal in gl(n,F). Now consider the quotient
algebra gl(n,F)/sl(n,F). Also F is an F-algebra with respect to the
bracket multiplication. Consider the map tr : gl(n,F) ---> F given
by A |---> tr(A). Now, we show the map tr is an algebra
homomorphism.
1.  tr is linear, meaning
    a)  tr(A+B) = tr(A) + tr(B)
    b)  tr(αA) = α tr(A)
2.  tr([A,B]) = [tr(A),tr(B)]
The first condition of linearity follows from properties of the
trace. For condition (2) we have tr([A,B]) = tr(AB - BA) = 0, and
    [tr(A),tr(B)] = tr(A)tr(B) - tr(B)tr(A) = 0.
3.  Moreover tr is onto, for let α ∈ F be any scalar; consider the
    matrix A = [a_ij], where a_ij = α for i = j = 1 and a_ij = 0
    otherwise; then tr A = α.
Also we have ker(tr) = {A ∈ gl(n,F) : tr(A) = 0} = sl(n,F); hence by
the Fundamental Homomorphism Theorem 2 we have:
    gl(n,F)/sl(n,F) ≅ F.
EXAMPLE 6 (Algebra of Endomorphisms): Let V be any vector space over
a field F. The set of all F-linear transformations of V into itself,
denoted by End_F(V), is a vector space over the ground field F. The
vector space structure in End_F(V) is defined by ordinary addition
of linear transformations and the scalar multiplication of a linear
transformation by a scalar. Recall that if T, T' ∈ End_F(V) and
α ∈ F, then T + T' and αT are defined by:
    (T + T')(v) = T(v) + T'(v)
    (αT)(v) = α(T(v))
for every v ∈ V. If we define a multiplication on End_F(V) by
    (T∘T')(v) = T(T'(v)),
then it can be shown that this multiplication is bilinear and
associative. This algebra is called the algebra of endomorphisms
of V.

It is well known that if V is a finite-dimensional vector space of
dimension n, then End_F(V) is finite-dimensional of dimension n^2
over F. If e_1, e_2, ..., e_n is a basis for V, then the linear
transformations E_ij such that
    E_ij(e_r) = e_j if r = i,  E_ij(e_r) = 0 if r ≠ i,
    1 <= i, j <= n,
form a basis for End_F(V) over F. If T ∈ End_F(V), then we can write
    T(e_i) = Σ_{j=1}^{n} a_ij e_j,    i = 1, 2, ..., n,
and the matrix A = [a_ij]^t is the matrix of T relative to the basis
(e_i), 1 <= i <= n. The correspondence T |---> A is an algebra
isomorphism of the endomorphism algebra End_F(V) onto the matrix
algebra Mat_{n×n}(F) of n×n matrices with entries in F.
Thus we have:

THEOREM 4: The Matrix Algebra Mat_{n×n}(F) is isomorphic to the
algebra of endomorphisms End_F(V), where n = dim V. (Note that the
isomorphism depends upon a choice of basis for V.)
Let us consider, for any algebra V over F, the algebra of endomorphisms End_F(V) of the vector space V. For any v ∈ V define a map T_v : V ---> V by T_v(x) = vx for all x ∈ V. T_v is called the left multiplication by v. Then it can be shown that T_v is an endomorphism of V and hence an element of End_F(V). Also, if the algebra V is associative, then T_{xy} = T_x ∘ T_y for every x, y ∈ V, and in this case the mapping Ψ : v ↦ T_v is an algebra homomorphism of V into the algebra End_F(V) of endomorphisms of the vector space V. Now if V has an identity element 1, then the mapping Ψ : v ↦ T_v is an isomorphism of V into End_F(V). Hence V is isomorphic to an algebra of endomorphisms; on the other hand, if V does not have an identity, we can adjoin one in a simple way to get an algebra V' with an identity such that dim V' = 1 + dim V. Since V' is isomorphic to an algebra of endomorphisms, the same is true for V. Thus we have:
THEOREM 5: If V is a finite-dimensional associative algebra then V is isomorphic to an algebra of endomorphisms of a finite-dimensional vector space.
DEFINITION 6: A homomorphism of an algebra A over F into an algebra End_F(V) of endomorphisms of a vector space V over F is called a representation of A. The particular representation Ψ : v ↦ T_v in the above discussion is called the regular representation of V.

CHAPTER THREE
LIE ALGEBRAS

III.I BASIC DEFINITIONS AND EXAMPLES:
Some basic concepts and definitions of Lie algebras are discussed in this chapter from an algebraic viewpoint.
DEFINITION 1: Let F be a field. A Lie algebra over F is a (nonassociative) F-algebra L whose multiplication, denoted by [x,y] for x and y in L, satisfies in addition the following conditions:
1. [x,x] = 0, for all x in L
2. [[x,y],z] + [[y,z],x] + [[z,x],y] = 0 for all x, y, and z in L.
Condition 2 is called the Jacobi identity. The product [x,y] is often called the Lie bracket or the commutator of x and y.
PROPOSITION 1: In any Lie algebra L we have [x,y] = -[y,x] for all x, y ∈ L. Conversely, if [x,y] = -[y,x] for all x, y ∈ L, then [x,x] = 0, provided that the characteristic of F is not 2.
Proof: From 1 and the bilinearity of multiplication we have
0 = [x+y,x+y] = [x,x] + [x,y] + [y,x] + [y,y] = [x,y] + [y,x]
thus [x,y] = -[y,x]. Conversely, if [x,y] = -[y,x] then [x,x] = -[x,x], hence 2[x,x] = 0; but since the characteristic of F is not 2, [x,x] = 0.
Note that the proposition above implies that the multiplication in any Lie algebra is anticommutative.
EXAMPLE 1: Any vector space L over F can be considered as a Lie algebra over F by defining [x,y] = 0 for all x, y ∈ L. These are the abelian or commutative Lie algebras.
EXAMPLE 2: Let gl(n,F) be the vector space of n×n matrices with entries from the field F. Define a Lie product by [A,B] = AB - BA, A, B ∈ gl(n,F), where AB is the ordinary matrix multiplication. We have shown in Chapter II that gl(n,F) with respect to the bracket multiplication is an algebra. Now we are going to prove that the Lie product [A,B] satisfies conditions 1 and 2 in Definition (1) of a Lie algebra.
Proof:
1. Show that [A,A] = 0. The proof of this condition is very easy:
[A,A] = AA - AA = 0
2. The second condition to be proved is:
[[A,B],C] + [[B,C],A] + [[C,A],B] = 0
[[A,B],C] + [[B,C],A] + [[C,A],B]
= [AB-BA,C] + [BC-CB,A] + [CA-AC,B]
= (AB-BA)C - C(AB-BA) + (BC-CB)A - A(BC-CB) + (CA-AC)B - B(CA-AC)
= ABC - BAC - CAB + CBA + BCA - CBA - ABC + ACB + CAB - ACB - BCA + BAC
= (ABC-ABC) - (BAC-BAC) - (CAB-CAB) - (BCA-BCA) - (ACB-ACB) - (CBA-CBA)
= 0
The Lie algebra obtained in this way is called the general linear Lie algebra of degree n over F. For convenience it will be called the linear Lie algebra of degree n over F.
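As a quick numerical sanity check of this computation, both conditions can be verified on random matrices; the following sketch (illustrative only, NumPy assumed) does so.

```python
import numpy as np

def br(X, Y):
    # [X,Y] = XY - YX in gl(n,F)
    return X @ Y - Y @ X

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# condition 1: [A,A] = 0
assert np.allclose(br(A, A), 0)

# condition 2: the Jacobi identity
jacobi = br(br(A, B), C) + br(br(B, C), A) + br(br(C, A), B)
assert np.allclose(jacobi, 0)
```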
EXAMPLE 3: Example 2 above can be generalized. Let A be any associative algebra over a field F. We can always make A into a Lie algebra by defining a Lie multiplication [x,y] = xy - yx for all x, y ∈ A. One verifies at once that this definition of [x,y] gives A the structure of an F-algebra. Clearly [x,x] = 0 for any x ∈ A, and the Jacobi identity follows from the associative law in A:
[[x,y],z] + [[y,z],x] + [[z,x],y]
= (xy-yx)z - z(xy-yx) + (yz-zy)x - x(yz-zy) + (zx-xz)y - y(zx-xz)
= 0
Thus the product [x,y] satisfies all the conditions of the product in a Lie algebra. The Lie algebra obtained in this way is called the Lie algebra of the associative algebra A, and we shall denote this Lie algebra by A_L.
REMARK 1: The fact that every associative algebra can be turned into a Lie algebra by means of the bracket operation is very important in the theory of Lie algebras, since it establishes a direct connection between many aspects of the associative algebra and a Lie algebra. In fact, every Lie algebra is isomorphic to a subalgebra of a Lie algebra A_L, where A is an associative algebra [2].
EXAMPLE 4: Let V be a finite-dimensional vector space over a field F. Since the set of all F-linear endomorphisms of V, End_F(V), forms an associative algebra, we can make End_F(V) a Lie algebra by the bracket operation. We write (End_F(V))_L for End_F(V) viewed as a Lie algebra and call it the Lie algebra of endomorphisms of V. When no confusion is to be feared we will denote (End_F(V))_L simply by End_F(V). Any subalgebra B of the Lie algebra End_F(V) is called a Lie algebra of linear transformations.
In view of the previous remark every Lie algebra is isomorphic to a Lie algebra of linear transformations. In particular the general linear Lie algebra gl(n,F) is isomorphic to the Lie algebra of linear transformations End_F(V), when n = dim V.

III.II STRUCTURE CONSTANTS
Let L be a Lie algebra over a field F and let {e_1, ..., e_n} be a basis for the vector space L. Expanding the elements [e_i,e_j] of L as linear combinations of the basis e_1, ..., e_n we obtain
[e_i,e_j] = Σ_{k=1}^n γ^k_ij e_k
where the scalars γ^k_ij ∈ F are called the structure constants of the Lie algebra L with respect to the basis {e_1, e_2, ..., e_n}. Moreover, as we have seen in Chapter II, these products determine every product in L. The following theorem characterizes Lie algebras in terms of structure constants and basis elements.
THEOREM 1: Let L be a (nonassociative) algebra over a field F with basis {e_1, e_2, ..., e_n} and let γ^k_ij be the structure constants of L relative to the basis. For L to be a Lie algebra, it is necessary and sufficient that the basis elements satisfy the following conditions:
a) [e_i,e_i] = 0
b) [e_i,e_j] = -[e_j,e_i]
c) [[e_i,e_j],e_k] + [[e_j,e_k],e_i] + [[e_k,e_i],e_j] = 0
for all i, j, k = 1, 2, ..., n. These conditions are equivalent to saying that the constants γ^k_ij satisfy:
a') γ^k_ii = 0
b') γ^k_ij = -γ^k_ji
c') Σ_r (γ^r_ij γ^s_rk + γ^r_jk γ^s_ri + γ^r_ki γ^s_rj) = 0
for all i, j, k, and s = 1, 2, ..., n.
Proof: Clearly if L is a Lie algebra then the conditions (a)-(c) are satisfied. Conversely, assume that L is an F-algebra such that (a)-(c) are satisfied. Let
x = Σ_{i=1}^n α_i e_i,  y = Σ_{i=1}^n β_i e_i,  and z = Σ_{i=1}^n γ_i e_i.
1. First we need to show [x,x] = 0:
[x,x] = [Σ_{i=1}^n α_i e_i, Σ_{j=1}^n α_j e_j]
= Σ_{i=1}^n α_i [e_i, Σ_{j=1}^n α_j e_j]
= Σ_{i=1}^n Σ_{j=1}^n α_i α_j [e_i,e_j]
= Σ_{i=1}^n α_i² [e_i,e_i] + Σ_{i=1}^n Σ_{j>i} α_i α_j [e_i,e_j] + Σ_{j=1}^n Σ_{i>j} α_i α_j [e_i,e_j]
= Σ_{i=1}^n α_i² [e_i,e_i] + Σ_{i=1}^n Σ_{j>i} α_i α_j [e_i,e_j] - Σ_{j=1}^n Σ_{i>j} α_i α_j [e_j,e_i]   (by condition b)
= Σ_{i=1}^n α_i² [e_i,e_i] + 0
= 0   (by condition a)
2. The second condition to be satisfied is [x,y] = -[y,x]:
[x,y] = [Σ_{i=1}^n α_i e_i, Σ_{j=1}^n β_j e_j]
= Σ_{i=1}^n Σ_{j=1}^n α_i β_j [e_i,e_j]
= Σ_{i=1}^n Σ_{j=1}^n α_i β_j (-[e_j,e_i])
= - Σ_{i=1}^n Σ_{j=1}^n α_i β_j [e_j,e_i]
= - [Σ_{j=1}^n β_j e_j, Σ_{i=1}^n α_i e_i]
= -[y,x]
3. The third condition to be satisfied is
[[x,y],z] + [[y,z],x] + [[z,x],y] = 0
[[x,y],z] + [[y,z],x] + [[z,x],y]
= [[Σ_i α_i e_i, Σ_j β_j e_j], Σ_k γ_k e_k] + [[Σ_j β_j e_j, Σ_k γ_k e_k], Σ_i α_i e_i] + [[Σ_k γ_k e_k, Σ_i α_i e_i], Σ_j β_j e_j]
= Σ_i Σ_j Σ_k α_i β_j γ_k [[e_i,e_j],e_k] + Σ_i Σ_j Σ_k β_j γ_k α_i [[e_j,e_k],e_i] + Σ_i Σ_j Σ_k γ_k α_i β_j [[e_k,e_i],e_j]
Now since addition is commutative and associative and α_i β_j γ_k = β_j γ_k α_i = γ_k α_i β_j for all i, j, k = 1, 2, ..., n, the last expression can be written as:
= Σ_i Σ_j Σ_k α_i β_j γ_k {[[e_i,e_j],e_k] + [[e_j,e_k],e_i] + [[e_k,e_i],e_j]}
Condition (c) implies each term in the last summation is zero, and this completes the proof of the Jacobi identity.
As an application of Theorem 1, we have:
EXAMPLE 1: Let V be a 2-dimensional vector space over a field F. Pick a basis {e_1,e_2} for V, define a multiplication table for the basis elements by:
[e_1,e_2] = e_1, [e_2,e_1] = -e_1, and [e_1,e_1] = [e_2,e_2] = 0
and extend this linearly to a product in V. We are going to show the multiplication satisfies the conditions of Theorem 1, and hence this multiplication turns V into a two-dimensional Lie algebra. Conditions 1 and 2 follow directly from the definition. For condition 3 we have:
[[e_1,e_2],e_1] + [[e_2,e_1],e_1] + [[e_1,e_1],e_2]
= [e_1,e_1] + [-e_1,e_1] + [[e_1,e_1],e_2]
= 0 + 0 + [0,e_2]
= 0
therefore the multiplication satisfies the conditions of Theorem 1.
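The same verification can be phrased through conditions (a')-(c') of Theorem 1. Below is a small sketch (illustrative only, not from the thesis; indices are 0-based) that stores the structure constants γ^k_ij of this two-dimensional algebra and checks the three conditions.

```python
import itertools

n = 2
# g[i][j][k] holds the structure constant gamma^k_ij
g = [[[0.0] * n for _ in range(n)] for _ in range(n)]
g[0][1][0] = 1.0    # [e1,e2] = e1
g[1][0][0] = -1.0   # [e2,e1] = -e1

# a') gamma^k_ii = 0
assert all(g[i][i][k] == 0 for i in range(n) for k in range(n))

# b') gamma^k_ij = -gamma^k_ji
assert all(g[i][j][k] == -g[j][i][k]
           for i, j, k in itertools.product(range(n), repeat=3))

# c') sum_r (g^r_ij g^s_rk + g^r_jk g^s_ri + g^r_ki g^s_rj) = 0
for i, j, k, s in itertools.product(range(n), repeat=4):
    total = sum(g[i][j][r] * g[r][k][s]
                + g[j][k][r] * g[r][i][s]
                + g[k][i][r] * g[r][j][s]
                for r in range(n))
    assert total == 0
```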
REMARK 1: There are a couple of simplifying remarks. First, we note that if [e_i,e_i] = 0 and [e_i,e_j] = -[e_j,e_i], then the validity of
[[e_i,e_j],e_k] + [[e_j,e_k],e_i] + [[e_k,e_i],e_j] = 0
for a particular triple i, j, k implies
[[e_j,e_i],e_k] + [[e_i,e_k],e_j] + [[e_k,e_j],e_i] = 0
Since cyclic permutations of i, j, k are clearly allowed, it follows that the Jacobi identity for [[e_σ(i),e_σ(j)],e_σ(k)] is valid, where σ is any permutation of i, j, k. Next let i = j. Then the Jacobi identity becomes:
[[e_i,e_i],e_k] + [[e_i,e_k],e_i] + [[e_k,e_i],e_i]
= 0 + [[e_i,e_k],e_i] - [[e_i,e_k],e_i]
= 0
Hence [e_i,e_i] = 0 and [e_i,e_j] = -[e_j,e_i] imply that the Jacobi identities are satisfied for e_i, e_i, e_k. In particular, the Jacobi identity for a Lie algebra L with dim L <= 2 is a consequence of [x,x] = 0, and if dim L = 3, the only identity we have to check is:
[[e_1,e_2],e_3] + [[e_2,e_3],e_1] + [[e_3,e_1],e_2] = 0.
EXAMPLE 2: Let R³ be the 3-dimensional real Euclidean space. We can make R³ into a Lie algebra by defining a Lie multiplication
[a,b] = a×b for all a, b ∈ R³,
where a×b is the usual cross product of a and b. That the Lie multiplication satisfies [a,a] = 0 for all a in R³ is evident, and by taking advantage of the above remark it suffices to prove the Jacobi identity holds for the orthonormal standard basis of R³. Let e_1 = <1,0,0>, e_2 = <0,1,0>, and e_3 = <0,0,1>; then [e_1,e_2] = e_3, [e_2,e_3] = e_1, and [e_3,e_1] = e_2. Hence we have
[[e_1,e_2],e_3] + [[e_2,e_3],e_1] + [[e_3,e_1],e_2]
= [e_3,e_3] + [e_1,e_1] + [e_2,e_2]
= 0
and the Jacobi identity holds.
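A numerical restatement of this example (illustrative only, NumPy's `cross` assumed):

```python
import numpy as np

e = np.eye(3)   # rows are the standard basis e1, e2, e3

# [e1,e2] = e3, [e2,e3] = e1, [e3,e1] = e2
assert np.allclose(np.cross(e[0], e[1]), e[2])
assert np.allclose(np.cross(e[1], e[2]), e[0])
assert np.allclose(np.cross(e[2], e[0]), e[1])

# Jacobi identity on the standard basis (which suffices, by the remark above)
for a in e:
    for b in e:
        for c in e:
            jac = (np.cross(np.cross(a, b), c)
                   + np.cross(np.cross(b, c), a)
                   + np.cross(np.cross(c, a), b))
            assert np.allclose(jac, 0)
```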
Before proceeding with more examples of Lie algebras we need to study some interesting facts about the Lie algebra structure of R³. First we are going to give a geometric interpretation of the Jacobi identity. A three-dimensional analog of the triangle is the trihedron, i.e. the figure formed by the noncoplanar vectors a, b, c. These vectors correspond to the vertices of the triangle. The faces of the trihedron are planes for which we may substitute their normal vectors, i.e. the vectors b×c, c×a, and a×b correspond to the sides of the triangle. Using the same correspondence between planes and vectors, we see that the vectors a×(b×c), b×(c×a), and c×(a×b) correspond to the altitudes of the trihedron, i.e. the planes containing an edge and perpendicular to the opposite face.
If the sum of three vectors is the zero vector, the three vectors must be coplanar. The normal vectors of three planes having a point in common are coplanar if, and only if, the planes also have a line in common. Hence the geometric interpretation of the Jacobi identity is: the altitudes of the trihedron are three planes having a line in common. This is a generalization of the familiar theorem from plane geometry asserting that the altitudes of the triangle are three lines having a point (the orthocenter) in common.
It is customary to talk about the "paradox" of linear algebra: while every vector space, regardless of its dimension, can be converted to an inner product space by endowing it with a dot product, only the three-dimensional vector space can be converted into a Lie algebra by the introduction of the cross product. This lack of a higher-dimensional cross product seems to resist generalization. The paradox is rather the result of a vicious formulation of the problem, as we shall see in the next section.

III.III THE LIE ALGEBRA OF ANTISYMMETRIC OPERATORS
First let us consider the special case of the three-dimensional vector space R³. Let a be a fixed vector; the map f_a : R³ ---> R³ given by f_a(v) = a×v for all v ∈ R³ is linear. Let A(R³) = {f_a : a ∈ R³} be the set of all such linear operators on R³. For f_a, g_b ∈ A(R³) define the sum f_a + g_b and scalar multiplication as usual, i.e.
(f_a + g_b)(v) = f_a(v) + g_b(v) = (a×v) + (b×v)
(αf_a)(v) = α(f_a(v)) = α(a×v).
Then it can be easily shown that A(R³) is a vector subspace of End_R(R³). By introducing the Lie bracket multiplication we have:
THEOREM 1: A(R³) is a Lie algebra with respect to the multiplication [f_a,g_b] = f_a∘g_b - g_b∘f_a. Moreover there is a natural Lie algebra isomorphism between R³ with the cross product and A(R³).
Proof: It is routine to show that A(R³) is a Lie algebra. To show there is a Lie algebra isomorphism between R³ and A(R³), define Ψ : R³ ---> A(R³) by Ψ : a ↦ f_a, where f_a(v) = a×v for all v ∈ R³. We claim Ψ is a Lie algebra isomorphism.
1. To show Ψ is linear. First we need to show that Ψ(a+b) = Ψ(a) + Ψ(b):
Ψ(a+b)(v) = f_{a+b}(v) = (a+b)×v = (a×v) + (b×v) = f_a(v) + f_b(v) = Ψ(a)(v) + Ψ(b)(v)
Second we must show Ψ(λa) = λΨ(a), where λ ∈ R:
Ψ(λa)(v) = f_{λa}(v) = λ(a×v) = λf_a(v) = λ(Ψ(a)(v))
2. To show that Ψ is one-to-one we need the following
REMARK: If a ∈ R³ is a fixed vector such that a×v = 0 for all v ∈ R³, then a = 0.
Proof: Let a = a_1 i + a_2 j + a_3 k and v = 1i + 0j + 0k be vectors in R³. Then the cross product a×v is:
a×v = det | i    j    k   |
          | a_1  a_2  a_3 |
          | 1    0    0   |
= 0i + a_3 j - a_2 k  ===>  (0, a_3, -a_2) = (0, 0, 0)
therefore a_2 = a_3 = 0. Similarly we find a_1 = 0 when v = (0, 0, 1); hence a_1 = a_2 = a_3 = 0, and a = 0.
Now,
ker Ψ = {a ∈ R³ | Ψ(a) = 0}
= {a ∈ R³ | f_a = 0}
= {a ∈ R³ | f_a(v) = 0 for all v ∈ R³}
= {a ∈ R³ | a×v = 0 for all v ∈ R³}
= {0}.
Hence Ψ is one-to-one.
3. It is clear that Ψ is onto.
4. Finally we show Ψ preserves products; this means we need to show that Ψ([a,b]) = [Ψ(a),Ψ(b)]:
Ψ([a,b])(v) = Ψ(a×b)(v)
= f_{a×b}(v)
= (a×b)×v
= a×(b×v) - b×(a×v)
= f_a(f_b(v)) - f_b(f_a(v))
= (f_a∘f_b - f_b∘f_a)(v)
= [Ψ(a),Ψ(b)](v)
therefore Ψ preserves products.
Let f_a ∈ A(R³). The matrix of f_a relative to the standard basis in R³ is
|  0    -a_3   a_2 |
|  a_3   0    -a_1 |
| -a_2   a_1   0   |
where a = <a_1,a_2,a_3>. Note that this matrix is antisymmetric; in this case we say the linear operator f_a is an antisymmetric operator. In general we define antisymmetric operators on the n-Euclidean space R^n as:
DEFINITION 1: Let V = R^n be the n-dimensional Euclidean space. A linear operator f : V ---> V is called antisymmetric if f^t = -f, where f^t is the transpose of f.
Let f be antisymmetric. Select a basis B for V, and let [M]_B be the matrix associated with f relative to B. Then [M]_B is an antisymmetric matrix. This notion is independent of the choice of orthonormal basis, i.e. if [M]_B' is the matrix of f relative to another orthonormal basis B', then it is known that there exists an invertible matrix N (orthogonal, so that N^t = N^{-1}) with
[M]_B' = N^{-1} [M]_B N,
hence
([M]_B')^t = (N^{-1} [M]_B N)^t = N^t ([M]_B)^t (N^{-1})^t = N^{-1} (-[M]_B) N = -[M]_B',
i.e. [M]_B' is antisymmetric if [M]_B is antisymmetric. The entries of the matrix M must satisfy m_ij = -m_ji, and in particular, the diagonal entries of M must be 0.
Now we want to show that the antisymmetric operators of R³ are precisely the cross products by a fixed vector n.
THEOREM 2: Let f : R³ ---> R³ be an antisymmetric operator. Then there exists a unique vector n such that f(v) = n×v for all v in R³. The converse of this is also true.
Proof: Every antisymmetric operator of R³ has a matrix of the form:
A = |  0      a_12   a_13 |
    | -a_12   0      a_23 |
    | -a_13  -a_23   0    |
and hence f(v) = n×v for all v, with
n = -a_23 e_1 + a_13 e_2 - a_12 e_3,
where e_1, e_2, and e_3 is the standard orthonormal basis for R³. Conversely, we have seen that the matrix of the linear operator f(v) = n×v is antisymmetric.
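The correspondence in Theorem 2 is easy to check numerically. In the sketch below (illustrative only, NumPy assumed), `matrix_of_cross` builds the matrix of f(v) = n×v, and the vector n is recovered from an antisymmetric matrix by the formula n = -a_23 e_1 + a_13 e_2 - a_12 e_3 from the proof.

```python
import numpy as np

def matrix_of_cross(nvec):
    # matrix of f(v) = n x v in the standard basis
    n1, n2, n3 = nvec
    return np.array([[0.0, -n3, n2],
                     [n3, 0.0, -n1],
                     [-n2, n1, 0.0]])

nvec = np.array([1.0, -2.0, 0.5])
M = matrix_of_cross(nvec)
assert np.allclose(M.T, -M)                   # the matrix is antisymmetric
v = np.array([0.3, 0.7, -1.1])
assert np.allclose(M @ v, np.cross(nvec, v))  # and represents f(v) = n x v

# conversely, recover n from an antisymmetric matrix A = [[0,a12,a13],[-a12,0,a23],[-a13,-a23,0]]
A = np.array([[0.0, 2.0, 3.0], [-2.0, 0.0, 5.0], [-3.0, -5.0, 0.0]])
n_rec = np.array([-A[1, 2], A[0, 2], -A[0, 1]])  # n = -a23 e1 + a13 e2 - a12 e3
assert np.allclose(A @ v, np.cross(n_rec, v))
```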
This theorem brings out the importance of the antisymmetric operators; they are the natural candidates with which to generalize the cross product to higher dimensions.
In the sequel let V = R^n. Let AO(V) = {f ∈ End(V) : f^t = -f}. Under the usual addition and scalar multiplication AO(V) is a vector subspace of End(V). The product g∘f of two antisymmetric operators f and g in general fails to be antisymmetric, but by introducing a bracket multiplication we have:
THEOREM 3: AO(V) is a Lie subalgebra of (End(V))_L.
Proof: To show that AO(V) is closed under the bracket multiplication [f,g] = f∘g - g∘f,
let f, g ∈ AO(V); then f^t = -f and g^t = -g, and
[f,g]^t = (f∘g - g∘f)^t
= (f∘g)^t - (g∘f)^t
= (g^t∘f^t) - (f^t∘g^t)
= (-g)∘(-f) - (-f)∘(-g)
= (g∘f) - (f∘g)
= -[f,g]
this implies [f,g] ∈ AO(V).
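This closure computation can be confirmed on random antisymmetric matrices; the following sketch (illustrative only, NumPy assumed) also shows that the plain composition f∘g generally fails to be antisymmetric, which is why the bracket is needed.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
X = rng.standard_normal((n, n))
Y = rng.standard_normal((n, n))
f = X - X.T   # antisymmetric: f^t = -f
g = Y - Y.T

fg = f @ g - g @ f              # [f,g]
assert np.allclose(fg.T, -fg)   # closed under the bracket
# but the plain composition is generally not antisymmetric
assert not np.allclose((f @ g).T, -(f @ g))
```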
Now we are going to find the dimension of AO(V) by constructing a basis for it as follows. Let B = {e_1, e_2, ..., e_n} be an arbitrary basis of V. Consider the dual basis {e*_1, e*_2, ..., e*_n} for the dual space V* of V. Recall that the dual basis satisfies the following condition:
e*_i(e_j) = δ_ij   (Kronecker's delta)
Now we define
s_ij(v) = e*_j(v) e_i   (i, j = 1, 2, ..., n),
for each v ∈ V. It is easy to check that s_ij ∈ End(V), and that the s_ij satisfy the following conditions:
1. s_ij ∘ s_rs = s_is if j = r, and s_ij ∘ s_rs = 0 otherwise;
2. {s_ij : i, j = 1, ..., n} form a basis for End(V).
In fact this proves that dim(End(V)) = n².
If the basis {e_1, e_2, ..., e_n} is orthonormal, it coincides with its dual, i.e. e*_i = e_i for i = 1, 2, ..., n, and in this case s^t_ij = s_ji for i, j = 1, 2, ..., n. Define
t_ij = (-1)^{i+j} (s_ij - s_ji),   i, j = 1, 2, ..., n.
One can show the set {t_ij | 1 <= i < j <= n} satisfies the following conditions:
1. The set has n(n-1)/2 elements and each element is an antisymmetric operator.
2. [t_ij, t_rs] = δ_jr t_is.
3. The set is linearly independent in AO(V).
4. The set spans AO(V).
Now let us pull things together: we have shown that if dim V = n, then the Lie algebra of antisymmetric operators AO(V) has dimension n(n-1)/2. Therefore if n ≠ 0, then
n(n-1)/2 = n  <===>  n = 3.
Since there is a natural Lie algebra isomorphism between R³ and AO(R³), we have the following theorem:
THEOREM 4: There is a natural Lie algebra isomorphism between R^n and the Lie algebra of antisymmetric operators AO(R^n) if and only if n = 3.
One final remark about the Lie algebra structure of R³: it is compatible with the usual inner product on R³. By this we mean that
|a×b|² = |a|²|b|² - (a·b)²,
i.e. the length of a×b is equal to the area of the parallelogram spanned by a and b. In fact n = 3 is the only case in which it is possible to convert R^n into a noncommutative Lie algebra over R so that the Lie product is compatible with the inner product on R^n. This is rather a deep result and its proof is beyond the scope of this thesis.
III.IV IDEALS AND HOMOMORPHISMS
In this section we study, for Lie algebras, analogues of some of the concepts we encountered concerning algebras: quotient algebras and algebra homomorphisms.
DEFINITION 1: Let L be a Lie algebra. By a sub-Lie algebra of L we mean a subalgebra L' of L which is itself a Lie algebra relative to the multiplication on L.
DEFINITION 2: A sub-Lie algebra B of a Lie algebra L is called an ideal of L if [b,a] ∈ B for every b ∈ B and a ∈ L.
REMARK 1: Since [b,a] = -[a,b], the condition in the definition could just as well be written [a,b] ∈ B. Thus in the case of Lie algebras "left ideal" coincides with "right ideal". Ideals play the role in Lie algebra theory which is played by normal subgroups in group theory, and by two-sided ideals in ring theory; they arise as kernels of homomorphisms.
DEFINITION 3: Let L be a Lie algebra. The center of L is defined by
Z(L) = {x ∈ L : [x,y] = 0 for all y ∈ L}.
THEOREM 1: Z(L) is an ideal of L.
Proof: To show Z(L) is an ideal we need to show:
1. Z(L) is a vector subspace of L.
2. For all z ∈ Z(L) and for all x ∈ L, [z,x] ∈ Z(L).
1. To show Z(L) is a vector subspace of L, let x, x' ∈ Z(L); then [x,y] = [x',y] = 0 for all y ∈ L. The bilinearity of multiplication implies:
[x+x',y] = [x,y] + [x',y] = 0 for all y ∈ L
[αx,y] = α[x,y] = 0 for all α ∈ F and y ∈ L.
Hence Z(L) is a vector subspace of L.
2. Let z ∈ Z(L) and x ∈ L; we need to show [z,x] ∈ Z(L). For any y ∈ L, Jacobi's identity implies:
[[z,x],y] = -[[x,y],z] - [[y,z],x]
Since z ∈ Z(L), [[x,y],z] = 0 and [y,z] = 0, so [[z,x],y] = -0 - [0,x] = 0.
Hence [z,x] ∈ Z(L).
This completes the proof.
THEOREM 2: If A and B are two ideals of a Lie algebra L, then
A + B = {a + b | a ∈ A, b ∈ B} and
[A,B] = {Σ_{i=1}^n [a_i,b_i] | a_i ∈ A, b_i ∈ B}
are ideals of L.
Proof: Let a ∈ A, b ∈ B, and x ∈ L. Then
[a + b, x] = [a,x] + [b,x];
a ∈ A means [a,x] ∈ A, and b ∈ B means [b,x] ∈ B, so [a,x] + [b,x] ∈ A + B; therefore A + B is an ideal.
To prove [A,B] is an ideal of L we follow the same steps as in the first part of the proof. Let a ∈ A, b ∈ B, and x ∈ L. Then [a,x] ∈ A and [b,x] ∈ B, and by the Jacobi identity
[[a,b],x] = [[a,x],b] + [a,[b,x]].
Both terms on the right belong to [A,B], and since every element of [A,B] is a sum of such brackets, [[A,B],x] ⊆ [A,B]; therefore [A,B] is an ideal of L.
By using this theorem we can define the ideal L² = [L,L], which is called the derived algebra or commutator algebra of L.
EXAMPLE 1: Let L = gl(n,F), the general linear Lie algebra. The center of L is the set of all n×n scalar matrices, i.e.
Z(gl(n,F)) = {dI_n : d ∈ F and I_n is the identity matrix of order n}.
Clearly the set of all n×n scalar matrices is contained in the center, since
[dI_n, A] = (dI_n)A - A(dI_n) = dA - Ad = 0.
To see the reverse inclusion, let A = (a_ij) be an element in the center of L and consider the matrices
E_pq = (δ_ip δ_qj)_{i,j}
for any p and q with 1 <= p,q <= n, where δ is as usual the Kronecker delta. Now, A ∈ Z(L) implies [A, E_pq] = 0, which implies AE_pq = E_pq A, which in turn implies
a_ip δ_qj = δ_ip a_qj for all i, j with 1 <= i,j <= n.
Choosing p = q = j we obtain a_ij = δ_ij a_jj, so a_ij = 0 if i ≠ j; choosing i = p and j = q we obtain a_pp = a_qq for any p and q. Thus A is a scalar matrix.
Note that L is abelian if and only if L² = 0, and in this case Z(L) = L.
EXAMPLE 2: In this example we are going to show that if L = gl(n,F), then L² = [L,L] = sl(n,F). Let X ∈ L²; then
X = Σ_{i=1}^m [A_i,B_i], where A_i, B_i ∈ gl(n,F) for all 1 <= i <= m.
Since
Tr([A_i,B_i]) = Tr(A_iB_i - B_iA_i) = 0
for each i = 1, 2, ..., m, then L² ⊆ sl(n,F). In order to show the reverse inclusion we will make use of the matrices E_pq introduced in the previous example. First note that every element of sl(n,F) can be written in the form:
diag(α_1, α_2, ..., α_n) + Σ_{i≠j} a_ij E_ij, where Σ_{i=1}^n α_i = 0,
and, since the α_i sum to zero,
diag(α_1, ..., α_n) = Σ_{i=1}^{n-1} α_i (E_ii - E_nn).
Since [E_ik, E_kj] = E_ij for i ≠ j, it follows that E_ij belongs to [L,L]; and since [E_in, E_ni] = E_ii - E_nn for all i, it follows that diag(α_1, ..., α_n) belongs to [L,L]. Hence sl(n,F) ⊆ [L,L], and thus sl(n,F) = [L,L].
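The two inclusions of this example can be spot-checked numerically; the sketch below (illustrative only, NumPy assumed, 0-based indices) verifies the commutator identities for the matrices E_pq and the tracelessness of an arbitrary commutator.

```python
import numpy as np

n = 3

def E(p, q):
    # the matrix unit E_pq: 1 in position (p,q), 0 elsewhere
    M = np.zeros((n, n))
    M[p, q] = 1.0
    return M

def br(X, Y):
    return X @ Y - Y @ X

# [E_ik, E_kj] = E_ij for i != j
assert np.allclose(br(E(0, 2), E(2, 1)), E(0, 1))
# [E_in, E_ni] = E_ii - E_nn (last index is n-1 here)
assert np.allclose(br(E(0, n - 1), E(n - 1, 0)), E(0, 0) - E(n - 1, n - 1))
# every commutator is traceless, so [L,L] lies in sl(n,F)
rng = np.random.default_rng(3)
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
assert abs(np.trace(br(A, B))) < 1e-9
```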
DEFINITION 4: Let L be a Lie algebra. If L has no ideals except itself and {0}, and if moreover L² = [L,L] ≠ {0}, then L is called simple.
The condition L² ≠ 0, which is equivalent to saying L is nonabelian, is imposed in order to avoid giving undue prominence to the one-dimensional Lie algebra.
EXAMPLE 3: The three-dimensional Lie algebra R³ with multiplication defined by the cross product is a simple Lie algebra. R³ has no proper subalgebras other than the one-dimensional subalgebras, which are clearly not ideals. To see that R³ has no two-dimensional Lie subalgebras, assume the contrary, i.e. assume S is a two-dimensional Lie subalgebra of R³. Then S contains two linearly independent vectors e_1 and e_2, and a = [e_1,e_2] ∈ S. But a = [e_1,e_2] would have to be distinct from 0 and perpendicular to the plane S, which is impossible.
The construction of a quotient Lie algebra L/B, where B is an ideal of L, is formally the same as the construction of a quotient algebra: as a vector space L/B is
just the quotient space, while its Lie multiplication is
defined by [x+B,y+B] = [x,y] + B. This multiplication is
well-defined, since if x+B = x'+B and y+B = y'+B, then we have x' = x + b_1 (b_1 ∈ B) and y' = y + b_2 (b_2 ∈ B), whence
[x',y'] = [x+b_1, y+b_2] = [x,y] + ([b_1,y] + [x,b_2] + [b_1,b_2])
and therefore [x',y'] + B = [x,y] + B, since the terms in the parentheses are all in B.
DEFINITION 5: Let L and L' be Lie algebras over a field F. A linear transformation Ψ : L ---> L' is called a Lie algebra homomorphism if Ψ([x,y]) = [Ψ(x),Ψ(y)] for all x, y ∈ L. If Ψ is also one-to-one, then it is called an isomorphism.
THEOREM 3: Let L and L' be Lie algebras over F, and Ψ : L ---> L' a Lie algebra homomorphism. Then the image of Ψ, Ψ(L), is a sub-Lie algebra of L', and the kernel of Ψ, ker(Ψ) = {x ∈ L : Ψ(x) = 0}, is an ideal.
Proof: 1. We need to show that Ψ(L) is closed under the bracket multiplication. That is, let a'_1, a'_2 ∈ Ψ(L); then we need to show [a'_1,a'_2] ∈ Ψ(L). We know
[a'_1,a'_2] = [Ψ(a_1),Ψ(a_2)] = Ψ([a_1,a_2]),
therefore [a'_1,a'_2] ∈ Ψ(L).
2. First we show Ψ(L) = {Ψ(a) | a ∈ L} = {a' ∈ L' | a' = Ψ(a) for some a ∈ L} = Im(Ψ) is a vector subspace.
1. Show Ψ(L) is closed under addition. Let a'_1, a'_2 ∈ Ψ(L); we need to show a'_1 + a'_2 ∈ Ψ(L). Now a'_1 = Ψ(a_1) and a'_2 = Ψ(a_2) for some a_1, a_2 ∈ L, then
a'_1 + a'_2 = Ψ(a_1) + Ψ(a_2) = Ψ(a_1 + a_2)
therefore a'_1 + a'_2 ∈ Ψ(L).
2. Show Ψ(L) is closed under scalar multiplication. Let a'_1 ∈ Ψ(L) and α ∈ F. What we need to show is αa'_1 ∈ Ψ(L):
αa'_1 = αΨ(a_1) = Ψ(αa_1)
but Ψ(αa_1) ∈ Ψ(L), therefore αa'_1 ∈ Ψ(L).
3. We need to show that ker(Ψ) = {x ∈ L | Ψ(x) = 0} is an ideal of L. First we need to show ker(Ψ) is a vector subspace of L.
1. Show ker(Ψ) is closed under addition. Let a, b ∈ ker(Ψ); we need to show a + b ∈ ker(Ψ):
Ψ(a + b) = Ψ(a) + Ψ(b) = 0
therefore a + b ∈ ker(Ψ).
2. Show ker(Ψ) is closed under scalar multiplication. Let a ∈ ker(Ψ), α ∈ F. We need to show αa ∈ ker(Ψ):
Ψ(αa) = αΨ(a) = 0
therefore αa ∈ ker(Ψ).
3. If a ∈ ker(Ψ) and b ∈ L, we need to show that [a,b] ∈ ker(Ψ):
Ψ([a,b]) = [Ψ(a),Ψ(b)] = [0,Ψ(b)] = 0
therefore [a,b] ∈ ker(Ψ).
The standard homomorphism theorems of algebras have their counterparts for Lie algebras; we cite:
THEOREM 4: If Ψ : L ---> L' is a homomorphism of Lie algebras, then L/ker(Ψ) is isomorphic to Im(Ψ). If I is any ideal of L included in ker(Ψ), there exists a unique homomorphism ψ : L/I ---> L' making the following diagram commute:
[diagram: Ψ : L ---> L', with π : L ---> L/I and ψ : L/I ---> L', so that Ψ = ψ∘π]
Here π : L ---> L/I, x ↦ x + I, is the natural homomorphism.
THEOREM 5: If I and J are ideals of L such that I ⊆ J, then J/I is an ideal of L/I and (L/I)/(J/I) is isomorphic to L/J.
THEOREM 6: Let B be a subalgebra and I an ideal of a Lie algebra L; then B + I is a subalgebra of L, B ∩ I is an ideal of B, and B/(B ∩ I) is isomorphic to (B+I)/I as Lie algebras.
Here we shall recall the sub-Lie algebras of the general linear Lie algebra gl(n,F):
1. The special linear sub-Lie algebra sl(n,F) = {A ∈ gl(n,F) : trace(A) = 0}. In fact sl(n,F) is an ideal of gl(n,F).
2. The sub-Lie algebra of skew-symmetric matrices so(n,F) = {A ∈ gl(n,F) : A^t = -A}.
3. Let n = 2m; the symplectic sub-Lie algebra sp(n,F), which by definition consists of all matrices A ∈ gl(n,F) such that A^t J + JA = 0, for the matrix J of the form:
J = |  0     I_m |
    | -I_m   0   |
where I_m is the identity matrix of order m, and 0 is the zero matrix of order m.
4. The sub-Lie algebra of upper triangular matrices ut(n,F) = {A ∈ gl(n,F) : a_ij = 0 for i > j}.
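As an illustration of item 3, one can check numerically that sp(n,F) is closed under the bracket. The block parametrization used in `random_sp` below (blocks [[X, Y], [Z, -X^t]] with Y and Z symmetric) is a standard description of sp(2m,F), not something derived in the text; the sketch is illustrative only and assumes NumPy.

```python
import numpy as np

m = 2
n = 2 * m
J = np.block([[np.zeros((m, m)), np.eye(m)],
              [-np.eye(m), np.zeros((m, m))]])

def in_sp(A, tol=1e-9):
    # membership test: A^t J + J A = 0
    return np.allclose(A.T @ J + J @ A, 0, atol=tol)

def random_sp(rng):
    # blocks [[X, Y], [Z, -X^t]] with Y, Z symmetric satisfy A^t J + J A = 0
    X = rng.standard_normal((m, m))
    Y = rng.standard_normal((m, m)); Y = Y + Y.T
    Z = rng.standard_normal((m, m)); Z = Z + Z.T
    return np.block([[X, Y], [Z, -X.T]])

rng = np.random.default_rng(4)
A, B = random_sp(rng), random_sp(rng)
assert in_sp(A) and in_sp(B)
assert in_sp(A @ B - B @ A)   # sp(n,F) is closed under the bracket
```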
CHAPTER FOUR
LIE ALGEBRA OF DERIVATIONS

IV.I DERIVATION ALGEBRA
Some Lie algebras of linear transformations arise most naturally as derivations of algebras. In this section we will study the Lie algebra of derivations.
DEFINITION 1: Let A be any algebra (not necessarily associative) over F. A derivation D in A is a linear mapping D : A ---> A satisfying:
D(xy) = D(x).y + x.D(y)
for all x, y ∈ A.
EXAMPLE 1: Let A be the R-algebra of functions of R into R which have derivatives of all orders. Let D be the differential operator given by D(f) = f' (the derivative of f). Then the mapping D : A ---> A is a derivation of A.
We denote the set of all derivations of an F-algebra A by Der_F(A). Since derivation mappings are F-endomorphisms of A, we can define the sum of two derivations D_1 and D_2 by:
(D_1 + D_2)(x) = D_1(x) + D_2(x)
for every x ∈ A, and the multiplication of a derivation D by a scalar α by:
(αD)(x) = α(D(x))
for every x ∈ A. With respect to these operations we have the following:
THEOREM 1: Der_F(A) is a vector subspace of End_F(A).
Proof: We need to show that if D_1, D_2 ∈ Der_F(A) and α_1, α_2 ∈ F, then α_1D_1 + α_2D_2 ∈ Der_F(A). That is, we must show
(α_1D_1 + α_2D_2)(xy) = (α_1D_1 + α_2D_2)(x).y + x.(α_1D_1 + α_2D_2)(y):
(α_1D_1 + α_2D_2)(xy) = (α_1D_1)(xy) + (α_2D_2)(xy)
= α_1 D_1(xy) + α_2 D_2(xy)
= α_1 (D_1(x).y + x.D_1(y)) + α_2 (D_2(x).y + x.D_2(y))
= α_1 D_1(x).y + α_1 x.D_1(y) + α_2 D_2(x).y + α_2 x.D_2(y)
= (α_1D_1(x) + α_2D_2(x)).y + x.(α_1D_1(y) + α_2D_2(y))
= (α_1D_1 + α_2D_2)(x).y + x.(α_1D_1 + α_2D_2)(y)
which completes the proof.
On Der_F(A) it is possible to define an algebraic composition of two derivations D_1 and D_2 by D_1∘D_2, the composition of D_1 and D_2 in the ordinary sense. Then for every x, y ∈ A, we have
(D_1∘D_2)(xy) = D_1(D_2(xy))
= D_1(D_2(x).y + x.D_2(y))
= (D_1(D_2(x))).y + D_2(x).D_1(y) + D_1(x).D_2(y) + x.(D_1(D_2(y)))
= ((D_1∘D_2)(x)).y + D_2(x).D_1(y) + D_1(x).D_2(y) + x.((D_1∘D_2)(y))
which, in general, is not equal to
((D_1∘D_2)(x)).y + x.((D_1∘D_2)(y))
because D_2(x).D_1(y) + D_1(x).D_2(y) ≠ 0, generally. Hence Der_F(A) is not closed under this operation. However, we shall see that Der_F(A) can be made into a Lie algebra if we define the bracket multiplication by:
[D_1,D_2] = D_1∘D_2 - D_2∘D_1.
To see that Der_F(A) is a Lie algebra with respect to the bracket operation we first show the following property:
LEMMA 1: For every D_1, D_2 ∈ Der_F(A), [D_1,D_2] ∈ Der_F(A); i.e. the bracket multiplication is closed in Der_F(A).
Proof: For any x and y in A we have
[D_1,D_2](x.y) = (D_1∘D_2 - D_2∘D_1)(x.y)
= (D_1∘D_2)(x.y) - (D_2∘D_1)(x.y)
= D_1(D_2(x).y + x.D_2(y)) - D_2(D_1(x).y + x.D_1(y))
= ((D_1∘D_2)x).y + D_2(x).D_1(y) + D_1(x).D_2(y) + x.((D_1∘D_2)y)
  - ((D_2∘D_1)x).y - D_1(x).D_2(y) - D_2(x).D_1(y) - x.((D_2∘D_1)y)
= ([D_1,D_2]x).y + x.([D_1,D_2]y)
Thus [D_1,D_2] ∈ Der_F(A). Hence it follows that Der_F(A) is a subalgebra of the algebra End_F(A) of endomorphisms of the vector space A.
Finally we state the following:

THEOREM 2:  Der_F(A) is a Lie algebra with respect to the bracket multiplication.

We shall call Der_F(A) the Lie algebra of derivations in A, or simply the derivation algebra of A.
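This construction can be checked concretely on the polynomial algebra R[t].  The sketch below (the particular derivations t·d/dt and d/dt are illustrative choices of ours, not from the text) verifies that the composition violates the Leibniz rule while the bracket satisfies it:

```python
import sympy as sp

t = sp.symbols('t')

# Two derivations of R[t] (hypothetical choices for illustration):
# D1 = t*d/dt and D2 = d/dt.
D1 = lambda f: sp.expand(t * sp.diff(f, t))
D2 = lambda f: sp.expand(sp.diff(f, t))

comp = lambda f: D1(D2(f))                             # D1 o D2
bracket = lambda f: sp.expand(D1(D2(f)) - D2(D1(f)))   # [D1, D2]

x, y = t**2, t**3

# The composition violates the Leibniz rule ...
assert comp(x*y) != sp.expand(comp(x)*y + x*comp(y))
# ... but the bracket satisfies it, so [D1, D2] is again a derivation.
assert bracket(x*y) == sp.expand(bracket(x)*y + x*bracket(y))
print("bracket is a derivation; composition is not")
```

For these two choices the bracket works out to [D1, D2] = -d/dt, which is indeed again a derivation of R[t].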
Next we are going to study the link between the Lie algebra of derivations Der_R(A) and the group of automorphisms of A, where A is a finite-dimensional algebra over the field R of real numbers.  First, we state several useful properties of derivation mappings.

THEOREM 3:  (Leibniz Rule)  Let A be an algebra.  For any D ∈ Der_F(A) and x, y ∈ A, we have:

D^n(xy) = Σ_{j=0}^{n} C(n,j) D^j(x).D^{n-j}(y),

where D^0 is the identity map on A, D^{i+1} = D o D^i for all i, and C(n,j) is the binomial coefficient.
Proof:  Using mathematical induction: for n = 1,

D^1(xy) = C(1,0) x.D(y) + C(1,1) D(x).y = x.D(y) + D(x).y.

Next we assume the formula is true for n = m, i.e.

D^m(xy) = Σ_{j=0}^{m} C(m,j) D^j(x).D^{m-j}(y).

Now we are going to show the formula is true for n = m+1:

D^{m+1}(xy) = D(D^m(xy)) = Σ_{j=0}^{m} C(m,j) D(D^j(x).D^{m-j}(y)),

but

D(D^j(x).D^{m-j}(y)) = D(D^j(x)).D^{m-j}(y) + D^j(x).D(D^{m-j}(y))
= D^{j+1}(x).D^{m-j}(y) + D^j(x).D^{m-j+1}(y),

so

D^{m+1}(xy) = Σ_{j=0}^{m} C(m,j) D^{j+1}(x).D^{m-j}(y) + Σ_{j=0}^{m} C(m,j) D^j(x).D^{m-j+1}(y).

By changing the subscript j to i-1 in the first sum above, and j to i in the second sum, we obtain

D^{m+1}(xy) = Σ_{i=1}^{m+1} C(m,i-1) D^i(x).D^{m-i+1}(y) + Σ_{i=0}^{m} C(m,i) D^i(x).D^{m-i+1}(y)
= D^0(x).D^{m+1}(y) + Σ_{i=1}^{m} (C(m,i-1) + C(m,i)) D^i(x).D^{m-i+1}(y) + D^{m+1}(x).D^0(y).

Since C(m,m) = C(m+1,m+1), C(m,0) = C(m+1,0), and C(m,i-1) + C(m,i) = C(m+1,i), the above expression can be written as:

D^{m+1}(xy) = Σ_{i=0}^{m+1} C(m+1,i) D^i(x).D^{m-i+1}(y).

Now changing i to j and m+1 to n we get the following:

D^n(xy) = Σ_{j=0}^{n} C(n,j) D^j(x).D^{n-j}(y),

which completes the proof.
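The Leibniz rule just proved can be spot-checked symbolically; the following sketch assumes D = d/dt on R[t] (our illustrative choice, not from the text) and compares both sides of the formula for one choice of x, y, and n:

```python
import math
import sympy as sp

t = sp.symbols('t')
D = lambda f: sp.diff(f, t)  # d/dt, a derivation of R[t]

def D_pow(f, n):
    """Apply D n times (D^0 is the identity map)."""
    for _ in range(n):
        f = D(f)
    return f

x, y, n = t**3 + 2*t, t**4 - t, 5

lhs = D_pow(x*y, n)
rhs = sum(math.comb(n, j) * D_pow(x, j) * D_pow(y, n - j) for j in range(n + 1))
assert sp.expand(lhs - rhs) == 0
print("Leibniz rule verified for n =", n)
```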
THEOREM 4:  Let A be a commutative and associative algebra with identity element 1.  For any D ∈ Der_F(A) we have:

1)  D(a.1) = 0 for all a ∈ F;
2)  D(x^n) = n x^{n-1} D(x), for any x ∈ A and n >= 0, where x^0 = 1.
Proof:  (1)  To show D(a.1) = 0, first we need to show D(1) = 0:

D(1) = D(1.1) = 1.D(1) + D(1).1 = D(1) + D(1), since 1 is the identity element.

Subtracting D(1) from both sides gives D(1) = 0.  But then D(a.1) = a.D(1) = a.0 = 0.

(2)  The proof is by mathematical induction on n.  For n = 0 it is trivially true.  Suppose it is true for n = k, that is, assume D(x^k) = k x^{k-1} D(x).  Then for n = k+1,

D(x^{k+1}) = D(x.x^k) = D(x).x^k + x.D(x^k)
= D(x).x^k + x.(k x^{k-1} D(x))
= D(x).x^k + k x^k D(x)
= (k+1) x^k D(x).

Thus D(x^n) = n x^{n-1} D(x), for all n >= 0.
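Both parts of Theorem 4 can be spot-checked on the commutative algebra R[t] with a derivation other than plain d/dt; the multiplier t^2 + 1 below is an arbitrary illustrative choice of ours:

```python
import sympy as sp

t = sp.symbols('t')
# A derivation of the commutative algebra R[t] other than d/dt
# (the multiplier t**2 + 1 is an arbitrary illustrative choice):
D = lambda f: sp.expand((t**2 + 1) * sp.diff(f, t))

x, n = t**3 - 2*t + 5, 4
assert sp.expand(D(x**n) - n * x**(n - 1) * D(x)) == 0  # part (2)
assert D(sp.Integer(7)) == 0                            # part (1): D(a.1) = 0
print("Theorem 4 verified for one sample")
```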
We now assume F = R, the field of real numbers.  Since char R = 0, we can divide both sides of

D^n(xy) = Σ_{j=0}^{n} C(n,j) D^j(x).D^{n-j}(y)

by n! and obtain

(1/n!) D^n(xy) = Σ_{j=0}^{n} ((1/j!) D^j(x)).((1/(n-j)!) D^{n-j}(y)).
Therefore, we can write down formally the series:

Σ_{n=0}^{∞} D^n/n! = I + D + D^2/2! + D^3/3! + ...,

where I is the identity map on A.  We want to show that in the case that A is a finite-dimensional algebra over the field R of reals, the series converges for every derivation mapping D.
To see this let dim A = m, and select a basis B for A; then for every D ∈ Der_R(A), there is an m×m matrix with entries in R associated with D relative to B.  Denote this matrix by [M]_D.  Then the series above has the form:

Σ_{n=0}^{∞} M^n/n! = I + M + M^2/2! + ...,

where M stands for [M]_D and I is the m×m identity matrix.  We have seen in Chapter I that this series converges for any square matrix M.  Moreover, if N is the matrix of D relative to another basis B', then the series Σ_{n=0}^{∞} N^n/n! converges to the same limit.  Hence we shall write

Σ_{n=0}^{∞} D^n/n! = exp D.

THEOREM 5:  Let A be a finite-dimensional algebra over R.  Then for every D ∈ Der_R(A), exp D is an algebra automorphism of A.
Proof:  Clearly exp D is linear.  For every x, y ∈ A, we have

(exp D)(x).(exp D)(y) = (Σ_{k=0}^{∞} D^k(x)/k!).(Σ_{l=0}^{∞} D^l(y)/l!)
= Σ_{n=0}^{∞} Σ_{k=0}^{n} (1/(k!(n-k)!)) D^k(x).D^{n-k}(y)
= Σ_{n=0}^{∞} (1/n!) Σ_{k=0}^{n} C(n,k) D^k(x).D^{n-k}(y)
= Σ_{n=0}^{∞} (1/n!) D^n(xy)        (by the Leibniz Rule)
= (exp D)(xy).

Therefore exp D is an algebra homomorphism for every D ∈ Der_R(A).  Now we need to show that exp D is a one-to-one map.  If λ_1, ..., λ_m is the set of all distinct eigenvalues of D, then e^{λ_1}, ..., e^{λ_m} is the set of all distinct eigenvalues of exp D, with the same multiplicities.  Hence the exponential of any matrix is non-singular, and therefore exp D is one-to-one.
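Theorem 5 can be tested numerically on the associative algebra M_2(R).  In the sketch below (an illustration of ours, not part of the thesis) the derivation is the inner derivation x |---> ax - xa for a random a, represented as a 4×4 matrix so that exp D becomes an ordinary matrix exponential; the second check uses the classical identity that exp of this derivation is conjugation by e^a:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# A = M_2(R); the derivation D(x) = ax - xa for a random a (illustrative choice)
a = rng.standard_normal((2, 2))
D = lambda x: a @ x - x @ a

# Represent D as a 4x4 matrix acting on vec(x), then exponentiate it.
E4 = np.eye(4)
basis = [E4[:, i].reshape(2, 2) for i in range(4)]       # matrix units of M_2(R)
M = np.column_stack([D(b).reshape(4) for b in basis])    # [M]_D in this basis
expD = lambda x: (expm(M) @ x.reshape(4)).reshape(2, 2)

x = rng.standard_normal((2, 2))
y = rng.standard_normal((2, 2))

# exp D is multiplicative: (exp D)(xy) = (exp D)(x).(exp D)(y)
assert np.allclose(expD(x @ y), expD(x) @ expD(y))
# and it agrees with conjugation by the matrix exponential e^a:
assert np.allclose(expD(x), expm(a) @ x @ expm(-a))
print("exp D is an algebra automorphism (numerically)")
```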
COROLLARY:  The Lie algebra of derivations Der_R(A) is isomorphic to the Lie algebra of the group of automorphisms of A, Aut(A).

IV.II  INNER DERIVATIONS OF ASSOCIATIVE AND LIE ALGEBRAS
Let A be an associative algebra over a field F.  If a is any element of A, then a determines two mappings

a_L : x |---> ax    and    a_R : x |---> xa

of A into A.  These are called the left multiplication and the right multiplication by a.  The bilinearity conditions for the algebra multiplication in A imply that a_L and a_R are linear mappings.  Let D_a = a_R - a_L.  Hence D_a is a linear mapping of A into A.  We also have

D_a(xy) = xya - axy
= xya - axy + xay - xay
= (xa - ax)y + x(ya - ay)
= D_a(x).y + x.D_a(y),

hence D_a is a derivation in A.  We call D_a the inner derivation determined by a.
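A quick numerical check of this computation, taking A = M_2(R) with random matrices (an illustrative choice of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
a, x, y = (rng.standard_normal((2, 2)) for _ in range(3))

# D_a = a_R - a_L on A = M_2(R):  D_a(x) = xa - ax
D_a = lambda x: x @ a - a @ x

# The Leibniz rule D_a(xy) = D_a(x).y + x.D_a(y)
assert np.allclose(D_a(x @ y), D_a(x) @ y + x @ D_a(y))
print("D_a is a derivation of M_2(R)")
```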
THEOREM 1:  If A is an associative algebra then the set of inner derivations in A is an ideal of Der_F(A).

Proof:  Let D_a ∈ Inn(A) and D ∈ Der_F(A), where Inn(A) = {D_a : a ∈ A} = {a_R - a_L : a ∈ A}.  We show that [D_a, D] ∈ Inn(A).  That is, we must show [D_a, D] = b_R - b_L for some b ∈ A.

[D_a, D](x) = (D_a o D - D o D_a)(x)
= ((a_R - a_L) o D)(x) - (D o (a_R - a_L))(x)
= a_R(D(x)) - a_L(D(x)) - D(a_R(x)) + D(a_L(x))
= D(x)a - aD(x) - D(xa) + D(ax)
= D(x)a - aD(x) - D(x)a - xD(a) + D(a)x + aD(x)
= D(a)x - xD(a)
= x(-D(a)) - (-D(a))x
= ((-D(a))_R - (-D(a))_L)(x)
= (b_R - b_L)(x), by letting b = -D(a).

Therefore [D_a, D] is an inner derivation; hence the set of inner derivations in A is an ideal in Der_F(A).
Next let L be a Lie algebra, with the algebra multiplication in L denoted by [x,y] for all x, y ∈ L.  Now we are going to study the concept of inner derivations in L.  We first introduce the very useful concept of adjoint mappings.
DEFINITION 1:  Let L be a Lie algebra and a an element of L.  The linear mapping x |---> [a,x] of L into L is called the adjoint mapping of a and is denoted by ad a.

THEOREM 2:  If L is a Lie algebra, then ad a is a derivation in L, for each a ∈ L.
Proof:  Evidently ad a is an endomorphism of L.  Moreover,

(ad a)[x,y] = [a,[x,y]]
= -[y,[a,x]] - [x,[y,a]]    by the Jacobi Identity.

Since multiplication in a Lie algebra is anticommutative we have

(ad a)[x,y] = [[a,x],y] + [x,[a,y]]
= [(ad a)(x), y] + [x, (ad a)(y)].

Thus ad a is a derivation in L.

DEFINITION 2:  The mapping ad a is also called the inner derivation determined by a ∈ L.
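The derivation property of ad a can be checked on a concrete Lie algebra; below we use (R^3, ×), the cross-product Lie algebra, where the derivation identity is the classical vector triple identity (the choice of this algebra is ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
a, x, y = (rng.standard_normal(3) for _ in range(3))

# (R^3, cross product) is a Lie algebra; ad a = [a, .] = np.cross(a, .)
ad_a = lambda v: np.cross(a, v)

# Derivation property: (ad a)[x,y] = [(ad a)x, y] + [x, (ad a)y]
lhs = ad_a(np.cross(x, y))
rhs = np.cross(ad_a(x), y) + np.cross(x, ad_a(y))
assert np.allclose(lhs, rhs)
print("ad a is a derivation of (R^3, cross)")
```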
Let L be a Lie algebra, and let

Adj(L) = {ad a : a ∈ L}

denote the set of all adjoint mappings in L (i.e. the set of all inner derivations in L).

THEOREM 3:  If L is a Lie algebra over a field F, then Adj(L) is an ideal in Der_F(L).
Proof:  We first show that Adj(L) is a vector subspace of Der_F(L), by showing

ad a + ad b = ad(a+b)  and  α(ad a) = ad(αa), for all a, b ∈ L and α ∈ F.

This follows immediately from the identities below:

ad(a+b)(x) = [a+b, x] = [a,x] + [b,x] = (ad a)(x) + (ad b)(x)

and

(ad αa)(x) = [αa, x] = α[a,x] = α(ad a)(x).

Next we show that [D, ad a] ∈ Adj(L) for any a ∈ L and D ∈ Der_F(L), thus establishing the fact that Adj(L) is closed under multiplication by elements from Der_F(L).  Consider:

[D, ad a](x) = (D o ad a - ad a o D)(x)
= D([a,x]) - [a, D(x)]
= [D(a), x] + [a, D(x)] - [a, D(x)]
= [D(a), x]
= (ad D(a))(x).

Therefore Adj(L) is an ideal in Der_F(L).
Now since Adj(L) is an ideal in Der_F(L), we can construct the quotient Lie algebra of Der_F(L) by Adj(L).  We call it the Lie algebra of outer derivations on L, and denote it by

Out(L) = Der_F(L)/Adj(L).

THEOREM 4:  Out(L) is a Lie algebra.  The proof follows immediately from the following lemma.

LEMMA 1:  If L is a Lie algebra and J an ideal of L, then L/J is a Lie algebra.
Proof:  Let a+J be an element in L/J where a ∈ L; then for any a'+J ∈ L/J, the bracket [a+J, a'+J] = [a,a'] + J is a well-defined element of L/J, since J is an ideal of L.  Hence L/J is a Lie algebra.
THEOREM 5:  Let L be a Lie algebra.  Then ad : L ---> End_F(L) is a Lie algebra homomorphism.
Proof:  We have seen that ad(a+b) = ad(a) + ad(b) and ad(αa) = α ad(a) for all a, b ∈ L and α ∈ F; hence ad is a vector space homomorphism of L into End_F(L).  It remains to show ad preserves multiplication.  Let a, b ∈ L; then we must show

ad[a,b] = (ad a o ad b) - (ad b o ad a).

That is, we must show for any x ∈ L,

ad[a,b](x) = (ad a o ad b)(x) - (ad b o ad a)(x):

ad[a,b](x) = [[a,b], x]
= -[[b,x],a] - [[x,a],b]    by Jacobi's identity
= [a,[b,x]] - [b,[a,x]]
= (ad a)([b,x]) - (ad b)([a,x])
= (ad a)(ad b(x)) - (ad b)(ad a(x))
= (ad a o ad b)(x) - (ad b o ad a)(x).

Therefore ad is a Lie algebra homomorphism.  The kernel of this homomorphism,

ker(ad) = { x ∈ L : [x,y] = 0 for all y ∈ L },

is an ideal of L.  This ideal is called the center of L.

DEFINITION 3:  Let L be a Lie algebra over F.  A subspace B of L is called a characteristic ideal of L if B is stable under every derivation of L, i.e. D(b) ∈ B for every D ∈ Der_F(L) and b ∈ B.

THEOREM 6:  Let L be a Lie algebra; then the center Z(L) is a characteristic ideal of L.
Proof:  First recall that Z(L) is an ideal of L (Theorem 1, Chapter 3).  It remains to show Z(L) is stable under every D ∈ Der(L).  Let z ∈ Z(L) and D ∈ Der(L); we need to show D(z) ∈ Z(L), that is, we must show [D(z), y] = 0 for any y ∈ L.  Now since z ∈ Z(L), [z,y] = 0 for all y ∈ L, so D([z,y]) = 0.  Also we have

D([z,y]) = [D(z), y] + [z, D(y)] = [D(z), y] + 0,

thus [D(z), y] = 0 and hence D(z) ∈ Z(L).  Therefore Z(L) is a characteristic ideal of L.
DEFINITION 4:  A Lie algebra L is said to be complete if

1.  Z(L) = {0},
2.  Der_F(L) = Adj(L).
THEOREM 7:  Let L be a Lie algebra and I an ideal of L.  If I is complete, there is an ideal J of L such that L = I ⊕ J.

Proof:  Consider the set J = { x ∈ L : [x,a] = 0 for every a ∈ I }.

Claim:  J is an ideal in L.  Evidently J is a subspace of L.  Let b ∈ J and x ∈ L; then by the Jacobi identity

[a,[b,x]] = -[x,[a,b]] - [b,[x,a]] = 0 - [b,a'], where a' = [x,a] ∈ I;

hence [a,[b,x]] = 0 for all a ∈ I, and [b,x] ∈ J.  Hence J is an ideal.

Next we will show that I ∩ J = {0}.  For let c ∈ I ∩ J; then [c,a] = 0 for all a ∈ I, hence c is in the center of I; but since I is complete, Z(I) = {0}, and thus c = 0.  Hence I ∩ J = {0}.

Now let x ∈ L; since I is an ideal of L, the derivation ad x maps I into itself, and hence restricting ad x to I induces a derivation D in I.  This D is inner, and so we have an element a ∈ I such that

D(y) = [y,x] = [y,a] for all y ∈ I.

Then b = x - a ∈ J and x = b + a; thus L = I + J, and since I ∩ J = {0}, we have L = I ⊕ J.
EXAMPLE 1:  Let us consider the two-dimensional Lie algebra L with basis {e_1, e_2}, where [e_1,e_2] = e_1 and [e_2,e_1] = -e_1, and all other products of base elements are 0.  The derived algebra is [L,L] = {a e_1 : a ∈ F} = F e_1.

If D is a derivation in L, then D(e_1) = a e_1 for some a ∈ F (a derivation maps the derived algebra into itself, since D([x,y]) = [D(x),y] + [x,D(y)]).  Also ad(a e_2) has the property

(ad(a e_2))(e_1) = [a e_2, e_1] = a [e_2,e_1] = -a e_1.

Hence E = D + ad(a e_2) is a derivation in L, and

E(e_1) = D(e_1) + ad(a e_2)(e_1) = a e_1 + [a e_2, e_1] = a e_1 - a e_1 = 0.

Then [e_1,e_2] = e_1 implies E(e_1) = E([e_1,e_2]), so

0 = [E(e_1), e_2] + [e_1, E(e_2)] = 0 + [e_1, E(e_2)] = [e_1, E(e_2)],

which implies E(e_2) = γ e_1 for some γ ∈ F.  Now consider ad(γ e_1):

ad(γ e_1)(e_1) = [γ e_1, e_1] = 0  and  ad(γ e_1)(e_2) = [γ e_1, e_2] = γ e_1.

Hence E = ad(γ e_1) is an inner derivation, and thus D = E - ad(a e_2) is also an inner derivation.  Thus we have shown that every derivation in L is inner.

Now let us find the center of L:

Z(L) = { x ∈ L : [x,y] = 0 for all y ∈ L }
= { a_1 e_1 + a_2 e_2 ∈ L : [a_1 e_1 + a_2 e_2, b_1 e_1 + b_2 e_2] = 0 for all b_1, b_2 ∈ F }
= { a_1 e_1 + a_2 e_2 ∈ L : (a_1 b_2 - a_2 b_1) e_1 = 0 for all b_1, b_2 ∈ F }
= {0}, since {e_1, e_2} is a basis.

Thus L is a complete Lie algebra.
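The conclusion of the example can be double-checked symbolically: imposing the Leibniz rule on a general linear map of L forces its matrix into the same shape as the matrices of the maps ad(x), so every derivation is inner.  A sketch (the coordinate conventions and symbol names are ours):

```python
import sympy as sp

p, q, r, s = sp.symbols('p q r s')

# Vectors as coordinate pairs (c1, c2) meaning c1*e1 + c2*e2;
# the bracket is determined by [e1,e2] = e1:
def bracket(u, v):
    return (u[0]*v[1] - u[1]*v[0], 0)

# A general linear map D: D(e1) = (p, r), D(e2) = (q, s)
De1, De2 = (p, r), (q, s)

# Leibniz on the only nontrivial product: D[e1,e2] = [D e1, e2] + [e1, D e2]
lhs = De1
rhs = tuple(u + v for u, v in zip(bracket(De1, (0, 1)), bracket((1, 0), De2)))
sol = sp.solve([sp.Eq(l, m) for l, m in zip(lhs, rhs)], [p, q, r, s], dict=True)

# The constraint forces r = s = 0, leaving D(e1) = p*e1, D(e2) = q*e1;
# such a map equals ad(q*e1 - p*e2), i.e. every derivation of L is inner.
d = sol[0]
assert d[r] == 0 and d[s] == 0
print("every derivation of L is inner")
```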
CHAPTER FIVE

SUMMARY AND CONCLUSION

The subject of Lie algebras has much to recommend it as a subject for study immediately following courses on general abstract algebra and linear algebra, both because of the beauty of its results and its structure, and because of its many contacts with other branches of mathematics.  In this thesis I have tried not to make the treatment too abstract, and have consistently followed the point of view of treating the theory as a branch of linear algebra.  No attempt has been made to indicate the historical development of the subject.  I just want to point out that the theory of Lie algebras is an outgrowth of the Lie theory of continuous groups.
The basic purpose of this thesis is to introduce the basic ideas of Lie algebras to the reader with some basic knowledge of abstract and elementary linear algebra.  In this study, Lie algebras are considered from a purely algebraic point of view, without reference to Lie groups and differential geometry.  Such a viewpoint has the advantage of going immediately into the discussion of Lie algebras without first establishing the topological machinery needed to define Lie groups, from which Lie algebras are usually introduced.

In Chapter I we summarize for the reader's convenience, rather quickly, some of the basic concepts of linear algebra with which he is assumed to be familiar.  In Chapter II we introduce the language of algebras in a form designed for the material developed in the later chapters.

Chapters III and IV were devoted to the study of Lie algebras and the Lie algebra of derivations.  Some definitions, basic properties, and several examples are given.  In Chapter II we also study the Lie algebra of antisymmetric operators, ideals and homomorphisms.  In Chapter III we introduce a Lie algebra structure on Der_F(A) and study the link between the group of automorphisms of A and the Lie algebra of derivations Der_F(A).

Some of the material introduced in this thesis is of fairly recent origin, including some material on the general structure of Lie algebras.  Finally, throughout this thesis I have made a great effort to introduce only the very basic concepts of this very extensive topic, and to study the relationships between different concepts.  Some important theorems are proved in detail, and most of the examples are worked out completely.
BIBLIOGRAPHY

[1] Nicolas Bourbaki, Elements of Mathematics, Algebra I. Reading, Mass.: Addison-Wesley, 1974.
[2] Yutze Chow, General Theory of Lie Algebras, vol. 1. New York: Gordon & Breach, 1978.
[3] Nathan Jacobson, Lie Algebras. New York: Interscience Publishers, 1962.
[4] I. N. Herstein, Abstract Algebra. New York: Macmillan Publishing Company, 1986.
[5] David J. Winter, Abstract Lie Algebras. Cambridge, Mass.: MIT Press, 1972.
[6] G. B. Seligman, Modular Lie Algebras. New York: Springer-Verlag, 1967.
[7] Chih-Han Sah, Abstract Algebra. New York: Academic Press, 1967.
[8] Paul R. Halmos, Finite Dimensional Vector Spaces. Princeton, N.J.: Van Nostrand, 1958.
[9] Jean-Pierre Serre, Lie Algebras and Lie Groups. New York: W.A. Benjamin, 1965.
[10] Irving Kaplansky, Lie Algebras and Locally Compact Groups. Chicago: Univ. of Chicago Press, 1974.
[11] Kenneth Hoffman and Ray Kunze, Linear Algebra, 2nd ed. Englewood Cliffs, N.J.: Prentice-Hall, 1971.