INTRODUCTION TO LINEAR ALGEBRA

Fourth Edition
MANUAL FOR INSTRUCTORS
Gilbert Strang
Massachusetts Institute of Technology
math.mit.edu/linearalgebra
web.mit.edu/18.06
video lectures: ocw.mit.edu
math.mit.edu/gs
www.wellesleycambridge.com
email: [email protected]
Wellesley - Cambridge Press
Box 812060
Wellesley, Massachusetts 02482
Solutions to Exercises
Problem Set 1.1, page 8
1 The combinations give (a) a line in R^3 (b) a plane in R^3 (c) all of R^3.
2 v + w = (2, 3) and v - w = (6, -1) will be the diagonals of the parallelogram with v and w as two sides going out from (0, 0).
3 This problem gives the diagonals v + w and v - w of the parallelogram and asks for the sides: the opposite of Problem 2. In this example v = (3, 3) and w = (2, -2).
4 3v + w = (7, 5) and cv + dw = (2c + d, c + 2d).
5 u + v = (-2, 3, 1) and u + v + w = (0, 0, 0) and 2u + 2v + w = (add first answers) = (-2, 3, 1). The vectors u, v, w are in the same plane because a combination gives (0, 0, 0). Stated another way: u = -v - w is in the plane of v and w.
6 The components of every cv + dw add to zero. c = 3 and d = 9 give (3, 3, -6).
7 The nine combinations c(2, 1) + d(0, 1) with c = 0, 1, 2 and d = 0, 1, 2 will lie on a lattice. If we took all whole numbers c and d, the lattice would lie over the whole plane.
8 The other diagonal is v - w (or else w - v). Adding diagonals gives 2v (or 2w).
9 The fourth corner can be (4, 4) or (4, 0) or (-2, 2). Three possible parallelograms!
10 i + j = (1, 1, 0) is in the base (x-y plane). i + j + k = (1, 1, 1) is the opposite corner from (0, 0, 0). Points in the cube have 0 <= x <= 1, 0 <= y <= 1, 0 <= z <= 1.
11 Four more corners (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1). The center point is (1/2, 1/2, 1/2). Centers of faces are (1/2, 1/2, 0), (1/2, 1/2, 1) and (0, 1/2, 1/2), (1, 1/2, 1/2) and (1/2, 0, 1/2), (1/2, 1, 1/2).
12 A four-dimensional cube has 2^4 = 16 corners and 2·4 = 8 three-dimensional faces and 24 two-dimensional faces and 32 edges, as in Worked Example 2.4 A.
13 Sum = zero vector. Sum = -2:00 vector = 8:00 vector. 2:00 is 30° from horizontal = (cos π/6, sin π/6) = (√3/2, 1/2).
14 Moving the origin to 6:00 adds j = (0, 1) to every vector. So the sum of twelve vectors changes from 0 to 12j = (0, 12).
15 The point (3/4)v + (1/4)w is three-fourths of the way to v starting from w. The vector (1/4)v + (1/4)w is halfway to u = (1/2)v + (1/2)w. The vector v + w is 2u (the far corner of the parallelogram).
16 All combinations with c + d = 1 are on the line that passes through v and w. The point V = -v + 2w is on that line but it is beyond w.
17 All vectors cv + cw are on the line passing through (0, 0) and u = (1/2)v + (1/2)w. That line continues out beyond v + w and back beyond (0, 0). With c >= 0, half of this line is removed, leaving a ray that starts at (0, 0).
18 The combinations cv + dw with 0 <= c <= 1 and 0 <= d <= 1 fill the parallelogram with sides v and w. For example, if v = (1, 0) and w = (0, 1) then cv + dw fills the unit square.
19 With c >= 0 and d >= 0 we get the infinite "cone" or "wedge" between v and w. For example, if v = (1, 0) and w = (0, 1), then the cone is the whole quadrant x >= 0, y >= 0. Question: What if w = -v? The cone opens to a half-space.
20 (a) (1/3)u + (1/3)v + (1/3)w is the center of the triangle between u, v and w; (1/2)u + (1/2)w lies between u and w. (b) To fill the triangle keep c >= 0, d >= 0, e >= 0, and c + d + e = 1.
21 The sum is (v - u) + (w - v) + (u - w) = zero vector. Those three sides of a triangle are in the same plane!
22 The vector (1/2)(u + v + w) is outside the pyramid because c + d + e = 1/2 + 1/2 + 1/2 > 1.
23 All vectors are combinations of u, v, w as drawn (not in the same plane). Start by seeing that cu + dv fills a plane, then adding ew fills all of R^3.
24 The combinations of u and v fill one plane. The combinations of v and w fill another plane. Those planes meet in a line: only the vectors cv are in both planes.
25 (a) For a line, choose u = v = w = any nonzero vector. (b) For a plane, choose u and v in different directions. A combination like w = u + v is in the same plane.
26 Two equations come from the two components: c + 3d = 14 and 2c + d = 8. The solution is c = 2 and d = 4. Then 2(1, 2) + 4(3, 1) = (14, 8).
27 The combinations of i = (1, 0, 0) and i + j = (1, 1, 0) fill the xy plane in xyz space.
28 There are 6 unknown numbers v1, v2, v3, w1, w2, w3. The six equations come from the components of v + w = (4, 5, 6) and v - w = (2, 5, 8). Add to find 2v = (6, 10, 14) so v = (3, 5, 7) and w = (1, 0, -1).
29 Two combinations out of infinitely many that produce b = (0, 1) are -2u + v and (1/2)w - (1/2)v. No, three vectors u, v, w in the x-y plane could fail to produce b if all three lie on a line that does not contain b. Yes, if one combination produces b then two (and infinitely many) combinations will produce b. This is true even if u = 0; the combinations can have different cu.
30 The combinations of v and w fill the plane unless v and w lie on the same line through (0, 0). Four vectors whose combinations fill 4-dimensional space: one example is the "standard basis" (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), and (0, 0, 0, 1).
31 The equations cu + dv + ew = b are

     2c -  d      = 1
    -c  + 2d -  e = 0
         -  d + 2e = 0

So d = 2e, then c = 3e, then 4e = 1: c = 3/4, d = 2/4, e = 1/4.
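The little system in Problem 31 can be verified in exact arithmetic. A minimal Python check (added here for illustration, not part of the manual), using Fractions so no rounding enters:

```python
from fractions import Fraction as F

# The system of Problem 31:
#    2c -  d       = 1
#   -c  + 2d -  e  = 0
#        -  d + 2e = 0
# Claimed solution: c = 3/4, d = 2/4, e = 1/4.
c, d, e = F(3, 4), F(2, 4), F(1, 4)
residuals = [2*c - d - 1, -c + 2*d - e, -d + 2*e]
print(residuals)  # three zero residuals confirm the solution
```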
Problem Set 1.2, page 19
1 u·v = -1.8 + 3.2 = 1.4, u·w = -4.8 + 4.8 = 0, v·w = 24 + 24 = 48 = w·v.
2 ||u|| = 1 and ||v|| = 5 and ||w|| = 10. Then 1.4 < (1)(5) and 48 < (5)(10), confirming the Schwarz inequality.
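The arithmetic in Problems 1 and 2 is easy to confirm mechanically. A small Python sketch; the vectors u = (-.6, .8), v = (3, 4), w = (8, 6) are inferred from the sums shown above (they are not stated in this excerpt):

```python
import math

# Vectors consistent with the dot products and lengths in Problems 1-2
u, v, w = (-0.6, 0.8), (3.0, 4.0), (8.0, 6.0)

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
norm = lambda a: math.sqrt(dot(a, a))

print(dot(u, v), dot(u, w), dot(v, w))   # approximately 1.4, 0.0, 48.0
print(norm(u), norm(v), norm(w))         # approximately 1.0, 5.0, 10.0
# Schwarz inequality |a.b| <= ||a|| ||b|| for each pair
assert abs(dot(u, v)) <= norm(u) * norm(v)
assert abs(dot(v, w)) <= norm(v) * norm(w)
```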
3 Unit vectors v/||v|| = (3/5, 4/5) = (.6, .8) and w/||w|| = (4/5, 3/5) = (.8, .6). The cosine of θ is (v/||v||)·(w/||w||) = 24/25. The vectors w, u, -w make 0°, 90°, 180° angles with w.
4 (a) v·(-v) = -1. (b) (v + w)·(v - w) = v·v + w·v - v·w - w·w = 1 - 1 = 0 so θ = 90° (notice v·w = w·v). (c) (v - 2w)·(v + 2w) = v·v - 4w·w = 1 - 4 = -3.
5 u1 = v/||v|| = (3, 1)/√10 and u2 = w/||w|| = (2, 1, 2)/3. U1 = (1, -3)/√10 is perpendicular to u1 (and so is (-1, 3)/√10). U2 could be (1, -2, 0)/√5: there is a whole plane of vectors perpendicular to u2, and a whole circle of unit vectors in that plane.
6 All vectors w = (c, 2c) are perpendicular to v. All vectors (x, y, z) with x + y + z = 0 lie on a plane. All vectors perpendicular to (1, 1, 1) and (1, 2, 3) lie on a line.
7 (a) cos θ = v·w/||v|| ||w|| = 1/(2)(1) so θ = 60° or π/3 radians. (b) cos θ = 0 so θ = 90° or π/2 radians. (c) cos θ = 2/(2)(2) = 1/2 so θ = 60° or π/3. (d) cos θ = -1/√2 so θ = 135° or 3π/4.
8 (a) False: v and w are any vectors in the plane perpendicular to u. (b) True: u·(v + 2w) = u·v + 2u·w = 0. (c) True: ||u - v||^2 = (u - v)·(u - v) splits into u·u + v·v = 2 when u·v = v·u = 0.
9 If v2w2/v1w1 = -1 then v2w2 = -v1w1 or v1w1 + v2w2 = v·w = 0: perpendicular!
10 Slopes 2/1 and -1/2 multiply to give -1: then v·w = 0 and the vectors (the directions) are perpendicular.
11 v·w < 0 means angle > 90°; these w's fill half of 3-dimensional space.
12 (1, 1) is perpendicular to (1, 5) - c(1, 1) if 6 - 2c = 0 or c = 3; v·(w - cv) = 0 if c = v·w/v·v. Subtracting cv is the key to perpendicular vectors.
13 The plane perpendicular to (1, 0, 1) contains all vectors (c, d, -c). In that plane, v = (1, 0, -1) and w = (0, 1, 0) are perpendicular.
14 One possibility among many: u = (1, -1, 0, 0), v = (0, 0, 1, -1), w = (1, 1, -1, -1) and (1, 1, 1, 1) are perpendicular to each other. "We can rotate those u, v, w in their 3D hyperplane."
15 (1/2)(x + y) = (2 + 8)/2 = 5; cos θ = 2√16/(√10 √10) = 8/10.
16 ||v||^2 = 1 + 1 + ··· + 1 = 9 so ||v|| = 3; u = v/3 = (1/3, ..., 1/3) is a unit vector in 9D; w = (1, -1, 0, ..., 0)/√2 is a unit vector in the 8D hyperplane perpendicular to v.
17 cos α = 1/√2, cos β = 0, cos γ = -1/√2. For any vector v, cos^2 α + cos^2 β + cos^2 γ = (v1^2 + v2^2 + v3^2)/||v||^2 = 1.
18 ||v||^2 = 4^2 + 2^2 = 20 and ||w||^2 = (-1)^2 + 2^2 = 5. Pythagoras is ||(3, 4)||^2 = 25 = 20 + 5.
19 Start from the rules (1), (2), (3) for v·w = w·v and u·(v + w) and (cv)·w. Use rule (2) for (v + w)·(v + w) = (v + w)·v + (v + w)·w. By rule (1) this is v·(v + w) + w·(v + w). Rule (2) again gives v·v + v·w + w·v + w·w = v·v + 2v·w + w·w. Notice v·w = w·v! The main point is to be free to open up parentheses.
20 We know that (v - w)·(v - w) = v·v - 2v·w + w·w. The Law of Cosines writes ||v|| ||w|| cos θ for v·w. When θ < 90° this v·w is positive, so in this case v·v + w·w is larger than ||v - w||^2.
21 2v·w <= 2||v|| ||w|| leads to ||v + w||^2 = v·v + 2v·w + w·w <= ||v||^2 + 2||v|| ||w|| + ||w||^2. This is (||v|| + ||w||)^2. Taking square roots gives ||v + w|| <= ||v|| + ||w||.
22 v1^2 w1^2 + 2 v1 w1 v2 w2 + v2^2 w2^2 <= v1^2 w1^2 + v1^2 w2^2 + v2^2 w1^2 + v2^2 w2^2 is true (cancel 4 terms) because the difference is v1^2 w2^2 + v2^2 w1^2 - 2 v1 w1 v2 w2, which is (v1 w2 - v2 w1)^2 >= 0.
23 cos β = w1/||w|| and sin β = w2/||w||. Then cos(β - α) = cos β cos α + sin β sin α = v1w1/||v|| ||w|| + v2w2/||v|| ||w|| = v·w/||v|| ||w||. This is cos θ because β - α = θ.
24 Example 6 gives |u1||U1| <= (1/2)(u1^2 + U1^2) and |u2||U2| <= (1/2)(u2^2 + U2^2). The whole line becomes .96 <= (.6)(.8) + (.8)(.6) <= (1/2)(.6^2 + .8^2) + (1/2)(.8^2 + .6^2) = 1. True: .96 < 1.
25 The cosine of θ is x/√(x^2 + y^2), near side over hypotenuse. Then |cos θ|^2 is not greater than 1: x^2/(x^2 + y^2) <= 1.
26 The vectors w = (x, y) with (1, 2)·w = x + 2y = 5 lie on a line in the xy plane. The shortest w on that line is (1, 2). (The Schwarz inequality ||w|| >= v·w/||v|| = √5 is an equality when cos θ = 1 and w = (1, 2) and ||w|| = √5.)
27 The length ||v - w|| is between 2 and 8 (triangle inequality when ||v|| = 5 and ||w|| = 3). The dot product v·w is between -15 and 15 by the Schwarz inequality.
28 Three vectors in the plane could make angles greater than 90° with each other: for example (1, 0), (-1, 4), (-1, -4). Four vectors could not do this (360° total angle). How many can do this in R^3 or R^n? Ben Harris and Greg Marks showed me that the answer is n + 1: the vectors from the center of a regular simplex in R^n to its n + 1 vertices all have negative dot products. If n + 2 vectors in R^n had negative dot products, project them onto the plane orthogonal to the last one. Now you have n + 1 vectors in R^(n-1) with negative dot products. Keep going to 4 vectors in R^2: no way!
29 For a specific example, pick v = (1, 2, -3) and then w = (-3, 1, 2). In this example cos θ = v·w/||v|| ||w|| = -7/(√14 √14) = -1/2 and θ = 120°. This always happens when x + y + z = 0:

    v·w = xz + xy + yz = (1/2)(x + y + z)^2 - (1/2)(x^2 + y^2 + z^2)

This is the same as v·w = 0 - (1/2)||v|| ||w||. Then cos θ = -1/2.
30 Wikipedia gives this proof of geometric mean G = (xyz)^(1/3) <= arithmetic mean A = (x + y + z)/3. First there is equality in case x = y = z. Otherwise A is somewhere between the three positive numbers, say for example z < A < y. Use the known inequality g <= a for the two positive numbers x and y + z - A. Their mean a = (1/2)(x + y + z - A) is (1/2)(3A - A) = the same as A! So a >= g says that A^3 >= g^2 A = x(y + z - A)A. But (y + z - A)A = (y - A)(A - z) + yz > yz. Substitute to find A^3 > xyz = G^3 as we wanted to prove. Not easy!
There are many proofs of G = (x1 x2 ··· xn)^(1/n) <= A = (x1 + x2 + ··· + xn)/n. In calculus you are maximizing G on the plane x1 + x2 + ··· + xn = n. The maximum occurs when all x's are equal.
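The arithmetic-geometric mean inequality proved above can also be sampled numerically. An illustrative spot check (not a proof), added here for convenience:

```python
import itertools
import math

# Check G = (xyz)^(1/3) <= A = (x + y + z)/3 on a grid of positive triples
for x, y, z in itertools.product([0.5, 1.0, 2.0, 3.0], repeat=3):
    G = (x * y * z) ** (1 / 3)
    A = (x + y + z) / 3
    assert G <= A + 1e-12
    if x == y == z:
        assert math.isclose(G, A)  # equality exactly when x = y = z
print("AM-GM holds on all sampled triples")
```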
31 The columns of the 4 by 4 "Hadamard matrix" (times 1/2) are perpendicular unit vectors:

    (1/2)H = (1/2) [ 1  1  1  1
                     1 -1  1 -1
                     1  1 -1 -1
                     1 -1 -1  1 ]
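A direct check that the columns of H/2 are perpendicular unit vectors. The sign pattern below is the standard 4 by 4 Hadamard matrix; the signs were lost in this transcription, so they are an assumption here:

```python
# Check (H/2)^T (H/2) = I: every pair of columns of H/2 is orthonormal
H = [[1,  1,  1,  1],
     [1, -1,  1, -1],
     [1,  1, -1, -1],
     [1, -1, -1,  1]]
n = 4
for i in range(n):
    for j in range(n):
        col_i = [H[k][i] / 2 for k in range(n)]
        col_j = [H[k][j] / 2 for k in range(n)]
        d = sum(a * b for a, b in zip(col_i, col_j))
        assert d == (1.0 if i == j else 0.0)
print("columns of H/2 are orthonormal")
```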
32 The commands V = randn(3, 30); D = sqrt(diag(V'*V)); U = V\D; will give 30 random unit vectors in the columns of U. Then u'*U is a row matrix of 30 dot products whose average absolute value may be close to 2/π.
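The constant 2/π is the average of |cos θ| when the angle between two unit vectors in the plane is uniformly distributed. A pure-Python estimate of that average (an illustrative analogue added here, not the MATLAB commands above):

```python
import math
import random

random.seed(1)
# Average |u . v| = |cos(theta)| for random unit vectors in the plane
N = 200_000
total = 0.0
for _ in range(N):
    t = random.uniform(0.0, 2.0 * math.pi)   # angle between the two unit vectors
    total += abs(math.cos(t))
print(total / N, 2 / math.pi)  # the estimate and 2/pi agree closely
```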
Problem Set 1.3, page 29
1 2s1 + 3s2 + 4s3 = (2, 5, 9). The same vector b comes from S times x = (2, 3, 4):

    [ 1 0 0 ] [ 2 ]   [ (row 1)·x ]   [ 2 ]
    [ 1 1 0 ] [ 3 ] = [ (row 2)·x ] = [ 5 ]
    [ 1 1 1 ] [ 4 ]   [ (row 3)·x ]   [ 9 ]
2 The solutions are y1 = 1, y2 = 0, y3 = 0 (right side = column 1) and y1 = 1, y2 = 3, y3 = 5. That second example illustrates that the first n odd numbers add to n^2.
3 y1 = B1, y1 + y2 = B2, y1 + y2 + y3 = B3 gives y1 = B1, y2 = -B1 + B2, y3 = -B2 + B3:

    [ y1 ]   [  1  0  0 ] [ B1 ]
    [ y2 ] = [ -1  1  0 ] [ B2 ]
    [ y3 ]   [  0 -1  1 ] [ B3 ]

The inverse of S = [1 0 0; 1 1 0; 1 1 1] is A = [1 0 0; -1 1 0; 0 -1 1]: independent columns in A and S!
4 The combination 0w1 + 0w2 + 0w3 always gives the zero vector, but this problem looks for other zero combinations (then the vectors are dependent, they lie in a plane): w2 = (w1 + w3)/2, so one combination that gives zero is (1/2)w1 - w2 + (1/2)w3.
5 The rows of the 3 by 3 matrix in Problem 4 must also be dependent: r2 = (1/2)(r1 + r3). The column and row combinations that produce 0 are the same: this is unusual.
6 c = 3: [1 3 5; 1 2 4; 1 1 3] has column 3 = 2(column 1) + column 2.
c = -1: [1 0 -1; 1 1 0; 0 1 1] has column 3 = -(column 1) + column 2.
c = 0: [0 0 0; 2 1 5; 3 3 6] has column 3 = 3(column 1) - column 2.
7 All three rows are perpendicular to the solution x (the three equations r1·x = 0 and r2·x = 0 and r3·x = 0 tell us this). Then the whole plane of the rows is perpendicular to x (the plane is also perpendicular to all multiples cx).
8 x1 = b1, x2 - x1 = b2, x3 - x2 = b3, x4 - x3 = b4 gives x1 = b1, x2 = b1 + b2, x3 = b1 + b2 + b3, x4 = b1 + b2 + b3 + b4:

    [ x1 ]   [ 1 0 0 0 ] [ b1 ]
    [ x2 ] = [ 1 1 0 0 ] [ b2 ]   = A^(-1) b
    [ x3 ]   [ 1 1 1 0 ] [ b3 ]
    [ x4 ]   [ 1 1 1 1 ] [ b4 ]
9 The cyclic difference matrix C has a line of solutions (in 4 dimensions) to Cx = 0:

    [  1  0  0 -1 ] [ x1 ]   [ 0 ]
    [ -1  1  0  0 ] [ x2 ] = [ 0 ]   when x = (c, c, c, c) = any constant vector.
    [  0 -1  1  0 ] [ x3 ]   [ 0 ]
    [  0  0 -1  1 ] [ x4 ]   [ 0 ]
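Problem 9's claim is easy to confirm: the cyclic difference matrix times any constant vector gives zero. A short check added for illustration:

```python
# The cyclic difference matrix C annihilates every constant vector (c, c, c, c)
C = [[ 1,  0,  0, -1],
     [-1,  1,  0,  0],
     [ 0, -1,  1,  0],
     [ 0,  0, -1,  1]]
c = 7
x = [c, c, c, c]
Cx = [sum(C[i][j] * x[j] for j in range(4)) for i in range(4)]
print(Cx)  # [0, 0, 0, 0]
```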
10 z2 - z1 = b1, z3 - z2 = b2, 0 - z3 = b3 gives z1 = -b1 - b2 - b3, z2 = -b2 - b3, z3 = -b3:

    [ z1 ]   [ -1 -1 -1 ] [ b1 ]
    [ z2 ] = [  0 -1 -1 ] [ b2 ]   = A^(-1) b
    [ z3 ]   [  0  0 -1 ] [ b3 ]
11 The forward differences of the squares are (t + 1)^2 - t^2 = t^2 + 2t + 1 - t^2 = 2t + 1. Differences of the nth power are (t + 1)^n - t^n = n t^(n-1) + ··· . The leading term is the derivative n t^(n-1). The binomial theorem gives all the terms of (t + 1)^n.
12 Centered difference matrices of even size seem to be invertible. Look at eqns. 1 and 4:

    [  0  1  0  0 ] [ x1 ]   [ b1 ]   First solve x2 = b1 and x3 = -b4.
    [ -1  0  1  0 ] [ x2 ] = [ b2 ]   Then x1 = -b2 - b4
    [  0 -1  0  1 ] [ x3 ]   [ b3 ]   and  x4 = b1 + b3.
    [  0  0 -1  0 ] [ x4 ]   [ b4 ]
13 Odd size: the five centered difference equations lead to b1 + b3 + b5 = 0.

    x2      = b1
    x3 - x1 = b2
    x4 - x2 = b3
    x5 - x3 = b4
       - x4 = b5

Add equations 1, 3, 5: the left side of the sum is zero, and the right side is b1 + b3 + b5. There cannot be a solution unless b1 + b3 + b5 = 0.
14 An example is (a, b) = (3, 6) and (c, d) = (1, 2). The ratios a/c and b/d are equal. Then ad = bc. Then (when you divide by bd) the ratios a/b and c/d are equal!
Problem Set 2.1, page 40
1 The columns are i = (1, 0, 0) and j = (0, 1, 0) and k = (0, 0, 1) and b = (2, 3, 4) = 2i + 3j + 4k.
2 The planes are the same: 2x = 4 is x = 2, 3y = 9 is y = 3, and 4z = 16 is z = 4. The solution is the same point X = x. The columns are changed; but same combination.
3 The solution is not changed! The second plane and row 2 of the matrix and all columns of the matrix (vectors in the column picture) are changed.
4 If z = 2 then x + y = 0 and x - y = z give the point (1, -1, 2). If z = 0 then x + y = 6 and x - y = 4 produce (5, 1, 0). Halfway between those is (3, 0, 1).
5 If x, y, z satisfy the first two equations they also satisfy the third equation. The line L of solutions contains v = (1, 1, 0) and w = (1/2, 1, 1/2) and u = (1/2)v + (1/2)w and all combinations cv + dw with c + d = 1.
6 Equation 1 + equation 2 - equation 3 is now 0 = 4. Line misses plane; no solution.
7 Column 3 = Column 1 makes the matrix singular. Solutions (x, y, z) = (1, 1, 0) or (0, 1, 1) and you can add any multiple of (-1, 0, 1); b = (4, 6, c) needs c = 10 for solvability (then b lies in the plane of the columns).
8 Four planes in 4-dimensional space normally meet at a point. The solution to Ax = (3, 3, 3, 2) is x = (0, 0, 1, 2) if A has columns (1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0), (1, 1, 1, 1). The equations are x + y + z + t = 3, y + z + t = 3, z + t = 3, t = 2.
9 (a) Ax = (18, 5, 0) and (b) Ax = (3, 4, 5, 5).
10 Multiplying as linear combinations of the columns gives the same Ax. By rows or by columns: 9 separate multiplications for 3 by 3.
11 Ax equals (14, 22) and (0, 0) and (9, 7).
12 Ax equals (z, y, x) and (0, 0, 0) and (3, 3, 6).
13 (a) x has n components and Ax has m components. (b) Planes from each equation in Ax = b are in n-dimensional space, but the columns are in m-dimensional space.
14 2x + 3y + z + 5t = 8 is Ax = b with the 1 by 4 matrix A = [2 3 1 5]. The solutions x fill a 3D "plane" in 4 dimensions. It could be called a hyperplane.
15 (a) I = [1 0; 0 1]. (b) P = [0 1; 1 0].
16 90° rotation from R = [0 1; -1 0], 180° rotation from R^2 = [-1 0; 0 -1] = -I.
17 P = [0 1 0; 0 0 1; 1 0 0] produces (y, z, x) and Q = [0 0 1; 1 0 0; 0 1 0] recovers (x, y, z). Q is the inverse of P.
18 E = [1 0; -1 1] and E = [1 0 0; -1 1 0; 0 0 1] subtract the first component from the second.
19 E = [1 0 0; 0 1 0; 1 0 1] and E^(-1) = [1 0 0; 0 1 0; -1 0 1]; Ev = (3, 4, 8) and E^(-1)Ev recovers (3, 4, 5).
20 P1 = [1 0; 0 0] projects onto the x-axis and P2 = [0 0; 0 1] projects onto the y-axis. v = (5, 7) has P1 v = (5, 0) and P2 P1 v = (0, 0).
21 R = (1/2)[√2 -√2; √2 √2] rotates all vectors by 45°. The columns of R are the results from rotating (1, 0) and (0, 1)!
22 The dot product Ax = [1 4 5][x; y; z] = (1 by 3)(3 by 1) is zero for points (x, y, z) on a plane in three dimensions. The columns of A are one-dimensional vectors.
23 A = [1 2; 3 4] and x = [5 -2]' and b = [1 7]'. r = b - A*x prints as zero.
24 A*v = [3 4 5]' and v'*v = 50. But v*A gives an error message from 3 by 1 times 3 by 3.
25 ones(4, 4)*ones(4, 1) = [4 4 4 4]'; B*w = [10 10 10 10]'.
26 The row picture has two lines meeting at the solution (4, 2). The column picture will have 4(1, 1) + 2(-2, 1) = 4(column 1) + 2(column 2) = right side (0, 6).
27 The row picture shows 2 planes in 3-dimensional space. The column picture is in 2-dimensional space. The solutions normally lie on a line.
28 The row picture shows four lines in the 2D plane. The column picture is in four-dimensional space. No solution unless the right side is a combination of the two columns.
29 u2 = (.7, .3) and u3 = (.65, .35). The components add to 1. They are always positive. u7, v7, w7 are all close to (.6, .4). Their components still add to 1.
30 [.8 .3; .2 .7][.6; .4] = [.6; .4] = steady state s. No change when multiplied by [.8 .3; .2 .7].
31 M = [8 3 4; 1 5 9; 6 7 2] = [5+u 5-u+v 5-v; 5-u-v 5 5+u+v; 5+v 5+u-v 5-u] with u = 3 and v = 1. M3(1, 1, 1) = (15, 15, 15); M4(1, 1, 1, 1) = (34, 34, 34, 34) because 1 + 2 + ··· + 16 = 136, which is 4(34).
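The Markov iteration in Problems 29 and 30 can be run directly. A sketch assuming the start u0 = (1, 0), which reproduces the components u2 = (.7, .3) and u3 = (.65, .35) shown above:

```python
# Repeated multiplication by A = [.8 .3; .2 .7] approaches the steady state (.6, .4)
A = [[0.8, 0.3],
     [0.2, 0.7]]
u = [1.0, 0.0]
history = []
for _ in range(7):
    u = [A[0][0] * u[0] + A[0][1] * u[1],
         A[1][0] * u[0] + A[1][1] * u[1]]
    history.append(u)
print(history[1])   # u2, approximately [0.7, 0.3]
print(history[2])   # u3, approximately [0.65, 0.35]
print(u)            # u7, close to [0.6, 0.4]; components still add to 1
```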
32 A is singular when its third column w is a combination cu + dv of the first columns. A typical column picture has b outside the plane of u, v, w. A typical row picture has the intersection line of two planes parallel to the third plane. Then no solution.
33 w = (5, 7) is 5u + 7v. Then Aw equals 5 times Au plus 7 times Av.
34

    [  2 -1  0  0 ] [ x1 ]   [ 1 ]                      [ x1 ]   [ 4 ]
    [ -1  2 -1  0 ] [ x2 ] = [ 2 ]   has the solution   [ x2 ] = [ 7 ]
    [  0 -1  2 -1 ] [ x3 ]   [ 3 ]                      [ x3 ]   [ 8 ]
    [  0  0 -1  2 ] [ x4 ]   [ 4 ]                      [ x4 ]   [ 6 ]
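The solution claimed in Problem 34 can be checked with one matrix-vector multiply. A quick sketch:

```python
# Verify that x = (4, 7, 8, 6) solves the -1, 2, -1 system with b = (1, 2, 3, 4)
K = [[ 2, -1,  0,  0],
     [-1,  2, -1,  0],
     [ 0, -1,  2, -1],
     [ 0,  0, -1,  2]]
x = [4, 7, 8, 6]
b = [sum(K[i][j] * x[j] for j in range(4)) for i in range(4)]
print(b)  # [1, 2, 3, 4]
```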
35 x = (1, ..., 1) gives Sx = sum of each row = 1 + ··· + 9 = 45 for Sudoku matrices. The 6 row orders (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1) are in Section 2.7. The same 6 permutations of blocks of rows produce Sudoku matrices, so 6^4 = 1296 orders of the 9 rows all stay Sudoku. (And also 1296 permutations of the 9 columns.)
Problem Set 2.2, page 51
1 Multiply equation 1 by l21 = 10/2 = 5 and subtract to find 2x + 3y = 1 and -6y = 6. The pivots to circle are 2 and -6.
2 -6y = 6 gives y = -1. Then 2x + 3y = 1 gives x = 2. Multiplying the right side (1, 11) by 4 will multiply the solution by 4 to give the new solution (x, y) = (8, -4).
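The elimination step in Problems 1 and 2 (multiplier 10/2 = 5 on the system 2x + 3y = 1, 10x + 9y = 11) can be replayed in exact arithmetic. An illustrative sketch:

```python
from fractions import Fraction as F

# One elimination step on  2x + 3y = 1,  10x + 9y = 11
a = [[F(2), F(3), F(1)],
     [F(10), F(9), F(11)]]
l21 = a[1][0] / a[0][0]                              # multiplier 5
a[1] = [a[1][j] - l21 * a[0][j] for j in range(3)]   # row2 := row2 - 5*row1
# back substitution
y = a[1][2] / a[1][1]
x = (a[0][2] - a[0][1] * y) / a[0][0]
print(a[1][1], y, x)  # second pivot -6, then y = -1, x = 2
```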
3 Subtract -1/2 (or add 1/2) times equation 1. The new second equation is 3y = 3. Then y = 1 and x = 5. If the right side changes sign, so does the solution: (x, y) = (-5, -1).
4 Subtract l = c/a times equation 1. The new second pivot multiplying y is d - (cb/a), or (ad - bc)/a. Then y = (ag - cf)/(ad - bc).
5 6x + 4y is 2 times 3x + 2y. There is no solution unless the right side is 2·10 = 20. Then all the points on the line 3x + 2y = 10 are solutions, including (0, 5) and (4, -1). (The two lines in the row picture are the same line, containing all solutions.)
6 Singular system if b = 4, because 4x + 8y is 2 times 2x + 4y. Then g = 32 makes the lines become the same: infinitely many solutions like (8, 0) and (0, 4).
7 If a = 2 elimination must fail (two parallel lines in the row picture). The equations have no solution. With a = 0, elimination will stop for a row exchange. Then 3y = -3 gives y = -1 and 4x + 6y = 6 gives x = 3.
8 If k = 3 elimination must fail: no solution. If k = -3, elimination gives 0 = 0 in equation 2: infinitely many solutions. If k = 0 a row exchange is needed: one solution.
9 On the left side, 6x - 4y is 2 times (3x - 2y). Therefore we need b2 = 2b1 on the right side. Then there will be infinitely many solutions (two parallel lines become one single line).
10 The equation y = 1 comes from elimination (subtract x + y = 5 from x + 2y = 6). Then x = 4 and 5x - 4y = c = 16.
11 (a) Another solution is (1/2)(x + X, y + Y, z + Z). (b) If 25 planes meet at two points, they meet along the whole line through those two points.
12 Elimination leads to an upper triangular system; then comes back substitution.

    2x + 3y +  z = 8           x = 2
          y + 3z = 4   gives   y = 1    If a zero is at the start of row 2 or 3,
              8z = 8           z = 1    that avoids a row operation.

13 2x - 3y = 3, 4x - 5y + z = 7, 2x - y - 3z = 5 gives 2x - 3y = 3, y + z = 1, 2y - 3z = 2, and then 2x - 3y = 3, y + z = 1, -5z = 0, so x = 3, y = 1, z = 0. Subtract 2 times row 1 from row 2, subtract 1 times row 1 from row 3, subtract 2 times row 2 from row 3.
14 Subtract 2 times row 1 from row 2 to reach (d - 10)y - z = 2. Equation (3) is -y - z = 3. If d = 10 exchange rows 2 and 3. If d = 11 the system becomes singular.
15 The second pivot position will contain -2 - b. If b = -2 we exchange with row 3. If b = -1 (singular case) the second equation is -y - z = 0. A solution is (1, 1, -1).
16 (a) Example of 2 exchanges (exchange 1 and 2, then 2 and 3):

    0x + 0y + 2z = 4
     x + 2y + 2z = 5
    0x + 3y + 4z = 6

(b) Exchange but then break down (rows 1 and 3 are not consistent):

    0x + 3y + 4z = 4
     x + 2y + 2z = 5
    0x + 3y + 4z = 6

17 If row 1 = row 2, then row 2 is zero after the first step; exchange the zero row with row 3 and there is no third pivot. If column 2 = column 1, then column 2 has no pivot.
18 Example x + 2y + 3z = 0, 4x + 8y + 12z = 0, 5x + 10y + 15z = 0 has 9 different coefficients but rows 2 and 3 become 0 = 0: infinitely many solutions.
19 Row 2 becomes 3y - 4z = 5, then row 3 becomes (q + 4)z = t - 5. If q = -4 the system is singular (no third pivot). Then if t = 5 the third equation is 0 = 0. Choosing z = 1, the equation 3y - 4z = 5 gives y = 3 and equation 1 gives x = -9.
20 Singular if row 3 is a combination of rows 1 and 2. From the end view, the three planes form a triangle. This happens if row 1 + row 2 = row 3 on the left side but not the right side: x + y + z = 0, x - 2y - z = 1, 2x - y = 4. No parallel planes but still no solution.
21 (a) Pivots 2, 3/2, 4/3, 5/4 in the equations 2x + y = 0, (3/2)y + z = 0, (4/3)z + t = 0, (5/4)t = 5 after elimination. Back substitution gives t = 4, z = -3, y = 2, x = -1. (b) If the off-diagonal entries change from +1 to -1, the pivots are the same. The solution is (1, 2, 3, 4) instead of (-1, 2, -3, 4).
22 The fifth pivot is 6/5 for both matrices (1's or -1's off the diagonal). The nth pivot is (n + 1)/n.
23 If ordinary elimination leads to x + y = 1 and 2y = 3, the original second equation could be 2y + l(x + y) = 3 + l for any l. Then l will be the multiplier to reach 2y = 3.
24 Elimination fails on [a 2; a a] if a = 2 or a = 0.
25 a = 2 (equal columns), a = 4 (equal rows), a = 0 (zero column).
26 Solvable for s = 10 (add the two pairs of equations to get a + b + c + d on the left sides, 12 and 2 + s on the right sides). The four equations for a, b, c, d are singular! Two solutions are [1 3; 1 7] and [0 4; 2 6], and

    A = [ 1 1 0 0 ]        U = [ 1  1 0 0 ]
        [ 1 0 1 0 ]            [ 0 -1 1 0 ]
        [ 0 0 1 1 ]            [ 0  0 1 1 ]
        [ 0 1 0 1 ]            [ 0  0 0 0 ]

27 Elimination leaves the diagonal matrix diag(3, 2, 1) in 3x = 3, 2y = 2, z = 4. Then x = 1, y = 1, z = 4.
28 A(2, :) = A(2, :) - 3*A(1, :) subtracts 3 times row 1 from row 2.
29 The average pivots for rand(3) without row exchanges were 1/2, 5, 10 in one experiment, but pivots 2 and 3 can be arbitrarily large. Their averages are actually infinite! With row exchanges in MATLAB's lu code, the averages .75 and .50 and .365 are much more stable (and should be predictable, also for randn with normal instead of uniform probability distribution).
30 If A(5, 5) is 7 not 11, then the last pivot will be 0 not 4.
31 Row j of U is a combination of rows 1, ..., j of A. If Ax = 0 then Ux = 0 (not true if b replaces 0). U is the diagonal of A when A is lower triangular.
32 The question deals with 100 equations Ax = 0 when A is singular.
(a) Some linear combination of the 100 rows is the row of 100 zeros.
(b) Some linear combination of the 100 columns is the column of zeros.
(c) A very singular matrix has all ones: A = ones(100). A better example has 99 random rows (or the numbers 1, ..., 100 in those rows). The 100th row could be the sum of the first 99 rows (or any other combination of those rows with no zeros).
(d) The row picture has 100 planes meeting along a common line through 0. The column picture has 100 vectors all in the same 99-dimensional hyperplane.
Problem Set 2.3, page 63

1 E21 = [1 0 0; -5 1 0; 0 0 1], E32 = [1 0 0; 0 1 0; 0 7 1], and P = [1 0 0; 0 0 1; 0 1 0][0 1 0; 1 0 0; 0 0 1] = [0 1 0; 0 0 1; 1 0 0].
2 E32 E21 b = (1, -5, -35) but E21 E32 b = (1, -5, 0). When E32 comes first, row 3 feels no effect from row 1.
3 E21 = [1 0 0; -4 1 0; 0 0 1], E31 = [1 0 0; 0 1 0; 2 0 1], E32 = [1 0 0; 0 1 0; 0 -2 1], and

    M = E32 E31 E21 = [  1  0 0 ]
                      [ -4  1 0 ]
                      [ 10 -2 1 ]
4 Elimination on column 4: b = (1, 0, 0) → (1, -4, 0) → (1, -4, 2) → (1, -4, 10), applying E21, then E31, then E32. The original Ax = b has become Ux = c = (1, -4, 10). Then back substitution gives z = -5, y = 1/2, x = 1/2. This solves Ax = (1, 0, 0).
5 Changing a33 from 7 to 11 will change the third pivot from 5 to 9. Changing a33 from 7 to 2 will change the pivot from 5 to no pivot.
6 Example: [2 3 7; 2 3 7; 2 3 7][1; 3; -1] = [4; 4; 4]. If all columns are multiples of column 1, there is no second pivot.
7 To reverse E31, add 7 times row 1 to row 3. The inverse of the elimination matrix E = [1 0 0; 0 1 0; -7 0 1] is E^(-1) = [1 0 0; 0 1 0; 7 0 1].
8 M = [a b; c d] and M* = [a b; c - la d - lb]. det M* = a(d - lb) - b(c - la) reduces to ad - bc!
9 M = [1 0 0; 0 0 1; -1 1 0]. After the exchange, we need E31 (not E21) to act on the new row 3.
10 E13 = [1 0 1; 0 1 0; 0 0 1] and E31 = [1 0 0; 0 1 0; 1 0 1]; E31 E13 = [1 0 1; 0 1 0; 1 0 2]. Test on the identity matrix!
11 An example with two negative pivots is A = [1 2 2; 1 1 2; 1 2 1]. The diagonal entries can change sign during elimination.
12 The first product is [9 8 7; 6 5 4; 3 2 1]: rows and also columns reversed. The second product is [1 2 3; 0 1 2; 0 2 3].
13 (a) E times the third column of B is the third column of EB. A column that starts at zero will stay at zero. (b) E could add row 2 to row 3 to change a zero row to a nonzero row.
14 E21 has l21 = -1/2, E32 has l32 = -2/3, E43 has l43 = -3/4. Otherwise the E's match I.
15 aij = 2i - 3j:

    A = [ -1 -4 -7 ]      [ -1  -4  -7 ]
        [  1 -2 -5 ]  →   [  0  -6 -12 ]
        [  3  0 -3 ]      [  0 -12 -24 ]

The zero became -12, an example of fill-in. To remove that -12, choose E32 = [1 0 0; 0 1 0; 0 -2 1].
16 (a) The ages of X and Y are x and y: x - 2y = 0 and x + y = 33; x = 22 and y = 11. (b) The line y = mx + c contains x = 2, y = 5 and x = 3, y = 7 when 2m + c = 5 and 3m + c = 7. Then m = 2 is the slope.
17 The parabola y = a + bx + cx^2 goes through the 3 given points when a + b + c = 4, a + 2b + 4c = 8, and a + 3b + 9c = 14. Then a = 2, b = 1, and c = 1. This matrix with columns (1, 1, 1), (1, 2, 3), (1, 4, 9) is a "Vandermonde matrix."
18 EF = [1 0 0; a 1 0; b c 1], FE = [1 0 0; a 1 0; b+ac c 1], E^2 = [1 0 0; 2a 1 0; 2b 0 1], F^3 = [1 0 0; 0 1 0; 0 3c 1].
19 PQ = [0 1 0; 0 0 1; 1 0 0]. In the opposite order, two row exchanges give QP = [0 0 1; 1 0 0; 0 1 0]. If M exchanges rows 2 and 3 then M^2 = I (also (-M)^2 = I). There are many square roots of I: any matrix M = [a b; c -a] has M^2 = I if a^2 + bc = 1.
20 (a) Each column of EB is E times a column of B. (b) [1 0; 1 1][1 2 4; 1 2 4] = [1 2 4; 2 4 8]. All rows of EB are multiples of [1 2 4].
21 No. E = [1 0; 1 1] and F = [1 1; 0 1] give EF = [1 1; 1 2] but FE = [2 1; 1 1].
22 (a) Σ a3j xj (b) a21 - a11 (c) a21 - 2a11 (d) (EAx)1 = (Ax)1 = Σ a1j xj.
23 E(EA) subtracts 4 times row 1 from row 2 (EEA does the row operation twice). AE subtracts 2 times column 2 of A from column 1 (multiplication by E on the right side acts on columns instead of rows).
24 [A b] = [2 3 1; 4 1 17] → [2 3 1; 0 -5 15]. The triangular system is 2x1 + 3x2 = 1, -5x2 = 15. Back substitution gives x1 = 5 and x2 = -3.
25 The last equation becomes 0 = 3. If the original 6 is 3, then row 1 + row 2 = row 3.
26 (a) Add two columns b and b*:

    [ 1 4 1 0 ]   →   [ 1  4  1 0 ]   →   x = [ -7 ]  and  x* = [  4 ]
    [ 2 7 0 1 ]       [ 0 -1 -2 1 ]           [  2 ]            [ -1 ]

27 (a) No solution if d = 0 and c ≠ 0. (b) Many solutions if d = 0 = c. No effect from a, b.
28 A = AI = A(BC) = (AB)C = IC = C. That middle equation is crucial.
29 E = [1 0 0 0; -1 1 0 0; 0 -1 1 0; 0 0 -1 1] subtracts each row from the next row. The result

    [ 1 0 0 0 ]
    [ 0 1 0 0 ]
    [ 0 1 1 0 ]
    [ 0 1 2 1 ]

still has multipliers = 1 in a 3 by 3 Pascal matrix. The product M of all elimination matrices is

    [  1  0  0 0 ]
    [ -1  1  0 0 ]
    [  1 -2  1 0 ]
    [ -1  3 -3 1 ]

This "alternating sign Pascal matrix" is on page 88.
30 Given positive integers with ad - bc = 1. Certainly c < a and b < d would be impossible. Also c > a and b > d would be impossible with integers. This leaves row 1 < row 2 OR row 2 < row 1. An example is M = [3 4; 2 3]. Multiply by [1 -1; 0 1] to get [1 1; 2 3], then multiply twice by [1 0; -1 1] to get [1 1; 0 1]. This shows that

    M = [ 1 1 ] [ 1 0 ] [ 1 0 ] [ 1 1 ]
        [ 0 1 ] [ 1 1 ] [ 1 1 ] [ 0 1 ]
31 E21 = [1 0 0 0; 1/2 1 0 0; 0 0 1 0; 0 0 0 1], E32 = [1 0 0 0; 0 1 0 0; 0 2/3 1 0; 0 0 0 1], E43 = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 3/4 1], and

    E43 E32 E21 = [ 1    0   0   0 ]
                  [ 1/2  1   0   0 ]
                  [ 1/3  2/3 1   0 ]
                  [ 1/4  2/4 3/4 1 ]
Problem Set 2.4, page 75
1 If all entries of A, B, C, D are 1, then BA = 3 ones(5) is 5 by 5; AB = 5 ones(3) is 3 by 3; ABD = 15 ones(3, 1) is 3 by 1. DBA and A(B + C) are not defined.
2 (a) A (column 3 of B). (b) (Row 1 of A) B. (c) (Row 3 of A)(column 4 of B). (d) (Row 1 of C) D (column 1 of E).
3 AB + AC is the same as A(B + C) = [3 8; 6 9]. (Distributive law.)
4 A(BC) = (AB)C by the associative law. In this example both answers are [0 0; 0 0] from column 1 of AB and row 2 of C (multiply columns times rows).
5 (a) A^2 = [1 2b; 0 1] and A^n = [1 nb; 0 1]. (b) A^2 = [4 4; 0 0] and A^n = [2^n 2^n; 0 0].
6 (A + B)^2 = [10 4; 6 6] = A^2 + AB + BA + B^2. But A^2 + 2AB + B^2 = [16 2; 3 0].
7 (a) True. (b) False. (c) True. (d) False: usually (AB)^2 ≠ A^2 B^2.
8 The rows of DA are 3 (row 1 of A) and 5 (row 2 of A). Both rows of EA are row 2 of A. The columns of AD are 3 (column 1 of A) and 5 (column 2 of A). The first column of AE is zero, the second is column 1 of A + column 2 of A.
9 AF = [a a+b; c c+d] and E(AF) equals (EA)F because matrix multiplication is associative.
10 FA = [a+c b+d; c d] and then E(FA) = [a+c b+d; a+2c b+2d]. E(FA) is not the same as F(EA) because multiplication is not commutative.
11 (a) B = 4I. (b) B = 0. (c) B = [0 0 1; 0 1 0; 1 0 0]. (d) Every row of B is 1, 0, 0.
12 AB = [a 0; c 0] = BA = [a b; 0 0] gives b = c = 0. Then AC = CA gives a = d. The only matrices that commute with B and C (and all other matrices) are multiples of I: A = aI.
13 (A - B)^2 = (B - A)^2 = A(A - B) - B(A - B) = A^2 - AB - BA + B^2. In a typical case (when AB ≠ BA) the matrix A^2 - 2AB + B^2 is different from (A - B)^2.
14 (a) True (A^2 is only defined when A is square). (b) False (if A is m by n and B is n by m, then AB is m by m and BA is n by n). (c) True. (d) False (take B = 0).
15 (a) mn (use every entry of A). (b) mnp = p times part (a). (c) n^3 (n^2 dot products).
16 (a) Use only column 2 of B. (b) Use only row 2 of A. (c)-(d) Use row 2 of first A.
17 A = [1 1 1; 1 2 2; 1 2 3] has aij = min(i, j). A = [1 -1 1; -1 1 -1; 1 -1 1] has aij = (-1)^(i+j): an "alternating sign matrix". A = [1/1 1/2 1/3; 2/1 2/2 2/3; 3/1 3/2 3/3] has aij = i/j (this will be an example of a rank one matrix).
18 Diagonal matrix, lower triangular, symmetric, all rows equal. Zero matrix fits all four.
19 (a) a11 (b) l31 = a31/a11 (c) a32 - (a31/a11) a12 (d) a22 - (a21/a11) a12.
20 A^2 = [0 0 4 0; 0 0 0 4; 0 0 0 0; 0 0 0 0], A^3 = [0 0 0 8; 0 0 0 0; 0 0 0 0; 0 0 0 0], A^4 = zero matrix for this strictly triangular A. Then v = (x, y, z, t) gives Av = (2y, 2z, 2t, 0), A^2 v = (4z, 4t, 0, 0), A^3 v = (8t, 0, 0, 0), A^4 v = 0.
21 A = A^2 = A^3 = ... = [.5 .5; .5 .5], but AB = [.5 .5; .5 .5][1 -1; 1 -1] = [1 -1; 1 -1] and (AB)^2 = zero matrix!
22 A = [1 0; 2 -1] has A^2 = I; BC = [1 -1; 1 -1][1 1; 1 1] = [0 0; 0 0]; DE = [0 1; -1 0][0 -1; 1 0] = I = ED. You can find more examples.
23 A = [0 1; 0 0] has A^2 = 0. Note: Any matrix A = column times row = u v^T will have A^2 = u v^T u v^T = 0 if v^T u = 0. A = [0 1 0; 0 0 1; 0 0 0] has A^2 = [0 0 1; 0 0 0; 0 0 0] but A^3 = 0: strictly triangular as in Problem 20.
24 (A1)^n = [1 n; 0 1], (A2)^n = 2^(n-1) [1 1; 1 1], (A3)^n = [a^n a^(n-1) b; 0 0].
25 [a b c; d e f; g h i][1 0 0; 0 1 0; 0 0 1] = [a; d; g][1 0 0] + [b; e; h][0 1 0] + [c; f; i][0 0 1]: columns times rows.
26 Columns of A times rows of B: [1; 2; 2][3 3 0] + [0; 4; 1][1 2 1] = [3 3 0; 6 6 0; 6 6 0] + [0 0 0; 4 8 4; 1 2 1] = [3 3 0; 10 14 4; 7 8 1] = AB.
27 (a) (row 3 of A)(column 1 of B) and (row 3 of A)(column 2 of B) are both zero. (b) [x x x; 0 x x; 0 0 x] times [x x x; 0 x x; 0 0 x] gives [x x x; 0 x x; 0 0 x]: both products are upper triangular.
28 The picture shows A times B with cuts: the cuts between the columns of A must line up with the cuts between the rows of B (block multiplication).
29 E21 = [1 0 0; -1 1 0; 0 0 1] and E31 = [1 0 0; 0 1 0; -4 0 1] produce zeros in the 2,1 and 3,1 entries. Multiply the E's to get E = E31 E21 = [1 0 0; -1 1 0; -4 0 1]. Then EA = [2 1 0; 0 1 1; 0 1 3] is the result of both E's, since (E31 E21)A = E31(E21 A).
30 In Problem 29, c = [2; 8], D = [2 1; 5 3], b = [1 0], and D - c b/a = [1 1; 1 3] appears in the lower corner of EA.
31 Complex matrix times complex vector: [A -B; B A][x; y] = [Ax - By; Bx + Ay]: real part Ax - By, imaginary part Bx + Ay. This needs 4 real-times-real multiplications.
32 A times X = [x1 x2 x3] will be the identity matrix I = [Ax1 Ax2 Ax3].
" #
" #
"
#
3
3
1
0 0
8 I AD
1
1 0 will have
33 b D 5 gives x D 3x 1 C 5x 2 C 8x 3 D
8
16
0
1 1
those x 1 D .1; 1; 1/; x 2 D .0; 1; 1/; x 3 D .0; 0; 1/ as columns of its “inverse” A 1 .
34 A ones = [a+b a+b; c+d c+d] agrees with ones A = [a+c b+d; a+c b+d] when b = c and a = d. Then A = [a b; b a].
35 A = [0 1 0 1; 1 0 1 0; 0 1 0 1; 1 0 1 0] has A^2 = [2 0 2 0; 0 2 0 2; 2 0 2 0; 0 2 0 2]. These count the 16 2-step paths in the graph: aba, ada, cba, cda; bab, bcb, dab, dcb; abc, adc, cbc, cdc; bad, bcd, dad, dcd.
36 Multiplying AB = (m by n)(n by p) needs mnp multiplications. Then (AB)C needs mpq more. Multiplying BC = (n by p)(p by q) needs npq, and then A(BC) needs mnq.
(a) If m, n, p, q are 2, 4, 7, 10 we compare (2)(4)(7) + (2)(7)(10) = 196 with the larger number (2)(4)(10) + (4)(7)(10) = 360. So AB first is better: we multiply that 7 by 10 matrix by as few rows as possible.
(b) If u, v, w are N by 1, then (u^T v) w^T needs 2N multiplications but u^T (v w^T) needs N^2 to find v w^T and N^2 more to multiply by the row vector u^T. Apologies for using the transpose symbol so early.
(c) We are comparing mnp + mpq with mnq + npq. Divide all terms by mnpq: now we are comparing q^-1 + n^-1 with p^-1 + m^-1. This yields a simple important rule: if matrices A and B are multiplying v, for ABv don't multiply the matrices first.
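The operation counts in Problem 36 can be checked with a few lines of arithmetic. A small Python sketch of my own (not from the manual):

```python
# Compare the two multiplication orders of Problem 36.
# Multiplying an (m x n) by an (n x p) matrix costs m*n*p scalar multiplications.

def cost_AB_first(m, n, p, q):
    """(AB)C: forming AB costs m*n*p, then (AB)C costs m*p*q."""
    return m * n * p + m * p * q

def cost_BC_first(m, n, p, q):
    """A(BC): forming BC costs n*p*q, then A(BC) costs m*n*q."""
    return n * p * q + m * n * q

print(cost_AB_first(2, 4, 7, 10))  # 196
print(cost_BC_first(2, 4, 7, 10))  # 360
```

So for these sizes, multiplying AB first wins, exactly as the solution says.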
37 The proof of (AB)c = A(Bc) used the column rule for matrix multiplication: this rule is clearly linear, column by column. Even for nonlinear transformations, A(B(c)) would be the "composition" of A with B (applying B then A). This composition A o B is just AB for matrices. One of many uses for the associative law: the left-inverse B equals the right-inverse C, from B = B(AC) = (BA)C = C.
Problem Set 2.5, page 89
1 A^-1 = [0 1/4; 1/3 0] and B^-1 = [1/2 0; -1 1/2] and C^-1 = [7 -4; -5 3].
2 A simple row exchange has P^2 = I so P^-1 = P. Here P^-1 = [0 0 1; 1 0 0; 0 1 0]. Always P^-1 = "transpose" of P, coming in Section 2.7.
3 The columns of A^-1 solve A[x; y] = [1; 0] and A[t; z] = [0; 1]: (x, y) = (.5, -.2) and (t, z) = (-.2, .1). So A^-1 = (1/10)[5 -2; -2 1]. This question solved AA^-1 = I column by column, the main idea of Gauss-Jordan elimination.
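That column-by-column idea is what Gauss-Jordan automates: reduce [A I] to [I A^-1]. Here is my own Python sketch with exact fractions (the manual's own code is MATLAB; the example matrices below are mine):

```python
# Gauss-Jordan inversion: row-reduce the augmented matrix [A | I] to [I | A^-1].
from fractions import Fraction

def inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination.
    Assumes A is invertible (a nonzero pivot can always be found)."""
    n = len(A)
    # Augment A with the identity matrix, using exact fractions.
    M = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Find a nonzero pivot, exchanging rows if necessary.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row to make the pivot 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Clear the column above and below the pivot.
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

print(inverse([[2, 1], [5, 3]]))   # equals [[3, -1], [-5, 2]]
```

Because rows are exchanged when a pivot is zero, the same routine also handles matrices like [0 2; 2 2] (compare Problem 28 below).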
4 The equations are x + 2y = 1 and 3x + 6y = 0. No solution because 3 times equation 1 gives 3x + 6y = 3.
5 An upper triangular U with U^2 = I is U = [1 a; 0 -1] for any a. And also U = [-1 a; 0 1].
6 (a) Multiply AB = AC by A^-1 to find B = C (since A is invertible). (b) As long as B - C has the form [x y; -x -y], we have AB = AC for A = [1 1; 1 1].
7 (a) In Ax = (1, 0, 0), equation 1 + equation 2 - equation 3 is 0 = 1 (b) Right sides must satisfy b1 + b2 = b3 (c) Row 3 becomes a row of zeros: no third pivot.
8 (a) The vector x = (1, 1, -1) solves Ax = 0 (b) After elimination, columns 1 and 2 end in zeros. Then so does column 3 = column 1 + 2: no third pivot.
9 If you exchange rows 1 and 2 of A to reach B, you exchange columns 1 and 2 of A^-1 to reach B^-1. In matrix notation, B = PA has B^-1 = A^-1 P^-1 = A^-1 P for this P.
10 A^-1 = [0 0 0 1/5; 0 0 1/4 0; 0 1/3 0 0; 1/2 0 0 0] and B^-1 inverts each 2 by 2 block of B separately (B is block diagonal).
11 (a) If B = -A then certainly A + B = zero matrix is not invertible. (b) A = [1 0; 0 0] and B = [0 0; 0 1] are both singular but A + B = I is invertible.
12 Multiply C = AB on the left by A^-1 and on the right by C^-1. Then A^-1 = B C^-1.
13 M^-1 = C^-1 B^-1 A^-1, so multiply on the left by C and on the right by A: B^-1 = C M^-1 A.
14 B^-1 = A^-1 [1 0; 1 1]^-1 = A^-1 [1 0; -1 1]: subtract column 2 of A^-1 from column 1.
15 If A has a column of zeros, so does BA. Then BA = I is impossible. There is no A^-1.
16 [a b; c d][d -b; -c a] = [ad-bc 0; 0 ad-bc]. The inverse of each matrix is the other divided by ad - bc.
17 E32 E31 E21 = [1 0 0; -1 1 0; 0 -1 1] = E. Reverse the order and change -1 to +1 to get the inverse E21^-1 E31^-1 E32^-1 = [1 0 0; 1 1 0; 1 1 1] = L = E^-1. Notice the 1's unchanged by multiplying in this order.
18 A^2 B = I can also be written as A(AB) = I. Therefore A^-1 is AB.
19 The (1,1) entry requires 4a - 3b = 1; the (1,2) entry requires 2b - a = 0. Then b = 1/5 and a = 2/5. For the 5 by 5 case, 5a - 4b = 1 and 2b = a give b = 1/6 and a = 2/6 = 1/3.
20 A ones(4, 1) is the zero vector so A cannot be invertible.
21 Six of the sixteen 0-1 matrices are invertible, including all four with three 1's.
22 [1 3 1 0; 2 7 0 1] -> [1 3 1 0; 0 1 -2 1] -> [1 0 7 -3; 0 1 -2 1] = [I A^-1];
[1 4 1 0; 3 9 0 1] -> [1 4 1 0; 0 -3 -3 1] -> [1 0 -3 4/3; 0 1 1 -1/3] = [I A^-1].
"
#
"
#
2 1 0 1 0 0
1 0 0
2
1 0
1=2 1 0 !
23 ŒA I D 1 2 1 0 1 0 ! 0 3=2 1
0 1 2 0 0 1
0
1 2
0 0 1
"
#
"
#
2
1
0
1
0 0
2
1
0
1
0
0
0 3=2
1
1=2
1 0 ! 0 3=2
0
3=4
3=2
3=4 !
0
0 4=3
0
0 4=3
1=3
2=3 1
1=3
2=3
1
"
#
"
#
2
0
0
3=2
1
1=2
1 0 0
3=4
1=2
1=4
0 3=2
0
3=4
3=2
3=4 ! 0 1 0
1=2
1
1=2 D
0
0 4=3
0 0 1
1=3
2=3
1
1=4
1=2
3=4
ŒI A 1 .
"
# "
# "
#
1 a b 1 0 0
1 a 0 1 0
b
1 0 0 1
a ac b
c ! 0 1 0 0
1
c .
24 0 1 c 0 1 0 ! 0 1 0 0 1
0 0 1 0 0 1
0 0 1 0 0
1
0 0 1 0
0
1
"
# "
#" #
" #
# 1
2 1 1
3
1
1
2
1
1
1
0
1
1
3
1 I
1
2
1
1 D 0 so B 1 does
25 1 2 1
D
4
1 1 2
1
1
3
1
1
2
1
0
not exist.
26 E21 A = [1 0; -2 1][1 2; 2 6] = [1 2; 0 2]. E12 E21 A = [1 -1; 0 1][1 2; 0 2] = [1 0; 0 2]. Multiply by D = [1 0; 0 1/2] to reach D E12 E21 A = I. Then A^-1 = D E12 E21 = (1/2)[6 -2; -2 1].
"
#
"
#
1
0
0
2
1
0
1
1
2
1
3 (notice the pattern); A D
1
2
1 .
27 A D
0
0
1
0
1
1
0 2 1 0
2 2 0 1
28 [0 2 1 0; 2 2 0 1] -> [2 2 0 1; 0 2 1 0] -> [2 0 -1 1; 0 2 1 0] -> [1 0 -1/2 1/2; 0 1 1/2 0]. This is [I A^-1]: row exchanges are certainly allowed in Gauss-Jordan.
"
29 (a) True (If A has a row of zeros, then every AB has too, and AB D I is impossible)
(b) False (the matrix of all ones is singular even with diagonal 1’s: ones (3) has 3 equal
rows) (c) True (the inverse of A 1 is A and the inverse of A2 is .A 1 /2 /.
30 This A is not invertible for c = 7 (equal columns), c = 2 (equal rows), c = 0 (zero column).
31 Elimination produces the pivots a and a-b and a-b. A^-1 = (1/(a(a-b))) [a 0 -b; -a a 0; 0 -a a].
32 A^-1 = [1 1 0 0; 0 1 1 0; 0 0 1 1; 0 0 0 1]. When the triangular A alternates 1 and -1 on its diagonal, A^-1 is bidiagonal with 1's on the diagonal and first superdiagonal.
33 x = (1, 1, . . . , 1) has Px = Qx so (P - Q)x = 0.
34 [I 0; C I]^-1 = [I 0; -C I] and [A 0; C D]^-1 = [A^-1 0; -D^-1 C A^-1 D^-1] and [0 I; I D]^-1 = [-D I; I 0].
35 A can be invertible with diagonal zeros. B is singular because each row adds to zero.
36 The equation LDLD D I says that LD D pascal .4; 1/ is its own inverse.
37 hilb(6) is not the exact Hilbert matrix because fractions are rounded off. So inv(hilb(6)) is not exact either.
38 The three Pascal matrices have P = LU = L L^T and then inv(P) = inv(L^T) inv(L).
39 Ax = b has many solutions when A = ones(4, 4) = singular matrix and b = ones(4, 1). A\b in MATLAB will pick the shortest solution x = (1, 1, 1, 1)/4. This is the only solution that is a combination of the rows of A (later it comes from the "pseudoinverse" A+ = pinv(A) which replaces A^-1 when A is singular). Any vector that solves Ax = 0 could be added to this particular solution x.
40 The inverse of A = [1 a 0 0; 0 1 b 0; 0 0 1 c; 0 0 0 1] is A^-1 = [1 -a ab -abc; 0 1 -b bc; 0 0 1 -c; 0 0 0 1]. (This would be a good example for the cofactor formula A^-1 = C^T / det A in Section 5.3.)
41 The product [1 0 0 0; a 1 0 0; b 0 1 0; c 0 0 1][1 0 0 0; 0 1 0 0; 0 d 1 0; 0 e 0 1][1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 f 1] = [1 0 0 0; a 1 0 0; b d 1 0; c e f 1] shows that in this order the multipliers a, b, c, d, e, f are unchanged in the product (important for A = LU in Section 2.6).
42 M M^-1 = (I_n - UV)(I_n + U(I_m - VU)^-1 V)   (this is testing formula 3)
= I_n - UV + U(I_m - VU)^-1 V - UVU(I_m - VU)^-1 V   (keep simplifying)
= I_n - UV + U(I_m - VU)(I_m - VU)^-1 V = I_n   (formulas 1, 2, 4 are similar).
43 4 by 4 still with T11 = 1 has pivots 1, 1, 1, 1; reversing to T* = UL makes T*44 = 1.
44 Add the equations Cx = b to find 0 = b1 + b2 + b3 + b4. Same for Fx = b.
45 The block pivots are A and S = D - C A^-1 B (and d - cb/a is the correct second pivot of an ordinary 2 by 2 matrix). The example problem has S = [5 6; 6 5] - [3; 3][3 3] = [-4 -3; -3 -4].
46 Inverting the identity A(I + BA) = (I + AB)A gives (I + BA)^-1 A^-1 = A^-1 (I + AB)^-1. So I + BA and I + AB are both invertible or both singular when A is invertible. (This remains true also when A is singular: Problem 6.6.19 will show that AB and BA have the same nonzero eigenvalues, and we are looking here at the eigenvalue -1.)
Problem Set 2.6, page 102
1 l21 = 1 multiplied row 1; L = [1 0; 1 1] times Ux = [1 1; 0 1][x; y] = [5; 2] = c is Ax = b: [1 1; 1 2][x; y] = [5; 7].
2 Lc = b is [1 0; 1 1][c1; c2] = [5; 7], solved by c = [5; 2] as elimination goes forward. Ux = c is [1 1; 0 1][x; y] = [5; 2], solved by x = [3; 2] in back substitution.
3 l31 = 1 and l32 = 2 (and l33 = 1): reverse the steps to recover Ax = b from Ux = c: 1 times (x + y + z = 5) + 2 times (y + 2z = 2) + 1 times (z = 2) gives x + 3y + 6z = 11.
4 Lc = [1 0 0; 1 1 0; 1 2 1][5; 2; 2] = [5; 7; 11] = b; Ux = [1 1 1; 0 1 2; 0 0 1]x = [5; 2; 2] = c gives x = [5; -2; 2].
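The two triangular solves above are the whole algorithm: forward substitution for Lc = b, back substitution for Ux = c. A Python sketch of my own (the manual's codes are MATLAB), using the L and U from Problem 4:

```python
# Solve LUx = b in two triangular steps, with exact fractions.
from fractions import Fraction

def forward_sub(L, b):
    """Solve Lc = b for unit lower triangular L (forward substitution)."""
    n = len(b)
    c = []
    for i in range(n):
        c.append(Fraction(b[i]) - sum(L[i][j] * c[j] for j in range(i)))
    return c

def back_sub(U, c):
    """Solve Ux = c for upper triangular U with nonzero diagonal."""
    n = len(c)
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = (Fraction(c[i]) - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L = [[1, 0, 0], [1, 1, 0], [1, 2, 1]]
U = [[1, 1, 1], [0, 1, 2], [0, 0, 1]]
b = [5, 7, 11]

c = forward_sub(L, b)   # equals [5, 2, 2]
x = back_sub(U, c)      # equals [5, -2, 2]
print(c, x)
```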
"
#"
# "
#
1
2 1 0
2 1 0
0 1
0 4 2 D 0 4 2 D U . With E 1 as L, A D LU D
EA D
3 0 1
6 3 5
0 0 5
"
#
1
0 1
U.
3 0 1
#"
#
"
#
"
#
"
1
1 1 1
1 0 0
1
0 1
2 1
A D 0 2 3 D U . Then A D 2 1 0 U is
0 2 1
0 0 1
0 0 6
0 2 1
the same as E211 E321 U D LU . The multipliers `21 ; `32 D 2 fall into place in L.
#"
#"
#
#"
"
1 0 0
1
1
1
2 1
2 2 2 . This is
1
1
E32 E31 E21 A D
3 4 5
2 1
3
1
1
#
"
#
"
1 0 0
1 0 1
0 2 0 D U . Put those multipliers 2; 3; 2 into L. Then A D 2 1 0 U D LU .
0 0 2
3 2 1
8 E = E32 E31 E21 = [1 0 0; -a 1 0; ac-b -c 1]. The multipliers are just a, b, c for A = [1 0 0; a 1 0; b c 1], and the upper triangular U is I. In this case A = L and its inverse is that matrix E = L^-1.
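The elimination in Problems 5-8 is the whole A = LU recipe: record each multiplier l_ik in L while subtracting l_ik times row k. A Python sketch of my own (no row exchanges, so it assumes nonzero pivots; the book's slu code is MATLAB):

```python
# Factor A = LU with unit lower triangular L and upper triangular U.
from fractions import Fraction

def lu(A):
    """Return (L, U) for a square matrix with nonzero pivots."""
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]          # multiplier l_ik
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]     # subtract l_ik times row k
    return L, U

A = [[2, 1, 0], [0, 4, 2], [6, 3, 5]]    # the matrix of Problem 5
L, U = lu(A)
print(L)   # equals [[1, 0, 0], [0, 1, 0], [3, 0, 1]]
print(U)   # equals [[2, 1, 0], [0, 4, 2], [0, 0, 5]]
```

The output matches the L and U found by hand in Problem 5.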
9 2 by 2: d = 0 not allowed; [1 1 0; 1 1 2; 1 2 1] = [1 0 0; l 1 0; m n 1][d e g; 0 f h; 0 0 i] needs d = 1, e = 1, then l = 1 and f = 0 is not allowed: no pivot in row 2.
10 c = 2 leads to zero in the second pivot position: after a row exchange the matrix is not singular. c = 1 leads to zero in the third pivot position. In this case the matrix is singular.
11 A = [2 4 8; 0 3 9; 0 0 7] has L = I (A is already upper triangular) and D = diag(2, 3, 7). A = LU has U = A; A = LDU has U = D^-1 A = [1 2 4; 0 1 3; 0 0 1] with 1's on the diagonal.
12 A = [2 4; 4 11] = [1 0; 2 1][2 4; 0 3] = [1 0; 2 1][2 0; 0 3][1 2; 0 1] = LDU; U is L^T.
"
#"
# "
#"
#"
#
1
1 4 0
1
1
1 4 0
4 1
0 4 4 D 4 1
4
0 1 1 D LDLT .
0 1 1
0 0 4
0 1 1
4
0 0 1
14 [a a a a; a b b b; a b c c; a b c d] = [1 0 0 0; 1 1 0 0; 1 1 1 0; 1 1 1 1][a a a a; 0 b-a b-a b-a; 0 0 c-b c-b; 0 0 0 d-c]. Need a != 0, b != a, c != b, d != c. All of the multipliers are l_ij = 1 for this A.
15 [a r r r; a b s s; a b c t; a b c d] = [1 0 0 0; 1 1 0 0; 1 1 1 0; 1 1 1 1][a r r r; 0 b-r s-r s-r; 0 0 c-s t-s; 0 0 0 d-t]. Need a != 0, b != r, c != s, d != t.
16 [1 0; 4 1]c = [2; 11] gives c = [2; 3]. Then [2 4; 0 1]x = [2; 3] gives x = [-5; 3]. Ax = b is LUx = [2 4; 8 17]x = [2; 11]: forward elimination to Ux = c, then back substitution for x.
" #
#
" #
"
" #
#
" #
"
3
4
1 1 1
4
4
1 0 0
1 1 0 c D 5 gives c D 1 . Then 0 1 1 x D 1 gives x D 0 .
1
1
0 0 1
1
6
1 1 1
Those are the forward elimination and back substitution steps for
#
" #
#"
"
4
1 1 1
1
1 1 xD 5 .
Ax D 1 1
6
1
1 1 1
(b) I goes to L 1 (c) LU goes to U . Elimination multiply by L 1 !
18 (a) Multiply LDU D L1 D1 U1 by inverses to get L1 1 LD D D1 U1 U 1 . The left
side is lower triangular, the right side is upper triangular ) both sides are diagonal.
(b) L; U; L1 ; U1 have diagonal 1’s so D D D1 . Then L1 1 L and U1 U 1 are both I .
#
"
#
#
"
"
#"
a
1
1 1 0
a
a
0
b
b
1 1 D LI U I a a C b
D (same L)
19 1 1
0
b
bCc
c
1
0 1 1
(same U ). A tridiagonal matrix A has bidiagonal factors L and U .
20 A tridiagonal T has 2 nonzeros in the pivot row and only one nonzero below the pivot
(one operation to find ` and then one for the new pivot!). T D bidiagonal L times
bidiagonal U .
17 (a) L goes to I
21 For the first matrix A, L keeps the 3 lower zeros at the start of rows. But U may not have the upper zero where A24 = 0. For the second matrix B, L keeps the bottom left zero at the start of row 4. U keeps the upper right zero at the start of column 4. One zero in A and two zeros in B are filled in.
"
#
"
#
"
#
5 3 1
4 2 0
2 0 0
Eliminating upwards, 3 3 1 ! 2 2 0 ! 2 2 0 D L. We reach
1 1 1
1 1 1
1 1 1
a lower triangular L, and the multipliers are in an upper triangular U . A D UL with
"
#
1 1 1
U D 0 1 1 .
0 0 1
23 The 2 by 2 upper left submatrix A2 has the first two pivots 5, 9. Reason: Elimination on A starts in the upper left corner with elimination on A2.
24 The upper left blocks all factor at the same time as A: Ak is Lk Uk.
25 The i, j entry of L^-1 is j/i for i >= j. And L_(i,i-1) is (1 - i)/i below the diagonal.
26 (K^-1)_ij = j(n - i + 1)/(n + 1) for i >= j (and symmetric): (n + 1) K^-1 looks good.
Problem Set 2.7, page 115
1 A = [1 0; 9 3] has A^T = [1 9; 0 3], A^-1 = [1 0; -3 1/3], and (A^-1)^T = (A^T)^-1 = [1 -3; 0 1/3]. A = [1 c; c 0] has A^T = A and A^-1 = (1/c^2)[0 c; c -1] = (A^-1)^T.
2 (AB)^T is not A^T B^T except when AB = BA. Transpose AB = BA to find B^T A^T = A^T B^T.
3 (a) ((AB)^-1)^T = (B^-1 A^-1)^T = (A^-1)^T (B^-1)^T. This is also (A^T)^-1 (B^T)^-1. (b) If U is upper triangular, so is U^-1: then (U^-1)^T is lower triangular.
4 A = [0 1; 0 0] has A^2 = 0. The diagonal of A^T A has dot products of columns of A with themselves. If A^T A = 0, zero dot products give zero columns, so A = zero matrix.
5 (a) x^T A y = [0 1][1 2 3; 4 5 6][0; 1; 0] = 5 (b) x^T A = [4 5 6] (c) Ay = [2; 5].
6 M^T = [A^T C^T; B^T D^T]; M^T = M needs A^T = A, B^T = C, and D^T = D.
7 (a) False: [0 A; A 0] is symmetric only if A = A^T (it transposes to [0 A^T; A^T 0]). (b) False: The transpose of AB is B^T A^T = BA when A and B are symmetric. So (AB)^T = AB needs BA = AB. (c) True: Invertible symmetric matrices have symmetric inverses! Easiest proof is to transpose AA^-1 = I. (d) True: (ABC)^T is C^T B^T A^T (= CBA for symmetric matrices A, B, and C).
8 The 1 in row 1 has n choices; then the 1 in row 2 has n - 1 choices . . . (n! overall).
"
#"
#
"
#
"
#
0 1 0
1 0 0
0 0 1
0 1 0
0 0 1
0 0 1
0 1 0 but P2 P1 D
1 0 0 .
9 P1 P2 D
D
1 0 0
0 1 0
1 0 0
0 0 1
If P3 and P4 exchange different pairs of rows, P3 P4 D P4 P3 does both exchanges.
10 .3; 1; 2; 4/ and .2; 3; 1; 4/ keep 4 in place; 6 more even P ’s keep 1 or 2 or 3 in place;
.2; 1; 4; 3/ and .3; 4; 1; 2/ exchange 2 pairs. .1; 2; 3; 4/; .4; 3; 2; 1/ make 12 even P ’s.
"
#"
#
"
#
0 1 0
0 0 6
1 2 3
1 2 3 D 0 4 5 is upper triangular. Multiplying on
11 PA D 0 0 1
1 0 0
0 4 5
0 0 6
the right by a permutation matrix P2 exchanges the columns. To make this A lower tri"
#
1
1
angular, we also need P1 to exchange rows 2 and 3: P1 AP2 D
1
"
# "
#
1
6 0 0
1
A
D 5 4 0 .
1
3 2 1
12 (Px)^T (Py) = x^T P^T P y = x^T y since P^T P = I. In general Px . y = x . P^T y differs from x . Py. For example P = [0 1 0; 0 0 1; 1 0 0], x = (1, 2, 3), y = (1, 1, 2) give Px . y = 7 but x . Py = 8: non-equality where P is not P^T.
"
#
0 1 0
13 A cyclic P D 0 0 1 or its transpose will have P 3 D I W .1; 2; 3/ ! .2; 3; 1/ !
1 0 0 b4 D P
b ¤ I:
b D 1 0 for the same P has P
.3; 1; 2/ ! .1; 2; 3/. P
0 P
14 The "reverse identity" P takes (1, . . . , n) into (n, . . . , 1). When rows and also columns are reversed, (PAP)_ij is A_(n-i+1, n-j+1). In particular (PAP)_11 is A_nn.
15 (a) If P sends row 1 to row 4, then P^T sends row 4 to row 1. (b) P = [E 0; 0 E] = P^T with E = [0 1; 1 0] moves all rows: 1 and 2 are exchanged, 3 and 4 are exchanged.
16 A^2 - B^2 (but not (A + B)(A - B), which is different) and also ABA are symmetric if A and B are symmetric.
17 (a) A = [1 1; 1 1] = A^T is not invertible (b) A = [0 1; 1 1] needs a row exchange (c) A = [1 1; 1 0] has D = [1 0; 0 -1].
18 (a) 5 + 4 + 3 + 2 + 1 = 15 independent entries if A = A^T (b) L has 10 and D has 5; total 15 in L D L^T (c) Zero diagonal if A^T = -A, leaving 4 + 3 + 2 + 1 = 10 choices.
19 (a) The transpose of R^T A R is R^T A^T (R^T)^T = R^T A R = n by n when A^T = A (any m by n matrix R) (b) (R^T R)_jj = (column j of R) . (column j of R) = (length squared of column j) >= 0.
20 [1 3; 3 2] = [1 0; 3 1][1 0; 0 -7][1 3; 0 1] and [1 b; b c] = [1 0; b 1][1 0; 0 c-b^2][1 b; 0 1]. Also [2 -1 0; -1 2 -1; 0 -1 2] = [1 0 0; -1/2 1 0; 0 -2/3 1] diag(2, 3/2, 4/3) [1 -1/2 0; 0 1 -2/3; 0 0 1] = L D L^T.
21 Elimination on a symmetric 3 by 3 matrix leaves a symmetric lower right 2 by 2 matrix. The examples [2 4 8; 4 3 9; 8 9 0] and [1 b c; b d e; c e f] lead to [-5 -7; -7 -32] and [d-b^2 e-bc; e-bc f-c^2].
"
#
"
#"
# "
#
"
#"
#
1
1
1
0
1
1
1
1
2
0
1
1 ;
1 AD 1 1
1
1
22 1
AD 0 1
1
2 3 1
1
1
2 0 1
1
2
3
0 0 0 1
This cyclic P exchanges rows 1-2 then
61 0 0 07
23 A D 4
D P and L D U D I .
0 1 0 05
rows 2-3 then rows 3-4.
0 0 1 0
"
#"
# "
#"
#
1
0 1 2
1
2
1
1
1
0 3 8 D 0 1
3
8 . If we wait
24 PA D LU is
1
2 1 1
0 1=3 1
2=3
"
#"
#"
#
1
1
2 1 1
1
0 1 2 .
to exchange and a12 is the pivot, A D L1 P1 U1 D 3 1
1 1
0 0 2
25 The splu code does not stop when abs(A(k, k)) < tol, as line 4 of the slu code on page 100 does. Instead splu looks for a nonzero entry below the diagonal in the current column k, and executes a row exchange. The 4 lines to exchange row k with row r are at the end of Section 2.7 (page 113). To find that nonzero entry A(r, k), follow abs(A(k, k)) < tol by locating the first nonzero (or the largest A(r, k)) out of r = k + 1, . . . , n.
26 One way to decide even vs. odd is to count all pairs that P has in the wrong order. Then
P is even or odd when that count is even or odd. Hard step: Show that an exchange
always switches that count! Then 3 or 5 exchanges will leave that count odd.
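The count in Problem 26 is the number of "inversions" of the permutation. A short Python sketch of my own makes the even/odd test concrete:

```python
# A permutation is even or odd according to the number of pairs in the
# wrong order (inversions), as in Problem 26.

def count_inversions(perm):
    """Count pairs (i, j) with i < j but perm[i] > perm[j]."""
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if perm[i] > perm[j])

def is_even(perm):
    return count_inversions(perm) % 2 == 0

print(count_inversions([2, 1, 4, 3]))  # 2
print(is_even([3, 1, 2]))              # True (two inversions)
print(is_even([2, 1, 3]))              # False (one exchange from the identity)
```

One row exchange changes the inversion count by an odd number, so 3 or 5 exchanges always leave an odd count, exactly as the solution argues.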
27 (a) E21 = [1 0 0; -3 1 0; 0 0 1] puts 0 in the 2,1 entry of E21 A. Then E21 A E21^T = [1 0 0; 0 2 4; 0 4 9] is still symmetric, with zero also in its 1,2 entry. (b) Now use E32 = [1 0 0; 0 1 0; 0 -2 1] to make the 3,2 entry zero; E32 E21 A E21^T E32^T = D also has zero in its 2,3 entry. Key point: Elimination from both sides gives the symmetric L D L^T directly.
28 A = [0 1 2 3; 1 2 3 0; 2 3 0 1; 3 0 1 2] = A^T has 0, 1, 2, 3 in every row. (I don't know any rules for a symmetric construction like this.)
29 Reordering the rows and/or the columns of [a b; c d] will move the entry a. So the result cannot be the transpose (which doesn't move a).
"
#"
#
"
#
1
0
1
yBC
yBC C yBS
1
1
0
yCS
yBC C yCS .
30 (a) Total currents are AT y D
D
0
1
1
yBS
yCS yBS
(b) Either way .Ax/T y D x T .AT y/ D xB yBC C xB yBS xC yBC C xC yCS
xS yCS xS yBS .
"
#
" 700 #
1
50 x1
1
40
2
6820
1 truck
T
3
31 40 1000
D Ax; A y D
D
x2
50 1000 50
188000 1 plane
2
50
3000
32 Ax . y is the cost of inputs while x . A^T y is the value of outputs.
33 P^3 = I so three rotations give 360 degrees; P rotates around (1, 1, 1) by 120 degrees.
34 A = EH = (elementary matrix) times (symmetric matrix).
35 L(U^T)^-1 is lower triangular times lower triangular, so lower triangular. The transpose of U^T D U is U^T D^T (U^T)^T = U^T D U again, so U^T D U is symmetric. The factorization multiplies lower triangular by symmetric to get LDU which is A.
36 These are groups: lower triangular with diagonal 1's, diagonal invertible D, permutations P, orthogonal matrices with Q^T = Q^-1.
37 Certainly B^T is northwest. B^2 is a full matrix! B^-1 is southeast: [1 1; 1 0]^-1 = [0 1; 1 -1]. The rows of B are in reverse order from a lower triangular L, so B = PL. Then B^-1 = L^-1 P^-1 has the columns in reverse order from L^-1. So B^-1 is southeast. Northwest B = PL times southeast PU is (PLP)U = upper triangular.
38 There are n! permutation matrices of order n. Eventually two powers of P must be the same: if P^r = P^s then P^(r-s) = I. Certainly r - s <= n!. P = [P2 0; 0 P3] is 5 by 5 with P2 = [0 1; 1 0] and P3 = [0 1 0; 0 0 1; 1 0 0]; then P^6 = I.
39 To split A into (symmetric B) + (anti-symmetric C), the only choice is B = (A + A^T)/2 and C = (A - A^T)/2.
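The split in Problem 39 is easy to compute. A Python sketch of my own (the example matrix is mine, not from the manual):

```python
# Split A into symmetric B = (A + A^T)/2 and anti-symmetric C = (A - A^T)/2.
from fractions import Fraction

def transpose(A):
    return [list(col) for col in zip(*A)]

def split(A):
    n = len(A)
    T = transpose(A)
    B = [[Fraction(A[i][j] + T[i][j], 2) for j in range(n)] for i in range(n)]
    C = [[Fraction(A[i][j] - T[i][j], 2) for j in range(n)] for i in range(n)]
    return B, C

A = [[1, 7], [3, 4]]
B, C = split(A)
print(B)   # equals [[1, 5], [5, 4]]   (B = B^T)
print(C)   # equals [[0, 2], [-2, 0]]  (C^T = -C, and B + C = A)
```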
40 Start from Q^T Q = I: [q1^T; q2^T][q1 q2] = [1 0; 0 1]. (a) The diagonal entries give q1^T q1 = 1 and q2^T q2 = 1: unit vectors. (b) The off-diagonal entry is q1^T q2 = 0 (and in general qi^T qj = 0). (c) The leading example for Q is the rotation matrix [cos t -sin t; sin t cos t].
Problem Set 3.1, page 127
1 x + y is not y + x and x + (y + z) is not (x + y) + z and (c1 + c2)x is not c1 x + c2 x.
2 When c(x1, x2) = (cx1, 0), the only broken rule is 1 times x equals x. Rules (1)-(4) for addition x + y still hold since addition is not changed.
3 (a) cx may not be in our set: not closed under multiplication. Also no 0 and no -x. (b) c(x + y) is the usual (xy)^c, while cx + cy is the usual (x^c)(y^c). Those are equal. With c = 3, x = 2, y = 1 this is 3(2 + 1) = 8. The zero vector is the number 1.
4 The zero vector in matrix space M is [0 0; 0 0]; (1/2)A = [1 1; 1 1] and -A = [-2 -2; -2 -2]. The smallest subspace of M containing the matrix A consists of all matrices cA.
5 (a) One possibility: The matrices cA form a subspace not containing B. (b) Yes: the subspace must contain A - B = I. (c) Matrices whose main diagonal is all zero.
6 When f(x) = x^2 and g(x) = 5x, the combination 3f - 4g in function space is h(x) = 3f(x) - 4g(x) = 3x^2 - 20x.
7 Rule 8 is broken: If cf(x) is defined to be the usual f(cx) then (c1 + c2)f = f((c1 + c2)x) is not generally the same as c1 f + c2 f = f(c1 x) + f(c2 x).
8 If (f + g)(x) is the usual f(g(x)) then (g + f)(x) is g(f(x)), which is different. In Rule 2 both sides are f(g(h(x))). Rule 4 is broken: there might be no inverse function f^-1(x) such that f(f^-1(x)) = x. If the inverse function exists it will be the vector -f.
9 (a) The vectors with integer components allow addition, but not multiplication by 1/2. (b) Remove the x axis from the xy plane (but leave the origin). Multiplication by any c is allowed but not all vector additions.
10 The only subspaces are (a) the plane with b1 = b2, (d) the linear combinations of v and w, and (e) the plane with b1 + b2 + b3 = 0.
11 (a) All matrices [a b; 0 0] (b) All matrices [a a; 0 0] (c) All diagonal matrices.
12 For the plane x + y - 2z = 4, the sum of (4, 0, 0) and (0, 4, 0) is not on the plane. (The key is that this plane does not go through (0, 0, 0).)
13 The parallel plane P0 has the equation x + y - 2z = 0. Pick two points, for example (2, 0, 1) and (0, 2, 1); their sum (2, 2, 2) is in P0.
14 (a) The subspaces of R^2 are R^2 itself, lines through (0, 0), and (0, 0) by itself. (b) The subspaces of R^4 are R^4 itself, three-dimensional planes n . v = 0, two-dimensional subspaces (n1 . v = 0 and n2 . v = 0), one-dimensional lines through (0, 0, 0, 0), and (0, 0, 0, 0) by itself.
15 (a) Two planes through (0, 0, 0) probably intersect in a line through (0, 0, 0). (b) The plane and line probably intersect in the point (0, 0, 0). (c) If x and y are in both S and T, x + y and cx are in both subspaces.
16 The smallest subspace containing a plane P and a line L is either P (when the line L is in the plane P) or R^3 (when L is not in P).
17 (a) The invertible matrices do not include the zero matrix, so they are not a subspace. (b) The sum of singular matrices [1 0; 0 0] + [0 0; 0 1] = I is not singular: not a subspace.
18 (a) True: The symmetric matrices do form a subspace. (b) True: The matrices with A^T = -A do form a subspace. (c) False: The sum of two unsymmetric matrices could be symmetric.
19 The column space of A is the x-axis = all vectors (x, 0, 0). The column space of B is the xy plane = all vectors (x, y, 0). The column space of C is the line of vectors (x, 2x, 0).
20 (a) Elimination leads to 0 = b2 - 2b1 and 0 = b1 + b3 in equations 2 and 3: solution only if b2 = 2b1 and b3 = -b1. (b) Elimination leads to 0 = b1 + 2b3 in equation 3: solution only if b3 = -b1/2.
21 A combination of the columns of C is also a combination of the columns of A. Then C = [1 3; 2 6] and A = [1 2; 2 4] have the same column space. B = [1 2; 3 6] has a different column space.
22 (a) Solution for every b (b) Solvable only if b3 = 0 (c) Solvable only if b3 = b2.
23 The extra column b enlarges the column space unless b is already in the column space. [A b] = [1 0 1; 0 0 1] (larger column space, no solution to Ax = b) or [1 0 1; 0 1 1] (b is in the column space, Ax = b has a solution).
24 The column space of AB is contained in (possibly equal to) the column space of A. The example B = 0 and A not 0 is a case when AB = 0 has a smaller column space than A.
25 The solution to Az = b + b* is z = x + y. If b and b* are in C(A) so is b + b*.
26 The column space of any invertible 5 by 5 matrix is R^5. The equation Ax = b is always solvable (by x = A^-1 b) so every b is in the column space of that invertible matrix.
27 (a) False: Vectors that are not in a column space don't form a subspace. (b) True: Only the zero matrix has C(A) = {0}. (c) True: C(A) = C(2A). (d) False: C(A - I) is not C(A) when A = I or A = [1 0; 0 0] (or other examples).
28 A = [1 1 0; 1 0 0; 0 1 0] and A = [1 1 2; 1 0 1; 0 1 1] do not have (1, 1, 1) in C(A). A = [1 2 0; 2 4 0; 3 6 0] has C(A) = line.
29 When Ax = b is solvable for all b, every b is in the column space of A. So that space is R^9.
30 (a) If u and v are both in S + T, then u = s1 + t1 and v = s2 + t2. So u + v = (s1 + s2) + (t1 + t2) is also in S + T. And so is cu = c s1 + c t1: a subspace. (b) If S and T are different lines, then S union T is just the two lines (not a subspace) but S + T is the whole plane that they span.
31 If S = C(A) and T = C(B) then S + T is the column space of M = [A B].
32 The columns of AB are combinations of the columns of A. So all columns of [A AB] are already in C(A). But A = [0 1; 0 0] has a larger column space than A^2 = [0 0; 0 0]. For square matrices, the column space is R^n when A is invertible.
Problem Set 3.2, page 140
"
1 2 2
1 (a) U D 0 0 1
0 0 0
4 6
2 3
0 0
#
"
#
2 4 2
Free variables x2 ; x4 ; x5
Free x3
(b) U D 0 4 4
Pivot variables x1 ; x3
Pivot x1 ; x2
0 0 0
2 (a) Free variables x2 ; x4 ; x5 and solutions . 2; 1; 0; 0; 0/, .0; 0; 2; 1; 0/, .0; 0; 3; 0; 1/
(b) Free variable x3 : solution .1; 1; 1/. Special solution for each free variable.
3x5 ; x4 ; x5 ) with x2 ; x4 ; x5
free. The complete solution to Bx D 0 is (2x3 ; x3 ; x3 ). The nullspace contains only
x D 0 when there are no free variables.
"
#
"
#
1 2 0 0 0
1
0
1
1
1 , R has the same nullspace as U and A.
4 RD 0 0 1 2 3 , RD 0
0 0 0 0 0
0
0
0
1 3 5
1 0
1 3 5
1 3 5
1 0
5 A D
D
I B D
D
2 6 10
2 1
0 0 0
2 6 7
2 1
1 3
5
D LU .
0 0
3
3 The complete solution to Ax D 0 is ( 2x2 ; x2 ; 2x4
6 (a) Special solutions .3; 1; 0/ and .5; 0; 1/
(b) .3; 1; 0/. Total of pivot and free is n.
x C 3y C 5z D 0; it contains all the
vectors .3y C 5z; y; z/ D y.3; 1; 0/ C z.5; 0; 1/ D combination of special solutions.
(b) The line through .3; 1; 0/ has equations x C3y C5z D 0 and 2x C6y C7z D 0.
The special solution for the free variable x2 is .3; 1; 0/.
1
3
0
1 0
1
3
5
with I D Œ 1 ; R D
with I D
.
8 RD
0
0
0
0
0
1
0 1
7 (a) The nullspace of A in Problem 5 is the plane
9 (a) False: Any singular square matrix would have free variables
10
11
12
13
14
15
(b) True: An invertible square matrix has no free variables. (c) True (only n columns to hold pivots)
(d) True (only m rows to hold pivots)
(a) Impossible row 1 (b) A D invertible (c) A D all ones (d) A D 2I; R D I .
3
32
32
2
0 0 0 1 1 1 1
1 1 1 1 1 1 1
0 1 1 1 1 1 1
60 0 0 1 1 1 17 60 0 1 1 1 1 17 60 0 0 0 0 1 17
40 0 0 0 1 1 15 40 0 0 0 0 1 15 40 0 0 0 0 0 05
0 0 0 0 0 0 0
0 0 0 0 0 0 1
0 0 0 0 0 0 0
3 2
3
2
1 1 0 1 1 1 0 0
0 1 1 0 0 1 1 1
60 0 1 1 1 1 0 07 60 0 0 1 0 1 1 17
4 0 0 0 0 0 0 1 0 5, 4 0 0 0 0 1 1 1 1 5. Notice the identity
0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 0
matrix in the pivot columns of these reduced row echelon forms R.
If column 4 of a 3 by 5 matrix is all zero then x4 is a free variable. Its special solution
is x D .0; 0; 0; 1; 0/, because 1 will multiply that zero column to give Ax D 0.
If column 1 D column 5 then x5 is a free variable. Its special solution is . 1; 0; 0; 0; 1/.
If a matrix has n columns and r pivots, there are n r special solutions. The nullspace
contains only x D 0 when r D n. The column space is all of Rm when r D m. All
important!
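The recipe behind Problems 4 and 15 (reduce to R, then read one special solution per free column) can be automated. This Python sketch is my own, with a small example matrix taken from Problem 5:

```python
# Compute R = rref(A) and the special solutions of Ax = 0.
from fractions import Fraction

def rref(A):
    """Return (R, pivot_columns) for a matrix given as nested lists."""
    R = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(R), len(R[0])
    pivots = []
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if R[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column: free
        R[r], R[pivot] = R[pivot], R[r]   # row exchange if needed
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(rows):
            if i != r and R[i][c] != 0:
                R[i] = [x - R[i][c] * y for x, y in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

def special_solutions(A):
    """One special solution per free column: set that free variable to 1."""
    R, pivots = rref(A)
    cols = len(A[0])
    free = [c for c in range(cols) if c not in pivots]
    solutions = []
    for f in free:
        s = [Fraction(0)] * cols
        s[f] = Fraction(1)
        for r, p in enumerate(pivots):
            s[p] = -R[r][f]
        solutions.append(s)
    return solutions

B = [[1, 3, 5], [2, 6, 7]]       # matrix B of Problem 5
print(special_solutions(B))       # equals [[-3, 1, 0]]: x2 is free
```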
16 The nullspace contains only x D 0 when A has 5 pivots. Also the column space is R5 ,
because we can solve Ax D b and every b is in the column space.
17 A D Π1
3
1  gives the plane x 3y z D 0; y and z are free variables. The
special solutions are .3; 1; 0/ and .1; 0; 1/.
" #
x
18 Fill in 12 then 4 then 1 to get the complete solution to x 3y z D 12: y D
z
" #
" #
" #
12
4
1
0 C y 1 C z 0 D x particular C x nullspace .
0
0
1
19 If LUx = 0, multiply by L^-1 to find Ux = 0. Then U and LU have the same nullspace.
20 Column 5 is sure to have no pivot since it is a combination of earlier columns. With 4 pivots in the other columns, the special solution is s = (1, 0, 1, 0, -1). The nullspace contains all multiples of this vector s (a line in R^5).
21 For special solutions (2, 2, 1, 0) and (3, 1, 0, 1) with free variables x3, x4: R = [1 0 -2 -3; 0 1 -2 -1] and A can be any invertible 2 by 2 matrix times this R.
22 The nullspace of A = [1 0 0 -4; 0 1 0 -3; 0 0 1 -2] is the line through (4, 3, 2, 1).
23 A = [1 0 -1/2; 1 3 -2; 5 1 -3] has (1, 1, 5) and (0, 3, 1) in C(A) and (1, 1, 2) in N(A). Which other A's?
This construction is impossible: 2 pivot columns and 2 free variables, only 3 columns.
#
"
1
1
0
0
0
1
0 has .1; 1; 1/ in C .A/ and only the line .c; c; c; c/ in N .A/.
AD 1
1
0
0
1
10
01
T
.
AD
has N .A/ D C .A/ and also (a)(b)(c) are all false. Notice rref.A / D
00
00
27 If nullspace D column space (with r pivots) then n
r D r. If n D 3 then 3 D 2r is
impossible.
28 If A times every column of B is zero,
the column space of B
is contained in the nullspace
1
1
1 1
. Here C .B/ equals N .A/.
and B D
of A. An example is A D
1 1
1
1
(For B D 0; C .B/ is smaller.)
29 For A D random 3 by 3 matrix, R is almost sure to be I . For 4 by 3, R is most likely
to be I with fourth row of zeros. What about a random 3 by 4 matrix?
31 If N(A) = line through x = (2, 1, 0, 1), A has three pivots (4 columns and 1 special solution). Its reduced echelon form can be R = [1 0 0 -2; 0 1 0 -1; 0 0 1 0] (add any zero rows).
Solutions to Exercises
31
32 Any zero rows come after these rows: R = [1 -3], R = [1 0; 0 1], R = I.
33 (a) [1 2; 0 0], [1 0; 0 0], [1 1; 0 0], [0 1; 0 0], [0 0; 0 0] (b) All 8 matrices are R's!
34 One reason that R is the same for A and -A: they have the same nullspace. They also have the same column space, but that is not required for two matrices to share the same R. (R tells us the nullspace and row space.)
35 The nullspace of B = [A A] contains all vectors x = [y; -y] for y in R^4.
36 If Cx = 0 then Ax = 0 and Bx = 0. So N(C) = N(A) ∩ N(B) = intersection.
37 Currents: y1 - y3 + y4 = -y1 + y2 + y5 = -y2 + y3 + y6 = -y4 - y5 - y6 = 0. These equations add to 0 = 0. Free variables y3, y5, y6: watch for flows around loops.
Problem Set 3.3, page 151
1 (a) and (c) are correct; (b) is completely false; (d) is false because R might have 1's in nonpivot columns.
2 A = [4 4 4; 4 4 4; 4 4 4] has R = [1 1 1; 0 0 0; 0 0 0]; the rank is r = 1. A = [1 2 3; 2 3 4; 3 4 5] has R = [1 0 -1; 0 1 2; 0 0 0]; the rank is r = 2. A = [1 1 1; 1 1 1; 1 1 1] has R = [1 1 1; 0 0 0; 0 0 0]; the rank is r = 1. Zero rows go to the bottom.
3 R_A = [1 2 0; 0 0 1; 0 0 0]; R_B = [R_A R_A] and R_C = [R_A R_A; 0 0].
4 If all pivot variables come last then R = [0 I; 0 0]. The nullspace matrix is N = [I; 0].
5 I think R1 = A1, R2 = A2 is true. But R1 - R2 may have -1's in some pivots.
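The R's in Problem 2 can be checked with a small exact row-reduction routine (a plain-Python sketch, not part of the manual; `fractions` avoids roundoff):

```python
from fractions import Fraction

def rref(M):
    # Reduce M to reduced row echelon form with exact arithmetic.
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c]*b for a, b in zip(M[i], M[r])]
        r += 1
    return [[int(x) if x.denominator == 1 else x for x in row] for row in M]

print(rref([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))
# -> [[1, 0, -1], [0, 1, 2], [0, 0, 0]]
```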
6 A and A^T have the same rank r = number of pivots. But pivcol (the column number) is 2 for this matrix A and 1 for A^T: A = [0 1 0; 0 0 0; 0 0 0].
7 Special solutions in N = [-2 -4 1 0; -3 -5 0 1]^T and N = [1 0 0; 0 -2 1]^T.
8 The new entries keep rank 1: A = [1 2 4; 2 4 8; 4 8 16]; B = [2 6 -3; 1 3 -3/2; 2 6 -3]; M = [a b; c bc/a].
9 If A has rank 1, the column space is a line in R^m. The nullspace is a plane in R^n (given by one equation). The nullspace matrix N is n by n - 1 (with n - 1 special solutions in its columns). The column space of A^T is a line in R^n.
10 [3 6 6; 1 2 2; 4 8 8] = [3; 1; 4][1 2 2] and [2 2 6 4; -1 -1 -3 -2] = [2; -1][1 1 3 2].
11 A rank one matrix has one pivot. (That pivot is in row 1 after a possible row exchange; it could come in any column.) The second row of U is zero.
12 Invertible r by r submatrices S (use the pivot rows and columns): S = [1 3; 1 4] and S = [1] and S = [1 0; 0 1].
13 P has rank r (the same as A) because elimination produces the same pivot columns.
14 The rank of P^T is also r. The example matrix A has rank 2 with invertible S: P = [1 3; 2 6; 2 7], P^T = [1 2 2; 3 6 7], S^T = [1 2; 3 7], S = [1 3; 2 7].
15 The product of rank one matrices has rank one or zero. These particular matrices have rank(AB) = 1; rank(AM) = 1 except AM = 0 if c = -1/2.
16 (uv^T)(wz^T) = u(v^T w)z^T has rank one unless the inner product is v^T w = 0.
17 (a) By matrix multiplication, each column of AB is A times the corresponding column of B. So if column j of B is a combination of earlier columns, then column j of AB is the same combination of earlier columns of AB. Then rank(AB) ≤ rank(B): no new pivot columns! (b) The rank of B is r = 1. Multiplying by A cannot increase this rank. The rank of AB stays the same for A1 = I and B = [1 1; -1 -1]. It drops to zero for A2 = [1 1; 1 1].
18 If we know that rank(B^T A^T) ≤ rank(A^T), then since rank stays the same for transposes (apologies that this fact is not yet proved), we have rank(AB) ≤ rank(A).
19 We are given AB = I, which has rank n. Then rank(AB) ≤ rank(A) forces rank(A) = n. This means that A is invertible. The right-inverse B is also a left-inverse: BA = I and B = A^-1.
20 Certainly A and B have at most rank 2. Then their product AB has at most rank 2. Since BA is 3 by 3, it cannot be I even if AB = I.
21 (a) A and B will both have the same nullspace and row space as the R they share. (b) A equals an invertible matrix times B, when they share the same R. A key fact!
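Problem 17(b)'s rank drop can be demonstrated directly (a plain-Python sketch, not part of the manual; the rank routine counts nonzero rows after exact forward elimination):

```python
from fractions import Fraction

def rank(M):
    # rank = number of pivots found by forward elimination (exact).
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f*b for a, b in zip(M[i], M[r])]
        r += 1
    return r

B = [[1, 1], [-1, -1]]
A2 = [[1, 1], [1, 1]]
A2B = [[sum(A2[i][k]*B[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
assert rank(B) == 1 and rank(A2B) == 0   # multiplying by A2 kills the rank
print("rank(B) =", rank(B), " rank(A2*B) =", rank(A2B))
```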
22 A = (pivot columns)(nonzero rows of R) = [1 0; 1 4; 1 8][1 1 0; 0 0 1] = [1 1 0; 1 1 0; 1 1 0] + [0 0 0; 0 0 4; 0 0 8]. B = (columns times rows) = [2 2; 2 3][1 0; 0 1] = [2 0; 2 0] + [0 2; 0 3].
23 If c = 1, R = [1 1 2 2; 0 0 0 0; 0 0 0 0] has x2, x3, x4 free. If c ≠ 1, R = [1 0 2 2; 0 1 0 0; 0 0 0 0] has x3, x4 free. Special solutions in N = [-1 -2 -2; 1 0 0; 0 1 0; 0 0 1] (for c = 1) and N = [-2 -2; 0 0; 1 0; 0 1] (for c ≠ 1). For the second matrix: if c = 1, R = [0 1; 0 0] and x1 is free; if c = 2, R = [1 2; 0 0] and x2 is free; R = I if c ≠ 1, 2. Special solutions in N = [1; 0] (c = 1) or N = [-2; 1] (c = 2) or N = 2 by 0 empty matrix.
24 A = [I I; I I] has N = [-I; I]; B = [I I; 0 0] has the same N; C = [I I I] has N = [-I -I; I 0; 0 I].
25 A = [1 1 2 4; 1 2 2 5; 1 3 2 6] = [1 1; 1 2; 1 3][1 0 2 3; 0 1 0 1] = (pivot columns) times R.
26 The m by n matrix Z has r ones to start its main diagonal. Otherwise Z is all zeros.
27 R = [I F; 0 0] = [(r by r) (r by n-r); (m-r by r) (m-r by n-r)]; rref(R^T) = [I 0; 0 0]; rref(R^T R) = the same R.
28 The row-column reduced echelon form is always [I 0; 0 0]; I is r by r.
Problem Set 3.4, page 163
"
# "
# "
#
2 4 6 4 b1
2 4 6 4 b1
2 4 6 4 b1
1 2 5 7 6 b2 ! 0 1 1 2 b2 b1 ! 0 1 1 2 b2 b1
2 3 5 2 b3
0 1 1 2 b3 b1
0 0 0 0 b3 C b2 2b1
Ax D b has a solution when b3 C b2 2b1 D 0; the column space contains all combinations of .2; 2; 2/ and .4; 5; 3/. This is the plane b3 Cb2 2b1 D 0 (!). The nullspace
contains all combinations of s1 D . 1; 1; 1; 0/ and s2 D .2; 2; 0; 1/I xcomplete D
xp C c1 s1 C c2 s2 I
R
"
1
d D 0
0
0 1
1 1
0 0
2
2
0
4
1
0
#
gives the particular solution xp D .4; 1; 0; 0/:
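Both special solutions and the solvability condition above can be verified mechanically (a plain-Python sketch, not part of the manual):

```python
# Problem 1: both special solutions must satisfy As = 0.
A = [[2, 4, 6, 4], [2, 5, 7, 6], [2, 3, 5, 2]]
for s in [(-1, -1, 1, 0), (2, -2, 0, 1)]:
    assert all(sum(r[i]*s[i] for i in range(4)) == 0 for r in A)

# Any b of the form A*x lies on the plane b3 + b2 - 2b1 = 0.
xp = (4, 1, 0, 0)
b = [sum(r[i]*xp[i] for i in range(4)) for r in A]
assert b[2] + b[1] - 2*b[0] == 0
print("special solutions and solvability condition verified")
```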
2 [2 1 3 b1; 6 3 9 b2; 4 2 6 b3] -> [2 1 3 b1; 0 0 0 b2-3b1; 0 0 0 b3-2b1]. Then [R d] = [1 1/2 3/2 5; 0 0 0 0; 0 0 0 0]. Ax = b has a solution when b2 - 3b1 = 0 and b3 - 2b1 = 0; C(A) = the line through (2, 6, 4), which is the intersection of the planes b2 - 3b1 = 0 and b3 - 2b1 = 0. The nullspace contains all combinations of s1 = (-1/2, 1, 0) and s2 = (-3/2, 0, 1); particular solution xp = d = (5, 0, 0) and complete solution xp + c1 s1 + c2 s2.
3 x_complete = (2, 0) + x2(-3, 1). The matrix is singular but the equations are still solvable; b is in the column space. Our particular solution has free variable y = 0.
4 x_complete = xp + xn = (1/2, 0, 1/2, 0) + x2(-3, 1, 0, 0) + x4(0, 0, -2, 1).
"
# "
#
1 2
2 b1
1 2
2 b1
4 b2 ! 0 1
0 b2 2b1
5 2 5
solvable if b3 2b1 b2 D 0.
4 9
8 b3
0 0
0 b3 2b1 b2
Back-substitution gives the particular solution to Ax D b and the special solution to
"
#
" #
5b1 2b2
2
Ax D 0: x D b2 2b1 C x3 0 .
0
1
5b1 2b3
6 (a) Solvable if b2 D 2b1 and 3b1 3b3 C b4 D 0. Then x D
D xp
b3 2b1
"
#
" #
5b1 2b3
1
(b) Solvable if b2 D 2b1 and 3b1 3b3 C b4 D 0. x D b3 2b1 C x3 1 .
0
1
"
# "
#
1 3 1 b1
1
3
1 b2
One more step gives Œ 0 0 0 0  D
1
1 b2 3b1 row 3 2 (row 2) C 4(row 1)
7 3 8 2 b2 ! 0
2 4 0 b3
0
2
2 b3 2b1 provided b3 2b2 C4b1 D0.
4 x
complete
8 (a) Every b is in C .A/: independent rows, only the zero combination gives 0.
(b) We need b3 D 2b2 , because .row 3/ 2.row 2/ D 0.
"
#"
# "
1
0 0
1 2 3 5 b1
1 2
1 0
0 0 2 2 b2 2b1
9 L U c D 2
D 2 4
3
1 1
0 0 0 0 b3 C b2 5b1
3 6
D A b ; particular x p D . 9; 0; 3; 0/ means 9.1; 2; 3/ C 3.3; 8; 7/
This is Ax p D b.
1 0
1
2
xD
has x p D .2; 4; 0/ and x null D .c; c; c/.
10
0 1
1
4
#
3 5 b1
8 12 b2
7 13 b3
D .0; 6; 6/.
11 A 1 by 3 system has at least two free variables. But x null in Problem 10 only has one.
12 (a) x 1 x 2 and 0 solve Ax D 0
(b) A.2x 1 2x 2 / D 0; A.2x 1 x 2 / D b
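The arithmetic in Problem 9 is quick to confirm (a plain-Python sketch, not part of the manual):

```python
# Problem 9: check A*xp = (0, 6, -6) for xp = (-9, 0, 3, 0).
A = [[1, 2, 3, 5], [2, 4, 8, 12], [3, 6, 7, 13]]
xp = [-9, 0, 3, 0]
print([sum(r[i]*xp[i] for i in range(4)) for r in A])  # -> [0, 6, -6]
```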
13 (a) The particular solution xp is always multiplied by 1. (b) Any solution can be xp. (c) [3 3; 3 3][x; y] = [6; 6]. Then [1; 1] is shorter (length sqrt(2)) than the particular solution [2; 0] (length 2). (d) The only "homogeneous" solution in the nullspace is xn = 0 when A is invertible.
14 If column 5 has no pivot, x5 is a free variable. The zero vector is not the only solution to Ax = 0. If this system Ax = b has a solution, it has infinitely many solutions.
15 If row 3 of U has no pivot, that is a zero row. Ux = c is only solvable provided c3 = 0. Ax = b might not be solvable, because U may have other zero rows needing more ci = 0.
16 The largest rank is 3. Then there is a pivot in every row. The solution always exists. The column space is R^3. An example is A = [I F] for any 3 by 2 matrix F.
17 The largest rank of a 6 by 4 matrix is 4. Then there is a pivot in every column. The solution is unique. The nullspace contains only the zero vector. An example is A = [I; F], the 4 by 4 identity above any 2 by 4 matrix F.
18 Rank = 2; rank = 3 unless q = 2 (then rank = 2). The transpose has the same rank!
19 Both matrices A have rank 2. Always A^T A and A A^T have the same rank as A.
"
#"
#
1 0 0
1 0
1
0
1 0 3
4 1 0
0 2
2
3 .
A D LU D
I A D LU 2 1 0
2 1 0
3 0 1
0 3 1
0 0 11
5
" # " #
" #
" #
" # " #
" #
x
4
1
1
x
4
1
1 Cz
0
0 . The second
(a) y D 0 C y
(b) y D 0 C z
z
0
0
1
z
0
1
equation in part (b) removed one special solution.
If Ax 1 D b and also Ax 2 D b then we can add x 1 x 2 to any solution of Ax D B:
the solution x is not unique. But there will be no solution to Ax D B if B is not in
the column space.
For A; q D 3 gives rank 1, every other q gives rank 2. For B; q D 6 gives rank 1, every
other q gives rank 2. These matrices cannot have rank 3.
24 (a) [1; 1][x1] = [b1; b2] has 0 or 1 solutions, depending on b. (b) [1 1][x1; x2] = [b] has infinitely many solutions for every b. (c) There are 0 or 1 solutions when A has rank r < m and r < n: the simplest example is a zero matrix. (d) One solution for all b when A is square and invertible (like A = I).
25 (a) r < m, always r ≤ n (b) r = m, r < n (c) r < m, r = n (d) r = m = n.
26 [2 4 4; 0 3 6; 0 0 0] -> R = [1 0 -2; 0 1 2; 0 0 0] and [2 4 4; 0 3 6; 0 0 5] -> R = I.
27 If U has n pivots, then R has n pivots equal to 1. Zeros above and below those pivots make R = I.
" # 2
1 2 0
1 2 3 5
1 2 3 0
1 2 0 0
1 ;
!
!
; xn D
28
0 0 1
0 0 4 8
0 0 4 0
0 0 1 0
0
Free x2 D 0 gives x p D . 1; 0; 2/ because the pivot columns contain I .
"
#
" #
"
1 0 0 0
0
1 0
29 Œ R d  D 0 0 1 0 leads to x n D 1 ; Œ R d  D 0 0
0 0 0 0
0
0 0
no solution because of the 3rd equation
1
.
2
0
1
0
#
1
2 :
5
30 x = (4, 3, 0, 0) + x3(-2, -2, 1, 0).
31 For A = [1 1; 0 2; 0 3], the only solution to Ax = (1, 2, 3) is x = (0, 1). B cannot exist, since 2 equations in 3 unknowns cannot have a unique solution.
32 A = [1 3 1; 1 2 3; 2 4 6; 1 1 5] factors into LU = [1 0 0 0; 1 1 0 0; 2 2 1 0; 1 2 0 1][1 3 1; 0 -1 2; 0 0 0; 0 0 0] and the rank is r = 2. The special solution to Ax = 0 and Ux = 0 is s = (-7, 2, 1). Since b = (1, 3, 6, 5) is also the last column of A, a particular solution to Ax = b is (0, 0, 1), and the complete solution is x = (0, 0, 1) + cs. (Or use the particular solution xp = (7, -2, 0) with free variable x3 = 0.) For b = (1, 0, 0, 0), elimination leads to Ux = (1, -1, 0, 1) and the fourth equation is 0 = 1: no solution for this b. Also [A b] = [1 0 2 3 2; 1 3 2 0 5; 2 0 4 9 10] -> [1 0 2 3 2; 0 3 0 -3 3; 0 0 0 3 6] -> [R d] = [1 0 2 0 -4; 0 1 0 0 3; 0 0 0 1 2].
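The LU factorization and the special solution above can be multiplied back out as a check (a plain-Python sketch, not part of the manual):

```python
# Problem 32: multiply L times U and confirm the special solution s.
L = [[1, 0, 0, 0], [1, 1, 0, 0], [2, 2, 1, 0], [1, 2, 0, 1]]
U = [[1, 3, 1], [0, -1, 2], [0, 0, 0], [0, 0, 0]]
A = [[sum(L[i][k]*U[k][j] for k in range(4)) for j in range(3)]
     for i in range(4)]
assert A == [[1, 3, 1], [1, 2, 3], [2, 4, 6], [1, 1, 5]]
s = [-7, 2, 1]
assert all(sum(row[j]*s[j] for j in range(3)) == 0 for row in A)
print("LU product and special solution check out")
```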
33 If the complete solution to Ax = [1; 3] is x = [1; 0] + c[0; 1], then A = [1 0; 3 0].
34 (a) If s = (2, 3, 1, 0) is the only special solution to Ax = 0, the complete solution is x = cs (a line of solutions!). The rank of A must be 4 - 1 = 3. (b) The fourth variable x4 is not free in s, and R must be [1 0 -2 0; 0 1 -3 0; 0 0 0 1]. (c) Ax = b can be solved for all b, because A and R have full row rank r = 3.
35 For the -1, 2, -1 matrix K (9 by 9) and constant right side b = (10, ..., 10), the solution x = K^-1 b = (45, 80, 105, 120, 125, 120, 105, 80, 45) rises and falls along the parabola xi = 50i - 5i^2. (A formula for K^-1 appears later in the text.)
36 If Ax = b and Cx = b have the same solutions, A and C have the same shape and the same nullspace (take b = 0). If b = column 1 of A, then x = (1, 0, ..., 0) solves Ax = b so it solves Cx = b. Then A and C share column 1. Other columns too: A = C!
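Problem 35's parabola can be pushed back through the -1, 2, -1 equations (a plain-Python sketch, not part of the manual):

```python
# Problem 35: x_i = 50i - 5i^2 solves K x = (10, ..., 10) for the
# 9 by 9 second-difference matrix K (diagonals -1, 2, -1).
x = [50*i - 5*i*i for i in range(1, 10)]
assert x == [45, 80, 105, 120, 125, 120, 105, 80, 45]
assert 2*x[0] - x[1] == 10 and 2*x[8] - x[7] == 10        # boundary rows
assert all(-x[i-1] + 2*x[i] - x[i+1] == 10 for i in range(1, 8))
print("K x = 10*ones verified")
```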
Problem Set 3.5, page 178
1 [1 1 1; 0 1 1; 0 0 1][c1; c2; c3] = 0 gives c3 = c2 = c1 = 0. So those 3 column vectors are independent. But [1 1 1 2; 0 1 1 3; 0 0 1 4][c] = [0; 0; 0] is solved by c = (1, 1, -4, 1). Then v1 + v2 - 4v3 + v4 = 0 (dependent).
2 v1, v2, v3 are independent (the 1's are in different positions). All six vectors are on the plane (1, 1, 1, 1) · v = 0, so no four of these six vectors can be independent.
3 If a = 0 then column 1 = 0; if d = 0 then b(column 1) - a(column 2) = 0; if f = 0 then all columns end in zero (they are all in the xy plane, so they must be dependent).
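The dependency in Problem 1 is a one-line check (a plain-Python sketch, not part of the manual):

```python
# Problem 1: the combination v1 + v2 - 4 v3 + v4 really is zero.
v1, v2, v3, v4 = (1, 0, 0), (1, 1, 0), (1, 1, 1), (2, 3, 4)
combo = [a + b - 4*c + d for a, b, c, d in zip(v1, v2, v3, v4)]
print(combo)  # -> [0, 0, 0]
```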
"
#" #
" #
a b c
x
0
y D 0 gives z D 0 then y D 0 then x D 0. A square
4 Ux D 0 d e
0 0 f
z
0
triangular matrix has independent columns (invertible matrix) when its diagonal has no
zeros.
"
# "
# "
#
1 2 3
1
2
3
1
2
3
5
7 ! 0
5
7 : invertible ) independent
5 (a) 3 1 2 ! 0
2 3 1
0
1
5
0
0
18=5
columns.
"
#
"
#
"
# " # " #
1
2
3
1
2
3
1 2
3
1
0
3
1
2 ! 0
7
7 ! 0 7
7 I A 1 D 0 , columns
(b)
2
3
1
0
7
7
0 0
0
1
0
add to 0.
6 Columns 1, 2, 4 are independent. Also 1, 3, 4 and 2, 3, 4 and others (but not 1, 2, 3).
Same column numbers (not same columns!) for A.
7 The sum v1 - v2 + v3 = 0 because (w2 - w3) - (w1 - w3) + (w1 - w2) = 0. So the differences are dependent and the difference matrix is singular: A = [0 1 -1; 1 0 -1; 1 -1 0].
8 If c1(w2 + w3) + c2(w1 + w3) + c3(w1 + w2) = 0, then (c2 + c3)w1 + (c1 + c3)w2 + (c1 + c2)w3 = 0. Since the w's are independent, c2 + c3 = c1 + c3 = c1 + c2 = 0. The only solution is c1 = c2 = c3 = 0. Only this combination of v1, v2, v3 gives 0.
9 (a) The four vectors in R^3 are the columns of a 3 by 4 matrix A. There is a nonzero solution to Ax = 0 because there is at least one free variable. (b) Two vectors are dependent if [v1 v2] has rank 0 or 1. (OK to say "they are on the same line" or "one is a multiple of the other", but not "v2 is a multiple of v1", since v1 might be 0.) (c) A nontrivial combination of v1 and 0 gives 0: 0 v1 + 3(0, 0, 0) = 0.
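Problem 7's identity holds for any w's; the snippet below tries one concrete (hypothetical) choice of independent w's (a plain-Python sketch, not part of the manual):

```python
# Problem 7: v1 = w2 - w3, v2 = w1 - w3, v3 = w1 - w2
# always satisfy v1 - v2 + v3 = 0.  The w's here are an assumption.
w1, w2, w3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
v1 = tuple(a - b for a, b in zip(w2, w3))
v2 = tuple(a - b for a, b in zip(w1, w3))
v3 = tuple(a - b for a, b in zip(w1, w2))
print(tuple(a - b + c for a, b, c in zip(v1, v2, v3)))  # -> (0, 0, 0)
```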
10 The plane is the nullspace of A = [1 2 -3 -1]. Three free variables give three solutions (x, y, z, t) = (-2, 1, 0, 0) and (3, 0, 1, 0) and (1, 0, 0, 1). Combinations of those special solutions give more solutions (all solutions).
11 (a) Line in R^3 (b) Plane in R^3 (c) All of R^3 (d) All of R^3.
12 b is in the column space when Ax = b has a solution; c is in the row space when A^T y = c has a solution. False: the zero vector is always in the row space.
13 The column space and row space of A and U all have the same dimension = 2. The row spaces of A and U are the same, because the rows of U are combinations of the rows of A (and vice versa!).
14 v = (1/2)(v + w) + (1/2)(v - w) and w = (1/2)(v + w) - (1/2)(v - w). The two pairs span the same space. They are a basis when v and w are independent.
15 The n independent vectors span a space of dimension n. They are a basis for that space. If they are the columns of A, then m is not less than n (m ≥ n).
16 These bases are not unique! (a) (1, 1, 1, 1) for the space of all constant vectors (c, c, c, c) (b) (1, -1, 0, 0), (1, 0, -1, 0), (1, 0, 0, -1) for the space of vectors with sum of components = 0 (c) (1, -1, -1, 0), (1, -1, 0, -1) for the space perpendicular to (1, 1, 0, 0) and (1, 0, 1, 1) (d) The columns of I are a basis for its column space; the empty set is a basis (by convention) for N(I) = {zero vector}.
17 The column space of U = [1 0 1 0 1; 0 1 0 1 0] is R^2, so take any basis for R^2; (row 1 and row 2) or (row 1 and row 1 + row 2) are bases for the row space of U.
18 (a) The 6 vectors might not span R^4 (b) The 6 vectors are not independent (c) Any four might be a basis.
19 n independent columns => rank n. Columns span R^m => rank m. Columns are a basis for R^m => rank = m = n. The rank counts the number of independent columns.
20 One basis is (2, 1, 0), (-3, 0, 1). A basis for the intersection with the xy plane is (2, 1, 0). The normal vector (1, -2, 3) is a basis for the line perpendicular to the plane.
21 (a) The only solution to Ax = 0 is x = 0 because the columns are independent. (b) Ax = b is solvable because the columns span R^5. Key point: a basis gives exactly one solution for every b.
22 (a) True (b) False, because the basis vectors for R^6 might not be in S.
23 Columns 1 and 2 are bases for the (different) column spaces of A and U; rows 1 and 2 are bases for the (equal) row spaces of A and U; (1, 1, 1) is a basis for the (equal) nullspaces.
24 (a) False: A = [1 1] has dependent columns, independent row. (b) False: column space ≠ row space for A = [0 1; 0 0]. (c) True: both dimensions = 2 if A is invertible, dimensions = 0 if A = 0, otherwise dimensions = 1. (d) False: the columns may be dependent, and in that case they are not a basis for C(A).
25 A has rank 2 if c = 0 and d = 2; B = [c d; d c] has rank 2 except when c = d or c = -d.
26 (a) [1 0 0; 0 0 0; 0 0 0], [0 0 0; 0 1 0; 0 0 0], [0 0 0; 0 0 0; 0 0 1] (b) Add [0 1 0; 1 0 0; 0 0 0], [0 0 1; 0 0 0; 1 0 0], [0 0 0; 0 0 1; 0 1 0] (c) [0 1 0; -1 0 0; 0 0 0], [0 0 1; 0 0 0; -1 0 0], [0 0 0; 0 0 1; 0 -1 0]. These are simple bases (among many others) for (a) diagonal matrices (b) symmetric matrices (c) skew-symmetric matrices. The dimensions are 3, 6, 3.
27 I, [1 0 0; 0 1 0; 0 0 2], [1 0 0; 0 2 0; 0 0 1], [1 1 0; 0 1 0; 0 0 1], [1 0 1; 0 1 0; 0 0 1], [1 0 0; 0 1 1; 0 0 1]; echelon matrices do not form a subspace; they span the upper triangular matrices (not every U is echelon).
28 [1 0 0; -1 0 0], [0 1 0; 0 -1 0], [0 0 1; 0 0 -1]; and [1 -1 0; -1 1 0], [1 0 -1; -1 0 1].
29 (a) The invertible matrices span the space of all 3 by 3 matrices. (b) The rank one matrices also span the space of all 3 by 3 matrices. (c) I by itself spans the space of all multiples cI.
30 [1 -2 0; 0 0 0], [1 0 -2; 0 0 0], [0 0 0; 1 -2 0], [0 0 0; 1 0 -2]: one basis for the 2 by 3 matrices with (2, 1, 1) in their nullspace (a 4-dimensional subspace).
31 (a) y(x) = constant C (b) y(x) = 3x (c) y(x) = 3x + C = yp + yn solves dy/dx = 3.
32 y(0) = 0 requires A + B + C = 0. One basis is cos x - cos 2x and cos x - cos 3x.
33 (a) y(x) = e^(2x) is a basis for all solutions to y' = 2y. (b) y = x is a basis for all solutions to dy/dx = y/x. (A first-order linear equation gives 1 basis function in the solution space.)
34 y1(x), y2(x), y3(x) can be x, 2x, 3x (dim 1) or x, 2x, x^2 (dim 2) or x, x^2, x^3 (dim 3).
35 Basis 1, x, x^2, x^3 for cubic polynomials; basis x - 1, x^2 - 1, x^3 - 1 for the subspace with p(1) = 0.
36 Basis for S: (1, 0, 1, 0), (0, 1, 0, 0), (1, 0, 0, 1); basis for T: (1, 1, 0, 0) and (0, 0, 2, 1). S ∩ T = multiples of (3, 3, 2, 1) = nullspace of 3 equations in R^4; it has dimension 1.
37 The subspace of matrices with AS = SA has dimension three.
38 (a) No, 2 vectors don't span R^3. (b) No, 4 vectors in R^3 are dependent. (c) Yes, a basis. (d) No, these three vectors are dependent.
39 If the 5 by 5 matrix [A b] is invertible, b is not a combination of the columns of A. If [A b] is singular and the 4 columns of A are independent, b is a combination of those columns. In this case Ax = b has a solution.
40 (a) The functions y = sin x, y = cos x, y = e^x, y = e^(-x) are a basis for the solutions to d^4y/dx^4 = y(x). (b) A particular solution to d^4y/dx^4 = y(x) + 1 is y(x) = -1. The complete solution is y(x) = -1 + c1 sin x + c2 cos x + c3 e^x + c4 e^(-x) (or use another basis for the nullspace of the 4th derivative).
41 I = [0 1 0; 1 0 0; 0 0 1] - [0 0 1; 1 0 0; 0 1 0] + [0 0 1; 0 1 0; 1 0 0] + [1 0 0; 0 0 1; 0 1 0] - [0 1 0; 0 0 1; 1 0 0]: the six P's are dependent. Those five are independent: the 4th has P11 = 1 and cannot be a combination of the others. Then the 2nd cannot be (from P32 = 1) and also the 5th (P32 = 1). Continuing, a nonzero combination of all five could not be zero. Further challenge: how many independent 4 by 4 permutation matrices?
42 The dimension of the space S spanned by all rearrangements of x is (a) zero when x = 0 (b) one when x = (1, 1, 1, 1) (c) three when x = (1, 1, -1, -1), because all rearrangements of this x are perpendicular to (1, 1, 1, 1) (d) four when the components of x are not all equal and don't add to zero. No x gives dim S = 2. I owe this nice problem to Mike Artin; the answers are the same in higher dimensions: 0, 1, n - 1, n.
43 The problem is to show that the u's, v's, w's together are independent. We know the u's and v's together are a basis for V, and the u's and w's together are a basis for W. Suppose a combination of u's, v's, w's gives 0. To be proved: all coefficients = zero. Key idea: in that combination giving 0, the part x from the u's and v's is in V. So the part from the w's is -x. This part is now in V and also in W. But if -x is in V ∩ W, it is a combination of u's only. Now the combination uses only u's and v's (independent in V!), so all coefficients of u's and v's must be zero. Then x = 0 and the coefficients of the w's are also zero.
44 The inputs to an m by n matrix fill R^n. The outputs (column space!) have dimension r. The nullspace has n - r special solutions. The formula becomes r + (n - r) = n.
45 If the left side of dim(V) + dim(W) = dim(V ∩ W) + dim(V + W) is greater than n, then dim(V ∩ W) must be greater than zero. So V ∩ W contains nonzero vectors.
46 If A^2 = zero matrix, this says that each column of A is in the nullspace of A. If the column space has dimension r, the nullspace has dimension 10 - r. We must have r ≤ 10 - r, so r ≤ 5.
Problem Set 3.6, page 190
1 (a) Row and column space dimensions = 5, nullspace dimension = 4, dim(N(A^T)) = 2; sum = 16 = m + n. (b) Column space is R^3; the left nullspace contains only 0.
2 A: row space basis = row 1 = (1, 2, 4); nullspace (-2, 1, 0) and (-4, 0, 1); column space basis = column 1 = (1, 2); left nullspace (-2, 1). B: row space basis = both rows = (1, 2, 4) and (2, 5, 8); column space basis = two columns = (1, 2) and (2, 5); nullspace (-4, 0, 1); left nullspace basis is empty because the space contains only y = 0.
3 Row space basis = rows of U = (0, 1, 2, 3, 4) and (0, 0, 0, 1, 2); column space basis = pivot columns (of A, not U) = (1, 1, 0) and (3, 4, 1); nullspace basis (1, 0, 0, 0, 0), (0, -2, 1, 0, 0), (0, 2, 0, -2, 1); left nullspace (1, 1, 1) = last row of E^-1!
4 (a) [1 0; 1 0; 0 1] (b) Impossible: r + (n - r) must be 3 (c) [1 1] (d) [9 3; 3 1] (e) Impossible: row space = column space requires m = n. Then m - r = n - r; the nullspaces have the same dimension. Section 4.1 will prove N(A) and N(A^T) orthogonal to the row and column spaces respectively; here those are the same space.
5 A = [1 1 1; 2 1 0] has those rows spanning its row space; B = [1 -2 1] has the same rows spanning its nullspace, and BA^T = 0.
6 A: dim 2, 2, 2, 1. Rows (0, 3, 3, 3) and (0, 1, 0, 1); columns (3, 0, 1) and (3, 0, 0); nullspace (1, 0, 0, 0) and (0, -1, 0, 1); N(A^T) (0, 1, 0). B: dim 1, 1, 0, 2. Row space (1), column space (1, 4, 5), nullspace: empty basis, N(B^T) (-4, 1, 0) and (-5, 0, 1).
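The nullspace basis in Problem 3 can be checked against U (a plain-Python sketch, not part of the manual):

```python
# Problem 3: the three special solutions are in the nullspace of U.
U = [[0, 1, 2, 3, 4], [0, 0, 0, 1, 2]]
for s in [(1, 0, 0, 0, 0), (0, -2, 1, 0, 0), (0, 2, 0, -2, 1)]:
    assert all(sum(r[i]*s[i] for i in range(5)) == 0 for r in U)
print("nullspace basis verified")
```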
7 Invertible 3 by 3 matrix A: row space basis = column space basis = (1, 0, 0), (0, 1, 0), (0, 0, 1); the nullspace basis and left nullspace basis are empty. Matrix B = [A A]: row space basis (1, 0, 0, 1, 0, 0), (0, 1, 0, 0, 1, 0), (0, 0, 1, 0, 0, 1); column space basis (1, 0, 0), (0, 1, 0), (0, 0, 1); nullspace basis (-1, 0, 0, 1, 0, 0), (0, -1, 0, 0, 1, 0), (0, 0, -1, 0, 0, 1); left nullspace basis is empty.
8 [I 0] and [I I; 0 0] and 0 = 3 by 2 have row space dimensions = 3, 3, 0 = column space dimensions; nullspace dimensions 2, 3, 2; left nullspace dimensions 0, 2, 3.
9 (a) Same row space and nullspace, so the rank (dimension of the row space) is the same. (b) Same column space and left nullspace, so the same rank (dimension of the column space).
10 For rand(3), almost surely rank = 3; the nullspace and left nullspace contain only (0, 0, 0). For rand(3, 5) the rank is almost surely 3 and the dimension of the nullspace is 2.
11 (a) No solution means that r < m. Always r ≤ n. We can't compare m and n here. (b) Since m - r > 0, the left nullspace must contain a nonzero vector.
"
#
"
#
1 1 2 2 1
1 0 1
A neat choice is 0 2
D 2 4 0 ; r C .n r/ D n D 3 does
1 2 0
1 0
1 0 1
not match 2 C 2 D 4. Only v D 0 is in both N .A/ and C .AT /.
(a) False: Usually row space ¤ column space (same dimension!) (b) True: A and A
have the same four subspaces (c) False (choose A and B same size and invertible: then
they have the same four subspaces)
Row space basis can be the nonzero rows of U : .1; 2; 3; 4/, .0; 1; 2; 3/, .0; 0; 1; 2/;
nullspace basis .0; 1; 2; 1/ as for U ; column space basis .1; 0; 0/, .0; 1; 0/, .0; 0; 1/
(happen to have C.A/ D C.U / D R3 ); left nullspace has empty basis.
After a row exchange, the row space and nullspace stay the same; .2; 1; 3; 4/ is in the
new left nullspace after the row exchange.
If Av D 0 and v is a row of A then v v D 0.
Row space D yz plane; column space D xy plane; nullspace D x axis; left nullspace
D z axis. For I C A: Row space D column space D R3 , both nullspaces contain only
the zero vector.
Row 3 2 row 2C row 1 D zero row so the vectors c.1; 2; 1/ are in the left nullspace.
The same vectors happen to be in the nullspace (an accident for this matrix).
(a) Elimination on Ax D 0 leads to 0 D b3 b2 b1 so . 1; 1; 1/ is in the left
nullspace. (b) 4 by 3: Elimination leads to b3 2b1 D 0 and b4 C b2 4b1 D 0, so
. 2; 0; 1; 0/ and . 4; 1; 0; 1/ are in the left nullspace. Why? Those vectors multiply the
matrix to give zero rows. Section 4.1 will show another approach: Ax D b is solvable
(b is in C .A/) when b is orthogonal to the left nullspace.
(a) Special solutions . 1; 2; 0; 0/ and . 14 ; 0; 3; 1/ are perpendicular to the rows of
R (and then ER).
(b) AT y D 0 has 1 independent solution D last row of E 1 .
(E 1 A D R has a zero row, which is just the transpose of AT y D 0).
(a) u and w
(b) v and z
(c) rank < 2 if u and w are dependent or if v and z
are dependent
(d) The rank of uvT C wzT is 2.
"
#
" 3 2 # has column space spanned
1 2 T
1 0
T
AD u w v z D 2 2
D 4 2 by u and w, row space
1 1
4 1
5 1 spanned by v and z:
Solutions to Exercises
42
23 As in Problem 22: Row space basis .3; 0; 3/; .1; 1; 2/; column space basis .1; 4; 2/,
24
25
26
27
28
29
30
31
32
.2; 5; 7/; the rank of (3 by 2) times (2 by 3) cannot be larger than the rank of either
factor, so rank 2 and the 3 by 3 product is not invertible.
AT y D d puts d in the row space of A; unique solution if the left nullspace (nullspace
of AT ) contains only y D 0.
(a) True (A and AT have the same rank)
(b) False A D Œ 1 0  and AT have very
different left nullspaces
(c) False (A can be invertible and unsymmetric even if
C .A/ D C .AT /)
(d) True (The subspaces for A and A are always the same. If
AT D A or AT D A they are also the same for AT )
The rows of C D AB are combinations of the rows of B. So rank C rank B. Also
rank C rank A, because the columns of C are combinations of the columns of A.
Choose d D bc=a to make ac db a rank-1 matrix. Then the row space has basis .a; b/
and the nullspace has basis . b; a/. Those two vectors are perpendicular !
B and C (checkers and chess) both have rank 2 if p ¤ 0. Row 1 and 2 are a basis for the
row space of C , B T y D 0 has 6 special solutions with 1 and 1 separated by a zero;
N.C T / has . 1; 0; 0; 0; 0; 0; 0; 1/ and .0; 1; 0; 0; 0; 0; 1; 0/ and columns 3; 4; 5; 6 of
I ; N.C / is a challenge.
a11 D 1; a12 D 0; a13 D 1; a22 D 0; a32 D 1; a31 D 0; a23 D 1; a33 D 0; a21 D 1.
The subspaces for A D uvT are pairs of orthogonal lines (v and v? , u and u? ).
If B has those same four subspaces then B D cA with c ¤ 0.
(a) AX D 0 if each column of X is a multiple of .1; 1; 1/; dim.nullspace/ D 3.
(b) If AX D B then all columns of B add to zero; dimension of the B’s D 6.
(c) 3 C 6 D dim.M 33 / D 9 entries in a 3 by 3 matrix.
The key is equal row spaces. First row of A D combination of the rows of B: only
possible combination (notice I ) is 1 (row 1 of B). Same for each row so F D G.
Problem Set 4.1, page 202
1 Both nullspace vectors are orthogonal to the row space vector in R^3. The column space is perpendicular to the nullspace of A^T (two lines in R^2, because rank = 1).
2 The nullspace of a 3 by 2 matrix with rank 2 is Z (only the zero vector), so xn = 0 and row space = R^2. Column space = plane perpendicular to left nullspace = line in R^3.
3 (a) A = [1 2 -3] (b) Impossible: (2, -3, 5) is not orthogonal to (1, 1, 1) (c) (1, 2, 3) in C(A) and (1, 0, 0) in N(A^T) is impossible: not perpendicular (d) Need A^2 = 0; take A = [1 1; -1 -1] (e) (1, 1, 1) in the nullspace (columns add to 0) and also in the row space: no such matrix.
4 If AB = 0, the columns of B are in the nullspace of A. The rows of A are in the left nullspace of B. If rank = 2, those four subspaces would have dimension 2, which is impossible for 3 by 3.
5 (a) If Ax = b has a solution and A^T y = 0, then y is perpendicular to b: b^T y = (Ax)^T y = x^T (A^T y) = 0. (b) If A^T y = (1, 1, 1) has a solution, (1, 1, 1) is in the row space and is orthogonal to every x in the nullspace.
6 Multiply the equations by y1, y2, y3 = 1, 1, -1. The equations add to 0 = 1, so there is no solution: y = (1, 1, -1) is in the left nullspace. Ax = b would need 0 = (y^T A)x = y^T b = 1.
7 Multiply the 3 equations by y = (1, 1, -1). Then x1 - x2 = 1 plus x2 - x3 = 1 minus x1 - x3 = 1 is 0 = 1. Key point: this y in N(A^T) is not orthogonal to b = (1, 1, 1), so b is not in the column space and Ax = b has no solution.
8 x = xr + xn, where xr is in the row space and xn is in the nullspace. Then Axn = 0 and Ax = Axr + Axn = Axr. All vectors Ax are in C(A).
9 Ax is always in the column space of A. If A^T Ax = 0, then Ax is also in the nullspace of A^T. So Ax is perpendicular to itself. Conclusion: Ax = 0 if A^T Ax = 0.
10 (a) With A^T = A, the column and row spaces are the same. (b) x is in the nullspace and z is in the column space = row space: so these "eigenvectors" have x^T z = 0.
11 For A: the nullspace is spanned by (-2, 1), the row space is spanned by (1, 2). The column space is the line through (1, 3) and N(A^T) is the perpendicular line through (3, -1). For B: the nullspace is spanned by (0, 1), the row space is spanned by (1, 0). The column space and left nullspace are the same as for A.
12 x splits into xr + xn = (1, -1) + (1, 1) = (2, 0). Notice N(A^T) is a plane; (1, 0) = (1, -1)/2 + (1, 1)/2 = xr + xn.
13 V^T W = zero matrix makes each basis vector for V orthogonal to each basis vector for W. Then every v in V is orthogonal to every w in W (combinations of the basis vectors).
14 Ax = Bx̂ means that [A B][x; -x̂] = 0. Three homogeneous equations in four unknowns always have a nonzero solution. Here x = (3, 1) and x̂ = (1, 0), and Ax = Bx̂ = (5, 6, 5) is in both column spaces. Two planes in R^3 must share a line.
15 A p-dimensional and a q-dimensional subspace of R^n share at least a line if p + q > n. (The p + q basis vectors of V and W cannot be independent.)
16 A^T y = 0 leads to (Ax)^T y = x^T A^T y = 0. Then y ⊥ Ax and N(A^T) ⊥ C(A).
17 If S is the subspace of R^3 containing only the zero vector, then S⊥ is R^3. If S is spanned by (1, 1, 1), then S⊥ is the plane spanned by (1, -1, 0) and (1, 0, -1). If S is spanned by (2, 0, 0) and (0, 0, 3), then S⊥ is the line spanned by (0, 1, 0).
18 S⊥ is the nullspace of A = [1 5 1; 2 2 2]. Therefore S⊥ is a subspace even if S is not.
19 L⊥ is the 2-dimensional subspace (a plane) in R^3 perpendicular to L. Then (L⊥)⊥ is a 1-dimensional subspace (a line) perpendicular to L⊥. In fact (L⊥)⊥ is L.
20 If V is the whole space R^4, then V⊥ contains only the zero vector. Then (V⊥)⊥ = R^4 = V.
21 For example (-5, 0, 1, 1) and (0, 1, -1, 0) span S⊥ = the nullspace of A = [1 2 2 3; 1 3 3 2].
22 (1, 1, 1, 1) is a basis for P⊥. A = [1 1 1 1] has P as its nullspace and P⊥ as its row space.
23 x in V⊥ is perpendicular to any vector in V. Since V contains all the vectors in S, x is also perpendicular to any vector in S. So every x in V⊥ is also in S⊥.
24 AA^-1 = I: column 1 of A^-1 is orthogonal to the space spanned by the 2nd, 3rd, ..., nth rows of A.
25 If the columns of A are unit vectors, all mutually perpendicular, then A^T A = I.
26 A = [2 2 -1; -1 2 2; 2 -1 2] shows a matrix with perpendicular columns. A^T A = 9I is diagonal: (A^T A)ij = (column i of A) · (column j of A). When the columns are unit vectors, then A^T A = I.
27 The lines 3x + y = b1 and 6x + 2y = b2 are parallel. They are the same line if b2 = 2b1. In that case (b1, b2) is perpendicular to (-2, 1). The nullspace of the 2 by 2 matrix is the line 3x + y = 0. One particular vector in that nullspace is (-1, 3).
28 (a) (1, 1, 0) is in both planes. The normal vectors are perpendicular, but the planes still intersect! (b) We need three orthogonal vectors to span the whole orthogonal complement. (c) Lines can meet at the zero vector without being orthogonal.
"
#
"
#
1 2 3
1
1
1
A has v D .1; 2; 3/ in row space and column space
1
0 ; B has v in its column space and nullspace.
29 A D 2 1 0 ; B D 2
3 0 1
3
0
1
v can not be in the nullspace and row space, or in
the left nullspace and column space. These spaces are orthogonal and vT v ¤ 0.
30 When AB = 0, the column space of B is contained in the nullspace of A. Therefore
the dimension of C(B) ≤ dimension of N(A). This means rank(B) ≤ 4 - rank(A).
31 null.N / produces a basis for the row space of A (perpendicular to N.A/).
32 We need r T n D 0 and c T ` D 0. All possible examples have the form acr T with a ¤ 0.
33 Both r’s orthogonal to both n’s, both c’s orthogonal to both `’s, each pair independent.
All A’s with these subspaces have the form Œc 1 c 2 M Œr 1 r 2 T for a 2 by 2 invertible M .
Problem Set 4.2, page 214
1 (a) a^T b / a^T a = 5/3; p = 5a/3; e = (-2, 1, 1)/3 (b) a^T b / a^T a = 1; p = a; e = 0.
2 (a) The projection of b = (cos θ, sin θ) onto a = (1, 0) is p = (cos θ, 0)
(b) The projection of b = (1, 1) onto a = (1, -1) is p = (0, 0) since a^T b = 0.
"
#
" #
"
#
" #
1
1 1 1 1
1 5
1 1 3 1
1 1 1 and P1 b D
5 . P2 D
3 9 3 and P2 b D 3 .
3 P1 D
3 1 1 1
3 5
11 1 3 1
4 P1 = [1 0; 0 0] projects onto (1, 0); P2 = (1/2)[1 1; 1 1] projects onto (1, 1).
P1 P2 ≠ 0 and P1 + P2 is not a projection matrix.
"
#
"
#
1
2
2
4
4
2
1
1
2
4
4 , P2 D
4
4
2 . P1 and P2 are the projection
5 P1 D
9
9
2
4
4
2
2
1
matrices onto the lines through a1 D . 1; 2; 2/ and a2 D .2; 2; 1/ P1 P2 D zero
matrix because a1 ? a2 .
6 p1 = (1/9, -2/9, -2/9) and p2 = (4/9, 4/9, -2/9) and p3 = (4/9, -2/9, 4/9).
So p1 + p2 + p3 = b = (1, 0, 0).
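The three projections in Problems 5-6 can be checked numerically. Below is a short plain-Python sketch (exact fractions) of the rank-one projection formula p = (a^T b / a^T a) a with the vectors from these solutions:

```python
from fractions import Fraction

def proj(a, b):
    # p = (a.b / a.a) a : projection of b onto the line through a
    c = Fraction(sum(x * y for x, y in zip(a, b)), sum(x * x for x in a))
    return [c * x for x in a]

a1, a2, a3 = (-1, 2, 2), (2, 2, -1), (2, -1, 2)   # mutually perpendicular axes
b = (1, 0, 0)
p1, p2, p3 = proj(a1, b), proj(a2, b), proj(a3, b)
total = [x + y + z for x, y, z in zip(p1, p2, p3)]
print(total)  # the three projections add back to b = (1, 0, 0)
```

Adding the three 1D projections recovers b exactly because a1, a2, a3 are orthogonal — the point of Problem 7.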
7 P1 + P2 + P3 = (1/9)[1 -2 -2; -2 4 4; -2 4 4] + (1/9)[4 4 -2; 4 4 -2; -2 -2 1]
+ (1/9)[4 -2 4; -2 1 -2; 4 -2 4] = I.
We can add projections onto orthogonal vectors. This is important.
8 The projections of b = (1, 1) onto (1, 0) and (1, 2) are p1 = (1, 0) and p2 = (0.6, 1.2).
Then p1 + p2 ≠ b.
9 Since A is invertible, P = A(A^T A)^-1 A^T = A A^-1 (A^T)^-1 A^T = I: project on all of R^2.
10 P2 = [0.2 0.4; 0.4 0.8], P2 a1 = [0.2; 0.4], P1 = [1 0; 0 0], P1 P2 a1 = [0.2; 0].
This is not a1 = (1, 0). No, P1 P2 ≠ (P1 P2)^2.
11 (a) p = A(A^T A)^-1 A^T b = (2, 3, 0), e = (0, 0, 4), A^T e = 0 (b) p = (4, 4, 6), e = 0.
12 P1 = [1 0 0; 0 1 0; 0 0 0] = projection matrix onto the column space of A (the xy plane).
P2 = [0.5 0.5 0; 0.5 0.5 0; 0 0 1] = projection matrix onto the second column space.
Certainly (P2)^2 = P2.
13 A = [1 0 0; 0 1 0; 0 0 1; 0 0 0], P = square matrix = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 0],
p = P[1; 2; 3; 4] = [1; 2; 3; 0].
14 The projection of this b onto the column space of A is b itself when b is in that space.
But P is not necessarily I. P = (1/21)[5 8 -4; 8 17 2; -4 2 20] and b = Pb = p = (0, 2, 4).
15 2A has the same column space as A. x̂ for 2A is half of x̂ for A.
16 (1/2)(1, 2, -1) + (3/2)(1, 0, 1) = (2, 1, 1). So b is in the plane. Projection shows Pb = b.
17 If P^2 = P then (I - P)^2 = (I - P)(I - P) = I - PI - IP + P^2 = I - P. When
P projects onto the column space, I - P projects onto the left nullspace.
18 (a) I - P is the projection matrix onto (1, -1) in the perpendicular direction to (1, 1)
(b) I - P projects onto the plane x + y + z = 0 perpendicular to (1, 1, 1).
19 For any basis vectors in the plane x - y - 2z = 0, say (1, 1, 0) and (2, 0, 1),
the matrix P is [5/6 1/6 1/3; 1/6 5/6 -1/3; 1/3 -1/3 1/3].
20 e = [1; 1; -2], Q = e e^T / e^T e = [1/6 1/6 -1/3; 1/6 1/6 -1/3; -1/3 -1/3 2/3],
I - Q = [5/6 -1/6 1/3; -1/6 5/6 1/3; 1/3 1/3 1/3].
21 (A(A^T A)^-1 A^T)^2 = A(A^T A)^-1 (A^T A)(A^T A)^-1 A^T = A(A^T A)^-1 A^T. So P^2 = P.
Pb is in the column space (where P projects). Then its projection P(Pb) is Pb.
22 P^T = (A(A^T A)^-1 A^T)^T = A((A^T A)^-1)^T A^T = A(A^T A)^-1 A^T = P. (A^T A is symmetric!)
23 If A is invertible then its column space is all of R^n. So P = I and e = 0.
24 The nullspace of A^T is orthogonal to the column space C(A). So if A^T b = 0, the
projection of b onto C(A) should be p = 0. Check Pb = A(A^T A)^-1 A^T b = A(A^T A)^-1 0 = 0.
25 The column space of P will be S . Then r D dimension of S D n.
26 A^-1 exists since the rank is r = m. Multiply A^2 = A by A^-1 to get A = I.
27 If A^T A x = 0 then Ax is in the nullspace of A^T. But Ax is always in the column space
of A. To be in both of those perpendicular spaces, Ax must be zero. So A and A^T A
have the same nullspace.
28 P 2 D P D P T give P T P D P . Then the .2; 2/ entry of P equals the .2; 2/ entry of
P T P which is the length squared of column 2.
29 A D B T has independent columns, so AT A (which is BB T ) must be invertible.
30 (a) The column space is the line through a = [3; 4], so P_C = a a^T / a^T a = (1/25)[9 12; 12 16]
(b) The row space is the line through v = (1, 2, 2) and P_R = v v^T / v^T v. Always
P_C A = A (columns of A project to themselves) and A P_R = A. Then P_C A P_R = A!
31 The error e = b - p must be perpendicular to all the a's.
32 Since P1 b is in C .A/; P2 .P1 b/ equals P1 b. So P2 P1 D P1 D aaT =aT a where
a D .1; 2; 0/.
33 If P1 P2 D P2 P1 then S is contained in T or T is contained in S .
34 BB T is invertible as in Problem 29. Then .AT A/.BB T / D product of r by r invertible
matrices, so rank r. AB can’t have rank < r, since AT and B T cannot increase the rank.
Conclusion: A (m by r of rank r) times B (r by n of rank r) produces AB of rank r.
Problem Set 4.3, page 226
1 A = [1 0; 1 1; 1 3; 1 4] and b = [0; 8; 8; 20] give A^T A = [4 8; 8 26] and
A^T b = [36; 112].
2 A^T A x̂ = A^T b gives x̂ = [1; 4] and p = A x̂ = (1, 5, 13, 17) and
e = b - p = (-1, 3, -5, 3). E = ||e||^2 = 44. This Ax = b is unsolvable;
change b to p = Pb = (1, 5, 13, 17) and then x̂ = (1, 4) exactly solves A x̂ = p.
3 In Problem 2, p = A(A^T A)^-1 A^T b = (1, 5, 13, 17) and e = b - p = (-1, 3, -5, 3).
e is perpendicular to both columns of A. The shortest distance ||e|| is √44.
4 E = (C + 0D)^2 + (C + 1D - 8)^2 + (C + 3D - 8)^2 + (C + 4D - 20)^2. Then
∂E/∂C = 2C + 2(C + D - 8) + 2(C + 3D - 8) + 2(C + 4D - 20) = 0 and
∂E/∂D = 1·2(C + D - 8) + 3·2(C + 3D - 8) + 4·2(C + 4D - 20) = 0. These
normal equations are again [4 8; 8 26][C; D] = [36; 112].
5 E = (C - 0)^2 + (C - 8)^2 + (C - 8)^2 + (C - 20)^2. A^T = [1 1 1 1] and A^T A = [4].
A^T b = [36] and (A^T A)^-1 A^T b = 9 = best height C. Errors e = (-9, -1, -1, 11).
6 a = (1, 1, 1, 1) and b = (0, 8, 8, 20) give x̂ = a^T b / a^T a = 9 and the projection is
x̂ a = p = (9, 9, 9, 9). Then e^T a = (-9, -1, -1, 11)^T (1, 1, 1, 1) = 0 and ||e|| = √204.
7 A D Œ 0 1 3 4 T , AT A D Œ 26  and AT b D Œ 112 . Best D D 112=26 D 56=13.
8 x̂ = 56/13, p = (56/13)(0, 1, 3, 4). (C, D) = (9, 56/13) don't match (C, D) = (1, 4).
Columns of A were not perpendicular so we can't project separately to find C and D.
9 For the closest parabola C + Dt + Et^2, project b = (0, 8, 8, 20) onto the column space
of A = [1 0 0; 1 1 1; 1 3 9; 1 4 16]. The normal equations A^T A x̂ = A^T b (3 unknowns
instead of 4) are [4 8 26; 8 26 92; 26 92 338][C; D; E] = [36; 112; 400].
10 [1 0 0 0; 1 1 1 1; 1 3 9 27; 1 4 16 64][C; D; E; F] = [0; 8; 8; 20]. Then
[C; D; E; F] = [0; 47/3; -28/3; 5/3]. This Vandermonde matrix gives exact interpolation
by a cubic at t = 0, 1, 3, 4. Exact cubic, so p = b and e = 0.
11 (a) The best line b = 1 + 4t gives the center point b̂ = 9 when t̂ = 2
(b) The first equation Cm + D Σ t_i = Σ b_i divided by m gives C + D t̄ = b̄.
12 (a) a = (1, ..., 1) has a^T a = m and a^T b = b_1 + ··· + b_m. Therefore x̂ = a^T b / m is the
mean of the b's (b) e = b - x̂ a and ||e||^2 = Σ_{i=1}^m (b_i - x̂)^2 = variance
(c) With b = (1, 2, 6): p = (3, 3, 3) and e = (-2, -1, 3), so p^T e = 0.
P = (1/3)[1 1 1; 1 1 1; 1 1 1].
13 (A^T A)^-1 A^T (b - Ax) = x̂ - x. When e = b - Ax averages to 0, so does x̂ - x.
14 The matrix (x̂ - x)(x̂ - x)^T is (A^T A)^-1 A^T (b - Ax)(b - Ax)^T A(A^T A)^-1. When the
average of (b - Ax)(b - Ax)^T is σ^2 I, the average of (x̂ - x)(x̂ - x)^T will be the
output covariance matrix (A^T A)^-1 A^T σ^2 I A(A^T A)^-1 which simplifies to σ^2 (A^T A)^-1.
15 When A has 1 column of ones, Problem 14 gives the expected error (x̂ - x)^2 as
σ^2 (A^T A)^-1 = σ^2/m. By taking m measurements, the variance drops from σ^2 to σ^2/m.
16 x̂_10 = (9/10) x̂_9 + (1/10) b_10 = (1/10)(b_1 + ··· + b_10). Knowing x̂_9 avoids
adding all the b's.
"
#
" #
1
1 7
C
9
3 2
C
35
1
7 . The solution b
17 1
D
xD
comes from
D
.
D
4
2 6 D
42
1
2
21
18 p = A x̂ = (5, 13, 17) gives the heights of the closest line. The error is
b - p = (2, -6, 4). This error e has Pe = Pb - Pp = p - p = 0.
19 If b D error e then b is perpendicular to the column space of A. Projection p D 0.
20 If b = A x̂ = (5, 13, 17) then x̂ = (9, 4) and e = 0 since b is in the column space of A.
21 e is in N(A^T); p is in C(A); x̂ is in C(A^T); N(A) = {0} = zero vector only.
22 The least squares equation is [5 0; 0 10][C; D] = [5; -10]. Solution: C = 1, D = -1.
Line 1 - t. Symmetric t's ⇒ diagonal A^T A.
23 e is orthogonal to p; then ||e||^2 = e^T (b - p) = e^T b = b^T b - b^T p.
24 The derivatives of ||Ax - b||^2 = x^T A^T A x - 2 b^T A x + b^T b (this last term is constant)
are zero when 2 A^T A x = 2 A^T b, or x = (A^T A)^-1 A^T b.
25 3 points on a line: Equal slopes .b2 b1 /=.t2 t1 / D .b3 b2 /=.t3 t2 /. Linear algebra:
Orthogonal to .1; 1; 1/ and .t1 ; t2 ; t3 / is y D .t2 t3 ; t3 t1 ; t1 t2 / in the left nullspace.
b is in the column space. Then y T b D 0 is the same equal slopes condition written as
.b2 b1 /.t3 t2 / D .b3 b2 /.t2 t1 /.
26 The normal equations for the best plane C + Dx + Ey are diagonal here:
A^T A x̂ = A^T b is [4 0 0; 0 2 0; 0 0 2][C; D; E] = [8; -2; -3], giving C = 2, D = -1,
E = -3/2. At x, y = 0, 0 the best plane 2 - x - (3/2)y has height C = 2 = average of
0, 1, 3, 4.
27 The shortest link connecting two lines in space is perpendicular to those lines.
28 Only 1 plane contains 0; a1 ; a2 unless a1 ; a2 are dependent. Same test for a1 ; : : : ; an .
29 There is exactly one hyperplane containing the n points 0, a_1, ..., a_{n-1} when the n-1
vectors a_1, ..., a_{n-1} are linearly independent. (For n = 3, the vectors a_1 and a_2 must
be independent. Then the three points 0, a_1, a_2 determine a plane.) The equation of the
plane in R^n will be a_n^T x = 0. Here a_n is any nonzero vector on the line (it is only a
line!) perpendicular to a_1, ..., a_{n-1}.
Problem Set 4.4, page 239
1 (a) Independent (b) Independent and orthogonal (c) Independent and orthonormal.
For orthonormal vectors, (a) becomes (1, 0), (0, 1) and (b) is (0.6, 0.8), (0.8, -0.6).
"
5=9 2=9
Divide by length 3 to get
1 0
T
T
2=9
8=9
2
Q
Q
D
but
QQ
D
0 1
q 1 D . 23 ; 23 ; 13 /. q 2 D . 31 ; 23 ; 23 /:
4=9 2=9
3 (a) AT A will be 16I
(b) AT A will be diagonal with entries 1, 4, 9.
"
#
"
#
1 0
1 0 0
T
4 (a) Q D 0 1 , QQ D 0 1 0 ¤ I . Any Q with n < m has QQT ¤
0 0
0 0 0
I . (b) .1; 0/ and .0; 0/ are orthogonal, not independent.pNonzero orthogonal vectors are independent.
(c) Starting p
from q 1 D .1; 1; 1/= 3 my favorite is q 2 D
p
.1; 1; 0/= 2 and q 3 D .1; 1; 2/= b.
5 Orthogonal vectors are (1, -1, 0) and (1, 1, -1). Orthonormal are (1/√2, -1/√2, 0)
and (1/√3, 1/√3, -1/√3).
Solutions to Exercises
49
6 Q1 Q2 is orthogonal because .Q1 Q2 /T Q1 Q2 D Q2T Q1T Q1 Q2 D Q2T Q2 D I .
7 When Gram-Schmidt gives Q with orthonormal columns, Q^T Q x̂ = Q^T b becomes x̂ = Q^T b.
8 If q 1 and q 2 are orthonormal vectors in R5 then .q T1 b/q 1 C .q T2 b/q 2 is closest to b.
9 (a) Q = [0.8 -0.6; 0.6 0.8; 0 0] has P = Q Q^T = [1 0 0; 0 1 0; 0 0 0]
(b) (Q Q^T)(Q Q^T) = Q(Q^T Q)Q^T = Q Q^T.
10 (a) If q1, q2, q3 are orthonormal then the dot product of q1 with c1 q1 + c2 q2 + c3 q3 = 0
gives c1 = 0. Similarly c2 = c3 = 0: independent q's (b) Qx = 0 ⇒ Q^T Q x = 0 ⇒ x = 0.
11 (a) Two orthonormal vectors are q1 = (1/10)(1, 3, 4, 5, 7) and q2 = (1/10)(-7, 3, 4, -5, 1)
(b) Closest in the plane: project Q Q^T (1, 0, 0, 0, 0) = (0.5, -0.18, -0.24, 0.4, 0).
12 (a) Orthonormal a's: a1^T b = a1^T (x1 a1 + x2 a2 + x3 a3) = x1 (a1^T a1) = x1
(b) Orthogonal a's: a1^T b = a1^T (x1 a1 + x2 a2 + x3 a3) = x1 (a1^T a1). Therefore
x1 = a1^T b / a1^T a1 (c) x1 is the first component of A^-1 times b.
13 The multiple to subtract is a^T b / a^T a. Then B = b - (a^T b / a^T a) a
= (4, 0) - 2·(1, 1) = (2, -2).
14 [1 4; 1 0] = [q1 q2][||a|| q1^T b; 0 ||B||] = [1/√2 1/√2; 1/√2 -1/√2][√2 2√2; 0 2√2] = QR.
.1; 2;
3
2/, q 2 D 31 .2; 1; 2/, q 3 D 13 .2; 2; 1/
(b) The nullspace
T
T
1 T
of A contains q 3
(c) b
x D .A A/ A .1; 2; 7/ D .1; 2/.
16 The projection p = (a^T b / a^T a) a = 14a/49 = 2a/7 is closest to b; q1 = a/||a|| = a/7
is (4, 5, 2, 2)/7. B = b - p = (-1, 4, -4, -4)/7 has ||B|| = 1 so q2 = B.
17 p = (a^T b / a^T a) a = (3, 3, 3) and e = (-2, 0, 2). q1 = (1, 1, 1)/√3 and
q2 = (-1, 0, 1)/√2.
18 A = a = (1, -1, 0, 0); B = b - p = (1/2, 1/2, -1, 0); C = c - p_A - p_B = (1/3, 1/3, 1/3, -1).
Notice the pattern in those orthogonal A, B, C. In R^5, D would be (1/4, 1/4, 1/4, 1/4, -1).
19 If A = QR then A^T A = R^T Q^T Q R = R^T R = lower triangular times upper triangular
(this Cholesky factorization of A^T A uses the same R as Gram-Schmidt!). The example
has A = [-1 1; 2 1; 2 4] = (1/3)[-1 2; 2 -1; 2 2][3 3; 0 3] = QR and the same R appears in
A^T A = [9 9; 9 18] = [3 0; 3 3][3 3; 0 3] = R^T R.
20 (a) True (b) True. Qx = x1 q1 + x2 q2 gives ||Qx||^2 = x1^2 + x2^2 because q1·q2 = 0.
21 The orthonormal vectors are q1 = (1, 1, 1, 1)/2 and q2 = (-5, -1, 1, 5)/√52. Then
b = (-4, -3, 3, 0) projects to p = (-7, -3, -1, 3)/2. And b - p = (-1, -3, 7, -3)/2
is orthogonal to both q1 and q2.
22 A = (1, 1, 2), B = (1, -1, 0), C = (-1, -1, 1). These are not yet unit vectors.
23 You can see why q1 = (1, 0, 0), q2 = (0, 0, 1), q3 = (0, 1, 0).
A = [1 2 4; 0 0 5; 0 3 6] = [1 0 0; 0 0 1; 0 1 0][1 2 4; 0 3 6; 0 0 5] = QR.
24 (a) One basis for the subspace S of solutions to x1 + x2 + x3 - x4 = 0 is
v1 = (1, -1, 0, 0), v2 = (1, 0, -1, 0), v3 = (1, 0, 0, 1) (b) Since S contains the solutions
to (1, 1, 1, -1)^T x = 0, a basis for S⊥ is (1, 1, 1, -1) (c) Split (1, 1, 1, 1) = b1 + b2
by projection on S⊥ and S: b2 = (1/2, 1/2, 1/2, -1/2) and b1 = (1/2, 1/2, 1/2, 3/2).
25 This question shows the 2 by 2 formulas for QR, and the breakdown R_22 = 0 when A is
singular. The Gram-Schmidt process breaks down when ad - bc = 0.
26 (q2^T c) q2 = (B^T c / B^T B) B because q2 = B/||B|| and the extra q1 in c is orthogonal to q2.
27 When a and b are not orthogonal, the projections onto these lines do not add to the
projection onto the plane of a and b. We must use the orthogonal A and B (or orthonormal
q1 and q2) to be allowed to add 1D projections.
28 There are mn multiplications in (11) and (1/2)m^2 n multiplications in each part of (12).
29 q1 = (1/3)(2, 2, -1), q2 = (1/3)(2, -1, 2), q3 = (1/3)(-1, 2, 2).
30 The columns of the wavelet matrix W are orthonormal. Then W^-1 = W^T. See
Section 7.2 for more about wavelets: a useful orthonormal basis with many zeros.
31 (a) c = 1/2 normalizes all the orthogonal columns to have unit length (b) The projection
(a^T b / a^T a) a of b = (1, 1, 1, 1) onto the first column is p1 = (1/2)(-1, 1, 1, 1).
To project onto the plane, add p2 = (1/2)(1, -1, 1, 1) to get (0, 0, 1, 1).
"
#
1
0
0
1
0
0
1 across plane y C z D 0.
Q1 D
reflects across x axis, Q2 D 0
0
1
0
1
0
33 Orthogonal and lower triangular ⇒ ±1 on the main diagonal and zeros elsewhere.
34 (a) Qu = (I - 2uu^T)u = u - 2uu^T u. This is -u, provided that u^T u equals 1
(b) Qv = (I - 2uu^T)v = v - 2uu^T v = v, provided that u^T v = 0.
35 Starting from A = (1, -1, 0, 0), the orthogonal (not orthonormal) vectors
B = (1, 1, -2, 0) and C = (1, 1, 1, -3) and D = (1, 1, 1, 1) are in the directions of
q2, q3, q4. The 4 by 4 and 5 by 5 matrices with integer orthogonal columns (not
orthogonal rows, since not orthonormal Q!) are
[A B C D] = [1 1 1 1; -1 1 1 1; 0 -2 1 1; 0 0 -3 1] and
[1 1 1 1 1; -1 1 1 1 1; 0 -2 1 1 1; 0 0 -3 1 1; 0 0 0 -4 1].
36 [Q, R] = qr(A) produces from A (m by n of rank n) a "full-size" square Q = [Q1 Q2]
and the m by n factor [R; 0]. The columns of Q1 are the orthonormal basis from
Gram-Schmidt of the column space of A. The m - n columns of Q2 are an orthonormal
basis for the left nullspace of A. Together the columns of Q = [Q1 Q2] are an
orthonormal basis for R^m.
37 This question describes the next q_{n+1} in Gram-Schmidt using the matrix Q with the
columns q1, ..., qn (instead of using those q's separately). Start from a, subtract its
projection p = Q Q^T a onto the earlier q's, and divide by the length of e = a - Q Q^T a
to get q_{n+1} = e/||e||.
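The subtract-projections-then-normalize step in Problem 37 is the whole Gram-Schmidt loop. A minimal Python sketch of classical Gram-Schmidt on a list of column vectors (the two columns below are the A of Problem 14):

```python
from math import sqrt

def gram_schmidt(columns):
    # Classical Gram-Schmidt: subtract projections onto earlier q's, then normalize.
    qs = []
    for a in columns:
        e = list(a)
        for q in qs:
            c = sum(qi * ei for qi, ei in zip(q, e))   # q^T a
            e = [ei - c * qi for ei, qi in zip(e, q)]
        norm = sqrt(sum(x * x for x in e))
        qs.append([x / norm for x in e])
    return qs

q1, q2 = gram_schmidt([(1, 1), (4, 0)])
print(q1, q2)  # q1 in the direction (1, 1), q2 in the direction (1, -1)
```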
Problem Set 5.1, page 251
1 det(2A) = 2^4 det A = 8; det(-A) = (-1)^4 det A = 1/2; det(A^2) = 1/4;
det(A^-1) = 2 = det(A^T).
2 det((1/2)A) = (1/2)^3 det A = -1/8; det(-A) = (-1)^3 det A = 1; det(A^2) = 1;
det(A^-1) = -1.
3 (a) False: det(I + I) is not 1 + 1 (b) True: The product rule extends to ABC (use
it twice) (c) False: det(4A) is 4^n det A (d) False: A = [0 0; 0 1], B = [0 1; 1 0],
AB - BA = [0 -1; 1 0] is invertible.
4 Exchange rows 1 and 3 to show |J_3| = -1. Exchange rows 1 and 4, then rows 2 and 3, to
show |J_4| = 1.
5 |J_5| = 1, |J_6| = -1, |J_7| = -1. Determinants 1, 1, -1, -1 repeat, so |J_101| = 1.
6 To prove Rule 6, multiply the zero row by t D 2. The determinant is multiplied by 2
(Rule 3) but the matrix is the same. So 2 det.A/ D det.A/ and det.A/ D 0.
7 det(Q) = 1 for rotation and det(Q) = -1 for reflection (1 - 2 sin^2 θ - 2 cos^2 θ = -1).
8 QT Q D I ) jQj2 D 1 ) jQj D ˙1; Qn stays orthogonal so det can’t blow up.
9 det A D 1 from two row exchanges . det B D 2 (subtract rows 1 and 2 from row 3, then
columns 1 and 2 from column 3). det C D 0 (equal rows) even though C D A C B!
10 If the entries in every row add to zero, then .1; 1; : : : ; 1/ is in the nullspace: singular
A has det D 0. (The columns add to the zero column so they are linearly dependent.)
If every row adds to one, then rows of A I add to zero (not necessarily det A D 1).
11 CD = -DC ⇒ det CD = (-1)^n det DC and not -det DC. If n is even we can have
an invertible CD.
12 det(A^-1) = det of (1/(ad - bc))[d -b; -c a] divides twice by ad - bc (once for
each row). This gives (ad - bc)/(ad - bc)^2 = 1/(ad - bc).
13 Pivots 1, 1, 1 give determinant = 1; pivots 1, 2, 3/2 give determinant = 3.
14 det.A/ D 36 and the 4 by 4 second difference matrix has det D 5.
15 The first determinant is 0; the second is 1 - 2t^2 + t^4 = (1 - t^2)^2.
16 A singular rank one matrix has determinant = 0. The skew-symmetric K also has
det K = 0 (see #17).
17 Any 3 by 3 skew-symmetric K has det(K^T) = det(-K) = (-1)^3 det(K). This is
-det(K). But always det(K^T) = det(K). So we must have det(K) = 0 for 3 by 3.
18 |1 a a^2; 1 b b^2; 1 c c^2| = |1 a a^2; 0 b-a b^2-a^2; 0 c-a c^2-a^2|
= |b-a b^2-a^2; c-a c^2-a^2| (to reach 2 by 2, eliminate a and a^2 in row 1 by
column operations). Factor out b-a and c-a from the 2 by 2:
(b-a)(c-a)|1 b+a; 1 c+a| = (b-a)(c-a)(c-b).
19 For triangular matrices, just multiply the diagonal entries: det(U) = 6, det(U^2) = 36,
and det(U^-1) = 1/6. 2 by 2 matrix: det(U) = ad, det(U^2) = a^2 d^2. If ad ≠ 0 then
det(U^-1) = 1/ad.
20 det [a-Lc b-Ld; c-la d-lb] reduces to (ad - bc)(1 - Ll). The determinant changes if you
do two row operations at once.
21 Rules 5 and 3 give Rule 2. (Since Rules 4 and 3 give 5, they also give Rule 2.)
22 det(A) = 3; det(A^-1) = 1/3; det(A - λI) = λ^2 - 4λ + 3. The numbers λ = 1 and
λ = 3 give det(A - λI) = 0. Note to instructor: If you discuss this exercise, you can
explain that this is the reason determinants come before eigenvalues. Identify λ = 1
and λ = 3 as the eigenvalues of A.
23 det(A) = 10, A^2 = [18 7; 14 11], det(A^2) = 100, A^-1 = (1/10)[3 -1; -2 4] has det 1/10.
det(A - λI) = λ^2 - 7λ + 10 = 0 when λ = 2 or λ = 5; those are the eigenvalues.
24 Here A = LU with det(L) = 1 and det(U) = 6 = product of pivots, so also det(A) = 6.
det(U^-1 L^-1) = 1/6 = 1/det(A) and det(U^-1 L^-1 A) = det I = 1.
25 When the i, j entry is i times j, row 2 = 2 times row 1, so det A = 0.
26 When the i, j entry is i + j, row 3 - row 2 = row 2 - row 1, so A is singular: det A = 0.
27 det A = abc, det B = -abcd, det C = a(b - a)(c - b) by doing elimination.
28 (a) True: det(AB) = det(A) det(B) = 0 (b) False: a row exchange gives
det = -(product of pivots) (c) False: A = 2I and B = I have A - B = I but the
determinants have 2^n - 1 ≠ 1 (d) True: det(AB) = det(A) det(B) = det(BA).
29 A is rectangular so det(A^T A) ≠ (det A^T)(det A): these determinants are not defined.
30 Derivatives of f = ln(ad - bc):
[∂f/∂a ∂f/∂c; ∂f/∂b ∂f/∂d] = (1/(ad - bc))[d -b; -c a] = A^-1.
31 The Hilbert determinants are 1, 8·10^-2, 4.6·10^-4, 1.6·10^-7, 3.7·10^-12, 5.4·10^-18,
4.8·10^-25, 2.7·10^-33, 9.7·10^-43, 2.2·10^-53. Pivots are ratios of determinants, so the
10th pivot is near 10^-10. The Hilbert matrix is numerically difficult (ill-conditioned).
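The tiny Hilbert determinants are easy to reproduce exactly. A Python sketch using exact fractions (Gaussian elimination; a row exchange flips the sign of the determinant):

```python
from fractions import Fraction

def det(M):
    # Exact determinant by Gaussian elimination over Fractions.
    M = [row[:] for row in M]
    n = len(M)
    d = Fraction(1)
    for i in range(n):
        if M[i][i] == 0:                      # find a nonzero pivot below
            for r in range(i + 1, n):
                if M[r][i] != 0:
                    M[i], M[r] = M[r], M[i]
                    d = -d                    # row exchange flips the sign
                    break
            else:
                return Fraction(0)
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    return d

hilbert = lambda n: [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
print(float(det(hilbert(3))))  # about 4.6e-4, matching the list above
```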
32 Typical determinants of rand(n) are 10^6, 10^25, 10^79, 10^218 for n = 50, 100, 200, 400.
randn(n) with normal distribution gives 10^31, 10^78, 10^186, Inf which means ≥ 2^1024.
MATLAB allows 1.999999999999999·2^1023 ≈ 1.8·10^308 but one more 9 gives Inf!
33 I now know that maximizing the determinant for ±1 matrices is Hadamard's problem
(1893): see Brenner in American Math. Monthly volume 79 (1972) 626-630. Neil
Sloane's wonderful On-Line Encyclopedia of Integer Sequences (research.att.com/
njas) includes the solution for small n (and more references) when the problem is
changed to 0, 1 matrices. That sequence A003432 starts from n = 0 with 1, 1, 1, 2, 3,
5, 9. Then the ±1 maximum for size n is 2^{n-1} times the 0, 1 maximum for size n-1
(so (32)(5) = 160 for n = 6 in sequence A003433).
To reduce the ±1 problem from 6 by 6 to the 0, 1 problem for 5 by 5, multiply the
six rows by ±1 to put +1 in column 1. Then subtract row 1 from rows 2 to 6 to get a 5
by 5 submatrix S of entries -2 and 0, and divide S by -2.
Here is an advanced MATLAB code for a ±1 matrix with largest det A = 48 for n = 5:
n = 5; p = (n-1)^2; A0 = ones(n); maxdet = 0;
for k = 0 : 2^p - 1
    Asub = rem(floor(k*2.^(-p+1:0)), 2);
    A = A0; A(2:n, 2:n) = 1 - 2*reshape(Asub, n-1, n-1);
    if abs(det(A)) > maxdet, maxdet = abs(det(A)); maxA = A; end
end
Output: maxA = a 5 by 5 matrix of 1's and -1's with maxdet = 48.
34 Reduce B by row operations to [row 3; row 2; row 1]. Then det B = -6 (odd permutation).
Problem Set 5.2, page 263
1 det A = 1 + 18 + 12 - 9 - 4 - 6 = 12, rows are independent; det B = 0, row 1 + row 2 =
row 3; det C = -1, independent rows (det C has one term, an odd permutation).
2 det A = -2, independent; det B = 0, dependent; det C = -1, independent.
3 All cofactors of row 1 are zero. A has rank 2. Each of the 6 terms in det A is zero.
Column 2 has no pivot.
4 a_11 a_23 a_32 a_44 gives -1, because 2 ↔ 3; a_14 a_23 a_32 a_41 gives +1.
So det A = -1 + 1 = 0; det B = 2·4·4·2 - 1·4·4·1 = 64 - 16 = 48.
5 Four zeros in the same row guarantee det = 0. A = I has 12 zeros (the maximum with
det ≠ 0).
6 (a) If a_11 = a_22 = a_33 = 0 then 4 terms are sure zeros (b) 15 terms must be zero.
7 5!/2 = 60 permutation matrices have det = +1. Move row 5 of I to the top; starting
from (5, 1, 2, 3, 4) elimination will do four row exchanges.
8 Some term a_{1α} a_{2β} ··· a_{nω} in the big formula is not zero! Move rows 1, 2, ..., n into
rows α, β, ..., ω. Then these nonzero a's will be on the main diagonal.
9 To get +1 for the even permutations, the matrix needs an even number of -1's. To get
+1 for the odd P's, the matrix needs an odd number of -1's. So all six terms = +1 in
the big formula and det = 6 are impossible: max(det) = 4.
10 The 4!/2 = 12 even permutations are (1, 2, 3, 4), (2, 1, 4, 3), (3, 4, 1, 2), (4, 3, 2, 1),
and the 8 P's with one number in place and an even permutation of the other three numbers.
det(I + P_even) = 16 or 4 or 0 (16 comes from I + I).
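The big formula that Problems 8-10 rely on is short to code. A brute-force Python sketch summing over all n! permutations, counting inversions to get the sign:

```python
from itertools import permutations

def big_formula_det(A):
    # det A = sum over permutations p of sign(p) * A[0][p0] * A[1][p1] * ...
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign          # each inversion flips the sign
        term = sign
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

print(big_formula_det([[1, 2], [3, 4]]))  # -2
```

This is O(n · n!) and only practical for tiny n, which is exactly the textbook's point about the big formula versus elimination.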
"
#
0
42
35
det B D 1.0/ C 2.42/ C 3. 35/ D 21.
d
b
0
21
14 .
C D
.DD
c
a
Puzzle: det D D 441 D . 21/2 . Why?
3
6
3
"
#
"
#
3 2 1
4 0 0
T
C D 2 4 2 and AC D 0 4 0 . Therefore A 1 D 14 C T D C T = det A.
1 2 3
0 0 4
14 (a) C_1 = 0, C_2 = -1, C_3 = 0, C_4 = 1 (b) C_n = -C_{n-2} by cofactors of row
1 then cofactors of column 1. Therefore C_10 = -C_8 = C_6 = -C_4 = C_2 = -1.
15 We must choose 1's from column 2 then column 1, from column 4 then column 3, and so on.
Therefore n must be even to have det A_n ≠ 0. The number of row exchanges is n/2, so
C_n = (-1)^{n/2}.
16 The 1,1 cofactor of the n by n matrix is E_{n-1}. The 1,2 cofactor has a single 1 in its
first column, with cofactor E_{n-2}; the sign gives -E_{n-2}. So E_n = E_{n-1} - E_{n-2}. Then E_1
to E_6 is 1, 0, -1, -1, 0, 1 and this cycle of six will repeat: E_100 = E_4 = -1.
17 The 1,1 cofactor of the n by n matrix is F_{n-1}. The 1,2 cofactor has a 1 in column
1, with cofactor F_{n-2}. Multiply by (-1)^{1+2} and also (-1) from the 1,2 entry to find
F_n = F_{n-1} + F_{n-2} (so these determinants are Fibonacci numbers).
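The Fibonacci recursion in Problem 17 can be confirmed by direct computation. The tridiagonal matrix below has 1's on the diagonal and +1, -1 off the diagonal; the exact sign pattern of the textbook's matrix is an assumption here — any off-diagonal pair with product -1 gives D_n = D_{n-1} + D_{n-2}:

```python
def detc(M):
    # determinant by cofactor expansion along row 0
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * detc([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def F_matrix(n):
    # tridiagonal: diagonal 1's, superdiagonal +1, subdiagonal -1 (assumed signs)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1
        if i + 1 < n:
            A[i][i + 1] = 1
            A[i + 1][i] = -1
    return A

fibs = [detc(F_matrix(n)) for n in range(1, 7)]
print(fibs)  # 1, 2, 3, 5, 8, 13 -- consecutive Fibonacci numbers
```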
"
#
"
#
1
1
1
1
1
1
1
2
1 C det
1
2
jB4 j D 2 det
D 2jB3 j det
D
1
2
1
2
1
1
2jB3 j jB2 j. jB3 j and jB2 j are cofactors of row 4 of B4 .
Rule 3 (linearity in row 1) gives jBn j D jAn j jAn 1 j D .n C 1/ n D 1.
19 Since x, x^2, x^3 are all in the same row, they are never multiplied in det V_4. The
determinant is zero at x = a or b or c, so det V has factors (x - a)(x - b)(x - c). Multiply
by the cofactor V_3. The Vandermonde matrix V_ij = (x_i)^{j-1} is for fitting a polynomial
p(x) = b at the points x_i. It has det V = product of all (x_k - x_m) for k > m.
20 G_2 = -1, G_3 = 2, G_4 = -3, and G_n = (-1)^{n-1}(n - 1) = (product of the λ's).
21 S_1 = 3, S_2 = 8, S_3 = 21. The rule looks like every second number in Fibonacci's
sequence ... 3, 5, 8, 13, 21, 34, 55, ... so the guess is S_4 = 55. Following the solution
to Problem 30 with 3's instead of 2's confirms S_4 = 81 + 1 - 9 - 9 - 9 = 55. Problem 33
directly proves S_n = F_{2n+2}.
22 Changing 3 to 2 in the corner reduces the determinant F_{2n+2} by 1 times the cofactor
of that corner entry. This cofactor is the determinant of S_{n-1} (one size smaller), which
is F_{2n}. Therefore changing 3 to 2 changes the determinant to F_{2n+2} - F_{2n}, which is
F_{2n+1}.
23 (a) If we choose an entry from B we must choose an entry from the zero block; result
zero. This leaves entries from A times entries from D, leading to (det A)(det D)
(b) and (c) Take A = [1 0; 0 0], B = [0 0; 1 0], C = [0 1; 0 0], D = [0 0; 0 1]. See #25.
24 (a) All L's have det = 1; det U_k = det A_k = 2, 6, -6 for k = 1, 2, 3 (b) Pivots 2, 3, -1.
25 Problem 23 gives det [I 0; -CA^-1 I] = 1 and det [A B; C D] = |A| times |D - CA^-1 B|,
which is |AD - ACA^-1 B|. If AC = CA this is |AD - CAA^-1 B| = det(AD - CB).
26 If A is a row and B is a column then det M = det AB = dot product of A and B. If
A is a column and B is a row then AB has rank 1 and det M = det AB = 0 (unless
m = n = 1). This block matrix is invertible when AB is invertible, which certainly
requires m ≤ n.
27 (a) det A = a_11 C_11 + ··· + a_1n C_1n. The derivative with respect to a_11 is the
cofactor C_11.
28 Row 1 - 2 row 2 + row 3 = 0, so this matrix is singular.
29 There are five nonzero products, all 1's with a plus or minus sign. Here are the (row,
column) numbers and the signs: +(1,1)(2,2)(3,3)(4,4) + (1,2)(2,1)(3,4)(4,3)
- (1,2)(2,1)(3,3)(4,4) - (1,1)(2,2)(3,4)(4,3) - (1,1)(2,3)(3,2)(4,4). Total -1.
30 The 5 products in solution 29 change to 16 + 1 - 4 - 4 - 4 = 5 since A has 2's and -1's:
(2)(2)(2)(2) + (-1)(-1)(-1)(-1) - (-1)(-1)(2)(2) - (2)(2)(-1)(-1) - (2)(-1)(-1)(2).
31 det P = -1 because the cofactor of P_14 = 1 in row one has sign (-1)^{1+4}. The big
formula for det P has only one term (1·1·1·1) with a minus sign, because three exchanges
take 4, 1, 2, 3 into 1, 2, 3, 4. det(P^2) = (det P)(det P) = +1, so det [0 I; I 0] = +1:
the 2 by 2 analogy det [0 1; 1 0] = -1 is not right.
32 The problem is to show that F_{2n+2} = 3F_{2n} - F_{2n-2}. Keep using Fibonacci's rule:
F_{2n+2} = F_{2n+1} + F_{2n} = F_{2n} + F_{2n-1} + F_{2n} = 2F_{2n} + (F_{2n} - F_{2n-2}) = 3F_{2n} - F_{2n-2}.
33 The difference from 20 to 19 multiplies its 3 by 3 cofactor D 1: then det drops by 1.
34 (a) The last three rows must be dependent
(b) In each of the 120 terms: Choices
from the last 3 rows must use 3 columns; at least one of those choices will be zero.
35 Subtracting 1 from the n; n entry subtracts its cofactor Cnn from the determinant. That
cofactor is Cnn D 1 (smaller Pascal matrix). Subtracting 1 from 1 leaves 0.
Problem Set 5.3, page 279
1 (a) |2 5; 1 4| = 3, |1 5; 2 4| = -6, |2 1; 1 2| = 3, so x_1 = -6/3 = -2 and
x_2 = 3/3 = 1 (b) |A| = 4, |B_1| = 3, |B_2| = -2, |B_3| = 1. Therefore x_1 = 3/4 and
x_2 = -1/2 and x_3 = 1/4.
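Cramer's rule is only a few lines of code. A Python sketch using exact fractions, applied to part (a) with right side b = (1, 2):

```python
from fractions import Fraction

def cramer(A, b):
    # x_j = det(B_j) / det(A), where B_j is A with column j replaced by b
    def det(M):
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
                   for j in range(len(M)))
    dA = det(A)
    xs = []
    for j in range(len(A)):
        Bj = [row[:] for row in A]
        for i, bi in enumerate(b):
            Bj[i][j] = bi
        xs.append(Fraction(det(Bj), dA))
    return xs

print(cramer([[2, 5], [1, 4]], [1, 2]))  # [-2, 1]
```

Each unknown needs a full determinant, which is why Cramer's rule is a formula rather than a practical algorithm for large n.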
2 (a) y = |a 1; c 0| / |a b; c d| = -c/(ad - bc) (b) y = det B_2 / det A = (fg - id)/D.
3 (a) x_1 = 3/0 and x_2 = -2/0: no solution (b) x_1 = x_2 = 0/0: undetermined.
4 (a) x_1 = det [b a_2 a_3] / det A, if det A ≠ 0 (b) The determinant is linear in
its first column, so x_1|a_1 a_2 a_3| + x_2|a_2 a_2 a_3| + x_3|a_3 a_2 a_3|. The last two determinants
are zero because of repeated columns, leaving x_1|a_1 a_2 a_3|, which is x_1 det A.
5 If the first column in A is also the right side b then det A D det B1 . Both B2 and B3 are
singular since a column is repeated. Therefore x1 D jB1 j=jAj D 1 and x2 D x3 D 0.
2
3
2
3
2
1
0
3 2 1
3
1
An invertible symmetric matrix
6
7
1
(b) 4 2 4 2 5.
6 (a) 4 0
05
3
has a symmetric inverse.
4
7
1 2 3
0
1
3
7 If all cofactors = 0 then A^-1 would be the zero matrix if it existed; it cannot exist. (And
the cofactor formula gives det A = 0.) A = [1 1; 1 1] has no zero cofactors but it is not
invertible.
"
#
"
#
6
3
0
3 0 0
This is .det A/I and det A D 3.
T
3
1
1 and AC D 0 3 0 . The 1; 3 cofactor of A is 0.
8 C D
6
2
1
0 0 3
Multiplying by 4 or 100: no change.
and also det A 1 D 1.
Now A is the inverse of C , so A can be found from the cofactor matrix for C .
9 If we know the cofactors and det A D 1, then C T D A
1
T
10 Take the determinant of A C^T = (det A) I. The left side gives det A C^T = (det A)(det C)
while the right side gives (det A)^n. Divide by det A to reach det C = (det A)^{n-1}.
11 The cofactors of A are integers. Division by det A = ±1 gives integer entries in A^-1.
12 Both det A and det A^-1 are integers since the matrices contain only integers. But
(det A)(det A^-1) = 1, so det A must be 1 or -1.
"
#
"
#
0 1 3
1
2
1
1
3
6
2 and A 1 D C T .
13 A D 1 0 1 has cofactor matrix C D
5
2 1 0
1
3
1
.
1
14 (a) Lower triangular L has cofactors C_21 = C_31 = C_32 = 0 (b) C_12 = C_21,
C_31 = C_13, C_32 = C_23 make S^-1 symmetric (c) Orthogonal Q has cofactor
matrix C = (det Q)(Q^-1)^T = ±Q, also orthogonal. Note det Q = 1 or -1.
15 For n D 5, C contains 25 cofactors and each 4 by 4 cofactor has 24 terms. Each term
needs 3 multiplications: total 1800 multiplications vs.125 for Gauss-Jordan.
16 (a) Area = |det [3 2; 1 4]| = 10 (b) and (c) Area 10/2 = 5; these triangles are half of the
parallelogram in (a).
17 Volume = |det [3 1 1; 1 3 1; 1 1 3]| = 20. Area of faces = length of cross product:
(i j k; 3 1 1; 1 3 1) gives -2i - 2j + 8k with length √72 = 6√2.
18 (a) Area = (1/2)|det [2 1 1; 3 4 1; 0 5 1]| = 5 (b) 5 + new triangle area
(1/2)|det [2 1 1; 0 5 1; -1 0 1]| = 5 + 7 = 12.
ˇ
19 ˇ 22
0 51
ˇ
ˇ
1ˇ D 4 D ˇ2
3
1 0 1
ˇ
2 ˇ because the transpose has the same determinant. See #22.
1 3
D
20 The edges of the hypercube have length √(1 + 1 + 1 + 1) = 2. The volume det H
is 2^4 = 16. (H/2 has orthonormal columns. Then det(H/2) = 1 leads again to
det H = 16.)
21 The maximum volume L_1 L_2 L_3 L_4 is reached when the edges are orthogonal in R^4.
With entries 1 and -1 all lengths are √4 = 2. The maximum determinant is 2^4 = 16,
achieved in Problem 20. For a 3 by 3 matrix, det A = (√3)^3 can't be achieved by ±1.
achieved in Problem 20. For a 3 by 3 matrix, det A D . 3/ can’t be achieved by ˙1.
This question is still waiting for a solution! An 18:06 student showed me how to transform the parallelogram for A to the parallelogram for AT , without changing its area.
(Edges slide along themselves, so no change in baselength or height or area.)
2 T3
2 T
3
a a a
0
0
det AT A D .kakkbkkck/2
AT A D 4 bT 5 a b c D 4 0
bT b
0 5 has
det A
D ˙kakkbkkck
cT
0
0
cTc
"
#
1 0 0
The box has height 4 and volume D det 0 1 0 D 4. i j D k and .k w/ D 4.
2 3 4
20 The edges of the hypercube have length
21
22
23
24
25 The n-dimensional cube has 2ⁿ corners, n·2^{n−1} edges and 2n (n − 1)-dimensional faces. Coefficients from (2 + x)ⁿ in Worked Example 2.4 A. The cube from 2I has volume 2ⁿ.
26 The pyramid has volume 1/6. The 4-dimensional pyramid has volume 1/24 (and 1/n! in Rⁿ).
27 x = r cos θ, y = r sin θ give J = r. The columns are orthogonal and their lengths are 1 and r.
28 J = det [sin φ cos θ, ρ cos φ cos θ, −ρ sin φ sin θ; sin φ sin θ, ρ cos φ sin θ, ρ sin φ cos θ; cos φ, −ρ sin φ, 0] = ρ² sin φ. This Jacobian is needed for triple integrals inside spheres.
29 From x, y to r, θ: |∂r/∂x ∂r/∂y; ∂θ/∂x ∂θ/∂y| = |x/r y/r; −y/r² x/r²| = |cos θ sin θ; (−sin θ)/r (cos θ)/r| = 1/r = 1/(Jacobian in 27).
30 The triangle with corners (0, 0), (6, 0), (1, 4) has area 12. Rotated by θ = 60° the area is unchanged. The determinant of the rotation matrix is J = |cos θ −sin θ; sin θ cos θ| = |1/2 −√3/2; √3/2 1/2| = 1.
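The corner and edge counts in Problem 25 can be confirmed by brute force for n = 4 (a quick check, not part of the original solution; pure Python):

```python
# Count corners and edges of the n-dimensional cube for n = 4.
# Two corners are joined by an edge when they differ in exactly one coordinate.
from itertools import product

n = 4
corners = list(product([0, 1], repeat=n))
assert len(corners) == 2 ** n == 16

edges = sum(1 for a in corners for b in corners
            if a < b and sum(x != y for x, y in zip(a, b)) == 1)
assert edges == n * 2 ** (n - 1) == 32
print(len(corners), edges)
```

The same loop with `repeat=n` for other n confirms the general formulas 2ⁿ and n·2^{n−1}.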
31 Base area 10, height 2, volume 20.
32 The volume of the box is det [2 4 0; −1 3 0; 1 2 2] = 20.
33 |u1 u2 u3; v1 v2 v3; w1 w2 w3| = u1|v2 v3; w2 w3| − u2|v1 v3; w1 w3| + u3|v1 v2; w1 w2|. This is u · (v × w).
34 (w × u) · v = (v × w) · u = (u × v) · w: an even permutation of (u, v, w) keeps the same determinant. Odd permutations reverse the sign.
35 S = (2, 1, −1), area ‖PQ × PS‖ = ‖(−2, −2, −1)‖ = 3. The other four corners can be (0, 0, 0), (0, 0, 2), (1, 2, 2), (1, 1, 0). The volume of the tilted box is |det| = 1.
36 If (1, 1, 0), (1, 2, 1), (x, y, z) are in a plane the volume is det [x y z; 1 1 0; 1 2 1] = x − y + z = 0. The "box" with those edges is flattened to zero height.
37 det [x y z; 2 3 1; 1 2 3] = 7x − 5y + z will be zero when (x, y, z) is a combination of (2, 3, 1) and (1, 2, 3). The plane containing those two vectors has equation 7x − 5y + z = 0.
38 Doubling each row multiplies the volume by 2ⁿ. Then 2 det A = det(2A) only if n = 1.
39 AC^T = (det A)I gives (det A)(det C) = (det A)ⁿ. Then det A = (det C)^{1/3} with n = 4. With det A⁻¹ = 1/det A, construct A⁻¹ using the cofactors. Invert to find A.
40 The cofactor formula adds 1 by 1 determinants (which are just entries) times their cofactors of size n − 1. Jacobi discovered that this formula can be generalized. For n = 5, Jacobi multiplied each 2 by 2 determinant from rows 1-2 (with columns a < b) times a 3 by 3 determinant from rows 3-5 (using the remaining columns c < d < e). The key question is the + or − sign (as for cofactors). The product is given a + sign when a, b, c, d, e is an even permutation of 1, 2, 3, 4, 5. This gives the correct determinant +1 for that permutation matrix. More than that, all other P that permute a, b and separately c, d, e will come out with the correct sign when the 2 by 2 determinant for columns a, b multiplies the 3 by 3 determinant for columns c, d, e.
41 The Cauchy-Binet formula gives the determinant of a square matrix AB (and AA^T in particular) when the factors A, B are rectangular. For (2 by 3) times (3 by 2) there are 3 products of 2 by 2 determinants from A and B:
A = [a b c; d e f], B = [g j; h k; i ℓ]: det AB = |a b; d e||g j; h k| + |b c; e f||h k; i ℓ| + |a c; d f||g j; i ℓ|.
Check A = [1 2 3; 1 4 7], B = [1 1; 2 4; 3 7]: AB = [14 30; 30 66] and det AB = (14)(66) − (30)(30) = 24.
Cauchy-Binet: (4 − 2)(4 − 2) + (7 − 3)(7 − 3) + (14 − 12)(14 − 12) = 24.
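The Cauchy-Binet identity in Problem 41 can be verified numerically (a verification sketch added here, not part of the original solution; pure Python):

```python
# det(AB) for a 2x3 A and 3x2 B equals the sum over column pairs (i, j) of
# (2x2 minor of A from those columns) * (2x2 minor of B from those rows).

def det2(m):
    # determinant of a 2x2 matrix [[p, q], [r, s]]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3],
     [1, 4, 7]]
B = [[1, 1],
     [2, 4],
     [3, 7]]

AB = matmul(A, B)                       # [[14, 30], [30, 66]]
lhs = det2(AB)                          # 14*66 - 30*30 = 24

pairs = [(0, 1), (0, 2), (1, 2)]        # column pairs of A = row pairs of B
rhs = sum(det2([[A[0][i], A[0][j]], [A[1][i], A[1][j]]]) *
          det2([[B[i][0], B[i][1]], [B[j][0], B[j][1]]])
          for (i, j) in pairs)

print(lhs, rhs)                         # both 24
```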
Problem Set 6.1, page 293
1 The eigenvalues are 1 and 0.5 for A, 1 and 0.25 for A², 1 and 0 for A^∞. Exchanging the rows of A changes the eigenvalues to 1 and −0.5 (the trace is now 0.2 + 0.3). Singular matrices stay singular during elimination, so λ = 0 does not change.
2 A has λ1 = −1 and λ2 = 5 with eigenvectors x1 = (−2, 1) and x2 = (1, 1). The matrix A + I has the same eigenvectors, with eigenvalues increased by 1 to 0 and 6. That zero eigenvalue correctly indicates that A + I is singular.
3 A has λ1 = 2 and λ2 = −1 (check trace and determinant) with x1 = (1, 1) and x2 = (2, −1). A⁻¹ has the same eigenvectors, with eigenvalues 1/λ = 1/2 and −1.
4 A has λ1 = −3 and λ2 = 2 (check trace = −1 and determinant = −6) with x1 = (3, −2) and x2 = (1, 1). A² has the same eigenvectors as A, with eigenvalues λ1² = 9 and λ2² = 4.
5 A and B have eigenvalues 1 and 3. A + B has λ1 = 3, λ2 = 5. Eigenvalues of A + B are not equal to eigenvalues of A plus eigenvalues of B.
6 A and B have λ1 = 1 and λ2 = 1. AB and BA have λ = 2 ± √3. Eigenvalues of AB are not equal to eigenvalues of A times eigenvalues of B. Eigenvalues of AB and BA are equal (this is proved in Section 6.6, Problems 18-19).
7 The eigenvalues of U (on its diagonal) are the pivots of A. The eigenvalues of L (on
its diagonal) are all 1’s. The eigenvalues of A are not the same as the pivots.
8 (a) Multiply Ax to see λx, which reveals λ. (b) Solve (A − λI)x = 0 to find x.
9 (a) Multiply by A: A(Ax) = A(λx) = λAx gives A²x = λ²x. (b) Multiply by A⁻¹: x = A⁻¹Ax = A⁻¹λx = λA⁻¹x gives A⁻¹x = (1/λ)x. (c) Add Ix = x: (A + I)x = (λ + 1)x.
10 A has λ1 = 1 and λ2 = .4 with x1 = (1, 2) and x2 = (1, −1). A^∞ has λ1 = 1 and λ2 = 0 (same eigenvectors). A¹⁰⁰ has λ1 = 1 and λ2 = (.4)¹⁰⁰ which is near zero. So A¹⁰⁰ is very near A^∞: same eigenvectors and close eigenvalues.
11 Columns of A − λ1 I are in the nullspace of A − λ2 I because M = (A − λ2 I)(A − λ1 I) = zero matrix [this is the Cayley-Hamilton Theorem in Problem 6.2.32]. Notice that M has zero eigenvalues (λ1 − λ2)(λ1 − λ1) = 0 and (λ2 − λ2)(λ2 − λ1) = 0.
12 The projection matrix P has λ = 1, 0, 1 with eigenvectors (1, 2, 0), (2, −1, 0), (0, 0, 1). Add the first and last vectors: (1, 2, 1) also has λ = 1. Note P² = P leads to λ² = λ so λ = 0 or 1.
13 (a) Pu = (uu^T)u = u(u^T u) = u so λ = 1. (b) Pv = (uu^T)v = u(u^T v) = 0. (c) x1 = (−1, 1, 0, 0), x2 = (−3, 0, 1, 0), x3 = (−5, 0, 0, 1) all have Px = 0x = 0.
14 Two eigenvectors of this rotation matrix are x1 = (1, i) and x2 = (1, −i) (more generally cx1 and dx2 with cd ≠ 0).
15 The other two eigenvalues are λ = (−1 ± i√3)/2; the three eigenvalues are 1, 1, −1.
16 Set λ = 0 in det(A − λI) = (λ1 − λ)⋯(λn − λ) to find det A = (λ1)(λ2)⋯(λn).
17 λ1 = (a + d + √((a − d)² + 4bc))/2 and λ2 = (a + d − √((a − d)² + 4bc))/2 add to a + d. If A has λ1 = 3 and λ2 = 4 then det(A − λI) = (λ − 3)(λ − 4) = λ² − 7λ + 12.
18 These 3 matrices have λ = 4 and 5, trace 9, det 20: [4 0; 0 5], [3 2; −1 6], [2 2; −3 7].
19 (a) rank = 2 (b) det(B^T B) = 0 (d) eigenvalues of (B² + I)⁻¹ are 1, 1/2, 1/5.
20 A = [0 1; −28 11] has trace 11 and determinant 28, so λ = 4 and 7. Moving to a 3 by 3 companion matrix, C = [0 1 0; 0 0 1; 6 −11 6] has det(C − λI) = −λ³ + 6λ² − 11λ + 6 = (1 − λ)(2 − λ)(3 − λ). Notice the trace 6 = 1 + 2 + 3, the determinant 6 = (1)(2)(3), and also 11 = (1)(2) + (1)(3) + (2)(3).
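A small numerical check of Problem 20's companion matrix (an added illustration, not in the original text):

```python
# det(C - lam*I) should vanish at lam = 1, 2, 3 for the companion matrix C.

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

C = [[0, 1, 0],
     [0, 0, 1],
     [6, -11, 6]]

for lam in (1, 2, 3):
    shifted = [[C[i][j] - (lam if i == j else 0) for j in range(3)]
               for i in range(3)]
    assert det3(shifted) == 0
print("eigenvalues 1, 2, 3 confirmed")
```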
21 (A − λI) has the same determinant as (A − λI)^T because every square matrix has det M = det M^T. But [1 0; 1 0] and [1 1; 0 0] have different eigenvectors.
22 λ = 1 (for Markov), 0 (for singular), −1/2 (so the sum of eigenvalues = trace = 1/2).
23 A² is always the zero matrix if λ = 0 and 0, by the Cayley-Hamilton Theorem in Problem 6.2.32: examples [0 0; 0 0], [0 1; 0 0], [1 1; −1 −1].
24 λ = 0, 0, 6 (notice rank 1 and trace 6) with x1 = (0, −2, 1), x2 = (1, −2, 0), x3 = (1, 2, 1).
25 With the same n λ's and x's, Ax = c1 λ1 x1 + ⋯ + cn λn xn equals Bx = c1 λ1 x1 + ⋯ + cn λn xn for all vectors x. So A = B.
26 The block matrix has λ = 1, 2 from B and 5, 7 from D. All entries of C are multiplied by zeros in det(A − λI), so C has no effect on the eigenvalues.
27 A has rank 1 with eigenvalues 0, 0, 0, 4 (the 4 comes from the trace of A). C has rank 2 (ensuring two zero eigenvalues) and (1, 1, 1, 1) is an eigenvector with λ = 2. With trace 4, the other eigenvalue is also λ = 2, and its eigenvector is (1, −1, 1, −1).
28 B has λ = −1, −1, −1, 3 and C has λ = 1, 1, 1, −3. Both have det = −3.
29 Triangular matrix: λ(A) = 1, 4, 6; λ(B) = 2, √3, −√3. Rank-1 matrix: λ(C) = 0, 0, 6.
30 [a b; c d][1; 1] = [a + b; c + d] = (a + b)[1; 1] when a + b = c + d; λ2 = d − b to produce the correct trace (a + b) + (d − b) = a + d.
31 Eigenvector (1, 3, 4) for A with λ = 11 and eigenvector (3, 1, 4) for PAP^T. Eigenvectors with λ ≠ 0 must be in the column space, since Ax is always in the column space and x = Ax/λ.
32 (a) u is a basis for the nullspace; v and w give a basis for the column space. (b) x = (0, 1/3, 1/5) is a particular solution. Add any cu from the nullspace. (c) If Ax = u had a solution, u would be in the column space: wrong dimension 3.
33 If v^T u = 0 then A² = u(v^T u)v^T is the zero matrix, so λ² = 0, 0 and λ = 0, 0 and trace(A) = 0. This zero trace also comes from adding the diagonal entries of A = uv^T:
A = [u1; u2][v1 v2] = [u1 v1, u1 v2; u2 v1, u2 v2] has trace u1 v1 + u2 v2 = v^T u = 0.
34 det(P − λI) = 0 gives the equation λ⁴ = 1. This reflects the fact that P⁴ = I. The solutions of λ⁴ = 1 are λ = 1, i, −1, −i. The real eigenvector x1 = (1, 1, 1, 1) is not changed by the permutation P. Three more eigenvectors are (i, i², i³, i⁴) and (1, −1, 1, −1) and (−i, (−i)², (−i)³, (−i)⁴).
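Problem 34's facts can be confirmed directly; in this added check P is taken as the 4 by 4 cyclic shift, consistent with the eigenvectors (i, i², i³, i⁴) listed above:

```python
# P^4 = I for the cyclic permutation, and (1,1,1,1) is a fixed eigenvector.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [1, 0, 0, 0]]

P4 = P
for _ in range(3):
    P4 = matmul(P4, P)

I = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
assert P4 == I

x = [1, 1, 1, 1]
assert [sum(P[i][j] * x[j] for j in range(4)) for i in range(4)] == x
print("P^4 = I and P(1,1,1,1) = (1,1,1,1)")
```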
35 3 by 3 permutation matrices: since P^T P = I gives (det P)² = 1, the determinant is 1 or −1. The pivots are always 1 (but there may be row exchanges). The trace of P can be 3 (for P = I) or 1 (for a row exchange) or 0 (for a double exchange). The possible eigenvalues are 1 and −1 and e^{2πi/3} and e^{−2πi/3}.
36 λ1 = e^{2πi/3} and λ2 = e^{−2πi/3} give det λ1 λ2 = 1 and trace λ1 + λ2 = −1. A = [cos θ −sin θ; sin θ cos θ] with θ = 2π/3 has this trace and det. So does every M⁻¹AM!
37 (a) Since the columns of A add to 1, one eigenvalue is λ = 1 and the other is c − .6 (to give the correct trace c + .4). (b) If c = 1.6 then both eigenvalues are 1, and all solutions to (A − I)x = 0 are multiples of x = (1, −1). (c) If c = .8, the eigenvectors for λ = 1 are multiples of (1, 3). Since all powers Aⁿ also have column sums = 1, Aⁿ will approach (1/4)[1 1; 3 3] = rank-1 matrix A^∞ with eigenvalues 1, 0 and correct eigenvectors (1, 3) and (1, −1).
Problem Set 6.2, page 307
1 Put the eigenvectors in S and the eigenvalues in Λ:
[1 2; 0 3] = [1 1; 0 1][1 0; 0 3][1 −1; 0 1]; [1 1; 3 3] = [1 1; −1 3][0 0; 0 4][3/4 −1/4; 1/4 1/4].
2 A = SΛS⁻¹ = [1 1; 0 1][2 0; 0 5][1 −1; 0 1] = [2 3; 0 5].
3 If A = SΛS⁻¹ then the eigenvalue matrix for A + 2I is Λ + 2I and the eigenvector matrix is still S: A + 2I = S(Λ + 2I)S⁻¹ = SΛS⁻¹ + S(2I)S⁻¹ = A + 2I.
4 (a) False: don't know the λ's (b) True (c) True (d) False: need eigenvectors of S.
5 With S = I, A = SΛS⁻¹ = Λ is a diagonal matrix. If S is triangular, then S⁻¹ is triangular, so SΛS⁻¹ is also triangular.
6 The columns of S are nonzero multiples of (2, 1) and (0, 1): either order. Same for A⁻¹.
7 A = SΛS⁻¹ = [1 1; 1 −1][λ1 0; 0 λ2][1 1; 1 −1]/2 = [λ1 + λ2, λ1 − λ2; λ1 − λ2, λ1 + λ2]/2 = [a b; b a] for any a and b.
8 A = SΛS⁻¹ = [λ1 λ2; 1 1][λ1 0; 0 λ2][1 −λ2; −1 λ1]/(λ1 − λ2). Then SΛ^k S⁻¹[1; 0] = [λ1^{k+1} − λ2^{k+1}; λ1^k − λ2^k]/(λ1 − λ2); its 2nd component is Fk = (λ1^k − λ2^k)/(λ1 − λ2).
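Problem 8's Fibonacci formula can be checked by actually powering A = [1 1; 1 0] (a verification sketch, not part of the original solution; `math` supplies the square root for Binet's formula):

```python
# The (1, 0) entry of A^k is F_k; it should match (l1^k - l2^k)/(l1 - l2).
import math

def matmul2(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

A = [[1, 1], [1, 0]]
Ak = [[1, 0], [0, 1]]
for _ in range(10):
    Ak = matmul2(Ak, A)

F10 = Ak[1][0]                 # second component of A^10 [1; 0]
assert F10 == 55

l1 = (1 + math.sqrt(5)) / 2
l2 = (1 - math.sqrt(5)) / 2
assert abs((l1**10 - l2**10) / (l1 - l2) - F10) < 1e-9
print(F10)
```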
9 (a) A = [.5 .5; 1 0] has λ1 = 1, λ2 = −1/2 with x1 = (1, 1), x2 = (1, −2). (b) Aⁿ = [1 1; 1 −2][1 0; 0 (−.5)ⁿ][2/3 1/3; 1/3 −1/3] → A^∞ = [2/3 1/3; 2/3 1/3].
10 The rule F_{k+2} = F_{k+1} + F_k produces the pattern: even, odd, odd, even, odd, odd, …
11 (a) True (no zero eigenvalues) (b) False (repeated λ = 2 may have only one line of eigenvectors) (c) False (repeated λ may have a full set of eigenvectors).
12 (a) False: don't know λ (b) True: an eigenvector is missing (c) True.
13 A = [8 3; −3 2] (or other), A = [9 4; −4 1], A = [10 5; −5 0]; the only eigenvectors are x = (c, −c).
14 The rank of A − 3I is r = 1. Changing any entry except a12 = 1 makes A diagonalizable (A will have two different eigenvalues).
15 A^k = SΛ^k S⁻¹ approaches zero if and only if every |λ| < 1; A1^k → A1^∞ and A2^k → 0.
16 Λ = [1 0; 0 .2] and S = [1 1; 1 −1]; Λ^k → [1 0; 0 0] and SΛ^k S⁻¹ → [1/2 1/2; 1/2 1/2]: steady state.
17 Λ = [.9 0; 0 .3], S = [3 3; 1 −1]; A¹⁰[3; 1] = (.9)¹⁰[3; 1] and A¹⁰[3; −1] = (.3)¹⁰[3; −1], so A¹⁰[6; 0] = (.9)¹⁰[3; 1] + (.3)¹⁰[3; −1] because [6; 0] is the sum of [3; 1] + [3; −1].
18 [2 −1; −1 2] = [1 1; 1 −1][1 0; 0 3][1 1; 1 −1]/2 and A^k = [1 1; 1 −1][1 0; 0 3^k][1 1; 1 −1]/2. Multiply those last three matrices to get A^k = (1/2)[1 + 3^k, 1 − 3^k; 1 − 3^k, 1 + 3^k].
19 B^k = [1 1; 0 −1][5^k 0; 0 4^k][1 1; 0 −1] = [5^k, 5^k − 4^k; 0, 4^k].
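The closed form in Problem 18 is easy to verify by brute-force matrix powers (an added check, not in the original text):

```python
# A^k should equal (1/2)[[1 + 3^k, 1 - 3^k], [1 - 3^k, 1 + 3^k]] for A = [[2,-1],[-1,2]].

def matmul2(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

A = [[2, -1], [-1, 2]]
Ak = [[1, 0], [0, 1]]
for k in range(1, 8):
    Ak = matmul2(Ak, A)
    closed = [[(1 + 3**k) / 2, (1 - 3**k) / 2],
              [(1 - 3**k) / 2, (1 + 3**k) / 2]]
    assert Ak == closed
print("A^k formula holds for k = 1..7")
```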
20 det A = (det S)(det Λ)(det S⁻¹) = det Λ = λ1 ⋯ λn. This proof works when A is diagonalizable.
21 trace ST = (aq + bs) + (cr + dt) is equal to (qa + rc) + (sb + td) = trace TS. Diagonalizable case: the trace of SΛS⁻¹ = trace of (ΛS⁻¹)S = trace of Λ = sum of the λ's.
22 AB − BA = I is impossible since trace AB − trace BA = zero ≠ trace I. AB − BA = C is possible when trace(C) = 0, and E = [1 0; 1 1] has EE^T − E^T E = [−1 0; 0 1].
23 If A = SΛS⁻¹ then B = [A 0; 0 2A] = [S 0; 0 S][Λ 0; 0 2Λ][S⁻¹ 0; 0 S⁻¹]. So B has the additional eigenvalues 2λ1, …, 2λn.
24 The A’s form a subspace since cA and A1 C A2 all have the same S . When S D I
the A’s with those eigenvectors give the subspace of diagonal matrices. Dimension 4.
25 If A has columns x1, …, xn then column by column, A² = A means every Axi = xi. All vectors in the column space (combinations of those columns xi) are eigenvectors with λ = 1. Always the nullspace has λ = 0 (A might have dependent columns, so there could be fewer than n eigenvectors with λ = 1). Dimensions of those spaces add to n by the Fundamental Theorem, so A is diagonalizable (n independent eigenvectors altogether).
26 Two problems: The nullspace and column space can overlap, so x could be in both.
There may not be r independent eigenvectors in the column space.
27 R = S√Λ S⁻¹ is a square root of A = [2 1; 1 2]: R² = A. √B needs λ = √9 and √(−1); that trace is not real. Note that [−1 0; 0 −1] can have λ = i and −i, with trace 0 and real square root [0 1; −1 0].
28 A^T = A gives x^T ABx = (Ax)^T(Bx) ≤ ‖Ax‖‖Bx‖ by the Schwarz inequality. B^T = −B gives −x^T BAx = (Bx)^T(Ax) ≤ ‖Ax‖‖Bx‖. Add to get Heisenberg's Uncertainty Principle when AB − BA = I: position-momentum, also time-energy.
29 The factorizations of A and B into SΛS⁻¹ are the same. So A = B. (This is the same as Problem 6.1.25, expressed in matrix form.)
30 A = SΛ1 S⁻¹ and B = SΛ2 S⁻¹. Diagonal matrices always give Λ1 Λ2 = Λ2 Λ1. Then AB = BA from SΛ1 S⁻¹ SΛ2 S⁻¹ = SΛ1 Λ2 S⁻¹ = SΛ2 Λ1 S⁻¹ = SΛ2 S⁻¹ SΛ1 S⁻¹ = BA.
31 (a) A = [a b; 0 d] has λ = a and λ = d: (A − aI)(A − dI) = [0 b; 0 d−a][a−d b; 0 0] = [0 0; 0 0]. (b) A = [1 1; 1 0] has A² = [2 1; 1 1] and A² − A − I = 0 is true, matching λ² − λ − 1 = 0 as the Cayley-Hamilton Theorem predicts.
32 When A = SΛS⁻¹ is diagonalizable, the matrix A − λj I = S(Λ − λj I)S⁻¹ will have 0 in the j, j diagonal entry of Λ − λj I. In the product p(A) = (A − λ1 I)⋯(A − λn I), each S⁻¹ inside cancels the next S. This leaves S times (the product of the diagonal matrices Λ − λj I) times S⁻¹. That product is the zero matrix because the factors produce a zero in each diagonal position. Then p(A) = zero matrix, which is the Cayley-Hamilton Theorem. (If A is not diagonalizable, one proof is to take a sequence of diagonalizable matrices approaching A.)
Comment I have also seen this reasoning but I am not convinced:
Apply the formula AC^T = (det A)I from Section 5.3 to A − λI with variable λ. Its cofactor matrix C will be a polynomial in λ, since cofactors are determinants:
(A − λI) cof(A − λI)^T = det(A − λI) I = p(λ)I.
"For fixed A, this is an identity between two matrix polynomials." Set λ = A to find the zero matrix on the left, so p(A) = zero matrix on the right, which is the Cayley-Hamilton Theorem.
I am not certain about the key step of substituting a matrix for λ. If other matrices B are substituted, does the identity remain true? If AB ≠ BA, even the order of multiplication seems unclear …
33 λ = 2, −1, 0 are in Λ and the eigenvectors (2, 1, 1), (1, −1, −1), (0, 1, −1) are in S. Then A^k = SΛ^k S⁻¹ is
(2^k/6)[4 2 2; 2 1 1; 2 1 1] + ((−1)^k/3)[1 −1 −1; −1 1 1; −1 1 1].
Check k = 4. The (2, 2) entry of A⁴ is 2⁴/6 + (−1)⁴/3 = 18/6 = 3. The 4-step paths that begin and end at node 2 are 2 to 1 to 1 to 1 to 2, 2 to 1 to 2 to 1 to 2, and 2 to 1 to 3 to 1 to 2. Much harder to find the eleven 4-step paths that start and end at node 1.
34 If AB = BA, then B has the same eigenvectors (1, 0) and (0, 1) as A. So B is also diagonal: b = c = 0. The nullspace for the following equation is 2-dimensional:
AB − BA = [1 0; 0 2][a b; c d] − [a b; c d][1 0; 0 2] = [0 −b; c 0] = [0 0; 0 0].
The coefficient matrix has rank 4 − 2 = 2.
35 B has λ = i and −i, so B⁴ has λ⁴ = 1 and 1 and B⁴ = I. C has λ = (1 ± √3 i)/2 = exp(±πi/3), so λ³ = −1 and −1. Then C³ = −I, C⁶ = I, and C¹⁰²⁴ = (C⁶)¹⁷⁰ C⁴ = −C.
36 The eigenvalues of A = [cos θ −sin θ; sin θ cos θ] are λ = e^{iθ} and e^{−iθ} (trace 2 cos θ and det = 1). Their eigenvectors are (1, −i) and (1, i):
Aⁿ = SΛⁿS⁻¹ = [1 1; −i i][e^{inθ} 0; 0 e^{−inθ}][i −1; i 1]/(2i)
= [(e^{inθ} + e^{−inθ})/2, −(e^{inθ} − e^{−inθ})/2i; (e^{inθ} − e^{−inθ})/2i, (e^{inθ} + e^{−inθ})/2] = [cos nθ −sin nθ; sin nθ cos nθ].
Geometrically, n rotations by θ give one rotation by nθ.
37 Columns of S times rows of ΛS⁻¹ will give r rank-1 matrices (r = rank of A).
38 Note that ones(n) · ones(n) = n ones(n). This leads to C = −1/(n + 1):
AA⁻¹ = (eye(n) + ones(n)) · (eye(n) + C ones(n)) = eye(n) + (1 + C + Cn) ones(n) = eye(n).
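Problem 38's inverse can be confirmed exactly with rational arithmetic (an added check using Python's `fractions`, not part of the original solution):

```python
# (eye(n) + ones(n)) * (eye(n) - ones(n)/(n+1)) should be exactly the identity.
from fractions import Fraction

n = 5
A = [[(1 if i == j else 0) + 1 for j in range(n)] for i in range(n)]      # eye + ones
C = Fraction(-1, n + 1)
Ainv = [[(1 if i == j else 0) + C for j in range(n)] for i in range(n)]    # eye + C*ones

product = [[sum(Fraction(A[i][k]) * Ainv[k][j] for k in range(n))
            for j in range(n)] for i in range(n)]
identity = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
assert product == identity
print("A * Ainv = I for n =", n)
```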
Problem Set 6.3, page 325
1 u1 = e^{4t}[1; 0], u2 = e^{t}[1; 1]. If u(0) = (5, 2), then u(t) = 3e^{4t}[1; 0] + 2e^{t}[1; 1].
2 z(t) = 2e^t; then dy/dt = 4y − 6e^t with y(0) = 5 gives y(t) = 3e^{4t} + 2e^t as in Problem 1.
3 (a) If every column of A adds to zero, this means that the rows add to the zero row. So the rows are dependent, A is singular, and λ = 0 is an eigenvalue.
(b) The eigenvalues of A = [−2 3; 2 −3] are λ1 = 0 with eigenvector x1 = (3, 2) and λ2 = −5 (to give trace = −5) with x2 = (1, −1). Then the usual 3 steps:
1. Write u(0) = [4; 1] as [3; 2] + [1; −1] = x1 + x2
2. Follow those eigenvectors by e^{0t} x1 and e^{−5t} x2
3. The solution u(t) = x1 + e^{−5t} x2 has steady state x1 = (3, 2).
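The eigenvalue facts behind the three steps of Problem 3(b) can be checked directly (an added verification, not in the original text):

```python
# A = [[-2, 3], [2, -3]] should kill x1 = (3, 2) and scale x2 = (1, -1) by -5.

A = [[-2, 3], [2, -3]]

def apply(A, x):
    return [A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]]

x1 = [3, 2]
x2 = [1, -1]
assert apply(A, x1) == [0, 0]                     # lambda1 = 0: steady state
assert apply(A, x2) == [-5 * v for v in x2]       # lambda2 = -5: decaying mode
assert [a + b for a, b in zip(x1, x2)] == [4, 1]  # u(0) = x1 + x2
print("u(t) = x1 + exp(-5t) x2 -> (3, 2)")
```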
4 d(v + w)/dt = (w − v) + (v − w) = 0, so the total v + w is constant. A = [−1 1; 1 −1] has λ1 = 0 with x1 = (1, 1) and λ2 = −2 with x2 = (1, −1); v(1) = 20 + 10e^{−2}, w(1) = 20 − 10e^{−2}, and v(∞) = w(∞) = 20.
5 d/dt [v; w] = [1 −1; −1 1][v; w] has λ = 0 and +2: now v(t) = 20 + 10e^{2t} → ∞ as t → ∞.
6 A = [a 1; 1 a] has real eigenvalues a + 1 and a − 1. These are both negative if a < −1, and the solutions of u′ = Au approach zero. B = [b 1; −1 b] has complex eigenvalues b + i and b − i. These have negative real parts if b < 0, and all solutions of v′ = Bv approach zero.
7 A projection matrix has eigenvalues λ = 1 and λ = 0. Eigenvectors Px = x fill the subspace that P projects onto: here x = (1, 1). Eigenvectors Px = 0 fill the perpendicular subspace: here x = (1, −1). For the solution to u′ = −Pu:
u(0) = [3; 1] = [2; 2] + [1; −1]; u(t) = e^{−t}[2; 2] + e^{0t}[1; −1] approaches [1; −1].
8 [6 −2; 2 1] has λ1 = 5 with x1 = (2, 1) and λ2 = 2 with x2 = (1, 2); rabbits r(t) = 20e^{5t} + 10e^{2t}, wolves w(t) = 10e^{5t} + 20e^{2t}. The ratio of rabbits to wolves approaches 20/10; e^{5t} dominates.
9 (a) [4; 0] = 2[1; i] + 2[1; −i]. (b) Then u(t) = 2e^{it}[1; i] + 2e^{−it}[1; −i] = [4 cos t; −4 sin t].
10 d/dt [y; y′] = [0 1; 4 5][y; y′]. A = [0 1; 4 5] has det(A − λI) = λ² − 5λ − 4 = 0. Directly substituting y = e^{λt} into y″ = 5y′ + 4y also gives λ² = 5λ + 4 and the same two values of λ. Those values are (5 ± √41)/2 by the quadratic formula.
11 A = [0 1; 0 0] has A² = 0, so e^{At} = I + At + zeros = [1 t; 0 1]. Then [y(t); y′(t)] = [1 t; 0 1][y(0); y′(0)] = [y(0) + y′(0)t; y′(0)]. This y(t) = y(0) + y′(0)t solves the equation.
12 A = [0 1; −9 6] has trace 6, det 9, λ = 3 and 3 with one independent eigenvector (1, 3).
13 (a) y(t) = cos 3t and sin 3t solve y″ = −9y. It is 3 cos 3t that starts with y(0) = 3 and y′(0) = 0. (b) A = [0 1; −9 0] has det = 9: λ = 3i and −3i with x = (1, 3i) and (1, −3i). Then u(t) = (3/2)e^{3it}[1; 3i] + (3/2)e^{−3it}[1; −3i] = [3 cos 3t; −9 sin 3t].
14 When A is skew-symmetric, ku.t /k D ke At u.0/k is ku.0/k. So e At is orthogonal.
15 up = 4 and u(t) = ce^t + 4; up = [4; 2] and u(t) = c1 e^{λ1 t} x1 + c2 e^{λ2 t} x2 + [4; 2].
16 Substituting u = e^{ct}v gives ce^{ct}v = Ae^{ct}v − e^{ct}b, or (A − cI)v = b, or v = (A − cI)⁻¹b = particular solution. If c is an eigenvalue then A − cI is not invertible.
17 (a) [0 1; 1 0] (b) [1 0; 0 1] (c) [1 1; −1 1]. These show the unstable cases: (a) λ1 < 0 and λ2 > 0 (b) λ1 > 0 and λ2 > 0 (c) λ = a ± ib with a > 0.
18 d/dt(e^{At}) = A + A²t + (1/2)A³t² + (1/6)A⁴t³ + ⋯ = A(I + At + (1/2)A²t² + (1/6)A³t³ + ⋯). This is exactly Ae^{At}, the derivative we expect.
19 e^{Bt} = I + Bt (short series, since B² = 0) = [1 −4t; 0 1]. Its derivative [0 −4; 0 0] = B.
20 The solution at time t + T is also e^{A(t+T)}u(0). Thus e^{At} times e^{AT} equals e^{A(t+T)}.
21 [1 4; 0 0] = [1 −4; 0 1][1 0; 0 0][1 4; 0 1]; then e^{At} = [1 −4; 0 1][e^t 0; 0 1][1 4; 0 1] = [e^t, 4e^t − 4; 0, 1].
22 A² = A gives e^{At} = I + At + (1/2)At² + (1/6)At³ + ⋯ = I + (e^t − 1)A = [e^t, 4(e^t − 1); 0, 1].
23 e^A = [e, 4e − 4; 0, 1] from Problem 21 and e^B = [1 −4; 0 1] from Problem 19. By direct multiplication e^A e^B ≠ e^B e^A ≠ e^{A+B} = [e 0; 0 1].
24 A = [1 1; 0 3] = [1 1; 0 2][1 0; 0 3][1 −1/2; 0 1/2]. Then e^{At} = [1 1; 0 2][e^t 0; 0 e^{3t}][1 −1/2; 0 1/2] = [e^t, (e^{3t} − e^t)/2; 0, e^{3t}].
25 The matrix has A² = [1 3; 0 0][1 3; 0 0] = [1 3; 0 0] = A. Then all Aⁿ = A. So e^{At} = I + (t + t²/2! + ⋯)A = I + (e^t − 1)A = [e^t, 3(e^t − 1); 0, 1] as in Problem 22.
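Problem 25's shortcut (when A² = A the exponential series collapses to I + (e^t − 1)A) can be tested against partial sums of the series; this added sketch uses A = [1 3; 0 0] from above:

```python
# Sum 29 terms of e^{At} = I + At + (At)^2/2! + ... and compare with the
# closed form I + (e^t - 1)A, valid because A is idempotent (A^2 = A).
import math

A = [[1.0, 3.0], [0.0, 0.0]]
t = 1.0

term = [[1.0, 0.0], [0.0, 1.0]]       # running term (At)^k / k!
total = [[1.0, 0.0], [0.0, 1.0]]      # partial sum, starts at I
for k in range(1, 30):
    term = [[sum(term[i][m] * A[m][j] * t / k for m in range(2))
             for j in range(2)] for i in range(2)]
    total = [[total[i][j] + term[i][j] for j in range(2)] for i in range(2)]

closed = [[math.e, 3 * (math.e - 1)], [0.0, 1.0]]
assert all(abs(total[i][j] - closed[i][j]) < 1e-9
           for i in range(2) for j in range(2))
print("series matches I + (e - 1)A")
```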
26 (a) The inverse of e^{At} is e^{−At}. (b) If Ax = λx then e^{At}x = e^{λt}x and e^{λt} ≠ 0. To see e^{At}x, write (I + At + (1/2)A²t² + ⋯)x = (1 + λt + (1/2)λ²t² + ⋯)x = e^{λt}x.
27 (x, y) = (e^{4t}, e^{4t}) is a growing solution. The correct matrix for the exchanged u = (y, x) exchanges the rows and columns of the original matrix. It does have the same eigenvalues as the original matrix.
28 Centering produces U_{n+1} = [1, Δt; −Δt, 1 − (Δt)²]U_n. At Δt = 1, [1 1; −1 0] has λ = e^{iπ/3} and e^{−iπ/3}. Both eigenvalues have λ⁶ = 1 so A⁶ = I. Therefore U₆ = A⁶U₀ comes exactly back to U₀.
29 First A has λ = ±i and A⁴ = I. Second A has λ = −1, −1 and Aⁿ = (−1)ⁿ[1 − 2n, 2n; −2n, 2n + 1]: linear growth.
30 With a = Δt/2 the trapezoidal step is U_{n+1} = (1/(1 + a²))[1 − a², 2a; −2a, 1 − a²]U_n. That matrix has orthonormal columns ⇒ orthogonal matrix ⇒ ‖U_{n+1}‖ = ‖U_n‖.
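Problem 30's claim that the trapezoidal step matrix is exactly orthogonal holds for every a; here is an exact check at one sample value (added, not in the original; the value a = 37/100 is arbitrary):

```python
# Columns of (1/(1+a^2)) [[1-a^2, 2a], [-2a, 1-a^2]] are exactly orthonormal.
from fractions import Fraction

a = Fraction(37, 100)                 # a = dt/2, an arbitrary sample value
s = 1 + a * a
Q = [[(1 - a * a) / s, 2 * a / s],
     [-2 * a / s, (1 - a * a) / s]]

col0 = [Q[0][0], Q[1][0]]
col1 = [Q[0][1], Q[1][1]]
assert col0[0]**2 + col0[1]**2 == 1               # unit column (exact)
assert col1[0]**2 + col1[1]**2 == 1               # unit column (exact)
assert col0[0]*col1[0] + col0[1]*col1[1] == 0     # perpendicular columns
print("orthogonal step: ||U_{n+1}|| = ||U_n||")
```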
31 (a) (cos A)x = (cos λ)x (b) λ(A) = 2π and 0 so cos λ = 1, 1 and cos A = I (c) u(t) = 3(cos 2πt)(1, 1) + 1(cos 0t)(1, −1) [u′ = Au has exponentials; u″ = Au has cosines].
Problem Set 6.4, page 337
Note A way to complete the proof at the end of page 334 (perturbing the matrix to produce distinct eigenvalues) is now on the course website "Proofs of the Spectral Theorem": math.mit.edu/linearalgebra.
1 A = [1 3 6; 3 3 3; 6 3 5] + [0 1 2; −1 0 3; −2 −3 0] = (1/2)(A + A^T) + (1/2)(A − A^T) = symmetric + skew-symmetric.
2 (A^T CA)^T = A^T C^T (A^T)^T = A^T CA. When A is 6 by 3, C will be 6 by 6 and the triple product A^T CA is 3 by 3.
3 λ = 0, 4, −2; unit vectors ±(0, 1, −1)/√2 and ±(2, 1, 1)/√6 and ±(1, −1, −1)/√3.
4 λ = 10 and 5 in Λ = [10 0; 0 5]; x = (1, 2) and (−2, 1) have to be normalized to unit vectors in Q = (1/√5)[1 −2; 2 1].
5 Q = (1/3)[2 2 −1; 2 −1 2; −1 2 2]. The columns of Q are unit eigenvectors of A. Each unit eigenvector could be multiplied by −1.
6 A = [9 12; 12 16] has λ = 0 and 25 so the columns of Q are the two eigenvectors: Q = [.8 .6; −.6 .8], or we can exchange columns or reverse the signs of any column.
7 (a) [1 2; 2 1] has λ = −1 and 3 (b) The pivots have the same signs as the λ's (c) trace = λ1 + λ2 = 2, so A can't have two negative eigenvalues.
8 If A³ = 0 then all λ³ = 0 so all λ = 0, as in A = [0 1; 0 0]. If A is symmetric then A³ = QΛ³Q^T = 0 requires Λ = 0. The only symmetric A is Q·0·Q^T = zero matrix.
9 If λ is complex then λ̄ is also an eigenvalue (Āx̄ = λ̄x̄). Always λ + λ̄ is real. The trace is real, so the third eigenvalue of a 3 by 3 real matrix must be real.
10 If x is not real then λ = x^T Ax/x^T x is not always real. Can't assume real eigenvectors!
11 A = QΛQ^T = λ1 x1 x1^T + λ2 x2 x2^T:
[3 1; 1 3] = 2[1/2 −1/2; −1/2 1/2] + 4[1/2 1/2; 1/2 1/2]; [9 12; 12 16] = 0[.64 −.48; −.48 .36] + 25[.36 .48; .48 .64].
12 [x1 x2] is an orthogonal matrix so P1 + P2 = x1 x1^T + x2 x2^T = [x1 x2][x1^T; x2^T] = I; P1 P2 = x1(x1^T x2)x2^T = 0. Second proof: P1 P2 = P1(I − P1) = P1 − P1 = 0 since P1² = P1.
13 A = [0 b; −b 0] has λ = ib and −ib. The block matrices [A 0; 0 A] and [0 A; A 0] are also skew-symmetric with λ = ib (twice) and λ = −ib (twice).
14 M is skew-symmetric and orthogonal; the λ's must be i, i, −i, −i to have trace zero.
15 A = [i 1; 1 −i] has λ = 0, 0 and only one independent eigenvector x = (i, 1). The good property for complex matrices is not A^T = A (symmetric) but Ā^T = A (Hermitian, with real eigenvalues and orthogonal eigenvectors: see Problem 20 and Section 10.2).
16 (a) If Az = λy and A^T y = λz then B[y; −z] = [−Az; A^T y] = −λ[y; −z]. So −λ is also an eigenvalue of B. (b) A^T Az = A^T(λy) = λ²z. (c) λ = −1, −1, 1, 1; x1 = (1, 0, −1, 0), x2 = (0, 1, 0, −1), x3 = (1, 0, 1, 0), x4 = (0, 1, 0, 1).
17 The eigenvalues of B = [0 0 1; 0 0 1; 1 1 0] are 0, √2, −√2 by Problem 16, with x1 = (1, −1, 0), x2 = (1, 1, √2), x3 = (1, 1, −√2).
18 1. y is in the nullspace of A and x is in the column space = row space because A = A^T. Those spaces are perpendicular, so y^T x = 0.
2. If Ax = λx and Ay = βy then shift by β: (A − βI)x = (λ − β)x and (A − βI)y = 0, and again x ⊥ y.
19 A has S = [1 1 0; −1 1 0; 0 0 1]: perpendicular eigenvectors for A. B has S = [1 0 1; 0 1 0; 0 0 1]: not perpendicular for B, since B^T ≠ B.
20 A = [1, 3+4i; 3−4i, 1] is a Hermitian matrix (Ā^T = A). Its eigenvalues 6 and −4 are real. Adjust equations (1)–(2) in the text to prove that λ is always real when Ā^T = A:
Ax = λx leads to Āx̄ = λ̄x̄; transpose to x̄^T A = λ̄x̄^T using Ā^T = A.
Then x̄^T Ax = λx̄^T x and also x̄^T Ax = λ̄x̄^T x. So λ = λ̄ is real.
21 (a) False: A = [1 2; 0 1] (b) True from A^T = QΛQ^T (c) True from A⁻¹ = QΛ⁻¹Q^T (d) False!
22 A and A^T have the same λ's but the order of the x's can change. A = [0 1; −1 0] has λ1 = i and λ2 = −i with x1 = (1, i) first for A but x1 = (1, −i) first for A^T.
23 A is invertible, orthogonal, permutation, diagonalizable, Markov; B is projection, diagonalizable, Markov. A allows QR, SΛS⁻¹, QΛQ^T; B allows SΛS⁻¹ and QΛQ^T.
24 Symmetry gives QΛQ^T if b = 1; repeated λ and no S if b = −1; singular if b = 0.
25 Orthogonal and symmetric requires |λ| = 1 and λ real, so λ = ±1. Then A = ±I or
A = QΛQ^T = [cos θ −sin θ; sin θ cos θ][1 0; 0 −1][cos θ sin θ; −sin θ cos θ] = [cos 2θ, sin 2θ; sin 2θ, −cos 2θ].
26 Eigenvectors (1, 0) and (1, 1) give a 45° angle even with A^T very close to A.
27 The roots of λ² + bλ + c = 0 are (−b ± √(b² − 4c))/2. Then λ1 − λ2 is √(b² − 4c). For det(A + tB − λI) we have b = −3 − 8t and c = 2 + 16t − t². The minimum of b² − 4c is 1/17 at t = 2/17. Then λ2 − λ1 = 1/√17.
28 A = [4, 2+i; 2−i, 0] = Ā^T has real eigenvalues λ = 5 and −1 with trace 4 and det −5. The solution to Problem 20 proves that λ is real when Ā^T = A is Hermitian; I did not intend to repeat this part.
29 (a) A = QΛQ^T times A^T = QΛ^T Q^T equals A^T times A because ΛΛ^T = Λ^T Λ (diagonal!) (b) step 2: the 1, 1 entries of T T̄^T and T̄^T T are |a|² and |a|² + |b|². This makes b = 0 and T = Λ.
30 a11 = [q11 … q1n]Λ[q̄11 … q̄1n]^T = λ1|q11|² + ⋯ + λn|q1n|² ≤ λmax(|q11|² + ⋯ + |q1n|²) = λmax.
31 (a) x^T(Ax) = (Ax)^T x = x^T A^T x = −x^T Ax, so x^T Ax = 0 (b) z̄^T Az is pure imaginary: its real part is x^T Ax + y^T Ay = 0 + 0 (c) det A = λ1 ⋯ λn ≥ 0: pairs of λ's = ib, −ib.
32 Since A is diagonalizable with eigenvalue matrix Λ = 2I, the matrix A itself has to be SΛS⁻¹ = S(2I)S⁻¹ = 2I. (The unsymmetric matrix [2 1; 0 2] also has λ = 2, 2.)
Problem Set 6.5, page 350
1 Suppose a > 0 and ac > b², so that also c > b²/a > 0. (i) The eigenvalues have the same sign because λ1 λ2 = det = ac − b² > 0. (ii) That sign is positive because λ1 + λ2 > 0 (it equals the trace a + c > 0).
2 Only A4 = [1 10; 10 101] has two positive eigenvalues. x^T A1 x = 5x1² + 12x1 x2 + 7x2² is negative, for example, when x1 = 4 and x2 = −3: A1 is not positive definite, as its determinant confirms.
3 Positive definite for −3 < b < 3: [1 b; b 9] = [1 0; b 1][1 0; 0 9 − b²][1 b; 0 1] = LDL^T.
Positive definite for c > 8: [2 4; 4 c] = [1 0; 2 1][2 0; 0 c − 8][1 2; 0 1] = LDL^T.
4 f(x, y) = x² + 4xy + 9y² = (x + 2y)² + 5y²; x² + 6xy + 9y² = (x + 3y)².
5 x² + 4xy + 3y² = (x + 2y)² − y² = difference of squares, which is negative at x = 2, y = −1, where the first square is zero.
6 A = [0 1; 1 0] produces f(x, y) = [x y][0 1; 1 0][x; y] = 2xy. A has λ = 1 and −1. Then A is an indefinite matrix and f(x, y) = 2xy has a saddle point.
7 R^T R = [1 2; 2 13] and R^T R = [6 5; 5 6] are positive definite; R^T R = [2 3 3; 3 5 4; 3 4 5] is singular (and positive semidefinite). The first two R's have independent columns. The 2 by 3 R cannot have full column rank 3, with only 2 rows.
8 A = [3 6; 6 16] = [1 0; 2 1][3 0; 0 4][1 2; 0 1]. Pivots 3, 4 outside the squares, ℓij inside: x^T Ax = 3(x + 2y)² + 4y².
9 A = [4 4 8; 4 4 8; 8 8 16] has only one pivot = 4, rank A = 1, eigenvalues 24, 0, 0, det A = 0.
10 A = [2 −1 0; −1 2 −1; 0 −1 2] has pivots 2, 3/2, 4/3; B = [2 −1 −1; −1 2 −1; −1 −1 2] is singular: B[1; 1; 1] = [0; 0; 0].
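The pivots 2, 3/2, 4/3 of Problem 10 come straight from elimination; here is an exact check with Python fractions (added, not part of the original solution):

```python
# Gaussian elimination (no row exchanges) on the -1, 2, -1 matrix,
# collecting the pivots exactly.
from fractions import Fraction

A = [[Fraction(v) for v in row] for row in [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]]
n = 3
pivots = []
for k in range(n):
    pivots.append(A[k][k])
    for i in range(k + 1, n):
        factor = A[i][k] / A[k][k]
        A[i] = [A[i][j] - factor * A[k][j] for j in range(n)]

assert pivots == [2, Fraction(3, 2), Fraction(4, 3)]
print(pivots)
```

The product of the pivots, 2 · 3/2 · 4/3 = 4, reproduces det A, as it must.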
11 Corner determinants jA1 j D 2, jA2 j D 6, jA3 j D 30. The pivots are 2=1; 6=2; 30=6.
12 A is positive definite for c > 1: determinants c, c² − 1, and (c − 1)²(c + 2) > 0. B is never positive definite (determinants d − 4 and −4d + 12 are never both positive).
13 A = [1 5; 5 10] is an example with a + c > 2b but ac < b², so not positive definite.
14 The eigenvalues of A⁻¹ are positive because they are 1/λ(A). And the determinants of A⁻¹ pass the determinant tests. And x^T A⁻¹x = (A⁻¹x)^T A(A⁻¹x) > 0 for all x ≠ 0.
15 Since x^T Ax > 0 and x^T Bx > 0 we have x^T(A + B)x = x^T Ax + x^T Bx > 0 for all x ≠ 0. Then A + B is a positive definite matrix. The second proof uses the test A = R^T R (independent columns in R): if A = R^T R and B = S^T S pass this test, then A + B = [R; S]^T [R; S] also passes, and must be positive definite.
16 x^T Ax is zero when (x1, x2, x3) = (0, 1, 0) because of the zero on the diagonal. Actually x^T Ax goes negative for x = (1, −10, 0), because the second pivot is negative.
17 If ajj were smaller than all λ's, A − ajj I would have all eigenvalues > 0 (positive definite). But A − ajj I has a zero in the (j, j) position; impossible by Problem 16.
18 If Ax = λx then x^T Ax = λx^T x. If A is positive definite this leads to λ = x^T Ax/x^T x > 0 (ratio of positive numbers). So positive energy ⇒ positive eigenvalues.
19 All cross terms are xi^T xj = 0 because symmetric matrices have orthogonal eigenvectors. So positive eigenvalues ⇒ positive energy.
20 (a) The determinant is positive; all λ > 0 (b) All projection matrices except I are singular (c) The diagonal entries of D are its eigenvalues (d) A = −I has det = +1 when n is even.
21 A is positive definite when s > 8; B is positive definite when t > 5 by determinants.
22 A = R^T R in two ways: R = √D L^T from A = LDL^T, and R = Q√Λ Q^T from A = QΛQ^T.
23 x²/a² + y²/b² is x^T Ax when A = diag(1/a², 1/b²). Then λ1 = 1/a² and λ2 = 1/b², so a = 1/√λ1 and b = 1/√λ2. The ellipse 9x² + 16y² = 1 has axes with half-lengths a = 1/3 and b = 1/4. The points (1/3, 0) and (0, 1/4) are at the ends of the axes.
24 The ellipse x² + xy + y² = 1 has axes with half-lengths 1/√λ = √2 and √(2/3).
25 A = C^T C = [9 3; 3 5]; [4 8; 8 25] = [1 0; 2 1][4 0; 0 9][1 2; 0 1] and C = [2 4; 0 3].
26 The Cholesky factors C = (L√D)^T = [3 0 0; 0 1 2; 0 0 2] and C = [1 1 1; 0 1 1; 0 0 √5] have square roots of the pivots from D. Note again C^T C = LDL^T = A.
27 Writing out x^T Ax = x^T LDL^T x gives ax² + 2bxy + cy² = a(x + (b/a)y)² + ((ac − b²)/a)y². So the LDL^T from elimination is exactly the same as completing the square. The example 2x² + 8xy + 10y² = 2(x + 2y)² + 2y² has pivots 2, 2 outside the squares and multiplier 2 inside.
28 det A = (1)(10)(1) = 10; λ = 2 and 5; x1 = (cos θ, sin θ), x2 = (−sin θ, cos θ); the λ's are positive. So A is positive definite.
29 H1 = [3x² + 2y, 2x; 2x, 2] is semidefinite; f1 = ((1/2)x² + y)² = 0 on the curve (1/2)x² + y = 0. H2 = [6x 1; 1 0] = [0 1; 1 0] at (0, 1), where the first derivatives = 0, is indefinite. This is a saddle point of the function f2(x, y).
30 ax² + 2bxy + cy² has a saddle point if ac < b². The matrix is indefinite (λ < 0 and λ > 0) because the determinant ac − b² is negative.
31 If c > 9 the graph of z is a bowl; if c < 9 the graph has a saddle point. When c = 9 the graph of z = (2x + 3y)² is a "trough" staying at zero along the line 2x + 3y = 0.
32 Orthogonal matrices, exponentials e^{At}, and matrices with det = 1 are groups. Examples of subgroups are orthogonal matrices with det = 1, and exponentials e^{An} for integer n. Another subgroup: lower triangular elimination matrices E with diagonal 1's.
33 A product AB of symmetric positive definite matrices comes into many applications. The "generalized" eigenvalue problem Kx = λMx has AB = M⁻¹K. (Often we use eig(K, M) without actually inverting M.) All eigenvalues λ are positive:
ABx = λx gives (Bx)^T ABx = λ(Bx)^T x. Then λ = x^T B^T ABx / x^T Bx > 0.
34 The five eigenvalues of K are 2 − 2 cos(kπ/6) = 2 − √3, 2 − 1, 2, 2 + 1, 2 + √3. The product of those eigenvalues is 6 = det K.
35 Put parentheses in x^T A^T CAx = (Ax)^T C(Ax). Since C is assumed positive definite, this energy can drop to zero only when Ax = 0. Since A is assumed to have independent columns, Ax = 0 only happens when x = 0. Thus A^T CA has positive energy and is positive definite.
My textbooks Computational Science and Engineering and Introduction to Applied Mathematics start with many examples of AT CA in a wide range of applications.
I believe this is a unifying concept from linear algebra.
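A small numerical illustration of the A^T CA energy argument in Problem 35 (a Python sketch, not from the manual; the matrices A and C below are made up for the check):

```python
import random

# Assumed example: C positive definite (diagonal), A with independent columns.
A = [[1, 0], [1, 1], [0, 1]]            # 3 by 2, independent columns
C = [[2, 0, 0], [0, 3, 0], [0, 0, 5]]   # positive definite

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

At = [list(row) for row in zip(*A)]
K = matmul(matmul(At, C), A)            # K = A^T C A = [[5, 3], [3, 8]]
random.seed(0)
for _ in range(100):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    if abs(x[0]) + abs(x[1]) < 1e-6:
        continue
    energy = sum(x[i] * K[i][j] * x[j] for i in range(2) for j in range(2))
    assert energy > 0                    # positive energy for every nonzero x
```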
Problem Set 6.6, page 360
1 B = GCG^-1 = GF^-1 AFG^-1 = M^-1 AM, so M = FG^-1. C similar to A and B ⇒ A similar to B.

2 A = [1 0; 0 3] is similar to B = [3 0; 0 1] = M^-1 AM with M = [0 1; 1 0].
1
3 BD
0
1
BD
1
4
BD
2
1
0
1 0
1 0 1 0
D
DM
0
1 1
1 0 1 1
1
1
1
0
1 1 1 0
D
;
1
0
1
1 1 0
1
1
3
0 1
1 2 0 1
D
.
1
1 0
3 4 1 0
4 A has no repeated so it can be diagonalized: S
1
5
0
1
0
1
0 0
1 0
0
,
,
,
0
1 1
1 0
0
0
0
is by itself and also
1
1
1
AM ;
1
AS D ƒ makes A similar to ƒ.
1
are similar (they all have eigenvalues 1 and 0).
1
1
is by itself with eigenvalues 1 and 1.
0
6 Eight families of similar matrices: six matrices have λ = 0, 1 (one family); three matrices have λ = 1, 1 and three have λ = 0, 0 (two families each!); one has λ = 1, -1; one has λ = 2, 0; two matrices have λ = (1 ± √5)/2 (they are in one family).
7 (a) (M^-1 AM)(M^-1 x) = M^-1(Ax) = M^-1 0 = 0 (b) The nullspaces of A and of M^-1 AM have the same dimension. Different vectors and different bases.

8 Same Λ and the same line of eigenvectors ("same S"): A = [0 1; 0 0] and B = [0 2; 0 0] have the same line of eigenvectors and the same eigenvalues λ = 0, 0.

9 A² = [1 2; 0 1], A³ = [1 3; 0 1], every A^k = [1 k; 0 1]. A⁰ = [1 0; 0 1] and A^-1 = [1 -1; 0 1].

10 J² = [c² 2c; 0 c²] and J^k = [c^k kc^(k-1); 0 c^k]; J⁰ = I and J^-1 = [c^-1 -c^-2; 0 c^-1].

11 u(0) = [v(0); w(0)] = [5; 2]. The equation du/dt = [1 1; 0 1]u has dv/dt = v + w and dw/dt = w. Then w(t) = 2e^t and v(t) must include 2te^t (this comes from the repeated λ). To match v(0) = 5, the solution is v(t) = 2te^t + 5e^t.

12 If M^-1 JM = K then JM = [m21 m22 m23 m24; 0 0 0 0; m41 m42 m43 m44; 0 0 0 0] = MK = [0 m12 m13 0; 0 m22 m23 0; 0 m32 m33 0; 0 m42 m43 0]. That means m21 = m22 = m23 = m24 = 0. M is not invertible, J not similar to K.

13 The five 4 by 4 Jordan forms with λ = 0, 0, 0, 0 are J1 = zero matrix and
J2 = [0 1 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 0], J3 = [0 1 0 0; 0 0 1 0; 0 0 0 0; 0 0 0 0],
J4 = [0 1 0 0; 0 0 0 0; 0 0 0 1; 0 0 0 0], J5 = [0 1 0 0; 0 0 1 0; 0 0 0 1; 0 0 0 0].
Problem 12 showed that J3 and J4 are not similar, even with the same rank. Every matrix with all λ = 0 is "nilpotent" (its nth power is A^n = zero matrix). You see J⁴ = 0 for these matrices. How many possible Jordan forms for n = 5 and all λ = 0?

14 (1) Choose Mi = reverse diagonal matrix to get Mi^-1 Ji Mi = Ji^T in each block. (2) M0 has those diagonal blocks Mi to get M0^-1 J M0 = J^T. (3) A^T = (M^-1)^T J^T M^T equals (M^-1)^T M0^-1 J M0 M^T = (MM0M^T)^-1 A (MM0M^T), and A^T is similar to A.

15 det(M^-1 AM - λI) = det(M^-1 AM - M^-1 λIM). This is det(M^-1(A - λI)M). By the product rule, the determinants of M and M^-1 cancel to leave det(A - λI).

16 [a b; c d] is similar to [d c; b a]; [b a; d c] is similar to [c d; a b]. So two pairs of similar matrices; but [1 0; 0 1] is not similar to [0 1; 1 0]: different eigenvalues!
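The nilpotence noted after Problem 13 can be verified directly (a Python sketch, not from the manual): the single 4 by 4 Jordan block with λ = 0 has J⁴ = zero matrix.

```python
J = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

P = J
for _ in range(3):      # P becomes J^2, J^3, J^4
    P = matmul(P, J)
assert P == [[0] * 4 for _ in range(4)]
```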
17 (a) False: Diagonalize a nonsymmetric A = SΛS^-1. Then Λ is symmetric and similar to A (b) True: A singular matrix has λ = 0 (c) False: [0 1; 1 0] and [0 -1; -1 0] are similar (they have λ = ±1) (d) True: Adding I increases all eigenvalues by 1.

18 AB = B^-1(BA)B so AB is similar to BA. If ABx = λx then BA(Bx) = λ(Bx).
19 Diagonal blocks 6 by 6, 4 by 4; AB has the same eigenvalues as BA plus 6 - 4 zeros.

20 (a) A = M^-1 BM ⇒ A² = (M^-1 BM)(M^-1 BM) = M^-1 B²M. So A² is similar to B². (b) A² equals (-A)² but A may not be similar to B = -A (it could be!). (c) [3 1; 0 4] is diagonalizable to [3 0; 0 4] because λ1 ≠ λ2, so these matrices are similar. (d) [3 1; 0 3] has only one eigenvector, so it is not diagonalizable. (e) PAP^T is similar to A.
21 J² has three 1's down the second superdiagonal, and two independent eigenvectors for λ = 0. Its 5 by 5 Jordan form is [J3 0; 0 J2] with J3 = [0 1 0; 0 0 1; 0 0 0] and J2 = [0 1; 0 0].

Note to professors: An interesting question: Which matrices A have (complex) square roots R² = A? If A is invertible, no problem. But any Jordan blocks for λ = 0 must have sizes n1 ≥ n2 ≥ … ≥ nk ≥ nk+1 = 0 that come in pairs like 3 and 2 in this example: n1 = (n2 or n2 + 1) and n3 = (n4 or n4 + 1) and so on.

A list of all 3 by 3 and 4 by 4 Jordan forms could be [a 0 0; 0 b 0; 0 0 c], [a 1 0; 0 a 0; 0 0 b], [a 1 0; 0 a 1; 0 0 a] (for any numbers a, b, c) with 3, 2, 1 eigenvectors; then diag(a, b, c, d) and the block-diagonal forms with blocks [a 1; 0 a], b, c; with blocks [a 1; 0 a], [b 1; 0 b]; with blocks [a 1 0; 0 a 1; 0 0 a], b; and the single 4 by 4 Jordan block — with 4, 3, 2, 2, 1 eigenvectors.
22 If all roots are λ = 0, this means that det(A - λI) must be just (-λ)^n. The Cayley-Hamilton Theorem in Problem 6.2.32 immediately says that A^n = zero matrix. The key example is a single n by n Jordan block (with n - 1 ones above the diagonal): check directly that J^n = zero matrix.

23 Certainly Q1R1 is similar to R1Q1 = Q1^-1(Q1R1)Q1. Then A1 = Q1R1 - cs²I is similar to A2 = R1Q1 - cs²I.

24 A could have eigenvalues λ = 2 and λ = 1/2 (A could be diagonal). Then A^-1 has the same two eigenvalues (and is similar to A).
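Problem 23's similarity of Q1R1 and R1Q1 can be checked numerically (a Python sketch, not from the manual; the 2 by 2 matrix below is an arbitrary example):

```python
import math

A = [[3.0, 1.0], [4.0, 3.0]]          # arbitrary test matrix
# Gram-Schmidt QR for a 2 by 2 matrix
r11 = math.hypot(A[0][0], A[1][0])
q1 = [A[0][0] / r11, A[1][0] / r11]
r12 = q1[0] * A[0][1] + q1[1] * A[1][1]
w = [A[0][1] - r12 * q1[0], A[1][1] - r12 * q1[1]]
r22 = math.hypot(w[0], w[1])
q2 = [w[0] / r22, w[1] / r22]
# R times Q
RQ = [[r11*q1[0] + r12*q1[1], r11*q2[0] + r12*q2[1]],
      [r22*q1[1],             r22*q2[1]]]
# similar matrices share trace and determinant (so the same two eigenvalues)
traceA, detA = A[0][0] + A[1][1], A[0][0]*A[1][1] - A[0][1]*A[1][0]
traceRQ = RQ[0][0] + RQ[1][1]
detRQ = RQ[0][0]*RQ[1][1] - RQ[0][1]*RQ[1][0]
assert abs(traceA - traceRQ) < 1e-9 and abs(detA - detRQ) < 1e-9
```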
Problem Set 6.7, page 371
1 A = UΣV^T = [u1 u2][√50 0; 0 0][v1 v2]^T = (1/√10)[1 3; 3 -1] [√50 0; 0 0] (1/√5)[1 2; 2 -1]^T.
2 This A D
is a 2 by 2 matrix of rank 1. Its row space has basis v1 , its nullspace
3 6
has basis v2 , its column space has basis u1 , its left nullspace has basis u2 :
1 1
1
2
Row space p
Nullspace p
2
1
5
5
1
1
1
3
T
Column space p
; N .A / p
:
3
1
10
10
3 If A has rank 1 then so does A^T A. The only nonzero eigenvalue of A^T A is its trace, which is the sum of all aij². (Each diagonal entry of A^T A is the sum of aij² down one column, so the trace is the sum down all columns.) Then σ1 = square root of this sum, and σ1² = this sum of all aij².
4 A^T A = AA^T = [2 1; 1 1] has eigenvalues σ1² = (3 + √5)/2, σ2² = (3 - √5)/2. But A is indefinite:
σ1 = (1 + √5)/2 = λ1(A); σ2 = (√5 - 1)/2 = -λ2(A); u1 = v1 but u2 = -v2.
5 A proof that eigshow finds the SVD. When V1 = (1, 0), V2 = (0, 1) the demo finds AV1 and AV2 at some angle θ. A 90° turn by the mouse to V2, -V1 finds AV2 and -AV1 at the angle π - θ. Somewhere between, the constantly orthogonal v1 and v2 must produce Av1 and Av2 at angle π/2. Those orthogonal directions give u1 and u2.
6 AA^T = [2 1; 1 2] has σ1² = 3 with u1 = (1/√2)(1, 1) and σ2² = 1 with u2 = (1/√2)(1, -1).
A^T A = [1 1 0; 1 2 1; 0 1 1] has σ1² = 3 with v1 = (1/√6)(1, 2, 1), σ2² = 1 with v2 = (1/√2)(1, 0, -1); and v3 = (1/√3)(1, -1, 1). Then [1 1 0; 0 1 1] = [u1 u2] [√3 0 0; 0 1 0] [v1 v2 v3]^T.
7 The matrix A in Problem 6 had σ1 = √3 and σ2 = 1 in Σ. The smallest change to rank 1 is to make σ2 = 0. In the factorization A = UΣV^T = u1 σ1 v1^T + u2 σ2 v2^T, this change σ2 → 0 will leave the closest rank-1 matrix as u1 σ1 v1^T. See Problem 14 for the general case of this problem.
8 The number σmax(A^-1)σmax(A) is the same as σmax(A)/σmin(A). This is certainly ≥ 1. It equals 1 if all σ's are equal, and then A = UΣV^T is a multiple of an orthogonal matrix. The ratio σmax/σmin is the important condition number of A studied in Section 9.2.
9 A = UV^T since all σj = 1, which means that Σ = I.
10 A rank-1 matrix with Av = 12u would have u in its column space, so A = uw^T for some vector w. I intended (but didn't say) that w is a multiple of the unit vector v = (1/2)(1, 1, 1, 1) in the problem. Then A = 12uv^T gives Av = 12u when v^T v = 1.
11 If A has orthogonal columns w1, …, wn of lengths σ1, …, σn, then A^T A will be diagonal with entries σ1², …, σn². So the σ's are definitely the singular values of A (as expected). The eigenvectors of that diagonal matrix A^T A are the columns of I, so V = I in the SVD. Then the ui are Avi/σi, which is the unit vector wi/σi.
The SVD of this A with orthogonal columns is A = UΣV^T = (AΣ^-1)(Σ)(I).
12 Since A^T = A we have σ1² = λ1² and σ2² = λ2². But λ2 is negative, so σ1 = 3 and σ2 = 2. The unit eigenvectors of A are the same u1 = v1 as for A^T A = AA^T, but u2 = -v2 (notice the sign change because σ2 = -λ2, as in Problem 4).

13 Suppose the SVD of R is R = UΣV^T. Then multiply by Q to get A = QR. So the SVD of this A is (QU)ΣV^T. (Orthogonal Q times orthogonal U = orthogonal QU.)

14 The smallest change in A is to set its smallest singular value σ2 to zero. See #7.

15 The singular values of A + I are not σj + 1. They come from the eigenvalues of (A + I)^T(A + I).

16 This simulates the random walk used by Google on billions of sites to solve Ap = p. It is like the power method of Section 9.3 except that it follows the links in one "walk" where the vector pk = A^k p0 averages over all walks.

17 A = UΣV^T = [cosines including u4] diag(√(2 - √2), √2, √(2 + √2)) [sine matrix]^T.
AV = UΣ says that differences of sines in V are cosines in U times σ's.
The SVD of the derivative on [0, π] with f(0) = 0 has u = sin nx, σ = n, v = cos nx!
Problem Set 7.1, page 380
1 With w = 0 linearity gives T(v + 0) = T(v) + T(0). Thus T(0) = 0. With c = -1 linearity gives T(-0) = -T(0). This is a second proof that T(0) = 0.
2 Combining T(cv) = cT(v) and T(dw) = dT(w) with addition gives T(cv + dw) = cT(v) + dT(w). Then one more addition gives T(cv + dw + eu) = cT(v) + dT(w) + eT(u).
3 (d) is not linear.
4 (a) S(T(v)) = v (b) S(T(v1) + T(v2)) = S(T(v1)) + S(T(v2)).

5 Choose v = (1, 1) and w = (-1, 0). Then T(v) + T(w) = (v + w) but T(v + w) = (0, 0).
6 (a) T(v) = v/‖v‖ does not satisfy T(v + w) = T(v) + T(w) or T(cv) = cT(v); (b) and (c) are linear; (d) satisfies T(cv) = cT(v).

7 (a) T(T(v)) = v (b) T(T(v)) = v + (2, 2) (c) T(T(v)) = v (d) T(T(v)) = T(v).
8 (a) The range of T(v1, v2) = (v1 - v2, 0) is the line of vectors (c, 0). The nullspace is the line of vectors (c, c). (b) T(v1, v2, v3) = (v1, v2) has range R², kernel {(0, 0, v3)} (c) T(v) = 0 has range {0}, kernel R² (d) T(v1, v2) = (v1, v1) has range = multiples of (1, 1), kernel = multiples of (1, -1).
9 If T(v1, v2, v3) = (v2, v3, v1) then T(T(v)) = (v3, v1, v2); T³(v) = v; T¹⁰⁰(v) = T(v).
10 (a) T(1, 0) = 0 (b) (0, 0, 1) is not in the range (c) T(0, 1) = 0.

11 For multiplication T(v) = Av: V = R^n, W = R^m; the outputs fill the column space; v is in the kernel if Av = 0.
12 T(v) = (4, 4); (2, 2); (2, 2); if v = (a, b) = b(1, 1) + ((a - b)/2)(2, 0) then T(v) = b(2, 2) + (0, 0).
13 The distributive law (page 69) gives A(M1 + M2) = AM1 + AM2. The distributive law over c's gives A(cM) = c(AM).
14 This A is invertible. Multiply AM = 0 and AM = B by A^-1 to get M = 0 and M = A^-1 B. The kernel contains only the zero matrix M = 0.
15 This A is not invertible. AM = I is impossible. A[2 2; -1 -1] = [0 0; 0 0]. The range contains only matrices AM whose columns are multiples of (1, 3).
16 No matrix A gives A[0 0; 1 0] = [0 1; 0 0]. To professors: Linear transformations on matrix space come from 4 by 4 matrices. Those in Problems 13-15 were special.
17 For T(M) = M^T: (a) T² = I is True (b) True (c) True (d) False.

18 T(I) = 0 but M = [0 b; 0 0] = T(M); these M's fill the range. Every M = [a 0; c d] is in the kernel. Notice that dim(range) + dim(kernel) = 1 + 3 = dim(input space of 2 by 2 M's).

19 T(T^-1(M)) = M so T^-1(M) = A^-1 M B^-1.
20 (a) Horizontal lines stay horizontal, vertical lines stay vertical (b) House squashes onto a line (c) Vertical lines stay vertical because T(1, 0) = (a11, 0).

21 D = [2 0; 0 1] doubles the width of the house. A = [.7 .7; .3 .3] projects the house (since A² = A from trace = 1 and λ = 0, 1). The projection is onto the column space of A = line through (.7, .3). U = [1 1; 0 1] will shear the house horizontally: the point at (x, y) moves over to (x + y, y).
22 (a) A = [a 0; 0 d] with d > 0 leaves the house AH sitting straight up (b) A = 3I expands the house by 3 (c) A = [cos θ  -sin θ; sin θ  cos θ] rotates the house.

23 T(v) = -v rotates the house by 180° around the origin. Then the affine transformation T(v) = -v + (1, 0) shifts the rotated house one unit to the right.

24 A code to add a chimney will be gratefully received!
25 This code needs a correction: add spaces between -10 10 -10 10.

26 [1 0; 0 .1] compresses vertical distances by 10 to 1. [.5 .5; .5 .5] projects onto the 45° line. [.5 .5; -.5 .5] rotates by 45° clockwise and contracts by a factor of √2 (the columns have length 1/√2). [1 1; 1 0] has determinant -1 so the house is "flipped and sheared." One way to see this is to factor the matrix as LDL^T:
[1 1; 1 0] = [1 0; 1 1][1 0; 0 -1][1 1; 0 1] = (shear)(flip left-right)(shear).
25 This code needs a correction: add spaces between
27 Also Problem 30 emphasizes that circles are transformed to ellipses (see the figure in Section 6.7).

28 A code that adds two eyes and a smile will be included here with public credit given!

29 (a) ad - bc = 0 (b) ad - bc > 0 (c) |ad - bc| = 1. If the vectors to two corners transform to themselves then by linearity T = I. (Fails if one corner is (0, 0).)

30 The circle of vectors (v1, v2) transforms to the ellipse by rotating 30° and stretching the first axis by 2.

31 Linear transformations keep straight lines straight! And two parallel edges of a square (edges differing by a fixed v) go to two parallel edges (edges differing by T(v)). So the output is a parallelogram.
Problem Set 7.2, page 395
1 For Sv = d²v/dx² with v1, v2, v3, v4 = 1, x, x², x³: Sv1 = Sv2 = 0, Sv3 = 2v1, Sv4 = 6v2. The matrix for S is B = [0 0 2 0; 0 0 0 6; 0 0 0 0; 0 0 0 0].

2 Sv = d²v/dx² = 0 for linear functions v(x) = a + bx. All (a, b, 0, 0) are in the nullspace of the second derivative matrix B.

3 (Matrix A)² = B when (transformation T)² = S and output basis = input basis.
4 The third derivative matrix has 6 in the (1, 4) position, because the third derivative of x³ is 6. This matrix also comes from AB. The fourth derivative of a cubic is zero, and B² is the zero matrix.
5 T(v1 + v2 + v3) = 2w1 + w2 + 2w3; A times (1, 1, 1) gives (2, 1, 2).
6 v = c(v2 - v3) gives T(v) = 0; the nullspace is (0, c, -c); solutions (1, 0, 0) + (0, c, -c).
7 .1; 0; 0/ is not in the column space of the matrix A, and w1 is not in the range of
the linear transformation T . Key point: Column space of matrix matches range of
transformation.
8 We don’t know T .w/ unless the w’s are the same as the v’s. In that case the matrix is
A2 .
9 Rank of A D 2 D dimension of the range of T . The outputs Av (column space) match
the outputs T .v/ (the range of T ). The “output space” W is like Rm : it contains all
outputs but may not be filled up.

10 The matrix for T is A = [1 0 0; 1 1 0; 1 1 1]. For the output (1, 0, 0) choose the input v = A^-1(1, 0, 0) = (1, -1, 0). This means: for the output w1 choose the input v1 - v2.
11 A^-1 = [1 0 0; -1 1 0; 0 -1 1] so T^-1(w1) = v1 - v2, T^-1(w2) = v2 - v3, T^-1(w3) = v3. The columns of A^-1 describe T^-1 from W back to V. The only solution to T(v) = 0 is v = 0.
12 (c) T^-1(T(w1)) = w1 is wrong because w1 is not generally in the input space.

13 (a) T(v1) = v2, T(v2) = v1 is its own inverse (b) T(v1) = v1, T(v2) = 0 has T² = T (c) If T² = I for part (a) and T² = T for part (b), then T must be I.
1
2
3
1
2 1
.
must be 2A
D inverse of (a)
(c) A
(b)
14 (a)
6
3
5
2
5 3
15 (a) M = [r s; t u] transforms (1, 0) and (0, 1) to the columns (r, t) and (s, u); this is the "easy" direction. (b) N = [a b; c d] transforms in the inverse direction, back to the standard basis vectors. (c) ad = bc will make the forward matrix singular and the inverse impossible.
1 3
1
1 0 2 1
D
.
16 M W D
7
3
1 2 5 3
17 Reordering the basis vectors is done by a permutation matrix. Changing lengths is done by a positive diagonal matrix.

18 (a, b) = (cos θ, -sin θ). Minus sign from Q^-1 = Q^T.
19 M = [1 1; 4 5]; [a; b] = [5; -4] = first column of M^-1 = coordinates of (1, 0) in the basis given by the columns of M.

20 w2(x) = 1 - x²; w3(x) = (1/2)(x² - x); y = 4w1 + 5w2 + 6w3.
21 w's to v's: [0 1 0; 1/2 0 -1/2; 1/2 -1 1/2]. v's to w's: the inverse matrix [1 1 1; 1 0 0; 1 -1 1]. The key idea: The matrix multiplies the coordinates in the v basis to give the coordinates in the w basis.

22 The 3 equations to match 4, 5, 6 at x = a, b, c are [1 a a²; 1 b b²; 1 c c²][A; B; C] = [4; 5; 6]. This Vandermonde determinant equals (b - a)(c - a)(c - b). So a, b, c must be distinct to have det ≠ 0 and one solution A, B, C.

23 The matrix M with these nine entries must be invertible.

24 Start from A = QR. Column 2 is a2 = r12 q1 + r22 q2. This gives a2 as a combination of the q's. So the change of basis matrix is R.

25 Start from A = LU. Row 2 of A is ℓ21(row 1 of U) + ℓ22(row 2 of U). The change of basis matrix is always invertible, because basis goes to basis.

26 The matrix for T(vi) = λi vi is Λ = diag(λ1, λ2, λ3).

27 If T is not invertible, T(v1), …, T(vn) is not a basis. We couldn't choose wi = T(vi).

28 (a) [0 3; 0 0] gives T(v1) = 0 and T(v2) = 3v1. (b) [1 0; 0 0] gives T(v1) = v1 and T(v1 + v2) = v1 (which combine into T(v2) = 0 by linearity).

29 T(x, y) = (x, -y) is reflection across the x-axis. Then reflect across the y-axis to get S(x, y) = (-x, y). Thus ST = -I.

30 S takes (x, y) to (-x, y). S(T(v)) = (-1, 2). S(v) = (-2, 1) and T(S(v)) = (1, -2).

31 Multiply the two reflections to get [cos 2(θ - α)  -sin 2(θ - α); sin 2(θ - α)  cos 2(θ - α)], which is rotation by 2(θ - α). In words: (1, 0) is reflected to have angle 2α, and that is reflected again to angle 2θ - 2α.

32 False: We will not know T(v) for every v unless the n v's are linearly independent.
33 To find coordinates in the wavelet basis, multiply by W^-1 = [1/4 1/4 1/4 1/4; 1/4 1/4 -1/4 -1/4; 1/2 -1/2 0 0; 0 0 1/2 -1/2].
Then e = (1/4)w1 + (1/4)w2 + (1/2)w3 and v = w3 + w4. Notice again: W tells us how the bases change, W^-1 tells us how the coordinates change.
34 The last step writes 6, 6, 2, 2 as the overall average 4, 4, 4, 4 plus the difference 2, 2, -2, -2. Therefore c1 = 4 and c2 = 2 and c3 = 1 and c4 = 1.
35 The wavelet basis is (1, 1, 1, 1, 1, 1, 1, 1), the long wavelet (1, 1, 1, 1, -1, -1, -1, -1), two medium wavelets (1, 1, -1, -1, 0, 0, 0, 0), (0, 0, 0, 0, 1, 1, -1, -1), and 4 wavelets with a single pair 1, -1.
36 If Vb = Wc then b = V^-1 Wc. The change of basis matrix is V^-1 W.

37 Multiplying by [a b; c d] gives T(v1) = A[1 0; 0 0] = [a 0; c 0] = av1 + cv3. Similarly T(v2) = av2 + cv4 and T(v3) = bv1 + dv3 and T(v4) = bv2 + dv4. The matrix for T in this basis is [a 0 b 0; 0 a 0 b; c 0 d 0; 0 c 0 d].

38 The matrix for T in this basis is A = [1 0 0 0; 0 1 0 0; 0 0 0 0].
Problem Set 7.3, page 406
1 A^T A = [10 20; 20 40] has λ = 50 and 0, v1 = (1/√5)(1, 2), v2 = (1/√5)(2, -1); σ1 = √50.

2 Orthonormal bases: v1 for the row space, v2 for the nullspace, u1 for the column space, u2 for N(A^T). All matrices with those four subspaces are multiples cA, since the subspaces are just lines. Normally many more matrices share the same 4 subspaces. (For example, all n by n invertible matrices share R^n.)

3 A = QH = (1/√50)[7 -1; 1 7] times (1/√50)[10 20; 20 40]. H is semidefinite because A is singular.

4 A⁺ = V [1/√50 0; 0 0] U^T = (1/50)[1 3; 2 6]; A⁺A = [.2 .4; .4 .8], AA⁺ = [.1 .3; .3 .9].

5 A^T A = [10 8; 8 10] has λ = 18 and 2, v1 = (1/√2)(1, 1), v2 = (1/√2)(1, -1), σ1 = √18 and σ2 = √2.
AA^T = [18 0; 0 2] has u1 = (1, 0), u2 = (0, 1). The same √18 and √2 go into Σ.

6 A = [u1 u2][σ1 0; 0 σ2][v1^T; v2^T] = σ1 u1 v1^T + σ2 u2 v2^T. In general this is σ1 u1 v1^T + ⋯ + σr ur vr^T.

7 A = UΣV^T splits into QK (polar): Q = UV^T = (1/√2)[1 1; 1 -1] and K = VΣV^T = V diag(√18, √2) V^T.

8 A⁺ is A^-1 because A is invertible. Pseudoinverse equals inverse when A^-1 exists!

9 A^T A = [9 12 0; 12 16 0; 0 0 0] has λ = 25, 0, 0 and v1 = (.6, .8, 0), v2 = (-.8, .6, 0), v3 = (0, 0, 1).

10 Here A = [3 4 0] has rank 1 and AA^T = [25], and σ1 = 5 is the only singular value in Σ = [5 0 0].
11 A = [1][5 0 0]V^T and A⁺ = V[.2; 0; 0] = [.12; .16; 0]; A⁺A = [.36 .48 0; .48 .64 0; 0 0 0]; AA⁺ = [1].

12 The zero matrix has no pivots or singular values. Then Σ = the same 2 by 3 zero matrix and the pseudoinverse is the 3 by 2 zero matrix.

13 If det A = 0 then rank(A) < n; thus rank(A⁺) < n and det A⁺ = 0.

14 A must be symmetric and positive definite, if Σ = Λ and U = V = eigenvector matrix Q is orthogonal.

15 (a) A^T A is singular (b) This x⁺ in the row space does give A^T Ax⁺ = A^T b (c) If (1, -1) in the nullspace of A is added to x⁺, we get another solution to A^T A x̂ = A^T b. But this x̂ is longer than x⁺ because the added part is orthogonal to x⁺ in the row space.

16 x⁺ in the row space of A is perpendicular to x̂ - x⁺ in the nullspace of A^T A = nullspace of A. The right triangle has c² = a² + b².

17 AA⁺p = p, AA⁺e = 0, A⁺Ax_r = x_r, A⁺Ax_n = 0.

18 A⁺ = VΣ⁺U^T is (1/5)[.6; .8] = [.12; .16], and AA⁺ = [1] and A⁺A = [.36 .48; .48 .64] = projection.

19 L is determined by ℓ21. Each eigenvector in S is determined by one number. The counts are 1 + 3 for LU, 1 + 2 + 1 for LDU, 1 + 3 for QR, 1 + 2 + 1 for UΣV^T, 2 + 2 + 0 for SΛS^-1.

20 LDL^T and QΛQ^T are determined by 1 + 2 + 0 numbers because A is symmetric.

21 Column times row multiplication gives A = UΣV^T = σ1 u1 v1^T + ⋯ + σr ur vr^T and also A⁺ = VΣ⁺U^T = v1 u1^T/σ1 + ⋯ + vr ur^T/σr. Multiplying A⁺A and using the orthogonality of each ui to all other uj leaves the projection matrix A⁺A = v1 v1^T + ⋯ + vr vr^T. Similarly AA⁺ = u1 u1^T + ⋯ + ur ur^T from V^T V = I.

22 Keep only the r by r corner Σr of Σ (the rest is all zero). Then A = UΣV^T has the required form Û M V̂^T with an invertible M = M1 Σr M2^T in the middle.

23 [0 A; A^T 0][u; v] = [Av; A^T u] = σ[u; v]: the singular values of A are eigenvalues of this block matrix.
Problem Set 8.1, page 418
1 A0^T C0 A0 = [c1 + c2  -c2  0; -c2  c2 + c3  -c3; 0  -c3  c3 + c4] by direct calculation. Set c4 = 0 to find det A1^T C1 A1 = c1 c2 c3.
2 (A1^T C1 A1)^-1 = [1 0 0; 1 1 0; 1 1 1] [c1^-1 0 0; 0 c2^-1 0; 0 0 c3^-1] [1 1 1; 0 1 1; 0 0 1]
= [c1^-1  c1^-1  c1^-1; c1^-1  c1^-1 + c2^-1  c1^-1 + c2^-1; c1^-1  c1^-1 + c2^-1  c1^-1 + c2^-1 + c3^-1].
3 The rows of the free-free matrix in equation (9) add to [0 0 0] so the right side needs f1 + f2 + f3 = 0. f = (-1, 0, 1) gives c2 u1 - c2 u2 = -1, c3 u2 - c3 u3 = -1, 0 = 0. Then u_particular = (-c2^-1 - c3^-1, -c3^-1, 0). Add any multiple of u_nullspace = (1, 1, 1).
4 ∫₀¹ (d/dx)(c(x) du/dx) dx = [c(x) du/dx] from 0 to 1 = 0 (boundary conditions), so we need ∫₀¹ f(x) dx = 0.

5 dy/dx = f(x) gives y(x) = C + ∫₀ˣ f(t) dt. Then y(1) = 0 gives C = -∫₀¹ f(t) dt and y(x) = -∫ₓ¹ f(t) dt. If the load is f(x) = 1 then the displacement is y(x) = x - 1.
6 Multiply A1^T C1 A1 as columns of A1^T times c's times rows of A1. The first 3 by 3 "element matrix" c1 E1 = c1 [1 0 0]^T [1 0 0] has c1 in the top left corner.

7 For 5 springs and 4 masses, the 5 by 4 A has two nonzero diagonals: all aii = 1 and a(i+1),i = -1. With C = diag(c1, c2, c3, c4, c5) we get K = A^T CA, symmetric tridiagonal with diagonal entries Kii = ci + c(i+1) and off-diagonals K(i+1),i = -c(i+1). With C = I this K is the -1, 2, -1 matrix, and K times (2, 3, 3, 2) equals (1, 1, 1, 1), so u = (2, 3, 3, 2) solves Ku = ones(4, 1).
8 The solution to -u″ = 1 with u(0) = u(1) = 0 is u(x) = (1/2)(x - x²). At x = 1/5, 2/5, 3/5, 4/5 this gives u = 2, 3, 3, 2 (the discrete solution in Problem 7) times (Δx)² = 1/25.

9 -u″ = mg has complete solution u(x) = A + Bx - (1/2)mgx². From u(0) = 0 we get A = 0. From u′(1) = 0 we get B = mg. Then u(x) = (1/2)mg(2x - x²), which at x = 1/3, 2/3, 3/3 equals 5mg/18, 4mg/9, mg/2. This u(x) is not proportional to the discrete u = (3mg, 5mg, 6mg) at the meshpoints. This imperfection is because the discrete problem uses a 1-sided difference, less accurate at the free end. Perfect accuracy is recovered by a centered difference (discussed on page 21 of my CSE textbook).
10 (added in a later printing, changing 10-11 below into 11-12). The solution in this fixed-fixed case is (2.25, 2.50, 1.75), so the second mass moves furthest.

11 The two graphs of 100 points are "discrete parabolas" starting at (0, 0): symmetric around 50 in the fixed-fixed case, ending with slope zero in the fixed-free case.

12 Forward/backward/centered for du/dx has a big effect because that term has the large coefficient. MATLAB:
E = diag(ones(6,1),1); K = 64*(2*eye(7) - E - E');
D = 80*(E - eye(7));
(K+D)\ones(7,1);          % forward
(K-D')\ones(7,1);         % backward
(K+D/2-D'/2)\ones(7,1);   % centered is usually the best: more accurate
Problem Set 8.2, page 428
1 A = [-1 1 0; -1 0 1; 0 -1 1]; the nullspace contains (c, c, c); (1, 0, 0) is not orthogonal to that nullspace.
2 A^T y = 0 for y = (1, -1, 1); current along edge 1, edge 3, back on edge 2 (full loop).
3 Elimination on Ax = b: [A b] = [-1 1 0 b1; -1 0 1 b2; 0 -1 1 b3] leads to [U c] = [-1 1 0 b1; 0 -1 1 b2 - b1; 0 0 0 b3 - b2 + b1]. The nonzero rows of U come from edges 1 and 3 in a tree. The zero row comes from the loop (all 3 edges).
4 For the matrix in Problem 3, Ax = b is solvable for b = (1, 1, 0) and not solvable for b = (1, 0, 0). For solvable b (in the column space), b must be orthogonal to y = (1, -1, 1); that combination of rows is the zero row, and b1 - b2 + b3 = 0 is the third equation after elimination.

5 Kirchhoff's Current Law A^T y = f is solvable for f = (1, -1, 0) and not solvable for f = (1, 0, 0); f must be orthogonal to (1, 1, 1) in the nullspace: f1 + f2 + f3 = 0.
6 A^T Ax = [2 -1 -1; -1 2 -1; -1 -1 2]x = [3; -3; 0] = f produces x = [1; -1; 0] + c[1; 1; 1]: potentials x = (1, -1, 0) and currents Ax = (-2, -1, 1); f sends 3 units from node 2 into node 1.
7 A^T CA = [3 -1 -2; -1 3 -2; -2 -2 4]; f = [1; 0; -1] yields x = [5/4; 1; 7/8] + any c[1; 1; 1]; potentials x = (5/4, 1, 7/8) and currents CAx = (-1/4, -3/4, -1/4).
8 A = [-1 1 0 0; -1 0 1 0; 0 -1 1 0; 0 -1 0 1; 0 0 -1 1] leads to x = [1; 1; 1; 1] and y = [1; -1; 1; 0; 0] and [0; 0; 1; -1; 1] solving A^T y = 0.
9 Elimination on Ax = b always leads to y^T b = 0 in the zero rows of U and R: b1 - b2 + b3 = 0 and b3 - b4 + b5 = 0 (those y's are from Problem 8 in the left nullspace). This is Kirchhoff's Voltage Law around the two loops.
10 The echelon form of A is U = [-1 1 0 0; 0 -1 1 0; 0 0 -1 1; 0 0 0 0; 0 0 0 0]. The nonzero rows of U keep edges 1, 2, 4. Other spanning trees come from edges 1, 2, 5; 1, 3, 4; 1, 3, 5; 1, 4, 5; 2, 3, 4; 2, 3, 5; 2, 4, 5.
11 A^T A = [2 -1 -1 0; -1 3 -1 -1; -1 -1 3 -1; 0 -1 -1 2]. Each diagonal entry = number of edges into the node; the trace is 2 times the number of edges; each off-diagonal entry = -1 if the two nodes are connected. A^T A is the graph Laplacian; A^T CA is weighted by C.
12 (a) The nullspace and rank of A^T A and A are always the same (b) A^T A is always positive semidefinite because x^T A^T Ax = ‖Ax‖² ≥ 0. It is not positive definite because the rank is only 3 and (1, 1, 1, 1) is in the nullspace (c) Real eigenvalues, all ≥ 0, because A^T A is positive semidefinite.
13 A^T CAx = f gives four potentials x = (5/12, 1/6, 1/6, 0): I grounded x4 = 0 and solved for x. The currents are y = CAx = (2/3, 2/3, 0, 1/2, 1/2).
14 A^T CAx = 0 for x = c(1, 1, 1, 1) = (c, c, c, c). If A^T CAx = f is solvable, then f in the column space (= row space by symmetry) must be orthogonal to x in the nullspace: f1 + f2 + f3 + f4 = 0.

15 The number of loops in this connected graph is (edges) - (nodes) + 1 = 7 - 7 + 1 = 1.

16 What answer if the graph has two separate components (no edges between)?

17 Start from (4 nodes) - (6 edges) + (3 loops) = 1. If a new node connects to 1 old node, 5 - 7 + 3 = 1. If the new node connects to 2 old nodes, a new loop is formed: 5 - 8 + 4 = 1.

18 (a) 8 independent columns (b) f must be orthogonal to the nullspace, so the f's add to zero (c) Each edge goes into 2 nodes, so 12 edges make the diagonal entries sum to 24.

A complete graph has 5 + 4 + 3 + 2 + 1 = 15 edges. With n nodes that count is 1 + ⋯ + (n - 1) = n(n - 1)/2. A tree has 5 edges.
Problem Set 8.3, page 437
1 Eigenvalues λ = 1 and .75; (A - I)x = 0 gives the steady state x = (.6, .4) with Ax = x.

2 A = [.6 1; .4 -1][1 0; 0 .75][1 1; .4 -.6] and A^∞ = [.6 1; .4 -1][1 0; 0 0][1 1; .4 -.6] = [.6 .6; .4 .4].

3 λ = 1 and .8 with x = (1, 0); λ = 1 and -.8 with x = (5/9, 4/9); λ = 1, 1/4, and 1/4 with x = (1/3, 1/3, 1/3).
4 A^T always has the eigenvector (1, 1, …, 1) for λ = 1, because each row of A^T adds to 1. (Note again that many authors use row vectors multiplying Markov matrices. So they transpose our form of A.)

5 The steady state eigenvector for λ = 1 is (0, 0, 1) = everyone is dead.

6 Add the components of Ax = λx to find the sum s = λs. If λ ≠ 1 the sum must be s = 0.
7 (.5)^k → 0 gives A^k → A^∞; any A = [.6 + .4a  .6 - .6a; .4 - .4a  .4 + .6a] (second eigenvalue a) has this steady state (.6, .4).
8 If P = cyclic permutation and u0 = (1, 0, 0, 0), then u1 = (0, 1, 0, 0), u2 = (0, 0, 1, 0), u3 = (0, 0, 0, 1), and u4 = u0. The eigenvalues 1, i, -1, -i are all on the unit circle. This Markov matrix contains zeros; a positive matrix has one largest eigenvalue λ = 1.

9 M² is still nonnegative; [1 ⋯ 1]M = [1 ⋯ 1], so multiply on the right by M to find [1 ⋯ 1]M² = [1 ⋯ 1] ⇒ the columns of M² add to 1.

10 λ = 1 and a + d - 1 from the trace; the steady state is a multiple of x1 = (b, 1 - a).

11 Last row .2, .3, .5 makes A = A^T; rows also add to 1, so (1, …, 1) is also an eigenvector of A.

12 B has λ = 0 and -.5 with x1 = (.3, .2) and x2 = (-1, 1); A has λ = 1 so A - I has λ = 0. e^{-.5t} approaches zero and the solution approaches c1 e^{0t} x1 = c1 x1.

13 x = (1, 1, 1) is an eigenvector when the row sums are equal; Ax = (.9, .9, .9).
14 (I - A)(I + A + A² + ⋯) = (I + A + A² + ⋯) - (A + A² + A³ + ⋯) = I. This says that I + A + A² + ⋯ is (I - A)^-1. When A = [0 .5; 1 0]: A² = (1/2)I, A³ = (1/2)A, A⁴ = (1/4)I and the series adds to [1 + 1/2 + 1/4 + ⋯  1/2 + 1/4 + ⋯; 1 + 1/2 + ⋯  1 + 1/2 + 1/4 + ⋯] = [2 1; 2 2] = (I - A)^-1.
130
:5 1
15 The first two A’s have max < 1; p D
and
; I
has no inverse.
6
32
:5 0
16 λ = 1 (Markov), 0 (singular), .2 (from trace). Steady state (.3, .3, .4) and (30, 30, 40).
17 No, A has an eigenvalue λ = 1 and (I - A)^-1 does not exist.
18 The Leslie matrix on page 435 has det(A - λI) = det [F1 - λ  F2  F3; P1  -λ  0; 0  P2  -λ] = -λ³ + F1λ² + F2P1λ + F3P1P2. This is negative for large λ. It is positive at λ = 1 provided that F1 + F2P1 + F3P1P2 > 1. Under this key condition, det(A - λI) must be zero at some λ between 1 and ∞. That eigenvalue means that the population grows (under this condition connecting the F's and P's — reproduction and survival rates).
19 ƒ times S 1 S has the same diagonal as S 1 S times ƒ because ƒ is diagonal.
20 If B > A > 0 and Ax D max .A/x > 0 then Bx > max .A/x and max .B/ > max .A/.
Problem Set 8.4, page 446
1 Feasible set = the line segment from (6, 0) to (0, 3); minimum cost at (6, 0), maximum at (0, 3).

2 The feasible set has corners (0, 0), (6, 0), (2, 2), (0, 6). Minimum cost of -2x - y at (6, 0).

3 Only two corners (4, 0, 0) and (0, 2, 0); let x1 → ∞, x2 = 0, and x3 = x1 - 4.

4 From (0, 0, 2) move to x = (0, 1, 1.5) with the constraint x1 + x2 + 2x3 = 4. The new cost is 3(1) + 8(1.5) = $15, so r = -1 is the reduced cost. The simplex method also checks x = (1, 0, 1.5) with cost 5(1) + 8(1.5) = $17; r = +1 means more expensive.

5 Cost = 20 at the start (4, 0, 0); keeping x1 + x2 + 2x3 = 4, move to (3, 1, 0) with cost 18 and r = -2; or move to (2, 0, 1) with cost 17 and r = -3. Choose x3 as the entering variable and move to (0, 0, 2) with cost 14. Another step will reach (0, 4, 0) with minimum cost 12.
6 If we reduce the Ph.D. cost to $1 or $2 (below the student cost of $3), the job will go to the Ph.D. With cost vector c = (2, 3, 8), the Ph.D. takes 4 hours (x1 + x2 + 2x3 = 4) and charges $8.
The teacher in the dual problem now has y ≤ 2, y ≤ 3, 2y ≤ 8 as the constraints A^T y ≤ c on the charge of y per problem. So the dual has its maximum at y = 2. The dual cost is also $8 for 4 problems, and maximum = minimum.
7 x D .2; 2; 0/ is a corner of the feasible set with x1 C x2 C 2x3 D 4 and the new
constraint 2x1 C x2 C x3 D 6. The cost of this corner is c T x D .5; 3; 8/ .2; 2; 0/ D
16. Is this the minimum cost?
Compute the reduced cost r if x3 D 1 enters (x3 was previously zero). The two
constraint equations now require x1 D 3 and x2 D 1. With x D .3; 1; 1/ the new
Solutions to Exercises
86
cost is 3:5
optimal.
1:3 C 1:8 D 20. This is higher than 16, so the original x D .2; 2; 0/ was
Note that x3 D 1 led to x2 D 1 and a negative x2 is not allowed. If x3 reduced
the cost (it didn’t) we would not have used as much as x3 D 1.
8 y T b y T Ax D .AT y/T x c T x. The first inequality needed y 0 and Ax
b 0.
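A brute-force check of the corner-by-corner reasoning above: at a corner of the feasible set the simplex method compares costs c^T x. The cost vector (5, 3, 7) and the corner list below are assumptions reconstructed from the numbers quoted in Problem 5, not data given explicitly in the text.

```python
# Evaluate the linear cost c^T x at each corner; the minimum over corners is
# the LP minimum (fundamental theorem of linear programming).
corners = [(4, 0, 0), (3, 1, 0), (2, 0, 1), (0, 0, 2), (0, 4, 0)]
c = (5, 3, 7)   # assumed cost vector, consistent with costs 20, 18, 17, 14, 12

def cost(x):
    return sum(ci * xi for ci, xi in zip(c, x))

for x in corners:
    print(x, cost(x))

best = min(corners, key=cost)
print("minimum cost", cost(best), "at", best)
```

With these assumed numbers the minimum cost 12 occurs at (0, 4, 0), matching the last simplex step in Problem 5.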
Problem Set 8.5, page 451
1 ∫0^2π cos((j+k)x) dx = [sin((j+k)x)/(j+k)]0^2π = 0 and similarly ∫0^2π cos((j−k)x) dx = 0.
Notice j − k ≠ 0 in the denominator. If j = k then ∫0^2π cos^2(jx) dx = π.
2 Three integral tests show that 1, x, x^2 − 1/3 are orthogonal on the interval [−1, 1]:
∫−1^1 (1)(x) dx = 0; ∫−1^1 (1)(x^2 − 1/3) dx = 0; ∫−1^1 (x)(x^2 − 1/3) dx = 0. Then
2x^2 = 2(x^2 − 1/3) + 0(x) + (2/3)(1). Those coefficients 2, 0, 2/3 can come from integrating f(x) = 2x^2 times the 3 basis functions and dividing by their lengths squared—in other words using a^T b / a^T a for functions (where b is f(x) and a is 1 or x or x^2 − 1/3) exactly as for vectors.
3 One example orthogonal to v = (1, 1/2, 1/4, ...) is w = (1, −2, 0, 0, ...) with ||w|| = √5.

4 ∫−1^1 (1)(x^3 − cx) dx = 0 and ∫−1^1 (x^2 − 1/3)(x^3 − cx) dx = 0 for all c (odd functions).
Choose c so that ∫−1^1 x(x^3 − cx) dx = [x^5/5 − cx^3/3]−1^1 = 2/5 − 2c/3 = 0. Then c = 3/5.
5 The integrals lead to the Fourier coefficients a1 = 0, b1 = 4/π, b2 = 0.

6 From equation (3), ak = 0 and bk = 4/πk (odd k). The square wave has ||f||^2 = 2π.
Then equation (6) is 2π = π(16/π^2)(1/1^2 + 1/3^2 + 1/5^2 + ···). That infinite series equals π^2/8.
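The Parseval identity in Problem 6 can be checked numerically: the odd reciprocal squares really do sum to π²/8.

```python
import math

# Partial sum of 1/1^2 + 1/3^2 + 1/5^2 + ..., which Parseval's formula for the
# square wave says equals pi^2/8.
partial = sum(1.0 / k**2 for k in range(1, 100001, 2))
print(partial, math.pi**2 / 8)
```

The tail beyond k = N contributes about 1/(2N), so 50,000 odd terms already agree with π²/8 to four decimal places.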
7 The −1, 1 odd square wave is f(x) = x/|x| for 0 < |x| < π. Its Fourier series in equation (8) is 4/π times [sin x + (sin 3x)/3 + (sin 5x)/5 + ···]. The sum of the first N terms has an interesting shape, close to the square wave except where the wave jumps between 1 and −1. At those jumps, the Fourier sum spikes the wrong way to ±1.09 (the Gibbs phenomenon) before it takes the jump with the true f(x).
This happens for the Fourier sums of all functions with jumps. It makes shock waves hard to compute. You can see it clearly in a graph of the sum of 10 terms.
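A numerical look at that Gibbs spike: near the jump at x = 0 the partial sums overshoot by roughly 9% of the jump height, and increasing N does not shrink the overshoot (it only narrows the spike). This sketch evaluates the partial sums on a grid near the jump.

```python
import math

# Partial Fourier sum of the odd square wave:
# S_N(x) = (4/pi) * [sin x + sin 3x / 3 + ... ], N odd-harmonic terms.
def S(x, N):
    return (4 / math.pi) * sum(math.sin((2*m + 1) * x) / (2*m + 1) for m in range(N))

peaks = {}
for N in (10, 100):
    # Maximum of S_N on a fine grid in (0, pi/2), which catches the Gibbs spike.
    peaks[N] = max(S(i * math.pi / 2000, N) for i in range(1, 1000))
    print(N, round(peaks[N], 4))
```

Both peaks exceed 1 by a persistent margin even though the true square wave equals 1 there.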
8 ||v||^2 = 1 + 1/2 + 1/4 + 1/8 + ··· = 2 so ||v|| = √2; ||v||^2 = 1 + a^2 + a^4 + ··· = 1/(1 − a^2) so ||v|| = 1/√(1 − a^2); ∫0^2π (1 + 2 sin x + sin^2 x) dx = 2π + 0 + π so ||f|| = √(3π).
9 (a) f(x) = (1 + square wave)/2 so the a's are 1/2, 0, 0, ... and the b's are 2/π, 0, 2/3π, 0, 2/5π, ...
(b) a0 = ∫0^2π x dx/2π = π, all other ak = 0, bk = −2/k.

10 The integral from −π to π or from 0 to 2π (or from any a to a + 2π) is over one complete period of the function. If f(x) is periodic this changes ∫0^2π f(x) dx to ∫0^π f(x) dx + ∫−π^0 f(x) dx. If f(x) is odd, those integrals cancel to give ∫ f(x) dx = 0 over one period.
11 cos^2 x = 1/2 + (1/2) cos 2x; cos(x + π/3) = cos x cos π/3 − sin x sin π/3 = (1/2) cos x − (√3/2) sin x.

12 d/dx [1; cos x; sin x; cos 2x; sin 2x] = [0; −sin x; cos x; −2 sin 2x; 2 cos 2x]
= [0 0 0 0 0; 0 0 −1 0 0; 0 1 0 0 0; 0 0 0 0 −2; 0 0 0 2 0][1; cos x; sin x; cos 2x; sin 2x]. This shows the differentiation matrix.
13 The square pulse with F(x) = 1/h for −h/2 ≤ x ≤ h/2 is an even function, so all sine coefficients bk are zero. The average a0 and the cosine coefficients ak are
   a0 = (1/2π) ∫−h/2^h/2 (1/h) dx = 1/2π
   ak = (1/π) ∫−h/2^h/2 (1/h) cos kx dx = (2/πkh) sin(kh/2), which is (1/π) sinc(kh/2)
(introducing the sinc function (sin x)/x). As h approaches zero, the number x = kh/2 approaches zero, and (sin x)/x approaches 1. So all those ak approach 1/π.
The limiting "delta function" contains an equal amount of all cosines: a very irregular function.
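The limit described above is easy to see numerically: as the pulse width h shrinks, every cosine coefficient a_k of the pulse approaches 1/π.

```python
import math

# Cosine coefficients of the square pulse F(x) = 1/h on [-h/2, h/2]:
# a_k = (1/pi) * sinc(k h / 2) where sinc(x) = sin(x)/x.
def a(k, h):
    x = k * h / 2
    return (1 / math.pi) * (math.sin(x) / x)

for h in (1.0, 0.1, 0.001):
    print(h, [round(a(k, h), 6) for k in (1, 2, 3)])
```

For h = 0.001 all three coefficients already agree with 1/π ≈ 0.318310 to six decimals, illustrating the "equal amount of all cosines" in the delta function.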
Problem Set 8.6, page 458
1 The diagonal matrix C = W^T W is Σ^-1 = diag(1, 1, 1/2) with no covariances (independent trials). Then solve A^T CA x̂ = A^T Cb for this weighted least squares problem (notice the line C t + D instead of C + Dt):
   Ax = b is 0C + D = 1, 1C + D = 2, 2C + D = 4, or [0 1; 1 1; 2 1][C; D] = [1; 2; 4].
   A^T CA = [3 2; 2 2.5], A^T Cb = [6; 5], x̂ = [C; D] = [10/7; 6/7].
2 If the measurement b3 is totally unreliable and σ3^2 = ∞, then the best line will not use b3. In this example, the system Ax = b becomes square (the first two equations from Problem 1):
   [0 1; 1 1][C; D] = [1; 2] gives [C; D] = [1; 1]. The line b = t + 1 fits exactly.
3 If σ3 = 0 the third equation is exact. Then the best line has 2C + D = b3, which is 2C + D = 4. The errors Ct + D − b in the measurements at t = 0 and 1 are D − 1 and C + D − 2. Since D = 4 − 2C from the exact b3 = 4, those two errors are D − 1 = 3 − 2C and C + D − 2 = 2 − C. The sum of squares (3 − 2C)^2 + (2 − C)^2 is a minimum at 5C = 8 (calculus or linear algebra in 1D). Then C = 8/5 and D = 4 − 2C = 4/5.
4 0, 1, 2 have probabilities 1/4, 1/2, 1/4 and σ^2 = (0−1)^2 (1/4) + (1−1)^2 (1/2) + (2−1)^2 (1/4) = 1/2.

5 Mean (1/2, 1/2). Independent flips lead to Σ = diag(1/4, 1/4). Trace = total σ^2 = 1/2.

6 Mean m = p0 and variance σ^2 = (1 − p0)^2 p0 + (0 − p0)^2 (1 − p0) = p0(1 − p0).

7 Minimize P = a^2 σ1^2 + (1 − a)^2 σ2^2 at dP/da = 2a σ1^2 − 2(1 − a) σ2^2 = 0; a = σ2^2/(σ1^2 + σ2^2) recovers equation (2) for the statistically correct choice with minimum variance.

8 Multiply LΣL^T = (A^T Σ^-1 A)^-1 A^T Σ^-1 Σ Σ^-1 A (A^T Σ^-1 A)^-1 = P = (A^T Σ^-1 A)^-1.

9 The new grade matrix A has row 3 = row 1 and row 4 = row 2, so the rank is 2. The nullspace of A now includes (1, −1, −1, 1) as well as (1, 1, 1, 1). Compare to the grade matrix in Example 6 (not Example 5). The other two singular vectors v1 and v2 for Example 6 are still correct for this new A (Av1 is still orthogonal to Av2):
   A [2v1 2v2] = [3 1 −1 −3; 1 3 −3 −1; 3 1 −1 −3; 1 3 −3 −1][1 1; 1 −1; −1 1; −1 −1] = [8 4; 8 −4; 8 4; 8 −4].
Those last orthogonal columns are multiples of the orthonormal u1 and u2. This matrix A has σ1 = 8 and σ2 = 4 (only two singular values since the rank is 2). If you compute A^T A to find those singular vectors v1 and v2 from scratch, notice that its trace is σ1^2 + σ2^2 = 64 + 16 = 80:
   A^T A = [20 12 −12 −20; 12 20 −20 −12; −12 −20 20 12; −20 −12 12 20].
Problem Set 8.7, page 463
1 (x, y, z) has homogeneous coordinates (cx, cy, cz, c) for c = 1 and all c ≠ 0.

2 For an affine transformation we also need T(origin), because T(0) need not be 0 for affine T. Including this translation by T(0), (x, y, z, 1) is transformed to xT(i) + yT(j) + zT(k) + T(0).

3 T T1 = [1 0 0 0; 0 1 0 0; 0 0 1 0; 1 4 3 1][1 0 0 0; 0 1 0 0; 0 0 1 0; 0 2 5 1] = [1 0 0 0; 0 1 0 0; 0 0 1 0; 1 6 8 1] is translation along (1, 6, 8).

4 S = diag(c, c, c, 1); row 4 of ST and of TS is (1, 4, 3, 1) and (c, 4c, 3c, 1); use vTS!

5 S = diag(1/8.5, 1/11, 1) for a 1 by 1 square, starting from an 8.5 by 11 page.

6 [x y z 1][1 0 0 0; 0 1 0 0; 0 0 1 0; −1 −1 −2 1][2 0 0 0; 0 2 0 0; 0 0 2 0; 0 0 0 1] = [x y z 1][2 0 0 0; 0 2 0 0; 0 0 2 0; −2 −2 −4 1].
The first matrix translates by (−1, −1, −2). The second matrix rescales by 2.
7 The three parts of Q in equation (1) are (cos θ)I and (1 − cos θ)aa^T and −sin θ (a×).
Then Qa = a because aa^T a = a (unit vector) and a × a = 0.

8 If a^T b = 0 and those three parts of Q (Problem 7) multiply b, the results in Qb are (cos θ)b and aa^T b = 0 and −(sin θ) a × b. The component along b is (cos θ)b.

9 n = (2/3, 2/3, 1/3) has P = I − nn^T = (1/9)[5 −4 −2; −4 5 −2; −2 −2 8]. Notice ||n|| = 1.

10 We can choose (0, 0, 3) on the plane and multiply T− P T+ = (1/9)[5 −4 −2 0; −4 5 −2 0; −2 −2 8 0; 6 6 3 9].

11 (3, 3, 3) projects to (1/3)(−1, −1, 4) and (3, 3, 3, 1) projects to (1/3, 1/3, 5/3, 1). Row vectors!

12 The projection of a square onto a plane is a parallelogram (or a line segment). The sides of the square are perpendicular, but their projections may not be (x^T y = 0 but (Px)^T (Py) = x^T P^T Py = x^T Py may be nonzero).

13 That projection of a cube onto a plane produces a hexagon.

14 (3, 3, 3)(I − 2nn^T) = (3, 3, 3) · (1/9)[1 −8 −4; −8 1 −4; −4 −4 7] = (−11/3, −11/3, −1/3).

15 (3, 3, 3, 1) → (3, 3, 0, 1) → (−7/3, −7/3, −8/3, 1) → (−7/3, −7/3, 1/3, 1).

16 Just subtracting vectors would give v = (x, y, z, 0) ending in 0 (not 1). In homogeneous coordinates, add a vector to a point.

17 Space is rescaled by 1/c because (x, y, z, c) is the same point as (x/c, y/c, z/c, 1).
Problem Set 9.1, page 472
1 Without exchange, the pivots are .001 and 1000; with exchange, 1 and −1. When the pivot is larger than the entries below it, all |ℓij| = |entry/pivot| ≤ 1. A = [1 1 1; 0 1 1; 1 1 1].
#
9
36
30
1
36
192
180 .
The exact inverse of hilb(3) is A D
30
180
180
" # "
# "
#
"
# "
#
1
11=6
1:833
0
1:80
6
A 1 D 13=12 D 1:083 compares with A
D 1:10 . kbk < :04 but
1
47=60
0:783
3:6
0:78 kxk > 6:
The difference .1; 1; 1/ .0; 6; 3:6/ is in a direction x that has Ax near zero.
The largest kxk D kA 1 bk is kA 1 k D 1=min since AT D A; largest error 10 16 =min .
5 Each row of U has at most w entries. Then w multiplications to substitute components of x (already known from below) and divide by the pivot. Total for n rows < wn.

6 The triangular L^-1, U^-1, R^-1 need (1/2)n^2 multiplications. Q needs n^2 to multiply the right side by Q^-1 = Q^T. So QRx = b takes 1.5 times longer than LUx = b.

7 UU^-1 = I: Back substitution needs (1/2)j^2 multiplications on column j, using the j by j upper left block. Then (1/2)(1^2 + 2^2 + ··· + n^2) ≈ (1/2)(n^3/3) = total to find U^-1.

8 [1 0; 2 2] → [2 2; 1 0] → [2 2; 0 −1] = U with P = [0 1; 1 0] and L = [1 0; .5 1];
A = [1 0 1; 2 2 0; 0 2 0] → [2 2 0; 1 0 1; 0 2 0] → [2 2 0; 0 −1 1; 0 2 0] → [2 2 0; 0 2 0; 0 −1 1] → [2 2 0; 0 2 0; 0 0 1] = U with
P = [0 1 0; 0 0 1; 1 0 0] and L = [1 0 0; 0 1 0; .5 −.5 1].

9 A = [1 1 0 0; 1 1 1 0; 0 1 1 1; 0 0 1 1] has cofactors C13 = C31 = C24 = C42 = 1 and C14 = C41 = −1. A^-1 is a full matrix!
10 With 16-digit floating point arithmetic, the errors ||x − x_computed|| for ε = 10^-3, 10^-6, 10^-9, 10^-12, 10^-15 are of order 10^-16, 10^-11, 10^-7, 10^-4, 10^-3.

11 (a) cos θ = 1/√10, sin θ = −3/√10, R = Q21 A = (1/√10)[10 14; 0 8].
(b) A has eigenvalues 4 and 2. Put one of the unit eigenvectors in row 1 of Q: either Q = (1/√10)[1 −3; 3 1] and QAQ^-1 = [4 −4; 0 2] or Q = (1/√2)[1 −1; 1 1] and QAQ^-1 = [2 −4; 0 4].
12 When A is multiplied by a plane rotation Qij, this changes the 2n (not n^2) entries in rows i and j. Then multiplying on the right by (Qij)^-1 = (Qij)^T changes the 2n entries in columns i and j.

13 Qij A uses 4n multiplications (2 for each entry in rows i and j). By factoring out cos θ, the entries 1 and ±tan θ need only 2n multiplications, which leads to (2/3)n^3 for QR.

14 The (2, 1) entry of Q21 A is a multiple of −sin θ + 2 cos θ. This is zero if sin θ = 2 cos θ or tan θ = 2. Then the 2, 1, √5 right triangle has sin θ = 2/√5 and cos θ = 1/√5.
Every 3 by 3 rotation with det Q = +1 is the product of 3 plane rotations.

15 This problem shows how elimination is more expensive (the nonzero multipliers are counted by nnz(L) and nnz(LL)) when we spoil the tridiagonal K by a random permutation. If on the other hand we start with a poorly ordered matrix K, an improved ordering is found by the code symamd discussed in this section.

16 The "red-black ordering" puts rows and columns 1 to 10 in the odd-even order 1, 3, 5, 7, 9, 2, 4, 6, 8, 10. When K is the −1, 2, −1 tridiagonal matrix, odd points are connected only to even points (and 2 stays on the diagonal, connecting every point to itself):
K = [2 −1; −1 2 −1; ···; −1 2] and PKP^T = [2I D; D^T 2I] with
D = [−1 0 0 0 0; −1 −1 0 0 0; 0 −1 −1 0 0; 0 0 −1 −1 0; 0 0 0 −1 −1]   (1 to 2; 3 to 2, 4; 5 to 4, 6; 7 to 6, 8; 9 to 8, 10).

17 Jeff Stuart's Shake a Stick activity has long sticks representing the graphs of two linear equations in the x-y plane. The matrix is nearly singular and Section 9.2 shows how to compute its condition number:
A = [1 1.0001; 1 1], ||A|| ≈ 2, A^-1 = −10^4 [1 −1.0001; −1 1], ||A^-1|| ≈ 20000, c = ||A|| ||A^-1|| = σmax/σmin ≈ 40000.
Problem Set 9.2, page 478
1 ||A|| = 2, ||A^-1|| = 2, c = 4; ||A|| = 3, ||A^-1|| = 1, c = 3; ||A|| = 2 + √2 = λmax for this positive definite A, ||A^-1|| = 1/λmin, c = (2 + √2)/(2 − √2) = 5.83.

2 ||A|| = 2, c = 1; ||A|| = 2, c = infinite (singular matrix); A^T A = 2I, ||A|| = √2, c = 1.
3 For the first inequality replace x by Bx in ||Ax|| ≤ ||A|| ||x||; the second inequality is just ||Bx|| ≤ ||B|| ||x||. Then ||AB|| = max(||ABx||/||x||) ≤ ||A|| ||B||.

4 1 = ||I|| = ||AA^-1|| ≤ ||A|| ||A^-1|| = c(A).

5 If λmax = λmin = 1 then all λi = 1 and A = SIS^-1 = I. The only matrices with ||A|| = ||A^-1|| = 1 are orthogonal matrices.

6 All orthogonal matrices have norm 1, so ||A|| ≤ ||Q|| ||R|| = ||R|| and in reverse ||R|| ≤ ||Q^-1|| ||A|| = ||A||; then ||A|| = ||R||. Strict inequality is usual in ||A|| < ||L|| ||U|| when A^T A ≠ AA^T. Use norm on a random A.

7 The triangle inequality gives ||Ax + Bx|| ≤ ||Ax|| + ||Bx||. Divide by ||x|| and take the maximum over all nonzero vectors to find ||A + B|| ≤ ||A|| + ||B||.

8 If Ax = λx then ||Ax||/||x|| = |λ| for that particular vector x. When we maximize the ratio over all vectors we get ||A|| ≥ |λ|.
9 A + B = [0 1; 0 0] + [0 0; 1 0] = [0 1; 1 0] has ρ(A) = 0 and ρ(B) = 0 but ρ(A + B) = 1. The triangle inequality ρ(A + B) ≤ ρ(A) + ρ(B) fails for ρ. AB = [1 0; 0 0] also has ρ(AB) = 1; thus ρ(A) = max |λ(A)| = spectral radius is not a norm.
10 (a) The condition number of A^-1 is ||A^-1|| ||(A^-1)^-1||, which is ||A^-1|| ||A|| = c(A). (b) Since A^T A and AA^T have the same nonzero eigenvalues, A and A^T have the same norm.
11 Use the quadratic formula for λmax/λmin, which is c = σmax/σmin since this A = A^T is positive definite:
c(A) = [1.00005 + √((1.00005)^2 − .0001)] / [1.00005 − √((1.00005)^2 − .0001)] ≈ 40,000.
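The quadratic-formula computation in Problem 11 is quick to reproduce. The two numbers below (half-trace 1.00005 and determinant .0001) are taken from the formula as quoted; the matrix entries themselves are not shown in the solution.

```python
import math

# Eigenvalues of a symmetric positive definite 2-by-2 matrix from the
# quadratic formula: lambda = t/2 +- sqrt((t/2)^2 - det), t = trace.
t_half, det = 1.00005, 0.0001
root = math.sqrt(t_half**2 - det)
lam_max, lam_min = t_half + root, t_half - root
print(lam_max / lam_min)   # close to 40000
```

The tiny λ_min ≈ 5·10⁻⁵ against λ_max ≈ 2 is what drives the condition number to about 40,000.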
12 det(2A) is not 2 det A; det(A + B) is not always less than det A + det B; taking |det A| does not help. The only reasonable property is det AB = (det A)(det B). The condition number should not change when A is multiplied by 10.

13 The residual b − Ay = (10^-7, 0) is much smaller than b − Az = (.0013, .0016). But z is much closer to the solution than y.

14 det A = 10^-6 so A^-1 = 10^3 [659 −563; −913 780]: ||A|| > 1, ||A^-1|| > 10^6, then c > 10^6.

15 x = (1, 1, 1, 1, 1) has ||x|| = √5, ||x||1 = 5, ||x||∞ = 1. x = (.1, .7, .3, .4, .5) has ||x|| = 1, ||x||1 = 2 (sum), ||x||∞ = .7 (largest).
16 x1^2 + ··· + xn^2 is not smaller than max(xi^2) and not larger than (|x1| + ··· + |xn|)^2 = ||x||1^2. Also x1^2 + ··· + xn^2 ≤ n max(xi^2) so ||x|| ≤ √n ||x||∞. Choose yi = sign xi = ±1 to get ||x||1 = x · y ≤ ||x|| ||y|| = √n ||x||. x = (1, ..., 1) has ||x||1 = n = √n ||x||.

17 For the ℓ∞ norm, the largest component of x plus the largest component of y is not less than ||x + y||∞ = the largest component of x + y. For the ℓ1 norm, each component has |xi + yi| ≤ |xi| + |yi|. Sum on i = 1 to n: ||x + y||1 ≤ ||x||1 + ||y||1.

18 |x1| + 2|x2| is a norm but min(|x1|, |x2|) is not a norm. ||x|| + ||x||1 is a norm; ||Ax|| is a norm provided A is invertible (otherwise a nonzero vector has norm zero; for rectangular A we require independent columns to avoid ||Ax|| = 0).
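The norm comparisons in Problems 15–16 can be checked directly on the sample vector from Problem 15.

```python
import math

# The three standard vector norms and the inequalities relating them:
# ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf  and  ||x||_2 <= ||x||_1 <= sqrt(n) ||x||_2.
x = [.1, .7, .3, .4, .5]
n = len(x)
l1 = sum(abs(t) for t in x)
l2 = math.sqrt(sum(t * t for t in x))
linf = max(abs(t) for t in x)
print(l1, l2, linf)   # 2.0, 1.0, 0.7 (up to roundoff)

assert linf <= l2 <= math.sqrt(n) * linf
assert l2 <= l1 <= math.sqrt(n) * l2
```

For this x the squared components sum exactly to 1, so ||x||₂ = 1 while ||x||₁ = 2 and ||x||∞ = .7, exactly as Problem 15 states.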
19 x^T y = x1 y1 + x2 y2 + ··· ≤ (max |yi|)(|x1| + |x2| + ···) = ||x||1 ||y||∞.

20 With λj = 2 − 2 cos(jπ/(n+1)), the largest eigenvalue is λn ≈ 2 + 2 = 4. The smallest is λ1 = 2 − 2 cos(π/(n+1)) ≈ (π/(n+1))^2, using 2 − 2 cos θ ≈ θ^2. So the condition number is c = λmax/λmin ≈ (4/π^2)(n+1)^2 ~ n^2, growing with n.
21 A = [1 1; 0 1.1] has A^n = [1 q; 0 (1.1)^n] with q = 1 + 1.1 + ··· + (1.1)^{n−1} = ((1.1)^n − 1)/(1.1 − 1) ≈ (1.1)^n/.1. So the growing part of A^n is (1.1)^n [0 10; 0 1] with ||A^n|| ≈ √101 times (1.1)^n for larger n.
Problem Set 9.3, page 489
1 The iteration x_{k+1} = (I − A)x_k + b has S = I and T = I − A and S^-1 T = I − A.
2 If Ax = λx then (I − A)x = (1 − λ)x. Real eigenvalues of B = I − A have |1 − λ| < 1 provided λ is between 0 and 2.

3 This matrix A has I − A = [1 1; 1 1], which has |λ|max = 2. The iteration diverges.

4 Always ||AB|| ≤ ||A|| ||B||. Choose A = B to find ||B^2|| ≤ ||B||^2. Then choose A = B^2 to find ||B^3|| ≤ ||B^2|| ||B|| ≤ ||B||^3. Continue (or use induction) to find ||B^k|| ≤ ||B||^k. Since ||B|| ≥ max |λ(B)| it is no surprise that ||B|| < 1 gives convergence.

5 Ax = 0 gives (S − T)x = 0. Then Sx = Tx and S^-1 Tx = x. Then λ = 1 means that the errors do not approach zero. We can't expect convergence when A is singular and Ax = b is unsolvable!
6 Jacobi has S^-1 T = (1/3)[0 1; 1 0] with |λ|max = 1/3. Small problem, fast convergence.

7 Gauss-Seidel has S^-1 T = [0 1/3; 0 1/9] with |λ|max = 1/9, which is (|λ|max for Jacobi)^2.

8 Jacobi has S^-1 T = [a 0; 0 d]^-1 [0 −b; −c 0] = [0 −b/a; −c/d 0] with |λ| = |bc/ad|^{1/2}.

9 Gauss-Seidel has S^-1 T = [a 0; c d]^-1 [0 −b; 0 0] = [0 −b/a; 0 bc/ad] with |λ| = |bc/ad|. So Gauss-Seidel is twice as fast to converge (or to explode if |bc| > |ad|).
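The "Gauss-Seidel squares the Jacobi ratio" conclusion of Problems 6–9 shows up clearly in a short experiment. The matrix A = [3 1; 1 3] below is an assumption chosen to reproduce the spectral radii 1/3 (Jacobi) and 1/9 (Gauss-Seidel) quoted above.

```python
# Compare error decay of Jacobi and Gauss-Seidel sweeps for Ax = b.
A = [[3.0, 1.0], [1.0, 3.0]]   # assumed example matrix
b = [4.0, 4.0]                 # exact solution x = (1, 1)

def err(x):
    return max(abs(x[0] - 1), abs(x[1] - 1))

def jacobi(x):
    # All components updated from the OLD x.
    return [(b[0] - A[0][1] * x[1]) / A[0][0],
            (b[1] - A[1][0] * x[0]) / A[1][1]]

def gauss_seidel(x):
    # Second component already uses the NEW first component.
    x0 = (b[0] - A[0][1] * x[1]) / A[0][0]
    x1 = (b[1] - A[1][0] * x0) / A[1][1]
    return [x0, x1]

errs = {}
for step, name in ((jacobi, "Jacobi"), (gauss_seidel, "Gauss-Seidel")):
    x = [0.0, 0.0]
    for _ in range(12):
        x = step(x)
    errs[name] = err(x)
    print(name, errs[name])
```

After 12 sweeps the Jacobi error is roughly (1/3)¹² while Gauss-Seidel is far smaller, consistent with the squared spectral radius.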
10 Set the trace 2 − 2ω + (1/4)ω^2 equal to (ω − 1) + (ω − 1) to find ωopt = 4(2 − √3) ≈ 1.07. The eigenvalues ω − 1 are about .07, a big improvement.

11 Gauss-Seidel will converge for the −1, 2, −1 matrix. |λ|max = cos^2(π/(n + 1)) is given on page 485, with the improvement from successive overrelaxation.

12 If the iteration gives all xi_new = xi_old then the quantity in parentheses is zero, which means Ax = b. For Jacobi, change x_new on the right side to x_old.
A lot of energy went into SOR in the 1950's! Now incomplete LU is simpler and preferred.
13 u_k/λ1^k = c1 x1 + c2 x2 (λ2/λ1)^k + ··· + cn xn (λn/λ1)^k → c1 x1 if all ratios |λi/λ1| < 1. The largest ratio controls the rate of convergence (when k is large). A = [0 1; 1 0] has |λ2| = |λ1| and no convergence.

14 The eigenvectors of A and also A^-1 are x1 = (.75, .25) and x2 = (1, −1). The inverse power method converges to a multiple of x2, since |1/λ2| > |1/λ1|.

15 In the jth component of Ax1, λ1 sin(jπ/(n+1)) = 2 sin(jπ/(n+1)) − sin((j−1)π/(n+1)) − sin((j+1)π/(n+1)). The last two terms combine into 2 sin(jπ/(n+1)) cos(π/(n+1)). Then λ1 = 2 − 2 cos(π/(n+1)).

16 A = [2 −1; −1 2] produces u0 = [1; 0], u1 = [2; −1], u2 = [5; −4], u3 = [14; −13]. This is converging to the eigenvector direction [1; −1] with largest eigenvalue λ = 3. Divide u_k by ||u_k||.

17 A^-1 = (1/3)[2 1; 1 2] gives u1 = (1/3)[2; 1], u2 = (1/9)[5; 4], u3 = (1/27)[14; 13] → u∞ = [1/2; 1/2].
18 R = Q^T A = [1 cos θ sin θ; 0 sin^2 θ] and A1 = RQ = [cos θ (1 + sin^2 θ) sin^3 θ; sin^3 θ −cos θ sin^2 θ].

19 If A is orthogonal then Q = A and R = I. Therefore A1 = RQ = A again, and the "QR method" doesn't move from A. But shift A slightly and the method goes quickly to Λ.
20 If A − cI = QR then A1 = RQ + cI = Q^-1 (QR + cI)Q = Q^-1 AQ. No change in eigenvalues because A1 is similar to A.

21 Multiply Aq_j = b_{j−1} q_{j−1} + a_j q_j + b_j q_{j+1} by q_j^T to find q_j^T Aq_j = a_j (because the q's are orthonormal). The matrix form (multiplying by columns) is AQ = QT where T is tridiagonal. The entries down the diagonals of T are the a's and b's.

22 Theoretically the q's are orthonormal. In reality this important algorithm is not very stable. We must stop every few steps to reorthogonalize—or find another more stable way to orthogonalize q, Aq, A^2 q, ...
23 If A is symmetric then A1 = Q^-1 AQ = Q^T AQ is also symmetric. A1 = RQ = R(QR)R^-1 = RAR^-1 has R and R^-1 upper triangular, so A1 cannot have nonzeros on a lower diagonal than A. If A is tridiagonal and symmetric then (by using symmetry for the upper part of A1) the matrix A1 = RAR^-1 is also tridiagonal.

24 The proof of |λ| < 1 when every absolute row sum is < 1 uses |λ||xi| = |Σ aij xj| ≤ (Σ |aij|)|xi| < |xi|. (Here xi is the largest component.) The application to the Gershgorin circle theorem (very useful) is printed after its statement in this problem.

25 For A and K, the maximum row sums give all |λ| ≤ 1 and all |λ| ≤ 4. The circles |λ − .5| ≤ .5 and |λ − .4| ≤ .6 around the diagonal entries of A give tighter bounds. The circle |λ − 2| ≤ 2 for K contains the circle |λ − 2| ≤ 1 and all three eigenvalues 2 + √2, 2, and 2 − √2.

26 With diagonal dominance aii > ri, the circles |λ − aii| ≤ ri don't include λ = 0 (so A is invertible!). Notice that the −1, 2, −1 matrix is also invertible even though its diagonals are only weakly dominant. They equal the off-diagonal row sums (2 = 2) except in the first and last rows, and more care is needed to prove invertibility.

27 From the last line of code, q2 is in the direction of v = Aq1 − h11 q1 = Aq1 − (q1^T Aq1)q1. The dot product with q1 is zero. This is Gram-Schmidt with Aq1 as the second input vector.
28 Note: The five lines in Solutions to Selected Exercises prove two key properties of conjugate gradients—the residuals r_k = b − Ax_k are orthogonal and the search directions are A-orthogonal (p_i^T A p_j = 0). Then each new guess x_{k+1} is the closest vector to x among all combinations of b, Ab, ..., A^k b. Ordinary iteration Sx_{k+1} = Tx_k + b does not find this best possible combination x_{k+1}.
The solution to Problem 28 in this Fourth Edition is straightforward and important. Since H = Q^-1 AQ = Q^T AQ is symmetric if A = A^T, and since H has only one lower diagonal by construction, H has only one upper diagonal: H is tridiagonal and all the recursions in Arnoldi's method have only 3 terms (Problem 29).

29 H = Q^-1 AQ is similar to A, so H has the same eigenvalues as A (at the end of Arnoldi). When Arnoldi stops sooner because the matrix size is large, the eigenvalues of H_k (called Ritz values) are close to eigenvalues of A. This is an important way to compute approximations to λ for large matrices.

30 In principle the conjugate gradient method converges in 100 (or 99) steps to the exact solution x. But it is slower than elimination and its all-important property is to give good approximations to x much sooner. (Stopping elimination part way leaves you nothing.) The problem asks how close x10 and x20 are to x100, which equals x except for roundoff errors.
Problem Set 10.1, page 498
1 (a)(b)(c) have sums 4, 2 + 2i, 2 cos θ and products 5, 2i, 1. Note (e^{iθ})(e^{−iθ}) = 1.

2 In polar form these are √5 e^{iθ}, 5e^{2iθ}, (1/√5)e^{−iθ}, √5.

3 The absolute values are r = 10, 100, 1/10, and 100. The angles are θ, 2θ, −θ, and −2θ.

4 |zw| = 6, |z + w| ≤ 5, |z/w| = 2/3, |z − w| ≤ 5.

5 a + ib = √3/2 + (1/2)i, 1/2 + (√3/2)i, i, −i; w^12 = 1.
6 1/z has absolute value 1/r and angle −θ; (1/r)e^{−iθ} times re^{iθ} equals 1.
7 [a −b; b a] times [c −d; d c] gives real part ac − bd and imaginary part bc + ad.
[1 −3; 3 1][1 3; −3 1] = [10 0; 0 10] is the matrix form of (1 + 3i)(1 − 3i) = 10.

8 [A1 −A2; A2 A1][x1; x2] = [b1; b2] gives the complex matrix-vector multiplication (A1 + iA2)(x1 + ix2) = b1 + ib2.
9 2 + i; (2 + i)(1 + i) = 1 + 3i; e^{−iπ/2} = −i; e^{−iπ} = −1; (1 − i)/(1 + i) = −i; (−i)^103 = i.

10 z + z̄ is real; z − z̄ is pure imaginary; z z̄ is positive; z/z̄ has absolute value 1.

11 [a −b; b a] includes aI (which just adds a to the eigenvalues) and b[0 −1; 1 0]. So the eigenvectors are x1 = (1, −i) and x2 = (1, i). The eigenvalues are λ1 = a + bi and λ2 = a − bi. We see x̄1 = x2 and λ̄1 = λ2 as expected for real matrices with complex eigenvalues.

12 (a) When a = b = d = 1 the square root becomes √(4c); λ is complex if c < 0 (b) λ = 0 and λ = a + d when ad = bc (c) the λ's can be real and different.

13 Complex λ's when (a + d)^2 < 4(ad − bc); write (a + d)^2 − 4(ad − bc) as (a − d)^2 + 4bc, which is positive when bc > 0.

14 det(P − λI) = λ^4 − 1 = 0 has λ = 1, −1, i, −i with eigenvectors (1, 1, 1, 1) and (1, −1, 1, −1) and (1, i, −1, −i) and (1, −i, −1, i) = columns of the Fourier matrix.

15 The 6 by 6 cyclic shift P has det(P6 − λI) = λ^6 − 1 = 0. Then λ = 1, w, w^2, w^3, w^4, w^5 with w = e^{2πi/6}. These are the six solutions to λ^6 = 1 as in Figure 10.3 (the sixth roots of 1).
16 The symmetric block matrix has real eigenvalues; so iλ is real and λ is pure imaginary.

17 (a) 2e^{iπ/3}, 4e^{2iπ/3} (b) e^{2iθ}, e^{4iθ} (c) 7e^{3πi/2}, 49e^{3πi} (= −49) (d) √50 e^{−iπ/4}, 50e^{−iπ/2}.

18 r = 1, angle π/2 − θ; multiply by e^{iθ} to get e^{iπ/2} = i.

19 a + ib = 1, i, −1, −i, ±1/√2 ± i/√2. The root w̄ = w^-1 = e^{−2πi/8} is 1/√2 − i/√2.

20 1, e^{2πi/3}, e^{4πi/3} are the cube roots of 1. The cube roots of −1 are −1, e^{πi/3}, e^{−πi/3}. Altogether six roots of z^6 = 1.

21 cos 3θ = Re[(cos θ + i sin θ)^3] = cos^3 θ − 3 cos θ sin^2 θ; sin 3θ = 3 cos^2 θ sin θ − sin^3 θ.

22 If the conjugate z̄ = 1/z then |z|^2 = 1 and z is any point e^{iθ} on the unit circle.

23 e^{i} is at angle θ = 1 on the unit circle; |i^e| = 1^e = 1. There are infinitely many i^e = e^{i(π/2 + 2πn)e}.

24 (a) Unit circle (b) Spiral in to e^{−2π} (c) Circle continuing around to angle θ = 2π^2.
Problem Set 10.2, page 506
1 ||u|| = √9 = 3, ||v|| = √3, u^H v = 3i + 2, v^H u = −3i + 2 (this is the conjugate of u^H v).

2 A^H A = [2 0 1+i; 0 2 1+i; 1−i 1−i 2] and AA^H = [3 1; 1 3] are Hermitian matrices. They share the eigenvalues 4 and 2.

3 z = multiple of (1 + i, 1 + i, −2); Az = 0 gives z^H A^H = 0^H so z (not z̄!) is orthogonal to all columns of A^H (using the complex inner product, z^H times columns of A^H).

4 The four fundamental subspaces are now C(A), N(A), C(A^H), N(A^H)—A^H and not A^T.

5 (a) (A^H A)^H = A^H A^HH = A^H A again (b) If A^H Az = 0 then (z^H A^H)(Az) = 0. This is ||Az||^2 = 0 so Az = 0. The nullspaces of A and A^H A are always the same.

6 (a) False: A = U = [0 1; 1 0] (b) True: −i is not an eigenvalue when A = A^H (c) False.

7 cA is still Hermitian for real c; (iA)^H = −iA^H = −iA is skew-Hermitian.
8 This P is invertible and unitary. P = [0 i 0; 0 0 i; i 0 0] has P^2 = [0 0 −1; −1 0 0; 0 −1 0] and P^3 = −iI. Then P^100 = (−i)^33 P = −iP. The eigenvalues of P are the roots of λ^3 = −i, which are i and ie^{2πi/3} and ie^{4πi/3}.

9 One unit eigenvector is certainly x1 = (1, 1, 1) with λ1 = i. The other eigenvectors are x2 = (1, w, w^2) and x3 = (1, w^2, w^4) with w = e^{2πi/3}. The eigenvector matrix is the Fourier matrix F3. The eigenvectors of any unitary matrix like P are orthogonal (using the correct complex form x^H y of the inner product).

10 (1, 1, 1), (1, e^{2πi/3}, e^{4πi/3}), (1, e^{4πi/3}, e^{2πi/3}) are orthogonal (complex inner product!) because P is an orthogonal matrix—and therefore its eigenvector matrix is unitary.
11 (Not included in 4th edition) C = [2 5 4; 4 2 5; 5 4 2] = 2 + 5P + 4P^2 has eigenvalues 2 + 5 + 4 = 11, 2 + 5e^{2πi/3} + 4e^{4πi/3}, and 2 + 5e^{4πi/3} + 4e^{8πi/3}.

11 If U^H U = I then U^-1 = U^H. Then (U^-1)^H (U^-1) = (UU^H)^-1 = I so U^-1 is also unitary. Also (UV)^H (UV) = V^H U^H UV = V^H V = I so UV is unitary.

12 Determinant = product of the eigenvalues (all real). And A = A^H gives det A = conj(det A), so the determinant is real.

13 (z^H A^H)(Az) = ||Az||^2 is positive unless Az = 0. When A has independent columns this means z = 0; so A^H A is positive definite.
14 A = [0 1+i; 1−i 1] = UΛU^H with U = (1/√3)[1 1+i; 1−i −1] and Λ = [2 0; 0 −1].

15 K = iA^T (A in Problem 14) = [0 1+i; −1+i i] is skew-Hermitian with eigenvalues 2i and −i: the λ's are imaginary.

16 Q = [cos θ −sin θ; sin θ cos θ] = UΛU^H with U = (1/√2)[1 1; −i i] and Λ = [cos θ + i sin θ 0; 0 cos θ − i sin θ]; each λ has |λ| = 1.
17 V = (1/L)[1+√3 1−i; 1+i −1−√3] with L^2 = 6 + 2√3.

18 Unitary means |λ| = 1. V = V^H gives real λ. Then trace zero gives λ = 1 and −1.

19 The v's are columns of a unitary matrix U, so U^H is U^-1. Then z = UU^H z = (multiplying by columns) v1(v1^H z) + ··· + vn(vn^H z): a typical orthonormal expansion.
Don't multiply (e^{−ix})(e^{ix}). Conjugate the first, then ∫0^2π e^{2ix} dx = [e^{2ix}/2i]0^2π = 0.

20 z = (1, i, 2) completes an orthogonal basis for C^3. So does any e^{iθ} z.

21 R + iS = (R + iS)^H = R^T − iS^T; R is symmetric but S is skew-symmetric.

22 C^n has dimension n; the columns of any unitary matrix are a basis. For example use the columns of iI: (i, 0, ..., 0), ..., (0, ..., 0, i).
23 [1] and [−1]; any [e^{iθ}]; [a b+ic; b−ic d]; and [w e^{iθ}z; −z̄ e^{iθ}w̄] with |w|^2 + |z|^2 = 1 and any angle θ.

24 The eigenvalues of A^H are complex conjugates of the eigenvalues of A: det(A − λI) = 0 gives det(A^H − λ̄I) = 0.

25 (I − 2uu^H)^H = I − 2uu^H and also (I − 2uu^H)^2 = I − 4uu^H + 4u(u^H u)u^H = I. The rank-1 matrix uu^H projects onto the line through u.

26 Unitary U^H U = I means (A^T − iB^T)(A + iB) = (A^T A + B^T B) + i(A^T B − B^T A) = I. Then A^T A + B^T B = I and A^T B − B^T A = 0, which makes the block matrix [A −B; B A] orthogonal.

27 We are given A + iB = (A + iB)^H = A^T − iB^T. Then A = A^T and B = −B^T. So the block matrix [A −B; B A] is symmetric.

28 AA^-1 = I gives (A^-1)^H A^H = I. Therefore (A^-1)^H is (A^H)^-1 = A^-1 and A^-1 is Hermitian.
29 A = [2 1−i; 1+i 3] = SΛS^-1 = [1−i 1−i; −1 2][1 0; 0 4] (1/6)[2+2i −2; 1+i 2]. Note the real λ = 1 and 4.
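The real eigenvalues of the Hermitian matrix in Problem 29 follow from its characteristic polynomial, and a few lines confirm them.

```python
import cmath

# Eigenvalues of the Hermitian matrix A = [[2, 1-i], [1+i, 3]] via
# lambda^2 - (trace) lambda + det = 0.  Hermitian => both roots are real.
a, b, c, d = 2, 1 - 1j, 1 + 1j, 3
trace, det = a + d, a * d - b * c          # trace = 5, det = 6 - 2 = 4
disc = cmath.sqrt(trace**2 - 4 * det)      # sqrt(25 - 16) = 3
lams = [(trace + disc) / 2, (trace - disc) / 2]
print(lams)   # [(4+0j), (1+0j)]
```

The product (1−i)(1+i) = 2 makes the determinant 4, so the eigenvalues are exactly 4 and 1 with zero imaginary part.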
30 If U has (complex) orthonormal columns, then U^H U = I and U is unitary. If those columns are eigenvectors of A, then A = UΛU^-1 = UΛU^H is normal. The direct test for a normal matrix is AA^H = A^H A, and the diagonal matrices Λ and Λ^H surely commute:
AA^H = (UΛU^H)(UΛ^H U^H) = U(ΛΛ^H)U^H = U(Λ^H Λ)U^H = (UΛ^H U^H)(UΛU^H) = A^H A.
An easy way to construct a normal matrix is 1 + i times a symmetric matrix. Or take A = S + iT where the real symmetric S and T commute (then A^H = S − iT and AA^H = A^H A).
Problem Set 10.3, page 514
1 Equation (3) (the FFT) is correct using i^2 = −1 in the last two rows and three columns.
2 F^-1 = (1/4)[1 0 1 0; 0 1 0 −i; 1 0 −1 0; 0 1 0 i][1 1 0 0; 1 −1 0 0; 0 0 1 1; 0 0 1 −1][1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1] = (1/4)F^H.

3 F = [1 0 1 0; 0 1 0 i; 1 0 −1 0; 0 1 0 −i][1 1 0 0; 1 −1 0 0; 0 0 1 1; 0 0 1 −1][1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1], permutation last.

4 D = diag(1, e^{2πi/6}, e^{4πi/6}) (note 6 not 3) and F3 = [1 1 1; 1 e^{2πi/3} e^{4πi/3}; 1 e^{4πi/3} e^{2πi/3}].
5 F^-1 w = v and F^-1 v = w/4, where w is the all-ones vector and v is the delta vector: delta vector ↔ all-ones vector.

6 (F4)^2 = [4 0 0 0; 0 0 0 4; 0 0 4 0; 0 4 0 0] and (F4)^4 = 16I. Four transforms recover the signal!
7 c = (1, 0, 1, 0) → (1, 1), (0, 0) → (2, 0), (0, 0) → (2, 0, 2, 0) = F4 c. Also C = (0, 1, 0, 1) → (0, 0), (1, 1) → (0, 0), (2, 0) → (2, 0, −2, 0) = F4 C.
Adding, c + C = (1, 1, 1, 1) transforms to (4, 0, 0, 0) = 4 (delta vector).

8 c → (1, 1, 1, 1, 0, 0, 0, 0) → (4, 0, 0, 0, 0, 0, 0, 0) → (4, 0, 0, 0, 4, 0, 0, 0) = F8 c.
C → (0, 0, 0, 0, 1, 1, 1, 1) → (0, 0, 0, 0, 4, 0, 0, 0) → (4, 0, 0, 0, −4, 0, 0, 0) = F8 C.
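The split-and-recombine steps of Problems 7–8 are exactly the recursive FFT; a short recursion reproduces both hand computations.

```python
import cmath

# Recursive FFT with the +i convention of the text: (F_n c)_k = sum c_j w^{jk},
# w = e^{2 pi i / n}.  Even/odd split, transform halves, recombine with twiddles.
def fft(c):
    n = len(c)
    if n == 1:
        return c[:]
    even, odd = fft(c[0::2]), fft(c[1::2])
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

print([round(abs(z)) for z in fft([1, 0, 1, 0])])               # [2, 0, 2, 0]
print([round(abs(z)) for z in fft([1, 0, 1, 0, 1, 0, 1, 0])])   # [4, 0, 0, 0, 4, 0, 0, 0]
```

The 4-point and 8-point outputs match F4 c = (2, 0, 2, 0) in Problem 7 and F8 c = (4, 0, 0, 0, 4, 0, 0, 0) in Problem 8.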
9 If w^64 = 1 then w^2 is a 32nd root of 1 and √w is a 128th root of 1: the key to the FFT.

10 For every integer n, the nth roots of 1 add to zero. For even n, they cancel in pairs. For any n, use the geometric series formula 1 + w + ··· + w^{n−1} = (w^n − 1)/(w − 1) = 0. In particular for n = 3, 1 + (−1 + i√3)/2 + (−1 − i√3)/2 = 0.

11 The eigenvalues of P are 1, i, i^2 = −1, and i^3 = −i. Problem 11 displays the eigenvectors. And also det(P − λI) = λ^4 − 1.
12 Λ = diag(1, i, i^2, i^3); P = [0 1 0; 0 0 1; 1 0 0] and P^T lead to λ^3 − 1 = 0.

13 e1 = c0 + c1 + c2 + c3 and e2 = c0 + c1 i + c2 i^2 + c3 i^3; E contains the four eigenvalues of C = FEF^-1 because F contains the eigenvectors.

14 Eigenvalues e1 = 2 − 1 − 1 = 0, e2 = 2 − i − i^3 = 2, e3 = 2 − (−1) − (−1) = 4, e4 = 2 − i^3 − i^9 = 2. Just transform column 0 of C. Check the trace: 0 + 2 + 4 + 2 = 8.
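The rule behind Problem 14—eigenvalues of a circulant are the DFT of its first column—can be checked for C with first column (2, −1, 0, −1).

```python
import cmath

# Eigenvalues of the circulant matrix with first column col:
# e_k = sum_j col[j] * w^{jk} with w = e^{2 pi i / n}, i.e. transform column 0.
col = [2, -1, 0, -1]
n = len(col)
eig = []
for k in range(n):
    w = cmath.exp(2j * cmath.pi * k / n)
    eig.append(sum(col[j] * w**j for j in range(n)))
print([round(e.real, 9) for e in eig])   # [0.0, 2.0, 4.0, 2.0]
```

The four values 0, 2, 4, 2 match Problem 14, and their sum 8 equals the trace of C (four diagonal entries of 2).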
15 Diagonal E needs n multiplications, and the Fourier matrices F and F^-1 need (1/2)n log2 n multiplications each by the FFT. The total is much less than the ordinary n^2 for C times x.

16 The row 1, w^k, w^{2k}, ... in F is the same as the row 1, w^{N−k}, w^{N−2k}, ... in F̄ because w^{N−k} = e^{(2πi/N)(N−k)} is e^{2πi} e^{−(2πi/N)k} = 1 times w̄^k. So F and F̄ have the same rows in reversed order (except for row 0, which is all ones).