# Solving linear systems: direct methods

## Transcription

### Two different approaches

Solve $Ax = b$.

Direct methods:

- Deterministic
- Exact up to machine precision
- Expensive (in time and space)

Iterative methods:

- Only approximate
- Cheaper in space and (possibly) time
- Convergence not guaranteed

### Really bad example of a direct method

Cramer's rule: write $|A|$ for the determinant; then

$$
x_i = \begin{vmatrix}
a_{11} & a_{12} & \ldots & a_{1\,i-1} & b_1 & a_{1\,i+1} & \ldots & a_{1n} \\
a_{21} & & & & b_2 & & \ldots & a_{2n} \\
\vdots & & & & \vdots & & & \vdots \\
a_{n1} & & & & b_n & & \ldots & a_{nn}
\end{vmatrix} \;\Big/\; |A|
$$

Time complexity: $O(n!)$.

### Gaussian elimination

Example:

$$
\begin{pmatrix} 6 & -2 & 2 \\ 12 & -8 & 6 \\ 3 & -13 & 3 \end{pmatrix} x
= \begin{pmatrix} 16 \\ 26 \\ -19 \end{pmatrix}
$$

$$
\left(\begin{array}{ccc|c} 6 & -2 & 2 & 16 \\ 12 & -8 & 6 & 26 \\ 3 & -13 & 3 & -19 \end{array}\right)
\rightarrow
\left(\begin{array}{ccc|c} 6 & -2 & 2 & 16 \\ 0 & -4 & 2 & -6 \\ 0 & -12 & 2 & -27 \end{array}\right)
\rightarrow
\left(\begin{array}{ccc|c} 6 & -2 & 2 & 16 \\ 0 & -4 & 2 & -6 \\ 0 & 0 & -4 & -9 \end{array}\right)
$$

Solve $x_3$, then $x_2$, then $x_1$; $6$, $-4$, $-4$ are the 'pivots'.

### Pivoting

If a pivot is zero, exchange that row with another. (There is always a row with a nonzero pivot if the matrix is nonsingular.)

The best choice is the largest possible pivot; in fact, that's a good choice even if the pivot is not zero.

### Roundoff control

Consider

$$
\begin{pmatrix} \epsilon & 1 \\ 1 & 1 \end{pmatrix} x
= \begin{pmatrix} 1+\epsilon \\ 2 \end{pmatrix}
$$

with solution $x = (1,1)^t$. Ordinary elimination:

$$
\begin{pmatrix} \epsilon & 1 \\ 0 & 1-\frac1\epsilon \end{pmatrix} x
= \begin{pmatrix} 1+\epsilon \\ 2-\frac{1+\epsilon}{\epsilon} \end{pmatrix}
\;\Rightarrow\;
x_2 = \frac{2-\frac{1+\epsilon}{\epsilon}}{1-\frac1\epsilon},
\qquad
x_1 = \frac{(1+\epsilon) - x_2}{\epsilon}
$$

### Roundoff (2)

If $\epsilon < \epsilon_{\text{mach}}$, then in machine arithmetic $1+\epsilon = 1$, $2-\frac1\epsilon = -\frac1\epsilon$, and $1-\frac1\epsilon = -\frac1\epsilon$, so

$$
x_2 = \frac{2-\frac1\epsilon}{1-\frac1\epsilon} = \frac{-1/\epsilon}{-1/\epsilon} = 1
\;\Rightarrow\;
x_1 = \frac{1 - x_2}{\epsilon} = 0
$$

### Roundoff (3)

Pivot first (with $1+\epsilon$ already rounded to $1$):

$$
\begin{pmatrix} 1 & 1 \\ \epsilon & 1 \end{pmatrix} x
= \begin{pmatrix} 2 \\ 1 \end{pmatrix}
\;\Rightarrow\;
\begin{pmatrix} 1 & 1 \\ 0 & 1-\epsilon \end{pmatrix} x
= \begin{pmatrix} 2 \\ 1-2\epsilon \end{pmatrix}
$$

If $\epsilon$ is very small:

$$
x_2 = \frac{1-2\epsilon}{1-\epsilon} = 1, \qquad x_1 = 2 - x_2 = 1
$$

### LU factorization

Same example again:

$$
A = \begin{pmatrix} 6 & -2 & 2 \\ 12 & -8 & 6 \\ 3 & -13 & 3 \end{pmatrix}
$$

2nd row minus $2\times$ first; 3rd row minus $1/2\times$ first: equivalent to $L_1 A x = L_1 b$ with

$$
L_1 = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1/2 & 0 & 1 \end{pmatrix}
$$

### LU (2)

Next step: $L_2 L_1 A x = L_2 L_1 b$ with

$$
L_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -3 & 1 \end{pmatrix}
$$

Define $U = L_2 L_1 A$; then $A = L_1^{-1} L_2^{-1} U$: the 'LU factorization'.

### LU (3)

Observe:

$$
L_1 = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1/2 & 0 & 1 \end{pmatrix},
\qquad
L_1^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 1/2 & 0 & 1 \end{pmatrix}
$$

Likewise

$$
L_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -3 & 1 \end{pmatrix},
\qquad
L_2^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 3 & 1 \end{pmatrix}
$$

Even more remarkable:

$$
L_1^{-1} L_2^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 1/2 & 3 & 1 \end{pmatrix}
$$

Can be computed in place! (pivoting?)

### Solve LU system

$Ax = b \rightarrow LUx = b$: solve in two steps,

$$Ly = b, \qquad Ux = y$$

Forward sweep:

$$
\begin{pmatrix}
1 & & & & \emptyset \\
\ell_{21} & 1 \\
\ell_{31} & \ell_{32} & 1 \\
\vdots & & & \ddots \\
\ell_{n1} & \ell_{n2} & \cdots & & 1
\end{pmatrix}
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}
$$

$$
y_1 = b_1, \quad y_2 = b_2 - \ell_{21} y_1, \;\ldots
$$

### Solve LU (2)

Backward sweep:

$$
\begin{pmatrix}
u_{11} & u_{12} & \ldots & u_{1n} \\
& u_{22} & \ldots & u_{2n} \\
& & \ddots & \vdots \\
\emptyset & & & u_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}
$$

$$
x_n = u_{nn}^{-1} y_n, \quad
x_{n-1} = u_{n-1\,n-1}^{-1}\,(y_{n-1} - u_{n-1\,n}\, x_n), \;\ldots
$$

### Graph theory of sparse elimination

Elimination updates

$$
a_{ij} \leftarrow a_{ij} - a_{ik}\, a_{kk}^{-1}\, a_{kj}
$$

### Fill-in

Fill-in: an index $(i,j)$ where $a_{ij} = 0$ but $\ell_{ij} \neq 0$ or $u_{ij} \neq 0$.

Fill-in is a function of the ordering:

$$
\begin{pmatrix}
* & * & \cdots & * \\
* & * & & \emptyset \\
\vdots & & \ddots \\
* & \emptyset & & *
\end{pmatrix}
$$

After factorization the matrix is dense. Can this be permuted?

### LU of a sparse matrix

$$
\begin{pmatrix}
4 & -1 & 0 & \ldots & -1 \\
-1 & 4 & -1 & 0 & \ldots & 0 & -1 \\
& \ddots & \ddots & \ddots & & & \ddots \\
-1 & 0 & \ldots & & 4 & -1 \\
& \ddots & & & \ddots & \ddots
\end{pmatrix}
\;\Rightarrow\;
\begin{pmatrix}
4 & -1 & 0 & \ldots & -1 \\
& 4-\frac14 & -1 & 0 & \ldots & -\frac14 & -1 \\
& \ddots & \ddots & \ddots & & & \ddots \\
& -\frac14 & \ldots & & 4 & -1 \\
& & \ddots & & & \ddots
\end{pmatrix}
$$

Remaining matrix has a dense leading block.

### Fill-in during LU

2D BVP: $\Omega$ is $n \times n$, giving a matrix of size $N = n^2$ with bandwidth $n$:

- Matrix storage: $O(N)$
- LU storage: $O(N^{3/2})$
- LU factorization work: $O(N^2)$
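Cramer's rule from the slides can be sketched directly; the $O(n!)$ cost comes from the recursive cofactor expansion of the determinant. A minimal sketch (the helper names `det` and `cramer` are my own), checked on the slides' $3\times3$ example:

```python
def det(M):
    """Determinant by cofactor expansion along the first row: O(n!) terms."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def cramer(A, b):
    """x_i = |A with column i replaced by b| / |A|."""
    d = det(A)
    x = []
    for i in range(len(A)):
        Ai = [row[:i] + [bi] + row[i + 1:] for row, bi in zip(A, b)]
        x.append(det(Ai) / d)
    return x

A = [[6.0, -2.0, 2.0], [12.0, -8.0, 6.0], [3.0, -13.0, 3.0]]
x = cramer(A, [16.0, 26.0, -19.0])
```

Each of the $n+1$ determinants expands into $n!$ signed products, which is why this is the "really bad" direct method.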
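The worked elimination example can be reproduced with a short sketch of Gaussian elimination plus back-substitution (plain Python; function name is my own; no pivoting, since the example's pivots 6, −4, −4 are all nonzero):

```python
def gaussian_elimination(A, b):
    """Reduce Ax = b to upper triangular form, then back-substitute.

    A (list of row lists) and b (list) are modified in place.
    Assumes nonzero pivots, as in the slides' example.
    """
    n = len(A)
    for k in range(n):                     # eliminate below pivot A[k][k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # row multiplier
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):         # solve x_n first, then x_{n-1}, ...
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[6.0, -2.0, 2.0], [12.0, -8.0, 6.0], [3.0, -13.0, 3.0]]
b = [16.0, 26.0, -19.0]
x = gaussian_elimination(A, b)
pivots = [A[i][i] for i in range(3)]       # after elimination: 6, -4, -4
```

After the elimination loop, `A` holds the upper triangular system and its diagonal holds exactly the pivots named on the slide.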
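The roundoff slides can be observed directly in IEEE double precision: with a pivot below machine epsilon, naive elimination loses $x_1$ entirely, while exchanging the rows first recovers the exact answer. A minimal $2\times2$ sketch (function name and the choice $\epsilon = 10^{-20}$ are mine):

```python
def solve2x2(a11, a12, a21, a22, b1, b2):
    """Eliminate the (2,1) entry, then back-substitute, in plain floats."""
    m = a21 / a11                 # multiplier: huge if the pivot a11 is tiny
    a22p = a22 - m * a12
    b2p = b2 - m * b1
    x2 = b2p / a22p
    x1 = (b1 - a12 * x2) / a11
    return x1, x2

eps = 1e-20                       # well below machine epsilon (~2.2e-16)

# No pivoting: eps is the pivot; x2 comes out as 1 but x1 collapses to 0
x1_bad, x2_bad = solve2x2(eps, 1.0, 1.0, 1.0, 1.0 + eps, 2.0)

# Pivot first (exchange the rows): both components come out as 1
x1_good, x2_good = solve2x2(1.0, 1.0, eps, 1.0, 2.0, 1.0 + eps)
```

This mirrors the slides: without pivoting, $1-\frac1\epsilon$ and $2-\frac{1+\epsilon}{\epsilon}$ both round to $-\frac1\epsilon$, so $x_2=1$ and then $x_1=0$; after pivoting all intermediate quantities stay near 1.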
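The "can be computed in place" remark and the two triangular sweeps can be sketched as follows: the multipliers $\ell_{ij}$ overwrite the subdiagonal of $A$ (the unit diagonal of $L$ stays implicit) and $U$ overwrites the rest. Helper names are my own; checked on the same $3\times3$ example:

```python
def lu_inplace(A):
    """Overwrite A with its LU factors: multipliers l_ij strictly below
    the diagonal, U on and above it. No pivoting."""
    n = len(A)
    for k in range(n):
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]             # store multiplier l_ik in place
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]
    return A

def lu_solve(LU, b):
    """Forward sweep Ly = b, then backward sweep Ux = y."""
    n = len(LU)
    y = b[:]
    for i in range(n):                     # y_i = b_i - sum_j<i l_ij y_j
        y[i] -= sum(LU[i][j] * y[j] for j in range(i))
    x = y[:]
    for i in range(n - 1, -1, -1):         # x_i = (y_i - sum_j>i u_ij x_j)/u_ii
        x[i] = (x[i] - sum(LU[i][j] * x[j]
                           for j in range(i + 1, n))) / LU[i][i]
    return x

LU = lu_inplace([[6.0, -2.0, 2.0], [12.0, -8.0, 6.0], [3.0, -13.0, 3.0]])
x = lu_solve(LU, [16.0, 26.0, -19.0])
```

The stored subdiagonal entries are exactly the entries $2$, $1/2$, $3$ of $L_1^{-1}L_2^{-1}$ from the slides, and the diagonal of the stored $U$ holds the pivots $6$, $-4$, $-4$.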
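That fill-in depends on the ordering can be demonstrated on the arrow matrix from the fill-in slide: with the "spike" in the first row and column, the first elimination step couples every remaining pair of unknowns and the factors come out dense, while the reversed ordering produces no fill at all. A sketch (my own construction; the diagonal value 10 just keeps the pivots safely nonzero):

```python
def fill_in(A):
    """LU-factor A (no pivoting) and count positions that were zero in A
    but are nonzero in the factors."""
    n = len(A)
    orig_zero = [[A[i][j] == 0.0 for j in range(n)] for i in range(n)]
    M = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            M[i][k] /= M[k][k]
            for j in range(k + 1, n):
                M[i][j] -= M[i][k] * M[k][j]
    return sum(orig_zero[i][j] and abs(M[i][j]) > 1e-14
               for i in range(n) for j in range(n))

n = 6
# Arrow 'pointing up-left': nonzeros in the first row, first column, diagonal
arrow = [[10.0 if i == j else (1.0 if 0 in (i, j) else 0.0)
          for j in range(n)] for i in range(n)]
# Same matrix with the ordering reversed: the spike sits in the last row/column
reversed_arrow = [[arrow[n - 1 - i][n - 1 - j] for j in range(n)]
                  for i in range(n)]

dense_fill = fill_in(arrow)          # every off-diagonal zero fills in
no_fill = fill_in(reversed_arrow)    # no fill-in at all
```

All $(n-1)(n-2)$ zero positions of the first ordering fill in, so the factored matrix is dense; the reversed ordering only ever updates entries that were already nonzero.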