DOUBLE SERIES AND PRODUCTS OF SERIES
KENT MERRYFIELD
1. Various ways to add up a doubly-indexed series:
Let $(u_{jk})$ be a “sequence” of numbers depending on the two variables $j$ and $k$. I will assume
that 0 ≤ j < ∞ and 0 ≤ k < ∞. An ordinary sequence can be expressed as a list; to “list” such a
doubly-indexed sequence would require an array:


\[
\begin{pmatrix}
u_{00} & u_{01} & u_{02} & \cdots \\
u_{10} & u_{11} & u_{12} & \cdots \\
u_{20} & u_{21} & u_{22} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}
\tag{1}
\]
We would like to consider series with such “sequences” as terms; that is, we would like to “add
up all of the numbers $(u_{jk})$.” Our problem with making this notion meaningful is not that there is
no way to do this; rather, there are too many ways to do this. Out of a much larger collection of
possible meanings, let us pick out four to compare. What can we mean by the “sum of all of the
numbers $(u_{jk})$”?
Possibility 1: the iterated sum
\[
\sum_{j=0}^{\infty}\left(\sum_{k=0}^{\infty} u_{jk}\right) \tag{2}
\]
Possibility 2: the other iterated sum
\[
\sum_{k=0}^{\infty}\left(\sum_{j=0}^{\infty} u_{jk}\right) \tag{3}
\]
Possibility 3: the limit of the square partial sums:
\[
\text{Let } S_N = \sum_{j=0}^{N}\sum_{k=0}^{N} u_{jk}; \text{ interpret the series as } \lim_{N\to\infty} S_N \tag{4}
\]
Possibility 4: the limit of the triangular partial sums:
\[
\text{Let } T_N = \sum_{j=0}^{N}\sum_{k=0}^{N-j} u_{jk}; \text{ interpret the series as } \lim_{N\to\infty} T_N \tag{5}
\]
You can get a sense of the meaning of the N -th triangular partial sum by doing the following:
look at the array in (1), place a ruler at a 45◦ angle across the upper left corner of this array, then
add up all of the numbers that you can see above and to the left of the ruler. The further you pull
the ruler down and to the right, the larger the N that this partial sum represents. Pushing this
image a little further, we get the following notion: for each one-step move of our ruler down and
to the right, we add in one more lower-left to upper-right diagonal worth of terms. We can look at
$T_N$ as:
\[
T_N = \sum_{n=0}^{N}\left(\sum_{k=0}^{n} u_{n-k,k}\right) = u_{00} + (u_{10} + u_{01}) + (u_{20} + u_{11} + u_{02}) + (u_{30} + u_{21} + u_{12} + u_{03}) + \cdots \tag{6}
\]
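The ruler picture in equation (6) is easy to compute with directly. Below is a minimal sketch: the hypothetical sequence $u_{jk} = 1/2^{j+k}$ is chosen so the double sum converges, and the triangular partial sum is built one diagonal at a time.

```python
# Triangular partial sum T_N from equation (6): for each diagonal
# n = 0..N we add the terms u_{n-k,k} for k = 0..n.  The sequence
# u_{jk} = 1/2^(j+k) is a hypothetical example, not from the notes.

def u(j, k):
    return 1.0 / 2 ** (j + k)

def triangular_sum(N):
    # One diagonal at a time, exactly as in the ruler picture.
    return sum(u(n - k, k) for n in range(N + 1) for k in range(n + 1))

# The full sum factors as (sum 1/2^j)(sum 1/2^k) = 2 * 2 = 4,
# and T_N approaches it as N grows.
print(triangular_sum(50))  # close to 4
```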
So we have several different ways to add up a double series. (There are yet more ways, but let’s
not overburden the discussion.) The big question is this: do these methods necessarily yield the
same number? Experience with analysis, particularly with its counterexamples, should at the very
least make you skeptical – to no great surprise, the answer is NO!
We need an example. The following should be convincing enough. Define $(u_{jk})$ by the array:


\[
\begin{pmatrix}
1 & -1 & 0 & 0 & \cdots \\
0 & 1 & -1 & 0 & \cdots \\
0 & 0 & 1 & -1 & \cdots \\
0 & 0 & 0 & 1 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}
\tag{7}
\]
Adding up the rows first as in (2), we get
\[
\sum_{j=0}^{\infty}\left(\sum_{k=0}^{\infty} u_{jk}\right) = \sum_{j=0}^{\infty} 0 = 0
\]
Adding up the columns first as in (3), we get
\[
\sum_{k=0}^{\infty}\left(\sum_{j=0}^{\infty} u_{jk}\right) = 1 + 0 + 0 + 0 + \cdots = 1
\]
The sequence of square partial sums $(S_N)$ goes as $(1, 1, 1, 1, 1, 1, \dots)$, which converges to 1.
And finally, the sequence of triangular partial sums $(T_N)$ goes as $(1, 0, 1, 0, 1, 0, \dots)$, which diverges.
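The counterexample in (7) can be checked mechanically. The sketch below encodes the array (1 on the main diagonal, $-1$ just above it, 0 elsewhere) and computes the square and triangular partial sums.

```python
# Numerical check of the counterexample array (7).

def u(j, k):
    if k == j:
        return 1
    if k == j + 1:
        return -1
    return 0

def S(N):  # square partial sum
    return sum(u(j, k) for j in range(N + 1) for k in range(N + 1))

def T(N):  # triangular partial sum, summed by diagonals as in (6)
    return sum(u(n - k, k) for n in range(N + 1) for k in range(n + 1))

print([S(N) for N in range(6)])  # [1, 1, 1, 1, 1, 1]
print([T(N) for N in range(6)])  # [1, 0, 1, 0, 1, 0]
```

The square sums converge to 1 while the triangular sums oscillate, matching the text.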
With four different methods, we got three different answers – and a sense that even two of them
agreeing was at best a lucky coincidence. However, we also note that there
is a great deal of cancellation going on here – positive and negative terms adding up to zero. It
is clear that this double sum cannot possibly be absolutely convergent by any of these methods.
(What do I mean by “absolutely convergent”? Simply that if we replaced each and every term by
its absolute value, the resulting sum would converge to a number < ∞.) A well-known theorem
asserts that a single series is absolutely convergent if and only if it is unconditionally convergent
– that is, if and only if any rearrangement of that series still converges, and to the same number.
All of these methods of trying to add up a double series should be seen as various rearrangements
of the sum; if we are looking for a condition that guarantees that they will all give us the correct
number, we should expect that absolute convergence is just the condition we need.
As a general plan, theorems that have absolute convergence as a hypothesis proceed in two
stages; the first is the proof that everything works in the case of series with nonnegative terms; the
second is the use of that first proof as a lemma in the proof of the general absolutely convergent
case. What follows will be no exception.
Lemma 1: If ujk ≥ 0 for all j and k, then the square partial sums and triangular partial sums
always have the same limit.
This limit may be ∞: if the square partial sums tend to ∞, then so also do the triangular partial
sums. The proof depends on the following inequality, which I present without written proof. (A
picture helps to explain it).
If $u_{jk} \ge 0$ for all $j$ and $k$, then
\[
T_N \le S_N \le T_{2N} \text{ for all } N \tag{8}
\]
Both of these sequences, $(S_N)$ and $(T_N)$, are non-decreasing sequences and thus subject to the
monotone sequence alternative: either they converge or they go to $\infty$. $(T_{2N})$ is just a subsequence
of $(T_N)$ and hence has the same limit. An appeal to the Squeeze Theorem completes the argument.
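The picture behind inequality (8) is containment: the triangle of side $N$ fits inside the $N \times N$ square, which fits inside the triangle of side $2N$. A quick sketch can confirm this for a concrete case; the nonnegative sequence $u_{jk} = 1/(j+k+1)^3$ is a hypothetical choice.

```python
# For nonnegative terms, inequality (8) says T_N <= S_N <= T_{2N}.
# Checked here on the hypothetical sequence u_{jk} = 1/(j+k+1)^3.

def u(j, k):
    return 1.0 / (j + k + 1) ** 3

def S(N):  # square partial sum over the (N+1) x (N+1) square
    return sum(u(j, k) for j in range(N + 1) for k in range(N + 1))

def T(N):  # triangular partial sum over diagonals 0..N
    return sum(u(n - k, k) for n in range(N + 1) for k in range(n + 1))

for N in range(1, 20):
    assert T(N) <= S(N) <= T(2 * N)
print("T_N <= S_N <= T_2N holds for N = 1..19")
```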
Lemma 2: If ujk ≥ 0 for all j and k, then the limit of the square partial sums is the same as either
one of the iterated sums (2) or (3).
This time, the proof will take more work. To have some notation to work with,
\[
\text{let } I = \sum_{j=0}^{\infty}\left(\sum_{k=0}^{\infty} u_{jk}\right) \text{ and let } L = \lim_{N\to\infty} S_N.
\]
Our goal is to prove that I = L. The proof of this equality will be a classical analyst's proof: we
very seldom prove equality directly. The way to show that I = L is to show that I ≥ L and that
I ≤ L.
First note that for each $j$, since these are series with nonnegative terms,
\[
\sum_{k=0}^{N} u_{jk} \le \sum_{k=0}^{\infty} u_{jk}.
\]
Adding up these estimates for $0 \le j \le N$ yields
\[
S_N = \sum_{j=0}^{N}\left(\sum_{k=0}^{N} u_{jk}\right) \le \sum_{j=0}^{N}\left(\sum_{k=0}^{\infty} u_{jk}\right) \le \sum_{j=0}^{\infty}\left(\sum_{k=0}^{\infty} u_{jk}\right) = I
\]
Since $S_N \le I$ for all $N$, we must have that $\lim_{N\to\infty} S_N \le I$, so $L \le I$.
Naturally, we started with the easy half, but sooner or later we must face the other side. There
will be an $\varepsilon$ in this argument, and a need to have some convenient convergent series with positive
terms whose sum is $\frac{\varepsilon}{2}$. Given a choice of a convenient convergent series, we will usually take
a geometric series; given our choice of ratio, we will usually pick $\frac{1}{2}$. That is, we will use this fact:
\[
\sum_{j=0}^{\infty} \frac{\varepsilon}{2^{j+2}} = \frac{\varepsilon}{2} \tag{9}
\]
We are trying to prove that $L \ge I$. Start by assuming that $\varepsilon$ is any positive number. Choose an
$M$ so large that for all $m > M$, we have
\[
\sum_{j=0}^{m}\left(\sum_{k=0}^{\infty} u_{jk}\right) \ge I - \frac{\varepsilon}{2} \tag{10}
\]
Next, for each $j$, $0 \le j \le m$, choose an $n_j$ large enough that
\[
\sum_{k=0}^{n_j} u_{jk} > \sum_{k=0}^{\infty} u_{jk} - \frac{\varepsilon}{2^{j+2}} \tag{11}
\]
Now we add up the estimates in (11) for $0 \le j \le m$ and use (9) and (10):
\[
\sum_{j=0}^{m}\left(\sum_{k=0}^{n_j} u_{jk}\right) > \sum_{j=0}^{m}\left(\sum_{k=0}^{\infty} u_{jk} - \frac{\varepsilon}{2^{j+2}}\right) = \sum_{j=0}^{m}\left(\sum_{k=0}^{\infty} u_{jk}\right) - \sum_{j=0}^{m}\frac{\varepsilon}{2^{j+2}} \ge I - \frac{\varepsilon}{2} - \frac{\varepsilon}{2} = I - \varepsilon
\]
Finally, let $N$ be the maximum of the finite collection of numbers $\{m, n_0, n_1, \dots, n_m\}$. Then
\[
S_N = \sum_{j=0}^{N}\left(\sum_{k=0}^{N} u_{jk}\right) \ge \sum_{j=0}^{m}\left(\sum_{k=0}^{n_j} u_{jk}\right) > I - \varepsilon \tag{12}
\]
and since $S_N \le L$, we have $L > I - \varepsilon$ for all $\varepsilon > 0$. This forces $L \ge I$. That finishes the proof, at least in the case where both of these limits are assumed to be finite. A minor variation of this
proof will show that if either one is infinite, then both are infinite.
Theorem 3: If ujk ≥ 0 for all j and k, then if any one of the four sums – the limit of the square
partial sums, the limit of the triangular partial sums, or either of the iterated sums – converges,
then all four converge to the same number. If any one of the four is infinite, then all four are
infinite.
To prove this, just collect together Lemmas 1 and 2, and note that the proof of Lemma 2 would
work just as well for the other iterated sum.
The most interesting consequence of this theorem is the equality of the iterated sums:
\[
\sum_{j=0}^{\infty}\left(\sum_{k=0}^{\infty} u_{jk}\right) = \sum_{k=0}^{\infty}\left(\sum_{j=0}^{\infty} u_{jk}\right) \tag{13}
\]
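Equation (13) is easy to see numerically for a nonnegative example. In the sketch below, the hypothetical choice $u_{jk} = 2^{-j} 3^{-k}$ makes both iterated sums equal $\left(\sum 2^{-j}\right)\left(\sum 3^{-k}\right) = 2 \cdot \frac{3}{2} = 3$; we truncate at 60 terms, far enough that the tails are negligible.

```python
# Both iterated sums of the nonnegative array u_{jk} = (1/2)^j (1/3)^k,
# truncated at M terms in each index.  (A hypothetical example.)

M = 60
u = [[(0.5 ** j) * (3.0 ** -k) for k in range(M)] for j in range(M)]

rows_first = sum(sum(u[j][k] for k in range(M)) for j in range(M))
cols_first = sum(sum(u[j][k] for j in range(M)) for k in range(M))

print(rows_first, cols_first)  # both very close to 3
```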
The next stage is to claim that the same holds for absolutely convergent double series. We call
the double series absolutely convergent if the double series with the terms |ujk | converges. Which
of the four possibilities do we mean when we say it converges? By Theorem 3, it doesn’t matter –
if any one of the four converges, they all do, and to the same sum.
Theorem 4: If the double series with terms ujk converges absolutely, then both iterated sums, the
limit of the square partial sums, and the limit of the triangular partial sums all equal the same
number.
We re-use some of the methods and insights of the previous theorems. Let's start with the square
partial sums and the triangular partial sums. Let $S_N$ and $T_N$ denote those sums for $u_{jk}$, and let
$S'_N$ and $T'_N$ denote the same sums for $|u_{jk}|$. Note that $S'_N$ and $T'_N$ are given to be Cauchy sequences. Since
$|S_N - S_M| \le |S'_N - S'_M|$ and $|T_N - T_M| \le |T'_N - T'_M|$, it follows that $S_N$ and $T_N$ are also Cauchy
sequences, hence both convergent. Furthermore, $|S_N - T_N| \le (S'_N - T'_N)$ (as in Lemma 1, a picture
helps explain it). Since $\lim_{N\to\infty}(S'_N - T'_N) = 0$, it follows that $\lim_{N\to\infty}(S_N - T_N) = 0$, hence $S_N$
and $T_N$ converge to the same limit.
Now consider $\sum_{j=0}^{\infty}\left(\sum_{k=0}^{\infty} u_{jk}\right)$. We know that for each $j$, $\sum_{k=0}^{\infty} u_{jk}$ converges – after all, an absolutely
convergent series converges. Let
\[
a_j = \sum_{k=0}^{\infty} u_{jk}, \quad b_j = \sum_{k=0}^{\infty} |u_{jk}|, \quad I = \sum_{j=0}^{\infty} a_j, \quad I' = \sum_{j=0}^{\infty} b_j = \lim_{N\to\infty} S'_N, \quad\text{and } L = \lim_{N\to\infty} S_N.
\]
If $M_2 > M_1 \ge N$, then
\[
\left|\sum_{j=M_1+1}^{M_2} a_j\right| \le \sum_{j=M_1+1}^{M_2} b_j \le (I' - S'_N).
\]
Since we know that $S'_N$ converges to $I'$, it follows that $\sum_{j=0}^{\infty} a_j$ is a Cauchy, hence convergent,
series. But further, now that we know that every row is a convergent series and that $\sum_{j=0}^{\infty} a_j$ converges
absolutely, we can write
\[
|I - S_N| = \left|\sum_{j=0}^{N} \sum_{k=N+1}^{\infty} u_{jk} + \sum_{j=N+1}^{\infty} a_j\right| \le \sum_{j=0}^{N} \sum_{k=N+1}^{\infty} |u_{jk}| + \sum_{j=N+1}^{\infty} b_j = I' - S'_N
\]
and since that can be made arbitrarily small, we have $\lim_{N\to\infty} S_N = I$ and hence $I = L$.
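Theorem 4 can be illustrated with a signed but absolutely convergent example. The sketch below uses the hypothetical terms $u_{jk} = (-1)^{j+k}/2^{j+k+2}$, whose full sum factors as $\frac{1}{4}\left(\sum(-\frac12)^j\right)^2 = \frac{1}{4}\left(\frac23\right)^2 = \frac19$; all four interpretations should agree at a generous truncation.

```python
# All four interpretations for the signed, absolutely convergent
# hypothetical array u_{jk} = (-1)^(j+k) / 2^(j+k+2).

M = 60
u = [[(-1) ** (j + k) / 2 ** (j + k + 2) for k in range(M)] for j in range(M)]

rows_first = sum(sum(row) for row in u)
cols_first = sum(sum(u[j][k] for j in range(M)) for k in range(M))
square = sum(u[j][k] for j in range(M) for k in range(M))
triangular = sum(u[n - k][k] for n in range(M) for k in range(n + 1))

print(rows_first, cols_first, square, triangular)  # all about 1/9
```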
As before, we can repeat this argument for the column-first iterated sums. Since both the row-first and column-first iterated sums equal the limit of the square partial sums, they equal each
other.
2. The Cauchy product of two series:
Suppose $\sum_{k=0}^{\infty} a_k$ and $\sum_{k=0}^{\infty} b_k$ are two series. We know that if they are both convergent, we can add
them together simply by adding them together term by term:
\[
\sum_{k=0}^{\infty} a_k + \sum_{k=0}^{\infty} b_k = \sum_{k=0}^{\infty} (a_k + b_k)
\]
What would we get if we multiplied them together? Certainly not the sum of the products of the
terms - that’s not the way multiplication works. Let’s consider this from the perspective of finite
sums: if you multiplied together two sums of ten terms each, how many terms would the product
have? In general, 100 terms. If we had to figure out a way to organize this sum, we’d write these
100 terms in a 10 × 10 array. It stands to reason that what you get when you multiply together
two sums is a doubly indexed sum.
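The finite-sum picture described above is worth a two-line check: multiplying two 10-term sums produces the 100 products $a_j b_k$, which we can arrange in a $10 \times 10$ array whose total equals the product of the two sums. The terms below are hypothetical sample values.

```python
# Product of two finite sums as a 10 x 10 array of cross terms.

a = [1 / (j + 1) for j in range(10)]  # hypothetical terms
b = [1 / (k + 2) for k in range(10)]

array_total = sum(a[j] * b[k] for j in range(10) for k in range(10))
print(abs(array_total - sum(a) * sum(b)) < 1e-12)  # True
```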
Suppose $\sum_{k=0}^{\infty} a_k = A$ and $\sum_{k=0}^{\infty} b_k = B$ both converge. Then
\begin{align*}
\left(\sum_{k=0}^{\infty} a_k\right)\left(\sum_{k=0}^{\infty} b_k\right) = AB &= A \sum_{k=0}^{\infty} b_k \\
&= \sum_{k=0}^{\infty} A b_k \\
&= \sum_{k=0}^{\infty}\left(\sum_{j=0}^{\infty} a_j\right) b_k &&\text{since $j$ is as good as $k$ as an index} \\
&= \sum_{k=0}^{\infty}\left(\sum_{j=0}^{\infty} a_j b_k\right)
\end{align*}
This is exactly the kind of double sum that we have been considering. If both of the original series
converge absolutely, then Theorem 4 applies and we can write this sum in each of the other three
forms of that series and have it be equal. The most interesting form in what follows is the triangular
partial sums. That is, we have this lemma:
Lemma 5: If $\sum_{k=0}^{\infty} a_k$ and $\sum_{k=0}^{\infty} b_k$ both converge absolutely, then
\[
\left(\sum_{k=0}^{\infty} a_k\right)\left(\sum_{k=0}^{\infty} b_k\right) = \lim_{N\to\infty} \sum_{n=0}^{N}\left(\sum_{k=0}^{n} a_{n-k} b_k\right) = \sum_{n=0}^{\infty}\left(\sum_{k=0}^{n} a_{n-k} b_k\right) \tag{18}
\]
The double sum on the right of (18) is called the Cauchy product of the two series.
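Lemma 5 can be sketched numerically with two hypothetical absolutely convergent geometric series: $a_k = 2^{-k}$ (sum 2) and $b_k = 3^{-k}$ (sum $\frac32$). The diagonal sums of the Cauchy product should approach $2 \cdot \frac32 = 3$.

```python
# Cauchy product of two geometric series, summed by diagonals as in (18).

N = 60
a = [0.5 ** k for k in range(N)]
b = [(1 / 3) ** k for k in range(N)]

cauchy = sum(sum(a[n - k] * b[k] for k in range(n + 1)) for n in range(N))
print(cauchy)  # close to 3.0
```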
The immediate application for Lemma 5 is to the product of two power series. In fact, we have
the following:
Theorem 6 – The Cauchy Product of Power Series: Suppose $R > 0$ and the two power series
\[
f(x) = \sum_{k=0}^{\infty} a_k x^k \quad\text{and}\quad g(x) = \sum_{k=0}^{\infty} b_k x^k
\]
both have radius of convergence at least as large as $R$. Then the product $f(x)g(x)$ is given by a power
series with radius of convergence at least $R$, and that power series is
\[
f(x)g(x) = \sum_{n=0}^{\infty}\left(\sum_{k=0}^{n} a_{n-k} b_k\right) x^n \tag{19}
\]
We can rephrase equation (19) as: if $h(x) = f(x)g(x)$, then $h(x) = \sum_{n=0}^{\infty} c_n x^n$, where the $c_n$'s are computed as
\[
c_n = \sum_{k=0}^{n} a_{n-k} b_k.
\]
The name that we will give to getting a new coefficient sequence $(c_n)$ from the two other coefficient
sequences $(a_n)$ and $(b_n)$ is that it is the convolution of those two other sequences.
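The convolution of coefficient sequences is a one-line computation. In this sketch, `convolve` is a hypothetical helper name, not from the notes; the example squares the coefficients of $(1+x)^2$, which should give the coefficients of $(1+x)^4$.

```python
# Convolution of coefficient sequences: c_n = sum of a_{n-k} * b_k.
# `convolve` is a hypothetical helper name for illustration.

def convolve(a, b):
    n_terms = min(len(a), len(b))
    return [sum(a[n - k] * b[k] for k in range(n + 1)) for n in range(n_terms)]

# Coefficients of (1 + x)^2 convolved with themselves give (1 + x)^4.
print(convolve([1, 2, 1, 0, 0], [1, 2, 1, 0, 0]))  # [1, 4, 6, 4, 1]
```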
3. Two examples – the geometric series and the exponential:
Example 1: multiplying the geometric series by itself.
We know – it is the geometric series, after all – that for $|x| < 1$, we have
\[
\frac{1}{1-x} = \sum_{k=0}^{\infty} x^k
\]
We now multiply this function by itself and use equation (19):
\[
\frac{1}{(1-x)^2} = \sum_{n=0}^{\infty}\left(\sum_{k=0}^{n} 1 \cdot 1\right) x^n = \sum_{n=0}^{\infty} (n+1) x^n
\]
The same identity could have been derived in this case by differentiating the power series.
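The identity can be spot-checked at a sample point inside the radius of convergence; $x = 0.5$ is a hypothetical choice.

```python
# Checking 1/(1-x)^2 = sum (n+1) x^n at x = 0.5, truncated at 200 terms.

x = 0.5
series = sum((n + 1) * x ** n for n in range(200))
closed_form = 1 / (1 - x) ** 2
print(series, closed_form)  # both 4.0 to machine precision
```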
Example 2: the basic property of the exponential function.
Define the function $E(x)$ by the power series (which has an infinite radius of convergence)
\[
E(x) = \sum_{k=0}^{\infty} \frac{x^k}{k!}.
\]
We know that E ought to be the exponential function: E(x) = ex . The most basic algebraic
property of the exponential is the law of exponents: ex ey = ex+y . That is, E(x) · E(y) = E(x + y).
But what if we have never heard of the exponential? Or, more likely, suppose we are looking for
a rigorous way to define the exponential function and to prove that it has the required properties.
Power series provide a direct way to prove this property, as follows.
\begin{align*}
E(x)E(y) &= \left(\sum_{k=0}^{\infty} \frac{x^k}{k!}\right)\left(\sum_{k=0}^{\infty} \frac{y^k}{k!}\right) \\
&= \sum_{n=0}^{\infty}\left(\sum_{k=0}^{n} \frac{x^{n-k}}{(n-k)!} \cdot \frac{y^k}{k!}\right) &&\text{by Lemma 5} \\
&= \sum_{n=0}^{\infty} \frac{1}{n!}\left(\sum_{k=0}^{n} \frac{n!}{(n-k)!\,k!}\, x^{n-k} y^k\right) = \sum_{n=0}^{\infty} \frac{1}{n!}\left(\sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k\right) \\
&= \sum_{n=0}^{\infty} \frac{1}{n!}(x+y)^n = E(x+y) &&\text{by the Binomial Theorem.}
\end{align*}
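The law of exponents just derived is easy to verify with truncated power series. The values $x = 1.0$, $y = 2.0$ below are hypothetical sample inputs.

```python
# Checking E(x)E(y) = E(x + y) with truncated power series.

from math import factorial, exp

def E(x, terms=40):
    # Partial sum of the exponential series; 40 terms is ample here.
    return sum(x ** k / factorial(k) for k in range(terms))

x, y = 1.0, 2.0
print(E(x) * E(y))  # about e^3
print(E(x + y))     # the same number
print(exp(3.0))     # for comparison
```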
4. Exercises:
(1) Assume that $0 \le r < 1$. Compute the sum $\displaystyle\sum_{j=0}^{\infty}\sum_{k=0}^{\infty} r^{j+2k}$ in two ways by computing both
possible iterated sums. Does the sum converge?
(2) Let $f(x) = [\ln(1+x)]^2$. Use the series for the logarithm and Theorem 6 to compute that
\[
f(x) = [\ln(1+x)]^2 = \sum_{n=2}^{\infty} (-1)^n \left(\sum_{k=1}^{n-1} \frac{1}{(n-k)k}\right) x^n
\]
Use this to compute the 5th derivative of $f$ evaluated at 0.
(3) For $s > 1$, define $\zeta(s) = \displaystyle\sum_{k=1}^{\infty} \frac{1}{k^s}$. Compute $\displaystyle\sum_{n=2}^{\infty} \left(\zeta(n) - 1\right)$. Justify your steps.