Solutions, Homework 4

Exercise 4.1. Let $X_1, X_2, \dots$ be i.i.d. with $\mathbb{P}(X_i = (-1)^k k) = C/(k^2 \log k)$ for $k \ge 2$, where $C$ is chosen to make the sum of all probabilities one. Show that $\mathbb{E}[|X_i|] = \infty$ but there is a finite constant $\mu$ such that
$$\frac{S_n}{n} \to \mu \quad \text{in probability as } n \to \infty.$$
Solution: We see that
$$\mathbb{E}[|X|] = \sum_{k \ge 2} k \times \frac{C}{k^2 \log k} = C \sum_{k \ge 2} \frac{1}{k \log k} = \infty.$$
Next, we want to check the hypotheses in the Weak Law of Large Numbers, i.e.
$$\lim_{x \to \infty} x\,\mathbb{P}(|X| \ge x) = 0.$$
Since X only takes integer values, it is actually enough to check the above limit only over
integer values x = n. We have that
$$\mathbb{P}(|X| \ge n) = \sum_{k \ge n} \frac{C}{k^2 \log k} \approx C \int_n^{\infty} \frac{1}{x^2 \log x}\,dx \le \frac{C}{\log n} \cdot \frac{1}{n}.$$
Therefore, indeed,
$$\lim_{n \to \infty} n\,\mathbb{P}(|X| \ge n) = 0.$$
According to the WLLN (the most general form), we now have
$$\frac{S_n}{n} - \mu_n \to 0 \quad \text{in probability as } n \to \infty,$$
where $\mu_n = \mathbb{E}[X 1_{\{|X| \le n\}}]$. Now, we only have to see that, actually,
$$\mu_n = \sum_{k=2}^{n} \frac{C(-1)^k}{k \log k},$$
which converges to the finite sum of the alternating series:
$$\mu_n \to \mu = \sum_{k=2}^{\infty} \frac{C(-1)^k}{k \log k} \in \mathbb{R}.$$
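As an added numerical illustration (not part of the required solution), the following Python sketch approximates the law by truncating the support at a level $K$, then compares the empirical average $S_n/n$ with the truncated version of $\mu$; the truncation level and sample size are arbitrary choices.

```python
import numpy as np

# Sketch (added illustration): truncate the support of
# P(X = (-1)^k k) = C / (k^2 log k), k >= 2, at a level K, then compare the
# empirical average S_n / n with the truncated approximation of mu.
rng = np.random.default_rng(0)
K = 10**5                                   # truncation level (arbitrary)
k = np.arange(2, K + 1)
w = 1.0 / (k**2 * np.log(k))                # unnormalized probabilities
p = w / w.sum()                             # normalization plays the role of C
x = (-1.0)**k * k                           # the atoms (-1)^k k

mu = np.sum(p * x)                          # truncated approximation of mu
sample = rng.choice(x, size=1_000_000, p=p)
print("mu (approx):", mu)
print("S_n / n    :", sample.mean())        # close to mu, but noisy
```

Since $\mathbb{E}[|X|] = \infty$, the empirical average converges only in probability, so individual runs can still fluctuate noticeably.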
Exercise 4.2.
1. Show that if $X \ge 0$ is integer valued, then $\mathbb{E}[X] = \sum_{n \ge 1} \mathbb{P}(X \ge n)$.
2. Find a similar expression for $\mathbb{E}[X^2]$.
Solution:
1. We know that
$$\mathbb{E}[X] = \int_0^{\infty} \mathbb{P}(X \ge x)\,dx = \sum_{n=1}^{\infty} \int_{n-1}^{n} \mathbb{P}(X \ge x)\,dx = \sum_{n=1}^{\infty} \mathbb{P}(X \ge n),$$
because $X$ takes only integer values, so $\mathbb{P}(X \ge x) = \mathbb{P}(X \ge n)$ for every $x \in (n-1, n]$.
2. We could easily write that as
$$\mathbb{E}[X^2] = \sum_{n=1}^{\infty} \mathbb{P}(X^2 \ge n) = \sum_{n=1}^{\infty} \mathbb{P}(X \ge \sqrt{n}).$$
However, an even better way to do this is
$$\mathbb{E}[X^2] = \int_0^{\infty} 2x\,\mathbb{P}(X \ge x)\,dx = \sum_{n=1}^{\infty} \int_{n-1}^{n} 2x\,\mathbb{P}(X \ge x)\,dx = \sum_{n=1}^{\infty} \mathbb{P}(X \ge n) \int_{n-1}^{n} 2x\,dx,$$
leading to
$$\mathbb{E}[X^2] = \sum_{n=1}^{\infty} (2n-1)\,\mathbb{P}(X \ge n),$$
since $\int_{n-1}^{n} 2x\,dx = n^2 - (n-1)^2 = 2n-1$.
Either of the above representations will be considered correct for grading.
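As a quick sanity check (an addition, not part of the solution), the sketch below verifies both tail-sum formulas for a geometric random variable on $\{1, 2, \dots\}$, whose first two moments are known in closed form.

```python
import numpy as np

# Check E[X] = sum_{n>=1} P(X >= n) and E[X^2] = sum_{n>=1} (2n - 1) P(X >= n)
# for X geometric on {1, 2, ...} with success probability q: P(X >= n) = (1-q)^(n-1).
q = 0.3
n = np.arange(1, 2001)
tail = (1 - q) ** (n - 1)                   # P(X >= n); the series tail is negligible

print(tail.sum(), 1 / q)                    # E[X]   = 1/q
print(((2 * n - 1) * tail).sum(), (2 - q) / q**2)   # E[X^2] = (2 - q)/q^2
```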
Exercise 4.3. If $h(y) \ge 0$ and $H(x) = \int_{-\infty}^{x} h(y)\,dy$, show that
$$\mathbb{E}[H(X)] = \int_{-\infty}^{\infty} h(y)\,\mathbb{P}(X > y)\,dy.$$
Solution: This is just a usual application of Fubini. To be precise,
$$\mathbb{E}[H(X)] = \mathbb{E}\left[\int_{-\infty}^{\infty} 1_{\{y \le X\}} h(y)\,dy\right] = \int_{-\infty}^{\infty} h(y)\,\mathbb{E}[1_{\{y \le X\}}]\,dy = \int_{-\infty}^{\infty} h(y)\,\mathbb{P}(X \ge y)\,dy.$$
This agrees with the claimed formula, because $\mathbb{P}(X \ge y)$ and $\mathbb{P}(X > y)$ differ only at the (at most countably many) atoms of $X$, a set of Lebesgue measure zero.
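A concrete numerical check (my own example, with the assumed choices $h(y) = 2y$ on $y \ge 0$ and $X$ exponential with rate 1, so that $H(x) = x^2$ and both sides equal $\mathbb{E}[X^2] = 2$):

```python
import numpy as np

# Check E[H(X)] = ∫ h(y) P(X > y) dy for h(y) = 2y (y >= 0), X ~ Exponential(1),
# where P(X > y) = exp(-y); both sides should be close to E[X^2] = 2.
rng = np.random.default_rng(0)

x = rng.exponential(1.0, size=1_000_000)
lhs = np.mean(x**2)                         # Monte Carlo estimate of E[H(X)]

y, dy = np.linspace(0.0, 50.0, 200_001, retstep=True)
rhs = np.sum(2 * y * np.exp(-y)) * dy       # Riemann sum for the right-hand side

print(lhs, rhs)                             # both close to 2
```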
Remark:
A Weak Law of Large Numbers, in an extended sense, is a deterministic description
(using convergence in probability) of the empirical average Sn /n for large sample size n.
Usually we think about it as either
$$\frac{S_n}{n} \to a \in [-\infty, \infty], \tag{4.1}$$
or
$$\frac{S_n}{b_n} \to +(-)1 \quad \text{for some } \frac{b_n}{n} \to \infty. \tag{4.2}$$
When X ∈ L1 we know that we have the weak law in (4.1) with a = E[X] ∈ (−∞, ∞). We
only try to find a weak law in the form (4.2) when X is not integrable, so, because Sn takes
large values, we try to normalize it by something larger than n to obtain (4.2). Relation
(4.2) implies (4.1) since
$$\frac{S_n}{n} = \frac{S_n}{b_n}\,\frac{b_n}{n} \to +(-)1 \cdot \infty = +(-)\infty,$$
so (4.2) (when it holds) provides better information about the empirical average for large
samples than (4.1) with a = +(−)∞.
Exercise 4.4. (Weak Law for Positive random variables, in the form (4.1)) Using truncation,
show that if X ≥ 0 and E[X] = ∞, then
$$\frac{S_n}{n} \to \infty \quad \text{in probability as } n \to \infty.$$
Solution: If we truncate (each of the i.i.d. random variables) at level $M$, the empirical average $\tilde S_n/n$ of the truncations converges to $\mathbb{E}[X 1_{\{X \le M\}}]$. Now, because each $X_i$ is positive, truncation decreases the sum, so
$$\frac{S_n}{n} \ge \frac{\tilde S_n}{n} \to \mathbb{E}[X 1_{\{X \le M\}}] \quad \text{in probability}.$$
This is true for each $M$, so we can let $M \nearrow \infty$. By monotone convergence we have $\mathbb{E}[X 1_{\{X \le M\}}] \nearrow \mathbb{E}[X] = \infty$. From here we obtain the conclusion, but the full argument involves a precise definition of what it means for a sequence to converge in probability to infinity: $S_n/n \to \infty$ in probability means that $\mathbb{P}(S_n/n \le K) \to 0$ for every finite $K$.
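A small simulation sketch (an added illustration with an assumed example distribution): for $X = 1/U$ with $U$ uniform on $(0,1)$, $X$ has density $1/x^2$ on $[1, \infty)$, so $X \ge 0$ and $\mathbb{E}[X] = \infty$, and the probabilities $\mathbb{P}(S_n/n \le K)$ visibly shrink for any fixed $K$.

```python
import numpy as np

# X = 1/U has density 1/x^2 on [1, inf), hence E[X] = infinity.  The claim
# "S_n/n -> infinity in probability" means P(S_n/n <= K) -> 0 for every fixed K;
# we estimate that probability from 500 independent copies of S_n / n.
rng = np.random.default_rng(0)
K = 5.0                                     # an arbitrary fixed threshold
for n in [10, 100, 1000, 10_000]:
    means = (1.0 / rng.uniform(size=(500, n))).mean(axis=1)
    print(n, np.mean(means <= K))           # empirical P(S_n/n <= K), shrinking in n
```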
Exercise 4.5. (Weak Law for positive random variables, in the form (4.2)) Assume $X \ge 0$.
1. Assume $\mathbb{E}[X] = \infty$ and $\lim_{s \to \infty} s(1 - F_X(s)) = 0$. If we denote by $\mu(s) = \int_0^s x\,dF_X(x)$, then
$$\frac{S_n}{n\mu(n)} \to 1 \quad \text{in probability as } n \to \infty,$$
and $\mu(n) \to \infty$. Please note, as in the short discussion above, that this is in line with the conclusion of the previous exercise.
2. Assume $\mathbb{E}[X] = \infty$ and
$$\nu(s) = \frac{\mu(s)}{s(1 - F_X(s))} \to \infty \quad \text{as } s \to \infty.$$
Show that for $n$ large we can choose $b_n$ such that $n\mu(b_n) = b_n$ and
$$\frac{S_n}{b_n} \to 1 \quad \text{in probability as } n \to \infty,$$
and $b_n/n \to \infty$.
Please note that the assumptions of item 1 are stronger than the assumptions of item 2. In particular, if the assumptions of item 1 hold, then $b_n/(n\mu(n)) \to 1$.
Solution:
1. Actually, the hypothesis $\lim_{s \to \infty} s(1 - F_X(s)) = 0$ is exactly what is needed to apply the WLLN (the most general version we learned), because $1 - F(s) = \mathbb{P}(X > s)$. We then have
$$\frac{S_n}{n} - \mu(n) \to 0 \quad \text{in probability},$$
but $\mu(n) \nearrow \mathbb{E}[X] = \infty$ by monotone convergence, so we can obtain the result by dividing the limit above by $\mu(n)$.
2. We note that
$$\frac{\mu(s)}{s} = \int_0^s \frac{x}{s}\,dF(x) \to 0 \quad \text{as } s \to \infty,$$
by Dominated Convergence. The function
$$s \mapsto \frac{\mu(s)}{s}$$
is right continuous. In case it has jumps, the jumps equal the jumps of $F$. In particular, such jumps are positive jumps. We can, therefore, find for large $n$ a $b_n$ such that
$$\mu(b_n)/b_n = 1/n$$
and $b_n \to \infty$. This is not entirely obvious, but you are certainly not expected to make a full argument here; the idea is enough. Since $b_n \to \infty$, we can conclude that $\mu(b_n) \to \mathbb{E}[X] = \infty$, by monotone convergence. Therefore, $b_n/n = \mu(b_n) \to \infty$ as well.
We now apply the weak law for triangular arrays (page 41 in the textbook). The
hypotheses there are satisfied, since
(a)
$$n(1 - F(b_n)) = n\,\frac{\mu(b_n)}{b_n}\,\frac{1}{\nu(b_n)} = \frac{1}{\nu(b_n)} \to 0;$$
(b)
$$\sum_{k=1}^{n} \frac{\mathbb{E}[\tilde X_{n,k}^2]}{b_n^2} = \frac{n}{b_n^2} \int_0^{b_n} 2y(1 - F(y))\,dy = \frac{n}{b_n^2} \int_0^{b_n} \frac{2\mu(y)}{\nu(y)}\,dy \le \frac{2n\mu(b_n)}{b_n^2} \int_0^{b_n} \frac{dy}{\nu(y)} = \frac{2}{b_n} \int_0^{b_n} \frac{dy}{\nu(y)} \to 0,$$
where the inequality uses that $\mu$ is nondecreasing, the last equality uses $n\mu(b_n) = b_n$, and the limit holds because $\nu(y) \to \infty$, so the averages of $1/\nu$ over $[0, b_n]$ tend to zero.
Now, from the triangular array WLLN, we have (Sn − an )/bn → 0, where actually
an = nµ(bn ) = bn .
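To make item 2 concrete (an added sketch with an assumed example distribution): for $F(s) = 1 - 1/s$ on $[1, \infty)$ we get $\mu(s) = \log s$ and $\nu(s) = \log s \to \infty$, so the equation $n\mu(b_n) = b_n$ reads $n \log b = b$, which we can solve by fixed-point iteration.

```python
import numpy as np

# Sketch for item 2 with F(s) = 1 - 1/s on [1, inf): mu(s) = log s and
# nu(s) = mu(s) / (s (1 - F(s))) = log s -> infinity.  We solve n log b = b
# for the larger root b_n (so b_n / n -> infinity) and compare S_n with b_n.
rng = np.random.default_rng(0)
for n in [100, 1000, 10_000]:
    b = 2.0 * n * np.log(n)                 # starting guess near the large root
    for _ in range(100):
        b = n * np.log(b)                   # contraction toward b_n
    reps = (1.0 / rng.uniform(size=(200, n))).sum(axis=1)   # 200 copies of S_n
    print(n, b / n, np.median(reps / b))    # b_n/n grows; the ratio drifts toward 1
```

The convergence $S_n/b_n \to 1$ is slow for such heavy tails, so the printed ratios only drift toward 1.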
Exercise 4.6. (St. Petersburg paradox) Consider $X$ distributed as $\mathbb{P}(X = 2^j) = 2^{-j}$, $j = 1, 2, \dots$. In other words, $X$ is the outcome of a game where a fair coin is tossed and, if heads comes up for the first time on the $j$-th toss, the player receives $2^j$ dollars. Let $S_n = X_1 + \cdots + X_n$, where the $X_i$ are i.i.d. distributed as $X$. Use item 2 in the previous problem to show that
$$\frac{S_n}{n \log_2(n)} \to 1 \quad \text{in probability as } n \to \infty.$$
This is called a “paradox” since, although this game has an infinite value, it seems that
someone wouldn’t pay too much (more than 40 dollars, argued historically) to play this game
(since the probability of gains above 40 is very small). This problem shows that, if someone
plays this game and pays log2 (n) per game, it takes around n games to break even. However,
if log2 (n) = 40, then n = 240 is very large.
Solution: We just apply the second part of the previous problem. We actually need to check the limits not over $s \to \infty$ but only over integers of the form $n = 2^j$, because all probability mass lies at these points. Since $1 - F(2^j) = \sum_{k > j} 2^{-k} = 2^{-j}$, we have $2^j(1 - F(2^j)) = 1$. Since $\mu(2^j) \to \mathbb{E}[X] = \infty$, we get $\nu(2^j) = \mu(2^j) \to \infty$, so the hypotheses of the previous problem, second part, are satisfied.
Now, $\mu(2^j) = \sum_{k=1}^{j} 2^k \times 2^{-k} = j$, and $\mu$ is constant on each interval $[2^j, 2^{j+1})$. This means that solving for $b_n$ such that $n\mu(b_n) = b_n$ amounts to finding $j$ with the property
$$nj \in [2^j, 2^{j+1}),$$
and then setting $b_n = nj$ (indeed, then $\mu(b_n) = j$ and $n\mu(b_n) = nj = b_n$). The $j$ is found by solving, for each $n$, for $j$ such that
$$\log_2 n + \log_2 j \in [j, j+1).$$
This does have a solution $j(n) \nearrow \infty$. Dividing by $j(n)$ and using $\log_2 j(n)/j(n) \to 0$, we obtain
$$\frac{\log_2 n}{j(n)} \to 1,$$
which translates easily as
$$\frac{b_n}{n \log_2 n} = \frac{n\,j(n)}{n \log_2 n} \to 1 \quad \text{as } n \to \infty.$$
The proof is now complete, taking into account the second part of the previous problem.
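Finally, a simulation sketch of the St. Petersburg weak law (an added illustration, not part of the solution): the payout is $2^J$ with $J$ geometric of parameter $1/2$, and we track the normalized sums $S_n/(n \log_2 n)$.

```python
import numpy as np

# St. Petersburg payouts: X = 2^J with P(J = j) = 2^(-j), i.e. J ~ Geometric(1/2).
# The normalized sum S_n / (n log2 n) should settle (slowly, and with upward
# spikes from rare huge payouts) near 1.
rng = np.random.default_rng(0)
for n in [10**3, 10**4, 10**5, 10**6]:
    j = rng.geometric(0.5, size=n)          # tosses until the first head
    s = np.sum(2.0 ** j)                    # S_n: total payout over n games
    print(n, s / (n * np.log2(n)))          # hovers around 1
```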