Sample path behavior of a L´evy insurance risk process

Transcription

Sample path behavior of a L´evy insurance risk process
Sample path behavior of a L´evy insurance risk process
approaching ruin, under the Cram´er-Lundberg and convolution
equivalent conditions
Philip S. Griffin∗
Syracuse University
September 12, 2013
Abstract
Recent studies have demonstrated an interesting connection between the asymptotic behavior at ruin of a L´evy insurance risk process under the Cram´er-Lundberg and convolution
equivalent conditions. For example the limiting distributions of the overshoot and the undershoot are strikingly similar in these two settings. This is somewhat surprising since the
global sample path behavior of the process under these two conditions is quite different. Using tools from excursion theory and fluctuation theory we provide a unified approach, which
explains this connection and leads to new asymptotic results, by describing the evolution of
the sample paths from the time of last maximum prior to ruin until ruin occurs.
Keywords: L´evy insurance risk process, Cram´er-Lundberg, convolution equivalence, ruin time,
overshoot, EDPF
AMS 2100 Subject Classifications: 60G51; 60F17; 91B30; 62P05.
1
Introduction
It is becoming increasingly popular to model insurance risk processes with a general L´evy process.
In addition to new and interesting mathematics, this approach allows for direct modelling of
aggregate claims which can then be calibrated against real aggregate data, as opposed to the
traditional approach of modelling individual claims. Whether this approach is superior remains
to be seen, but it offers, at a minimum, an alternative, to the traditional approach. The focus of
this paper will be on two such L´evy models, and their sample path behavior as ruin approaches.
Let X = {Xt : t ≥ 0}, X0 = 0, be a L´evy process with characteristics (γ, σ 2 , ΠX ). The
characteristic function of X is given by the L´evy-Khintchine representation, EeiθXt = etΨX (θ) ,
where
Z
2 2
ΨX (θ) = iθγ − σ θ /2 + (eiθx − 1 − iθx1{|x|<1} )ΠX (dx), for θ ∈ R.
R
To avoid trivialities we assume X is non-constant. In the insurance risk model X represents
the excess in claims over premium. An insurance company starts with an initial positive reserve
∗
This work was partially supported by Simons Foundation Grant 226863.
1
u, and ruin occurs if this level is exceeded by X. To reflect the insurance company’s desire to
collect sufficient premia to prevent almost certain ruin, it is assumed that Xt → −∞ a.s. This is
the general L´evy insurance risk model, which we will investigate under two distinct conditions.
The first is the Cram´er-Lundberg condition;
EeαX1 = 1
and
EX1 eαX1 < ∞ for some α > 0.
(1.1)
The compound Poisson model,1 which arises when X is a spectrally positive compound Poisson
process with negative drift, has been extensively studied under this condition, but until recently
relatively little has been done in the general model. The second is the convolution equivalent
condition;
EeαX1 < 1 and X1+ ∈ S (α) for some α > 0,
(1.2)
where S (α) denotes the class of convolution equivalent distributions. The formal description of
S (α) will be given in Section 6. Typical examples of distributions in S (α) are those with tails of
the form
e−αx
P (X1 > x) ∼ p for p > 1.
x
Under (1.2), EeθX1 = ∞ for all θ > α, so (1.1) must fail. Hence conditions (1.1) and (1.2) are
mutually exclusive. For a further comparison see the introduction to [16].
Considerable progress has been made recently in calculating the limiting distribution of
several variables related to ruin under (1.1) and (1.2) in the general L´evy model. To give some
examples we first need a little notation. Set
X t = sup Xs ,
0≤s≤t
τ (u) = inf{t : X(t) > u},
and let P (u) denote the probability measure P (u) ( · ) = P ( · |τ (u) < ∞). Let H be the ascending
ladder height process, and ΠH , dH and q its L´evy measure, drift and killing rate respectively;
see Section 2 for more details. Then under the Cram´er-Lundberg condition, it was shown in [17]
that the limiting distributions of the shortfall at, and the maximum surplus prior to, ruin are
given by
Z
w
(u)
−1
P (Xτ (u) − u ∈ dx) −→ q α dH δ0 (dx) +
eαy ΠH (y + dx)dy ,
y≥0
(1.3)
w
(u)
−1
αy
P (u − X τ (u)− ∈ dy) −→ q α[dH δ0 (dy) + e ΠH (y)dy],
w
where −→ denotes weak convergence and δ0 is a point mass at 0. Under the convolution
equivalent condition, it follows from Theorem 4.2 in [19] and Theorem 10 in [9], see also Section
7 of [16], that the corresponding limits are
Z
w
(u)
−1
αH1 −αx
P (Xτ (u) − u ∈ dx) −→ q α − ln(Ee
)e
dx + dH δ0 (dx) +
eαy ΠH (y + dx)dy ,
y≥0
P
(u)
v
(u − X τ (u)− ∈ dy) −→ q
−1
αy
α[dH δ0 (dy) + e ΠH (y)dy],
(1.4)
1
This is often called the Cram´er-Lundberg model, as opposed to the Cram´er-Lundberg condition (1.1).
2
v
where −→ denotes vague convergence of measures on [0, ∞).2
The resemblance between the results in (1.3) and (1.4) is striking and in many ways quite
surprising, since the paths resulting in ruin behave very differently in the two cases as we
now explain. For the Cram´er-Lundberg case, at least in the compound Poisson model, with
b = EX1 eαX1 , we have
τ (u)
→ b−1 in P (u) probability
u
and
X(tτ (u))
sup − bt → 0 in P (u) probability,
τ (u)
t∈[0,1]
indicating that ruin occurs due to the build up of small claims which cause X to behave as though
it had positive drift; see Theorem 8.3.5 of [12].3 By contrast in the convolution equivalent case,
asymptotically, ruin occurs in finite time (in distribution), and for ruin to occur, the process
must take a large jump from a neighbourhood of the origin to a neighbourhood of u. This
jump may result in ruin, but if not, the resulting process X − u subsequently behaves like X
conditioned to hit (0, ∞). This representation of the limiting conditioned process leads to a
straightforward to proof (1.4), see [16]. However the description in the Cram´er-Lundberg case
is not sufficiently precise to yield (1.3). What is needed is a more refined characterization of the
process as ruin approaches, specifically, a limiting description of the path from the time of the
last strict maximum before time τ (u) up until time τ (u).
In the discrete time setting, such a result was proved by Asmussen [1]. Let Zk be IID,
non-lattice and set Sn = Z1 + · · · + Zn . Assume the Cram´er-Lundberg condition,
EeαZ1 = 1
and
EZ1 eαZ1 < ∞ for some α > 0.
As above, let τ (u) be the first passage time of Sn over level u and σ(u) the time of the last strict
ladder epoch prior to passage (thus σ(0) = 0). Set
Z(u) = (Zσ(u)+1 , . . . , Zτ (u) ).
It follows from Section 8 of [1] that for G bounded and continuous
E (u) G(Z(u), Sτ (u) − u)
Z ∞
1
eαy E G(Z(0), Sτ (0) − y); Sτ (0) > y, τ (0) < ∞ dy
→
αSτ (0)
CE(Sτ (0) e
; τ (0) < ∞) 0
(1.5)
where C = limu→∞ eαu P (τ (u) < ∞) and E (u) denotes expectation with respect to the conditional probability P (u) ( · ) = P ( · |τ (u) < ∞). This result describes the limit of the conditioned
process from the time of the last strict ladder epoch prior to first passage over a high level, up
until the time of first passage. From it, the limiting distribution of several quantities related to
first passage, such as those in (1.3), may be found in the random walk setting.
As it stands the formulation in (1.5) makes no sense for a general L´evy process. To even apply
to the compound Poisson model, the most popular risk model, some reformulation is needed.
2
In (1.3) and (1.4), it is assumed that X1 has a non-lattice distribution. Similar results hold in the lattice
case if the limit is taken through points in the lattice span, but to avoid repetition we will henceforth make the
non-lattice assumption.
3
We can find no reference for these results in the general L´evy model.
3
Furthermore, to prove (1.5), Asmussen derives a renewal equation by considering the two cases
τ (0) = τ (u) and τ (0) < τ (u). This is a standard renewal theoretic device which has no hope of
success in the general L´evy insurance risk model since typically τ (0) = 0. To circumvent these
problems we apply excursion arguments that enable us to derive not only new results in the
Cram´er-Lundberg setting, but also to provide a unified approach to the general L´evy insurance
risk process under (1.1) and (1.2), thus explaining the striking similarity between results in the
two different settings.
We conclude the introduction with an outline of the paper. Sections 2 and 3 introduce
the fluctuation theory and excursion theory needed for the remainder of the paper. Section
4 outlines the general approach. This is carried out in Section 5 under the Cram´er-Lundberg
condition and in Section 6 under the convolution equivalent condition. The special case where
(0, ∞) is irregular is then briefly discussed in Section 7. The joint limiting distribution of many
variables of interest as well as an example of a Gerber-Shiu EDPF is given in Section 8. Finally
the appendix contains a result in the case that X is compound Poisson which, as is often the
case, needs to be treated separately. Throughout C, C1 , C2 . . . will denote constants whose value
is unimportant and may change from one usage to the next.
2
Fluctuation Variables
Let Lt , t ≥ 0, denote the local time at 0 of the process X − X, normalized by
Z ∞
E
e−t dLt = 1.
(2.1)
0
Here we are following Chaumont [6] in our choice of normalization. When 0 is regular for
[0, ∞), L is the unique increasing, continuous, additive functional satisfying (2.1) such that the
support of the measure dLt is the closure of the set {t : X t = Xt } and L0 = 0 a.s. If 0 is
irregular for [0, ∞), the set {s : Xs > X s− } of times of strict new maxima of X is discrete. Let
Rt = |{s ∈ (0, t] : Xs > X s− }| and define the local time of X − X at 0 by
Lt =
Rt
X
ek
(2.2)
k=0
where ek , k = 0, 1, . . . is an independent sequence of i.i.d. exponentially distributed random
variables with parameter
1
p=
.
(2.3)
−τ
(0)
1 − E(e
; τ (0) < ∞)
Note that in this latter case, dLt has an atom of mass e0 at t = 0 and thus the choice of p
ensures that (2.1) holds. Let L−1 be the right continuous inverse of L and Hs = X L−1
. Then
s
(L−1
,
H
)
is
the
(weakly)
ascending
bivariate
ladder
process.
s s≥0
s
We will also need to consider the strictly ascending bivariate ladder process, which requires
a slightly different definition for L. Specifically when 0 is regular for (0, ∞), L is the unique
increasing, continuous, additive functional as above. When 0 is irregular for (0, ∞), L is defined
by (2.2). Thus the only difference is for the compound Poisson process, where the L switches
from being continuous to being given by (2.2). In this case, i.e. when X is compound Poisson,
4
the normalisation (2.1) still holds, but now the support of the measure dLt is the set of times
of strict maxima of X, as opposed to the closure of the set {t : X t = Xt }. L−1 and H are
then defined as before in terms of L and X, and (L−1
s , Hs )s≥0 is the strictly ascending bivariate
ladder process. See [3], [8] and particularly Chapter 6 of [20].
In the following paragraph, (L−1
s , Hs )s≥0 can be either the weakly ascending or strictly
ascending bivariate ladder process. When Xt → −∞ a.s., L∞ has an exponential distribution
with some parameter q > 0, and the defective process (L−1 , H) may be obtained from a nondefective process (L−1 , H) by independent exponential killing at rate q > 0. We denote the
bivariate L´evy measure of (L−1 , H) by ΠL−1 ,H (·, ·). The Laplace exponent κ(a, b) of (L−1 , H),
defined by
−1
−1
e−κ(a,b) = E(e−aL1 −bH1 ; 1 < L∞ ) = e−q Ee−aL1 −bH1
for values of a, b ∈ R for which the expectation is finite, may be written
Z Z
κ(a, b) = q + dL−1 a + dH b +
1 − e−at−bx ΠL−1 ,H (dt, dx),
t≥0
x≥0
where dL−1 ≥ 0 and dH ≥ 0 are drift constants. Observe that the normalisation (2.1) results in
κ(1, 0) = 1. The bivariate renewal function of (L−1 , H), given by
Z ∞
V (t, x) =
e−qs P (L−1
s ≤ t, Hs ≤ x)ds
0
Z ∞
=
P (L−1
s ≤ t, Hs ≤ x; s < L∞ )ds,
0
has Laplace transform
Z Z
e
t≥0
−at−bx
Z
∞
V (dt, dx) =
x≥0
−1
e−qs E(e−aLs
−bHs
)ds =
s=0
1
κ(a, b)
(2.4)
provided κ(a, b) > 0. We will also frequently consider the renewal function of H, defined on R
by
Z ∞
V (x) =
e−qs P (Hs ≤ x)ds = lim V (t, x).
t→∞
0
Observe that V (x) = 0 for x < 0, while V (0) > 0 iff H is compound Poisson. Also
V (∞) := lim V (x) = q −1 .
x→∞
(2.5)
From this point on we will take (L−1 , H) to be the strictly ascending bivariate ladder probt = −Xt , t ≥ 0 denote the dual process, and (L
b −1 , H)
b the weakly ascending
cesses of X. Let X
b This is opposite to the usual convention, and means some care
bivariate ladder processes of X.
needs to be taken when citing the literature in the compound Poisson case. This choice is made
because it leads to more natural results and a direct analogue of (1.5) when X is compound Poisb will be denoted in the obvious way, for example τb(0), pb, Π b−1 b ,
son. All quantities relating to X
L ,H
κ
b and Vb . With these choices of bivariate ladder processes, together with the normalisation of
the local times implying κ(1, 0) = κ
b(1, 0) = 1, the Wiener-Hopf factorisation takes the form
κ(a, −ib)b
κ(a, ib) = a − ΨX (b), a ≥ 0, b ∈ R.
5
(2.6)
If α > 0 and EeαX1 < ∞, then by analytically extending κ, κ
b and ΨX , it follows from (2.6) that
κ(a, −z)b
κ(a, z) = a − ΨX (−iz) for a ≥ 0, 0 ≤ <z ≤ α.
If further EeαX1 < 1, for example when (1.2) holds, then ΨX (−iα) < 0 and since trivially
κ
b(a, α) > 0, we have
κ(a, −α) > 0 for a ≥ 0.
(2.7)
The following result will be needed in Section 3.
Proposition 2.1 If X is not compound Poisson, then for s ≥ 0, x ≥ 0
dL−1 V (ds, dx) = P (X s = Xs ∈ dx)ds.
Proof For any s ≥ 0, x ≥ 0
Z
dL−1
0
∞
I(L−1
t
Z
L∞
≤ s, Ht ≤ x, t < L∞ )dt = dL−1
Z0 s
I(L−1
t ≤ s, X L−1 ≤ x)dt
t
I(X r ≤ x)dLr
= dL−1
0
Z s
=
I(X r ≤ x)I(X r = Xr )dr
0
by Theorem 6.8 and Corollary 6.11 of Kyprianou [20], which apply since X is not compound
Poisson, (Kyprianou’s (L−1 , H) is the weakly ascending ladder process in which case the result
holds in the compound Poisson case also). Taking expectations completes the proof. u
t
3
Excursions
Let D be the Skorohod space of functions w : [0, ∞) → R which are right continuous with left
limits, equipped with the usual Skorohod topology. The lifetime of a path w ∈ D is defined
to be ζ(w) = inf{t ≥ 0 : w(s) = w(t) for all s ≥ t}, where we adopt the standard convention
that inf ∅ = ∞. If ζ(w) = ∞ then w(ζ) is taken to be some cemetery point. Thus, for
example, if w(ζ) > y for some y then necessarily ζ < ∞. The jump in w at time t is given by
∆wt = w(t) − w(t−). We assume that X is given as the coordinate process on D, and the usual
right continuous completion of the filtration generated by the coordinate maps will be denoted
{Ft }t≥0 . Pz is the probability measure induced on F = ∨t≥0 Ft by the L´evy process starting at
z ∈ R, and we usually write P for P0 .
−1
Let G = {L−1
> 0} and D = {L−1
: ∆L−1
> 0} denote the set of left and right
t− : ∆Lt
t
t
endpoints of excursion intervals of X − X. For g ∈ G, let d ∈ D be the corresponding right
endpoint of the excursion interval, (d = ∞ if the excursion has infinite lifetime), and set
g (t) = X(g+t)∧d − X g , t ≥ 0.
6
Note, these are X-excursions in the terminology of Greenwood and Pitman, see Remark 4.6 of
[13], as opposed to X − X excursions. Let
E = {w ∈ D : w(t) ≤ 0 for all 0 ≤ t < ζ(w)}
and F E the restriction of F to E. Then g ∈ E for each g ∈ G, and ζ(g ) = d − g. The
characteristic measure on (E, F E ) of the X-excursions will be denoted n.
For fixed u > 0 let
(
g
if τ (u) = d for some excursion interval (g, d)
Gτ (u)− =
τ (u) else.
If X is compound Poisson, then Gτ (u)− is the first time of the last maximum prior to τ (u). When
X is not compound Poisson, Gτ (u)− is the left limit at τ (u) of Gt = sup{s ≤ t : Xs = X s },
explaining the reason behind this common notation.
Set
Yu (t) = X(Gτ (u)− +t)∧τ (u) − X τ (u)− , t ≥ 0.
Clearly ζ(Yu ) = τ (u) − Gτ (u)− . If ζ(Yu ) > 0 then Gτ (u)− ∈ G, X τ (u)− = X Gτ (u)− and Yu ∈ E.
If in addition τ (u) < ∞, equivalently ζ(Yu ) < ∞, then Yu is the excursion which leads to first
passage over level u. To cover the possibility that first passage does not occur at the end of an
excursion interval, introduce
E = E ∪ {x : x ≥ 0}
where x ∈ D is the path which is identically x. On the event ζ(Yu ) = 0, that is Gτ (u)− = τ (u),
either X creeps over u in which case Yu = 0, or X jumps over u from its current strict maximum
in which case Yu = x where x = ∆Xτ (u) > 0 is the size of the jump at time τ (u). In all cases,
Yu ∈ E.
Let F E be the restriction of F to E. We extend n trivially to a measure on F E by setting
n(E \ E) = 0. Let n
˜ denote the measure on F E obtained by pushing forward the measure Π+
X
˜ (E) = 0, and for
with the mapping x → x, where Π+
X is the restriction of ΠX to (0, ∞). Thus n
any Borel set B, n
˜ ({x : x ∈ B}) = Π+
˜.
X (B). Finally let n = n + dL−1 n
For u > 0, s ≥ 0, y ≥ 0, ∈ E define
Qu (ds, dy, d) = P (Gτ (u)− ∈ ds, u − X τ (u)− ∈ dy, Yu ∈ d, τ (u) < ∞).
(3.1)
The following result may be viewed as an extension of the quintuple law of Doney and Kyprianou
[9]; see the discussed following Proposition 3.2.
Theorem 3.1 For u > 0, s ≥ 0, y ≥ 0, ∈ E,
Qu (ds, dy, d) = I(y ≤ u)V (ds, u − dy)n(d, (ζ) > y) + dH
where
∂−
∂− u
denotes left derivative and
with the increasing function
∂−
∂− u V
∂−
∂− u V
∂−
V (ds, u)δ0 (dy)δ0 (d),
∂− u
(3.2)
(ds, u) is the Lebesgue -Stieltges measure associated
(s, u).
7
Proof There are three possible ways in which X can first cross level u; by a jump at the end
of an excursion interval, by a jump from a current strict maximum or by creeping. We consider
each in turn.
Let f, h and j be non-negative bounded continuous functions. Since X L−1 is left continuous,
t−
we may apply the master formula of excursion theory, Corollary IV.11 of [3], to obtain
E[f (Gτ (u)− )h(u − X τ (u)− )j(Yu ); Xτ (u) > u, Gτ (u)− < τ (u) < ∞]
X
=E
f (g)h(u − X g )j(g )I(X g ≤ u, g (ζ) > u − X g )
g∈G
∞
Z
Z
dLt f (t)h(u − X t )j()I(X t ≤ u, (ζ) > u − X t )n(d)
=E
E
0
Z L∞
Z
f (L−1
j()n(d)E
=
r )h(u − Hr )I(Hr ≤ u, (ζ) > u − Hr )dr
0
E
Z
Z
Z
j()n(d)
f (s)h(u − y)I((ζ) > u − y)V (ds, dy)
=
E
s≥0 0≤y≤u
Z
Z
Z
j()n(d)
f (s)h(y)I((ζ) > y)V (ds, u − dy)
=
s≥0 0≤y≤u
E
Z
Z
Z
=
f (s)h(y)j()V (ds, u − dy)n(d, (ζ) > y).
s≥0
0≤y≤u
(3.3)
E
Next, define ˜j : [0, ∞) → R by ˜j(x) = j(x). Then, since Yu (t) = ∆Xτ (u) all t ≥ 0 on
{Xτ (u) > u, Gτ (u)− = τ (u) < ∞}, we have by the compensation formula
E[f (Gτ (u)− )h(u − X τ (u)− )j(Yu ); Xτ (u) > u, Gτ (u)− = τ (u) < ∞]
= E[f (Gτ (u)− )h(u − X τ (u)− )˜j(∆Xτ (u) ); Xτ (u) > u, Gτ (u)− = τ (u) < ∞]
X
=E
f (s)h(u − X s− )˜j(∆Xs )I(Xs− = X s− ≤ u, ∆Xs > u − X s− )
Zs ∞
Z
f (s)h(u − X s )I(Xs = X s ≤ u)ds ˜j(ξ)I(ξ > u − X s )ΠX (dξ)
=E
0
ξ
Z ∞
Z
Z
=
f (s)
h(y) ˜j(ξ)I(ξ > y)ΠX (dξ)P (X s = Xs ∈ u − dy)ds
0
0≤y≤u
ξ
Z
Z
Z
=
f (s)
h(y) j()˜
n(d, (ζ) > y)P (X s = Xs ∈ u − dy)ds
E
s≥0
0≤y≤u
Z
Z
Z
=
f (s)h(y)j()˜
n(d, (ζ) > y)dL−1 V (ds, u − dy),
s≥0
0≤y≤u
(3.4)
E
where the final equality follows Proposition 2.1 if X is not compound Poisson. If X is compound
Poisson the first and last formulas of (3.4) are equal because P (Gτ (u)− = τ (u)) = 0 and dL−1 = 0
(recall that (L−1 , H) is the strictly ascending ladder process).
Finally
E[f (Gτ (u)− )h(u − X τ (u)− )j(Yu ); Xτ (u) = u, τ (u) < ∞]
= h(0)j(0)E[f (τ (u)); Xτ (u) = u, τ (u) < ∞]
Z
∂−
= dH h(0)j(0) f (s)
V (ds, u)
∂
−u
s
8
(3.5)
if dH > 0, by (3.5) of [15]. If dH = 0 then X does not creep, and so P (Xτ (u) = u) = 0. Thus
(3.5) holds in this case also. Combining the three terms (3.3), (3.4) and (3.5) gives the result.
u
t
The next two results will be used to calculate limits such as those of the form (1.3) and (1.4).
Proposition 3.1 For t ≥ 0, z ≥ 0
Vb (dt, dz) = n((t) ∈ −dz, ζ > t)dt + dL−1 δ(0,0) (dt, dz).
(3.6)
Proof If X is not compound Poisson nor |X| a subordinator, (3.6) follows from (5.9) of [6]
b
applied to the dual process X.
If X is a subordinator, but not compound Poisson, then n is the zero measure and dL−1 = 1
b −1 , H)
b remains at (0, 0) for an exponential amount of time with
by (2.1). On the other hand (L
parameter pb = 1, by (2.3), and is then killed. Hence (3.6) holds.
b −1 , H
b t ) = (t, X
bt ) and so Vb (dt, dz) = P (Xt ∈ −dz)dt. On
If −X is a subordinator, then (L
t
the other hand, n is proportional to the first, and only, excursion, so n((t) ∈ −dz, ζ > t) =
cP (Xt ∈ −dz) for some c > 0. Since dL−1 = 0, we thus only need check that |n| = 1. But
G = {0}, and so by the master formula
Z ∞
Z
X
−g
−t
1=E
e =E
e dLt n(d) = |n|.
g∈G
E
0
To complete the proof it thus remains to prove (3.6) when X is compound Poisson. We defer
this case to the appendix. u
t
For notational convenience we define (0−) = 0 for ∈ E . Thus, in particular, x(ζ−) = 0
since ζ(x) = 0. Note also that x(ζ) = x.
Proposition 3.2 For t ≥ 0, z ≥ 0 and x > 0
n(ζ ∈ dt, (ζ−) ∈ −dz, (ζ) ∈ dx) = Vb (dt, dz)ΠX (z + dx).
(3.7)
Proof First consider the case t > 0, z ≥ 0 and x > 0. For any 0 < s < t, using the Markov
property of the excursion measure n, we have
n(ζ ∈ dt,(ζ−) ∈ −dz, (ζ) ∈ dx)
Z
=
n((s) ∈ −dy, ζ > s)P−y (τ (0) ∈ dt − s, Xτ (0)− ∈ −dz, Xτ (0) ∈ dx)
y≥0
Z
=
n((s) ∈ −dy, ζ > s)P (τ (y) ∈ dt − s, Xτ (y)− ∈ y − dz, Xτ (y) ∈ y + dx).
y≥0
By the compensation formula, for any positive bounded Borel function f ,
E[f (τ (y));Xτ (y)− ∈ y − dz, Xτ (y) ∈ y + dx]
X
= E[
f (r); X r− ≤ y, Xr− ∈ y − dz, Xr ∈ y + dx]
Z
r
∞
f (r)P (X r− ≤ y, Xr− ∈ y − dz)drΠX (z + dx).
=
r=0
9
Thus
P (τ (y) ∈ dt − s, Xτ (y)− ∈ y − dz,Xτ (y) ∈ y + dx)
= P (X t−s ≤ y, Xt−s ∈ y − dz)dtΠX (z + dx).
Hence
n(ζ ∈ dt,(ζ−) ∈ −dz, (ζ) ∈ dx)
Z
n((s) ∈ −dy, ζ > s)P (X t−s ≤ y, Xt−s ∈ y − dz)dtΠX (z + dx)
=
y≥0
= n((t) ∈ −dz, ζ > t)dtΠX (z + dx)
= Vb (dt, dz)ΠX (z + dx)
by (3.6).
Finally if t = 0, then for any positive bounded Borel function
Z
f (t, z, x)n(ζ ∈ dt, (ζ−) ∈ −dz, (ζ) ∈ dx)
{(t,z,x):t=0,z≥0,x>0}
Z
= dL−1
f (0, 0, x)˜
n((ζ) ∈ dx)
Zx>0
= dL−1
f (0, 0, x)Π+
X (dx)
x>0
Z
=
f (t, z, x)Vb (dt, dz)ΠX (z + dx)
{(t,z,x):t=0,z≥0,x>0}
by Proposition 3.1.
u
t
As mentioned earlier, Theorem 3.1 may be viewed as an extension of the quintuple law of
Doney and Kyprianou [9]. To see this observe that from Theorem 3.1 and Proposition 3.2, for
u > 0, s ≥ 0, t ≥ 0, 0 ≤ y ≤ u ∧ z, x ≥ 0,
P (Gτ (u)− ∈ ds, u − X τ (u)− ∈ dy, τ (u) − Gτ (u)− ∈ dt, u − Xτ (u)− ∈ dz, Xτ (u) − u ∈ dx)
= P (Gτ (u)− ∈ ds, u − X τ (u)− ∈ dy, ζ(Yu ) ∈ dt, Yu (ζ−) ∈ y − dz, Yu (ζ) ∈ y + dx)
= I(x > 0)V (ds, u − dy)n(ζ ∈ dt, (ζ−) ∈ y − dz, (ζ) ∈ y + dx)
∂−
+ dH
V (ds, u)δ(0,0,0,0) (dt, dx, dz, dy)
∂− u
∂−
= I(x > 0)V (ds, u − dy)Vb (dt, dz − y)ΠX (z + dx) + dH
V (ds, u)δ(0,0,0,0) (dt, dx, dz, dy).
∂− u
(3.8)
When X is not compound Poisson, this is the statement of Theorem 3 of [9] with the addition of
the term due to creeping; see Theorem 3.2 of [15]. When X is compound Poisson the quintuple
law, though not explicitly stated in [9], remains true and can be found in [10]. In that case the
result is slightly different from (3.8) since the definitions of Gτ (u)− , V and Vb then differ due to
the choice of (L−1 , H) as the weakly ascending ladder process in [9] and [10]. Thus we point out
that Vigon’s ´equation amicale invers´ee, [24],
Z
ΠH (dx) =
Vb (dz)ΠX (z + dx), x > 0,
(3.9)
z≥0
10
and Doney and Kyprianou’s extension
Z
ΠL−1 ,H (dt, dx) =
Vb (dt, dv)ΠX (v + dx), x > 0, t ≥ 0,
(3.10)
v≥0
continue to hold with our choice of (L−1 , H) as the strongly ascending ladder process. The proof
of (3.10) is analogous to the argument in Corollary 6 of [9], using (3.8) instead of Doney and
Kyprianou’s quintuple law, and (3.9) follows immediately from (3.10).
Corollary 3.1 For x > 0
n((ζ) ∈ dx) = ΠH (dx).
(3.11)
Proof Integrating out in (3.7)
Z
n((ζ) ∈ dx) =
Vb (dz)ΠX (z + dx) = ΠH (dx)
z≥0
by (3.9).
4
u
t
A Unified Approach
In this section we outline a unified approach to proving results of the form (1.3) and (1.4). The
details will be carried out in subsequent sections. We assume from now on that Xt → −∞.
We will be interested in a marginalised version of (3.1) conditional on τ (u) < ∞. Thus for
u > 0, y ≥ 0 and ∈ E define
Q(u) (dy, d) = P (u) (u − X τ (u)− ∈ dy, Yu ∈ d),
where recall P (u) ( · ) = P ( · |τ (u) < ∞). Setting V (u) = V (∞) − V (u), and using the PollacekKhintchine formula
P (τ (u) < ∞) = qV (u),
(4.1)
see Proposition 2.5 of [19], it follows from (3.2) that
Q(u) (dy, d) = I(y ≤ u)
V (u − dy)
V 0 (u)
n(d, (ζ) > y) + dH
δ0 (dy)δ0 (d).
qV (u)
qV (u)
Here we have used the fact that V is differentiable when dH > 0, see Theorem VI.19 of [3]. Now
under either the Cram´er-Lundberg condition (1.1) or the convolution equivalent condition (1.2),
V (u − dy) v α αy
V 0 (u)
α
−→ e dy and dH
→ dH as u → ∞;
q
q
qV (u)
qV (u)
(4.2)
see Sections 5 and 6 below. This suggests that under suitable conditions on F : [0, ∞) × E → R,
Z
Z
F (y, )Q(u) (dy, d) →
F (y, )Q(∞) (dy, d)
(4.3)
[0,∞)×E
[0,∞)×E
11
where
Q(∞) (dy, d) =
α αy
α
e dy n(d, (ζ) > y) + dH δ0 (dy)δ0 (d),
q
q
(4.4)
thus yielding a limiting description of the process as ruin approaches. Observe that (4.3) may
be rewritten as
Z
Z
α αy
α
(u)
E F (u − X τ (u)− , Yu ) →
e dy F (y, ) n(d, (ζ) > y) + dH F (0, 0),
(4.5)
q
E
[0,∞) q
indicating how the limiting behaviour of many functionals of the process related to ruin may be
calculated.
Conditions under which (4.3) holds, will be stated in terms of conditions on the function
Z
(4.6)
h(y) =
F (y, ) n(d, (ζ) > y).
E
Since, by (4.2), (4.3) is equivalent to
Z u
Z ∞
V (u − dy)
αeαy
h(y)
h(y)
→
dy,
q
qV (u)
0
0
(4.7)
it will be of interest to know when h is continuous a.e. with respect to Lebesgue measure m. We
conclude this section with such a result.
The most obvious setting in which the condition on By below holds, is when F is continuous
in y for each . In particular it holds when F is jointly continuous. The boundedness condition
holds when F is bounded, but applies to certain unbounded functions. This will be useful later
when investigating convergence of the mgf of the overshoot.
Proposition 4.1 Assume EeαX1 < ∞, and F : [0, ∞) × E → R is product measurable, and
F (y, )e−α((ζ)−y) I((ζ) > y) is bounded in (y, ). Further assume n(Byc ) = 0 for a.e. y with
respect to Lebesgue measure m, where
By = { : F (· , ) is continuous at y}.
Then h is continuous a.e. m.
Proof From (4.6),
Z
h(z) =
fz ()n(d)
E
where
fz () = F (z, )I((ζ) > z).
Fix y > 0 and assume |z − y| < y/2. Then for some constant C, independent of z and ,
|fz ()| ≤ Ceα((ζ)−z) I((ζ) > z) ≤ Ceα((ζ)−y/2) I((ζ) > y/2).
(4.8)
By (3.11),
Z
eα((ζ)−y/2) I((ζ) > y/2)n(d) =
E
Z
eα(x−y/2) ΠH (dx),
x>y/2
and since EeαX1 < ∞, this last integral is finite by Proposition 7.1 of [14].
12
(4.9)
c is countable and
Now let A = {y : n(Byc ) = 0} and CH = {y : ΠH ({y}) = 0}. Then CH
n((ζ) = y) = ΠH ({y}) = 0, if y ∈ CH .
Thus if y > 0, y ∈ A∩CH and z → y, then fz () → fy () a.e. n. Hence by (4.8) and (4.9), we can
c ) = 0
apply dominated convergence to obtain continuity of h at such y. Since m(Ac ) = m(CH
this completes the proof. u
t
Verification of the general limiting result (4.3) for the Cram´er-Lundberg and convolution
equivalent cases will be given separately in the next two sections.
5
Cram´
er-Lundberg Condition
In studying the process X under the Cram´er-Lundberg condition (1.1), it is useful to introduce
the Esscher transform. Thus let P ∗ be the measure on F defined by
dP ∗ = eαXt dP on Ft ,
for all t ≥ 0. Then X under P ∗ is the Esscher transform of X. It is itself a L´evy process with
E ∗ X > 0; see Section 3.3 of Kyprianou [20].
When (1.1) holds, Bertoin and Doney [4] extended the classical Cram´er-Lundberg estimate
for ruin to a general L´evy process; assume X is nonlattice in the case that X is compound
Poisson, then
q
,
(5.1)
lim eαu P (τ (u) < ∞) =
u→∞
αm∗
where m∗ = E ∗ H1 . Under P ∗ , H is a non-defective subordinator with drift d∗H and L´evy measure
Π∗H given by
d∗H = dH and Π∗H (dx) = eαx ΠH (dx),
and so
E ∗ H1 = d∗H +
∞
Z
xΠ∗H (dx) = dH +
Z
0
∞
xeαx ΠH (dx).
(5.2)
0
Combining (5.1) with the Pollacek-Khintchine formula (4.1), we obtain
lim
u→∞
V (u − y)
= eαy ,
V (u)
(5.3)
and hence the first result in (4.2) holds as claimed. The second result in (4.2) is a consequence of
(4.15) in [17] for example. Since V (ln x) is regularly varying, the convergence in (5.3) is uniform
on compact.
Let
Z ∞
Z
∗
∗
V (x) :=
P (Hs ≤ x)ds =
eαy V (dy),
(5.4)
0
y≤x
see [4] or Section 7.2 of [20]. Then V ∗ is a renewal function, and so by the Key Renewal Theorem
Z ∞
Z u
1
∗
g(y)dy,
(5.5)
g(y)V (u − dy) → ∗
m 0
0
13
if g ≥ 0 is directly Riemann integrable on [0, ∞), We will make frequent use of the following
criterion for direct Riemann integrability. If g ≥ 0 is continuous a.e. and dominated by a
bounded, nonincreasing integrable function on [0, ∞), then g is directly Riemann integrable on
[0, ∞). See Chapter V.4 of [2] for information about the Key Renewal Theorem and direct
Riemann integrability.
The function to which we would like to apply (5.5), namely eαy h(y) where h is given by
(4.6), is typically unbounded at 0. To overcome this difficulty, we use the following result.
Proposition 5.1 If (1.1) holds, h ≥ 0, eαy h(y)1[ε,∞) (y) is directly Riemann integrable for every
ε > 0, and
Z
V (u − dy)
lim sup lim sup
h(y)
= 0,
(5.6)
V (u)
u→∞
ε→0
[0,ε)
then (4.3) holds.
Proof By (4.1) and (5.4),
V (u − dy)
eαy V ∗ (u − dy)
.
= αu
e P (τ (u) < ∞)
qV (u)
Hence by (5.1), and (5.5) applied to eαy h(y)1[ε,∞) (y),
Z
ε
u
V (u − dy)
h(y)
=
qV (u)
u
Z
V ∗ (u − dy)
→
e h(y) αu
e P (τ (u) < ∞)
αy
ε
Z
ε
∞
α
eαy h(y) dy
q
as u → ∞.
Combined with (5.6) and monotone convergence, this proves (4.7), which in turn is equivalent
to (4.3). u
t
The next result gives conditions on F which ensure that h satisfies the hypotheses of Proposition 5.1.
Proposition 5.2 Assume F ≥ 0 satisfies the hypotheses of Proposition 4.1, and further that
EX1 eαX1 < ∞. Then h satisfies the hypotheses of Proposition 5.1.
Proof For any y ≥ 0,
Z
αy
e h(y) =
eαy F (y, ) n(d, (ζ) > y)
ZE
I((ζ) > y)e−α((ζ)−y) F (y, ) eα(ζ) n(d)
EZ
≤C
eαx ΠH (dx).
=
(5.7)
(y,∞)
by (3.11). Further
Z
Z
Z
αx
e ΠH (dx)dy =
y≥0
x>y
xeαx ΠH (dx) < ∞
x≥0
by an argument analogous to Proposition 7.1 of [14]. Thus eαy h(y) is dominated by a nonincreasing integrable function on [0, ∞) and hence, for each ε > 0, eαy h(y)1[ε,∞) (y) is dominated
14
by a bounded nonincreasing integrable function on [0, ∞). Additionally, by Proposition 4.1, h
is continuous a.e. with respect to Lebesgue measure. Consequently, eαy h(y)1[ε,∞) (y) is directly
Riemann integrable for every ε > 0.
Next, by the uniform convergence on compact in (5.3), for any x ≥ 0,
Z
V (u − dy)
→ eαx − 1
(5.8)
V
(u)
[0,x)
as u → ∞. Thus by (5.7) and (5.8), if ε < 1 and u is sufficiently large
Z
V (u − dy)
h(y)
V (u)
[0,ε)
Z
Z
V (u − dy)
eαx ΠH (dx)
≤C
V (u)
[0,ε)
(y,∞)
Z
Z
Z
Z
V (u − dy)
V (u − dy)
eαx ΠH (dx)
=C
+C
eαx ΠH (dx)
V
(u)
V (u)
[0,ε)
[0,x)
[ε,∞)
[0,ε)
Z
Z
≤ C1
xeαx ΠH (dx) + C1 ε
eαx ΠH (dx)
[0,ε)
[ε,∞)
Z
Z
α
α
≤ C1 e
xΠH (dx) + C1 e εΠH (ε) + C1 ε
eαx ΠH (dx).
[0,ε)
[1,∞)
R
R
Now [0,1) xΠH (dx) < ∞, since H is a subordinator, thus [0,ε) xΠH (dx) → 0 and εΠH (ε) → 0 as
R
ε → 0. Combined with x≥1 eαx ΠH (dx) < ∞, this shows that the final expression approaches
0 as ε → 0. u
t
As a consequence we have
Theorem 5.1 If (1.1) holds, and F ≥ 0 satisfies the hypotheses for Proposition 4.1, then (4.3),
equivalently (4.5), holds. In particular
w
Q(u) (dy, d) −→ Q(∞) (dy, d).
Proof This follows immediately from Proposition 5.1 and Proposition 5.2.
u
t
The convergence in Theorem 5.1 may alternatively be expressed in terms of the overshoot
Xτ (u) − u rather than the undershoot of the maximum u − X τ (u)− .
Theorem 5.2 Assume G : E × [0, ∞) → [0, ∞) is product measurable, e−αx G(, x) is bounded
in (, x) and G(, · ) is continuous for a.e. (n). Then under (1.1),
E (u) G(Yu , Xτ (u) − u)
Z
Z
α αy
α
→
e dy
G(, x) n(d, (ζ) ∈ y + dx) + dH G(0, 0).
q
[0,∞) q
E×(0,∞)
15
(5.9)
Proof Let F (y, ) = G(, (ζ) − y)I((ζ) ≥ y). Then F satisfies the conditions of Theorem 5.1,
and
G(Yu , Xτ (u) − u) = F (u − X τ (u)− , Yu )
on {τ (u) < ∞}. Consequently (4.5) yields
Z
Z
α αy
α
(u)
e dy G(, (ζ) − y) n(d, (ζ) > y) + dH G(0, 0)
E G(Yu , Xτ (u) − u) →
q
E
[0,∞) q
Z
Z
α αy
α
=
G(, x) n(d, (ζ) ∈ y + dx) + dH G(0, 0)
e dy
q
q
[0,∞)
E×(0,∞)
completing the proof.
6
u
t
Convolution Equivalent Condition
We begin with the definition of the class S (α) . As mentioned previously, we will restrict ourselves
to the nonlattice case, with the understanding that the alternative can be handled by obvious
modifications. A distribution F on [0, ∞) with tail F = 1 − F belongs to the class S (α) , α > 0,
if F (u) > 0 for all u > 0,
lim
u→∞
F (u + x)
= e−αx , for x ∈ (−∞, ∞),
F (u)
(6.1)
F 2∗ (u)
exists and is finite,
u→∞ F (u)
(6.2)
and
lim
where F 2∗ = F ∗ F . Distributions in S (α) are called convolution equivalent
index α. When
R with
αx
(α)
F
F
F ∈ S , the limit in (6.2) must be of the form 2δα , where δα := [0,∞) e F (dx) is finite.
Much is known about the properties of such distributions, see for example [7], [11], [18], [21],
[22] and [25]. In particular, the class is closed under tail equivalence, that is, if F ∈ S (α) and G
is a distribution function for which
G(u)
= c for some c ∈ (0, ∞),
u→∞ F (u)
lim
then G ∈ S (α) .
The convolution equivalent model (1.2) was introduced by Kl¨
uppelberg, Kyprianou and
4
θX
1
Maller [19]. As noted earlier, when (1.2) holds, Ee
= ∞ for all θ > α, so (1.1) must fail.
Nevertheless (4.2) continues to hold under (1.2). This is because by (2.5), F (u) = qV (u) is a
distribution function, and combining several results in [19], see (4) of [9], together with closure
of S (α) under tail equivalence, it follows that F ∈ S (α) . Hence the first condition in (4.2) follows
from (6.1). The second condition, which corresponds to asymptotic creeping, again follows from
results in [19] and can also be found in [9].
We begin with a general result about convolution equivalent distributions.
+
(α)
In [19], (1.2) is stated in terms of Π+
. This is equivalent to X1+ ∈ S (α) by
X (· ∩ [1, ∞))/ΠX ([1, ∞)) ∈ S
Watanabe [25].
4
16
Lemma 6.1 If F ∈ S (α) , g ≥ 0 is continuous a.e.(Lebesgue) and g(y)/F (y) → L as y → ∞,
then
Z ∞
Z ∞
Z
F (u − dy)
αy
eαy F (dy) as u → ∞.
g(y)αe dy + L
g(y)
→
F (u)
0
0
0≤y≤u
Proof Fix K ∈ (0, ∞) and write
Z
Z
Z
Z
F (u − dy)
F (u − dy)
g(y)
g(y)
+
=
+
F (u)
F (u)
0≤y≤u
u−K≤y≤u
K<y<u−K
0≤y≤K
(6.3)
= I + II + III.
By vague convergence
Z
K
g(y)αeαy dy.
I→
0
Next
Z
g(u − y)
III =
0≤y≤K
F (dy)
=
F (u)
Z
0≤y≤K
g(u − y) F (u − y)
F (dy).
F (u − y) F (u)
For large u, the integrand is bounded by 2LeαK and converges to Leαy , thus by bounded
convergence
Z K
III → L
eαy F (dy).
0
Finally
g(y)
lim sup lim sup II ≤ lim sup lim sup sup
u→∞
u→∞ y≥K F (y)
K→∞
K→∞
Z
F (y)
K<y<u−K
F (u − dy)
=0
F (u)
by Lemma 7.1 of [19]. Thus the result follows by letting u → ∞ and then K → ∞ in (6.3).
u
t
We now turn to conditions under which (4.3) holds, in terms of h given by (4.6).
Proposition 6.1 If (1.2) holds, h ≥ 0 is continuous a.e. (Lebesgue) and h(y)/V (y) → 0 as
y → ∞, then (4.3) holds. More generally if h(y)/V (y) → L, then an extra term needs to be
added to the RHS of (4.3), namely L/qκ(0, −α).
Proof As noted above, qV (u) is a distribution function in S (α) . Thus by Lemma 6.1,
Z
Z ∞
Z ∞
V (u − dy)
h(y)
→
h(y)αeαy dy + L
eαy V (dy).
V
(u)
0≤y≤u
0
0
Dividing through by q and using (2.4) and (2.7) gives
Z
Z ∞
V (u − dy)
αeαy
L
h(y)
→
dy +
.
h(y)
q
qκ(0, −α)
qV (u)
0≤y≤u
0
With L = 0 this is (4.7) which is equivalent to (4.3).
u
t
The next result gives conditions on F in (4.6) which ensures convergence of h(y)/V (y) as
y → ∞.
17
Proposition 6.2 If (1.2) holds and
|F (y, ) − L|I((ζ) > y) → 0 uniformly in ∈ E as y → ∞,
then h(y)/V (y) → Lκ2 (0, −α).
Proof By (4.6),
h(y) ∼ Ln((ζ) > y) = LΠH (y) ∼ Lκ2 (0, −α)V (y)
by (4.1) together with (4.4) of [19].
u
t
In contrast to the weak convergence in Theorem 5.1, when (1.2) holds we have,
Theorem 6.1 If (1.2) holds, F ≥ 0 satisfies the hypotheses of Proposition 4.1, and
F (y, )I((ζ) > y) → 0 uniformly in ∈ E as y → ∞,
(6.4)
then (4.3) holds. Condition (6.4) holds if, for example, F has compact support. In particular
v
Q(u) (dy, d) −→ Q(∞) (dy, d).
Proof This follows immediately from Propositions 6.1 and 6.2.
(6.5)
u
t
Remark 6.1 Another condition under which (6.4) holds, other than when F has compact support, is if F (y, ) = F˜ (y, )I((ζ) ≤ K) for some function F˜ and some K ≥ 0. In particular if
F˜ ≥ 0 satisfies the hypotheses of Proposition 4.1, then F satisfies all the hypotheses of Theorem
6.1.
w
The convergence in (6.5) can not be improved to −→ since from (4.4) the total mass of Q(∞)
is given by
Z
α
α
(∞)
|Q | =
eαy n(d, (ζ) > y)dy + dH
q [0,∞)×E
q
Z
α
α
=
eαy ΠH (y)dy + dH
q [0,∞)
q
(6.6)
Z
1
α
αy
=
(e − 1)ΠH (dy) + dH
q [0,∞)
q
=1−
κ(0, −α)
.
q
Under (1.1), κ(0, −α) = 0 so |Q(∞) | = 1, but under (1.2), κ(0, −α) > 0 and so |Q(∞) | < 1.
As with Theorem 5.1, the convergence in Theorem 6.1 may alternatively be expressed in
terms of the overshoot Xτ (u) − u rather than the undershoot u − X τ (u)− .
Theorem 6.2 Assume G : E × [0, ∞) → [0, ∞) is product measurable, e−αx G(, x) is bounded
in (, x) and G(, · ) is continuous for a.e. (n). Further assume that
G(, (ζ) − y)I((ζ) > y) → 0 uniformly in ∈ E as y → ∞.
18
Then under (1.2),
E (u) G(Yu , Xτ (u) − u)
Z
Z
α αy
α
G(, x) n(d, (ζ) ∈ y + dx) + dH G(0, 0).
→
e dy
q
[0,∞) q
E×(0,∞)
7
(6.7)
The Irregular Case
We briefly consider the special case of Theorems 5.2 and 6.2 where 0 is irregular for (0, ∞) for
X. In addition to covering the natural L´evy process version of Asmussen’s random walk result
(1.5), that is when X is compound Poisson, it also includes the widely studied compound Poisson
model, which recall includes a negative drift. We begin by identifying n in terms of the stopped
process X[0,τ (0)] where
X[0,τ (0)] (t) = Xt∧τ (0) , t ≥ 0.
(7.1)
Proposition 7.1 Assume 0 is irregular for (0, ∞) for X, then
P (τ (0) < ∞)n(d) = |ΠH |P (X[0,τ (0)] ∈ d).
(7.2)
Proof By construction, or using the compensation formula as in Theorem 3.1, for some constant
c ∈ (0, ∞),
n(d) = n(d) = cP (X[0,τ (0)] ∈ d).
(7.3)
Since P (Xτ (0) = 0, τ (0) < ∞) = 0, this implies
cP (τ (0) < ∞) = n((ζ) > 0, ζ < ∞) = |ΠH |
by (3.11). Combining (7.3) and (7.4) proves (7.2).
(7.4)
u
t
Proposition 7.2 Assume 0 is irregular for (0, ∞) for X and either, G is as in Theorem 5.2
and (1.1) holds, or G is as in Theorem 6.2 and (1.2) holds, then
E (u) G(Yu , Xτ (u) − u)
→
α|ΠH |
qP (τ (0) < ∞)
Z
∞
eαy E G(X[0,τ (0)] , Xτ (0) − y); Xτ (0) > y, τ (0) < ∞ dy.
(7.5)
0
Proof Since (1.1) or (1.2) holds, we have P (τ (0) < ∞) > 0. Thus by (7.2), if y ≥ 0, x ≥ 0,
then
n(d, (ζ) ∈ y + dx) =
|ΠH |
P (X[0,τ (0)] ∈ d, Xτ (0) ∈ y + dx, τ (0) < ∞).
P (τ (0) < ∞)
(7.6)
Since H is compound Poisson when 0 is irregular for (0, ∞), we have dH = 0. Consequently
(5.9) or (6.7) yields
Z
Z
α αy
(u)
E G(Yu , Xτ (u) − u) →
e dy
G(, x) n(d, (ζ) ∈ y + dx)
[0,∞) q
E×(0,∞)
19
which, by (7.6), is equivalent to (7.5).
u
t
Proposition 7.2 thus provides a natural L´evy process version of (1.5) under (1.2) as well as
under (1.1). We conclude this section by confirming that the constants preceding the integrals
in (1.5) and (7.5) are in agreement when (1.1) holds. By (5.1), the natural L´evy process form
of the constant in (1.5), when (1.1) holds, is
αm∗
qE(Xτ (0) e
αXτ (0)
; τ (0) < ∞)
.
To see that this agrees with the constant in (7.5), it thus suffices to prove;
Lemma 7.1 If 0 is irregular for (0, ∞) for X and (1.1) holds, then
|ΠH |E(Xτ (0) eαXτ (0) ; τ (0) < ∞) = P (τ (0) < ∞)E ∗ H1 .
Proof By (3.11) and (7.2)
|ΠH |P (Xτ (0) ∈ dx, τ (0) < ∞) = P (τ (0) < ∞)ΠH (dx),
and so
|ΠH |E(Xτ (0) e
αXτ (0)
Z
; τ (0) < ∞) = P (τ (0) < ∞)
∞
xeαx ΠH (dx).
0
Since dH = 0 when 0 is irregular for (0, ∞), the result now follows from (5.2).
8
u
t
Limiting Distributions
Theorems 5.1 and 6.1 provide a clear explanation of why many results concerning first passage
under (1.1) are strikingly similar to those under (1.2). To illustrate this further, we consider
some particular forms for F . Let f : [0, ∞)4 → [0, ∞) be a Borel function, and set
F (y, ) = f (y, (ζ) − y, y − (ζ−), ζ)I((ζ) ≥ y).
(8.1)
Then
F (u − X τ (u)− , Yu ) = f (u − X τ (u)− , Xτ (u) − u, u − Xτ (u)− , τ (u) − Gτ (u)− )
on {τ (u) < ∞}. To calculate the limit in this case we need,
Lemma 8.1 If F is of the form (8.1) then for every y ≥ 0,
Z
Z
Z
Z
F (y, )n(d, (ζ) > y) =
f (y, x, v, t)I(v ≥ y)Vb (dt, dv − y)ΠX (v + dx). (8.2)
E
x>0
v≥0
t≥0
If in addition EeαX1 < ∞, and f is jointly continuous in the first three variables and e−αx f (y, x, v, t)
is bounded, then F satisfies the hypotheses of Proposition 4.1.
20
Proof Using Proposition 3.2 in the third equality, we have
Z
F (y, )n(d, (ζ) > y)
E
Z
f (y, (ζ) − y, y − (ζ−), ζ)n(d, (ζ) > y)
=
E
Z
Z
Z
f (y, x, y + z, t)n((ζ) ∈ y + dx, (ζ−) ∈ −dz, ζ ∈ dt)
=
x>0 z≥0 t≥0
Z
Z
Z
=
f (y, x, y + z, t)Vb (dt, dz)ΠX (z + y + dx)
x>0 z≥0 t≥0
Z
Z
Z
=
f (y, x, v, t)I(v ≥ y)Vb (dt, dv − y)ΠX (v + dx).
x>0
v≥0
t≥0
which proves (8.2).
For the second statement, we only need to check n(Byc ) = 0 for a.e. y. But Byc ⊂ { : (ζ) =
t
y}, and so n(Byc ) ≤ ΠH ({y}) = 0 except for at most countably many y. u
Remark 8.1 Lemma 8.1 remains true if f is replaced by φ(y)f (y, x, v, t) where φ is bounded
and continuous a.e, since in that case n(Byc ) = 0 except when y is a point of discontinuity of φ
or ΠH ({y}) = 0.
For reference below we note that if e−αx f (y, x, v, t) is bounded then
Z
Z
Z
Z
f (y, x, v, t)eαy dyI(v ≥ y)Vb (dt, dv − y)ΠX (v + dx) = 0,
since
x=0
y≥0
v≥0
t≥0
Z
Z
Z
Z
x=0
y≥0
v≥0
(8.3)
eαy dyI(v ≥ y)Vb (dt, dv − y)ΠX (v + dx)
t≥0
Z
Z
=
eαy dyI(v ≥ y)Vb (dv − y)ΠX ({v})
y≥0 v≥0
Z
Z
b
=
V (dv)
eαy ΠX ({v + y})dy = 0.
v≥0
y≥0
We first consider the resulting limit for F of the form (8.1) in the Cram´er-Lundberg setting.
Theorem 8.1 Assume (1.1) holds and f : [0, ∞)4 → [0, ∞) is a Borel function which is jointly
continuous in the first three variables and e−αx f (y, x, v, t) is bounded. Then
E (u) f (u − X τ (u)− , Xτ (u) − u, u − Xτ (u)− , τ (u) − Gτ (u)− ) →
Z
Z
Z
Z
α
α
f (y, x, v, t) eαy dyI(v ≥ y)Vb (dt, dv − y)ΠX (v + dx) + dH f (0, 0, 0, 0).
q
q
x≥0 y≥0 v≥0 t≥0
(8.4)
In particular we have joint convergence; for y ≥ 0, x ≥ 0, v ≥ 0, t ≥ 0
P (u) (u − X τ (u)− ∈ dy, Xτ (u) − u ∈ dx, u − Xτ (u)− ∈ dv, τ (u) − Gτ (u)− ∈ dt)
α
w α
−→ eαy dyI(v ≥ y)Vb (dt, dv − y)ΠX (v + dx) + dH δ(0,0,0,0) (dx, dy, dv, dt).
q
q
21
(8.5)
Proof Define F by (8.1). Then by Lemma 8.1, F satisfies the hypotheses of Theorem 5.1, and
hence the result follows from (4.5), (8.2) and (8.3). u
t
Marginal convergence in each of the first three variables was shown in [17]. Equation (8.5)
exhibits the stronger joint convergence and includes the additional time variable τ (u) − Gτ (u)− .
Note also that in the time variable, there is no restriction on f beyond bounded, and hence the
convergence is stronger than weak convergence in this variable.
As an illustration of (8.4) we obtain, for any λ ≤ 0, η ≤ α, ρ ≤ 0 and δ ≥ 0,
E (u) eλ(u−X τ (u)− )+η(Xτ (u) −u)+ρ(u−Xτ (u)− )−δ(τ (u)−Gτ (u)− ) →
Z
Z
Z
Z
α
α (8.6)
eλy+ηx+ρv−δt eαy dyI(v ≥ y)Vb (dt, dv − y)ΠX (v + dx) + dH .
q
q
x≥0 y≥0 v≥0 t≥0
This gives the future value, at time Gτ (u)− , of a Gerber-Shiu expected discounted penalty
function (EDPF) as u → ∞. The present value is zero since τ (u) → ∞ in P (u) probability as
u → ∞. The limit can be simplified if ρ = 0. From (3.10) and (8.3), we obtain
E (u) eλ(u−X τ (u)− )+η(Xτ (u) −u)−δ(τ (u)−Gτ (u)− )
Z
Z
Z
Z
α
α
→
eλy+ηx−δt eαy dy
Vb (dt, dv − y)ΠX (v + dx) + dH
q
q
x>0 y≥0 t≥0
v≥y
Z
Z
Z
α
α
=
eλy+ηx−δt eαy dyΠL−1 ,H (dt, y + dx) + dH
q
q
x>0 y≥0 t≥0
Z Z
Z
α
α
=
eλy+η(x−y)−δt eαy dyΠL−1 ,H (dt, dx) + dH
q
q
t≥0 y≥0 x>y
Z Z
α
α
eηx − e(λ+α)x e−δt ΠL−1 ,H (dt, dx) + dH
=
q(η − λ − α) t≥0 x>0
q
α(κ(δ, −(λ + α)) − κ(δ, −η))
=
.
q(η − λ − α)
(8.7)
Under (1.1), it is possible that EeθX1 = ∞ for all θ > α, but it is often the case that
< ∞ for some θ > α. The next result extends Theorem 8.1 to include this possibility,
and also provides more information when restricted to the former setting. This is done by taking
advantage of the special form of F in (8.1), whereas Theorem 8.1 was derived from the general
convergence result in Theorem 5.1. It is interesting to note how the exponential moments may
be spread out over the undershoot variables. The EDPF results in (8.6) and (8.7) also have
obvious extensions to this setting.
EeθX1
Theorem 8.2 Assume (1.1) holds and f : [0, ∞)4 → [0, ∞) is a Borel function which is jointly
continuous in the first three variables. Assume θ ≥ α and one of the following three conditions
holds;
(i) EeθX1 < ∞, ρ < θ and λ + ρ < θ − α;
(ii) EX1 eθX1 < ∞, ρ ≤ θ and λ + ρ ≤ θ − α, with at least one of these inequalities being strict;
(iii) EX12 eθX1 < ∞, ρ ≤ θ and λ + ρ ≤ θ − α.
22
If e−λy−θx−ρv f (y, x, v, t) is bounded, then
E (u) f (u − X τ (u)− , Xτ (u) − u, u − Xτ (u)− , τ (u) − Gτ (u)− ) →
Z
Z
Z
Z
α
α
f (y, x, v, t) eαy dyI(v ≥ y)Vb (dt, dv − y)ΠX (v + dx) + dH f (0, 0, 0, 0).
q
q
x≥0 y≥0 v≥0 t≥0
(8.8)
Proof Define F by (8.1) and then h by (4.6). We will show that h satisfies the hypotheses of
Proposition 5.1. Let f˜(y, x, v, t) = e−λy−θx−ρv f (y, x, v, t). Then f˜ is bounded, jointly continuous
in the first three variables, and by (8.2) for every y ≥ 0
Z
F (y, e)n(de, e(ζ) > y)
h(y) =
ZE Z
Z
=
f˜(y, x, v, t)I(v ≥ y)eλy+θx+ρv Vb (dt, dv − y)ΠX (v + dx)
x>0 v≥0 t≥0
Z
Z
Z
(λ+ρ−θ)y
f˜(y, x − v − y, v + y, t)I(x > v + y)eθx+(ρ−θ)v Vb (dt, dv)ΠX (dx)
=e
x>0
v≥0
t≥0
(8.9)
after a change of variables. Let gy (x, v, t) = f˜(y, x − v − y, v + y, t)I(x > v + y)eθx+(ρ−θ)v . Then
clearly gz (x, v, t) → gy (x, v, t) as z ↓ y, for every y ≥ 0. Now fix y > 0 and let |z − y| < y/2.
Then for some constant C independent of z, x, v, t,
|gz (x, v, t)| ≤ CI(x > v + y/2)eθx+(ρ−θ)v
and
Z
x>0
Z
v≥0
Z
I(x > v + y/2)eθx+(ρ−θ)v Vb (dt, dv)ΠX (dx)
t≥0
Z
Z
θx
≤
e ΠX (dx)
x>y/2
(8.10)
e(ρ−θ)v Vb (dv).
0≤v<x−y/2
If we show this last integral is finite, then by dominated convergence, h(z) → h(y) as z ↓ y
for every y > 0, showing that h is right continuous on (0, ∞), and consequently continuous a.e.
The final expression in (8.10) is decreasing in y, hence to prove finiteness it suffices to prove the
following stronger result, which will be needed below; for every ε > 0
Z ∞
Z
Z
(α+λ+ρ−θ)y
θx
Iε :=
e
dy
e ΠX (dx)
e(ρ−θ)v Vb (dv) < ∞.
(8.11)
ε
x>y
0≤v<x−y
We will need the following consequence of Proposition 3.1 of Bertoin [3]; for every y > 0 there
is a constant c = c(y) such that
Vb (v) ≤ cv for v ≥ y.
(8.12)
First assume ρ < θ. Then integrating by parts and using (8.12)
Z
Z
θx
Iε ≤ C
e ΠX (dx)
e(α+λ+ρ−θ)y dy
x>ε
ε<y<x
(R
eθx ΠX (dx),
if α + λ + ρ − θ < 0
≤ C Rx>ε θx
x>ε xe ΠX (dx), if α + λ + ρ − θ = 0.
23
Thus Iε < ∞ under each of the assumptions (i), (ii) and (iii) by Theorem 25.3 of Sato [23].
Now assume ρ = θ. Then
Z
Z
θx
Iε =
e ΠX (dx)
e(α+λ+ρ−θ)y Vb (x − y)dy.
x>ε
ε<y<x
If α + λ + ρ − θ = 0 then we are in case (iii) and
Z
Z
Iε ≤
xVb (x)eθx ΠX (dx) ≤ C
x2 eθx ΠX (dx)
x>ε
x>ε
which is finite under (iii). Finally, if α + λ + ρ − θ < 0 then we are in case (ii) or (iii). We break
Iε into two parts Iε (1) + Iε (2) where
Z
Z
θx
e(α+λ+ρ−θ)y Vb (x − y)dy
e ΠX (dx)
Iε (1) =
ε∨(x−1)<y<x
x>ε
Z
≤ Vb (1)
eθx ΠX (dx),
x>ε
and
Z
Z
θx
e ΠX (dx)
e(α+λ+ρ−θ)y Vb (x − y)dy
ε<y≤ε∨(x−1)
Z
Z
≤ c(1)
eθx ΠX (dx)
e(α+λ+ρ−θ)y (x − y)dy
x>ε
ε<y≤ε∨(x−1)
Z
≤C
xeθx ΠX (dx).
Iε (2) =
x>ε
x>ε
Thus Iε is finite in this case also, completing the proof of (8.11).
By (8.9),
Z
Z
I(x > v + y)eθx+(ρ−θ)v Vb (dv)ΠX (dx) =: k(y)
eαy h(y) ≤ Ce(α+λ+ρ−θ)y
v≥0
say. Clearly k is nonincreasing on [0, ∞), and for every ε > 0
Z ∞
Z ∞
Z
Z
(α+λ+ρ−θ)y
θx
dy
k(y)dy = C
e
e ΠX (dx)
ε
ε
(8.13)
x>0
x>y
e(ρ−θ)v Vb (dv) < ∞
0≤v<x−y
by (8.11) under (i),(ii) or (iii). Hence in each case eαy h(y)1[ε,∞) (y) is directly Riemann integrable
for every ε > 0.
Finally, from (8.13), for ε ∈ (0, 1),
Z
Z
Z
Z
V (u − dy)
V (u − dy)
θx+(ρ−θ)v b
h(y)
≤C
e
V (dv)ΠX (dx)
I(y < x − v)
V (u)
V (u)
[0,ε)
v≥0 x>v
[0,ε)
Since uα V (ln u) is slowly varying at infinity, it follows from Theorem 1.2.1 of [5] that
V (u − z)
→ eαz , uniformly for 0 ≤ z ≤ 1.
V (u)
24
Thus for large u
Z
Z Z
Z
V (u − dy)
eθx+(ρ−θ)v [(x − v) ∧ ε]Vb (dv)ΠX (dx)
+
h(y)
≤ C1
V (u)
v>1
x>v
v≤1
[0,ε)
= I + II.
Now, using (3.9),
Z
Z
(x ∧ ε)eθx Vb (dv)ΠX (v + dx)
I ≤ C1
v≤1
Z
x>0
(x ∧ ε)eθx ΠH (dx) → 0
≤ C1
x>0
R
as ε → 0 by dominated convergence, since x>1 eθx ΠH (dx) < ∞ by Proposition 7.1 of [14] and
R
θx
x≤1 xe ΠH (dx) < ∞ because H is a subordinator. For the second term,
Z
Z
II ≤ C1 ε
eθx ΠX (dx)
e(ρ−θ)v Vb (dv) → 0
x>1
0≤v<x
as ε → 0, since the integral is easily seen to be finite from (8.11). Thus we may apply Proposition
5.1 to h, and (8.8) follows after observing that the integral over x = 0 in (8.8) is zero by (8.3).
u
t
We now turn to the convolution equivalent setting. In this case we need to impose an extra
condition on f in (8.1).
Proposition 8.1 Assume F is given by (8.1) where
f (y, x, v, t) → 0 as y → ∞.
sup
(8.14)
x>0,t≥0,v≥y
Then (6.4) holds.
Proof From (8.1)
sup F (y, )I((ζ) > y) = sup f (y, (ζ) − y, y − (ζ−), ζ)I((ζ) > y)
∈E
∈E
≤
sup
f (y, x, v, t) → 0
x>0,t≥0,v≥y
as y → ∞ by (8.14).
u
t
As an immediate consequence we have the following analogue of Theorem 8.1, which extends
Theorem 10 of [9].
Theorem 8.3 Assume (1.2) holds and that f : [0, ∞)4 → [0, ∞) satisfies (8.14), is jointly
continuous in the first three variables, and e−αx f (y, x, v, t) is bounded. Then
E (u) f (u − X τ (u)− , Xτ (u) − u, u − Xτ (u)− , τ (u) − Gτ (u)− ) →
Z
Z
Z
Z
α
α
f (x, y, v, t) eαy dyI(v ≥ y)Vb (dt, dv − y)ΠX (v + dx) + dH f (0, 0, 0, 0).
q
q
x≥0 y≥0 v≥0 t≥0
(8.15)
25
In particular we have joint convergence; for y ≥ 0, x ≥ 0, v ≥ 0, t ≥ 0
P (u) (u − X τ (u)− ∈ dy, Xτ (u) − u ∈ dx, u − Xτ (u)− ∈ dv, τ (u) − Gτ (u)− ∈ dt)
α
v α
−→ eαy dyI(v ≥ y)Vb (dt, dv − y)ΠX (v + dx) + dH δ(0,0,0,0) (dx, dy, dv, dt).
q
q
(8.16)
Proof Define F by (8.1). By Lemma 8.1 and Proposition 8.1, F satisfies the hypotheses of
Theorem 6.1. Thus (4.3) holds which is equivalent to (8.15) by (8.2) and (8.3). u
t
Theorem 8.3 imposes the extra condition (8.14) on f compared with Theorem 8.1. As a
typical example, any function f satisfying the conditions of Theorem 8.1, when multiplied by
a bounded continuous function with compact support φ(y), trivially satisfies the conditions of
Theorem 8.3. This manifests itself in the convergence of (8.16) only being vague convergence
rather than weak convergence. It can not be improved to the weak convergence of (8.5) since,
as noted earlier in (6.6), the total mass of the limit in (8.16) is 1 − κ(0, −α)q −1 .
Another example of the effect of condition (8.14) is in the calculation of the EDPF analogous
(8.6). Using Remark 8.1, the continuity assumption on φ above can be weakened to continuous
a.e. Hence we may take φ(y) = I(y ≤ K) for some K ≥ 0. Thus applying (8.15) to the function
f (y, x, v, t) = eλy+ηx+ρv−δt I(y ≤ K) where K > 0, λ ≤ 0, η ≤ α, ρ ≤ 0 and δ ≥ 0, we obtain,
E (u) (eλ(u−X τ (u)− )+η(Xτ (u) −u)+ρ(u−Xτ (u)− )−δ(τ (u)−Gτ (u)− ) ; u − X τ (u)− ≤ K) →
Z
Z
Z
Z
α
α
eλy+ηx+ρv−δt eαy dyI(v ≥ y)Vb (dt, dv − y)ΠX (v + dx) + dH .
q
q
0≤y≤K x≥0 v≥0 t≥0
The restriction imposed by K can not be removed as will be apparent from Proposition 8.2
below. Finally we point out that there is no extension of Theorem 8.3 to the setting of Theorem
8.2 since EeθX1 = ∞ for all θ > α.
The marginal distributions of the limit in (8.5) can be readily calculated. Under (1.1) we
obtain, in addition to (1.3),5
Z
w
P (u) (u − Xτ (u)− ∈ dx) −→ q −1 αdH δ0 (dx) + q −1 αeαx ΠX (x)dx
e−αv Vb (dv),
0≤v≤x
P
(u)
w
(τ (u) − Gτ (u)− ∈ dt) −→ q
where
Z
K(dt) =
−1
αdH δ0 (dt) + K(dt)
(eαz − 1)ΠL−1 ,H (dt, dz).
(8.17)
z≥0
Under (1.2), we need to be careful. The marginals of the limit in (8.16) are the same as in
(8.5), but they all have mass less than one. This does not mean that we can simply replace
weak convergence of the marginals under (1.1) with vague convergence under (1.2). For the
undershoots of X and X this is correct, but the overshoot and τ (u) − Gτ (u)− both converge
weakly under (1.2), indeed they converge jointly as we discuss next, and the limit is not just the
corresponding marginal of the limit in (8.16).
Strictly speaking, the proof of (1.3) in [17] assumes that (L−1 , H) is the weakly ascending ladder process,
whereas the marginals of (8.5) yield the same formulae as (1.3) but with (L−1 , H) the strictly ascending ladder
process. Thus, as can be easily checked directly, the limiting expressions must agree irrespective of the choice of
ascending ladder process. This remark applies to (1.4) and several other limiting distributions discussed here.
5
26
If F is given by (8.1) where f depends only on x and t, then, using (3.10), (8.2) reduces to
Z
Z
Z
f (x, t)ΠL−1 ,H (dt, y + dx), y ≥ 0.
F (y, )n(d, (ζ) > y) =
(8.18)
E
x>0
t≥0
In particular, under (1.1), by Theorem 8.1, for x ≥ 0, t ≥ 0
Z
w α
(u)
P (Xτ (u) − u ∈ dx, τ (u) − Gτ (u)− ∈ dt) −→
eαy ΠL−1 ,H (dt, y + dx)dy
q y≥0
α
+ dH δ(0,0) (dx, dt).
q
(8.19)
Under (1.2), the mass of the limit in (8.19) is less than one. In this case an extra term appears
in the limit. The distribution of this additional mass and proof of joint weak convergence under
(1.2) is given in the following result.
Proposition 8.2 Assume (1.2) holds and that f : [0, ∞)2 → [0, ∞) is a Borel function which
is continuous in the first variable, and e−βx f (x, t) is bounded for some β < α. Then
Z
Z
Z
−ΨX (−iα)
(u)
−αx
E f (Xτ (u) − u, τ (u) − Gτ (u)− ) →
f (x, t)αe
dx
e−αv Vb (dt, dv)
q
x≥0 t≥0
v≥0
Z
Z
Z
α
α
αy
+
f (x, t)
e ΠL−1 ,H (dt, y + dx)dy + dH f (0, 0).
q x≥0 t≥0
q
y≥0
(8.20)
In particular we have joint convergence; for x ≥ 0, t ≥ 0
Z
w −ΨX (−iα)
P (u) (Xτ (u) − u ∈ dx, τ (u) − Gτ (u)− ∈ dt) −→
αe−αx dx
e−αv Vb (dt, dv)
q
v≥0
Z
(8.21)
α
α
αy
+
e ΠL−1 ,H (dt, y + dx)dy + dH δ(0,0) (dx, dt).
q y≥0
q
Proof We will use Proposition 6.1. Let
F (y, ) = f ((ζ) − y, ζ)I((ζ) ≥ y).
By Lemma 8.1, F satisfies the hypothesis of Proposition 4.1, hence h is continuous a.e. Next we
evaluate the limit of h(y)/ΠX (y) as y → ∞. By (8.2), for y ≥ 0
Z
h(y) =
F (y, )n(d, (ζ) > y)
E
Z
Z
Z
=
f (x, t)I(v ≥ y)Vb (dt, dv − y)ΠX (v + dx)
x>0 v≥0 t≥0
Z Z
Z
b
=
V (dt, dv)
f (x, t)ΠX (y + v + dx).
t≥0
v≥0
x>0
Observe that for v ≥ 0, from footnote 4 (p16) and (6.1),
ΠX (y + v + dx) w
−→ αe−α(v+x) dx on [0, ∞) as y → ∞.
ΠX (y)
27
Further, by Potter’s bounds, see for example (4.10) of [16], if γ ∈ (β, α) then
ΠX (y + v)
≤ Ce−γv if v ≥ 0, y ≥ 1,
ΠX (y)
where C depends only on γ. Thus for any y ≥ 1, v ≥ 0 and K ≥ 0
Z
Z
ΠX (y + v + dx)
ΠX (y + v + x)
eβK ΠX (y + v + K)
eβx
=
βeβx
dx +
ΠX (y)
ΠX (y)
ΠX (y)
x>K
x>K
Z
≤C
βeβx e−γ(v+x) dx + CeβK e−γ(v+K)
x>K
−γv−(γ−β)K
≤ Ce
.
Now, for any v ≥ 0 and K ≥ 0 write
Z
Z
Z
ΠX (y + v + dx)
ΠX (y + v + dx)
f (x, t)
=
+
f (x, t)
= I + II.
ΠX (y)
ΠX (y)
x>0
0<x≤K
x>K
By weak convergence
Z
f (x, t)αe−α(v+x) dx as y → ∞,
I→
0<x≤K
and by monotone convergence,
Z
Z
−α(v+x)
f (x, t)αe
dx →
0<x≤K
f (x, t)αe−α(v+x) dx as K → ∞.
x≥0
On the other hand, by (8.22),
II ≤ Ce−γv−(γ−β)K , for y ≥ 1.
Thus letting y → ∞ then K → ∞ in (8.23) gives
Z
Z
ΠX (y + v + dx)
f (x, t)
→
f (x, t)αe−α(v+x) dx.
ΠX (y)
x≥0
x>0
Further by (8.22) with K = 0, for every v ≥ 0
Z
ΠX (y + v + dx)
f (x, t)
≤ Ce−γv .
ΠX (y)
x>0
Hence by dominated convergence
Z
Z
Z
h(y)
−αx
→
f (x, t)αe
dx
e−αv Vb (dt, dv).
ΠX (y)
x≥0 t≥0
v≥0
Since
lim
y→∞
(8.22)
ΠX (y)
= κ2 (0, −α)b
κ(0, α)
V (y)
by (4.1), together with (4.4) and Proposition 5.3 of [19], we thus have
Z
Z
Z
h(y)
2
−αx
→ κ (0, −α)b
κ(0, α)
f (x, t)αe
dx
e−αv Vb (dt, dv).
V (y)
x≥0 t≥0
v≥0
28
(8.23)
Hence by (2.6) and Proposition 6.1
E
(u)
Z
Z
Z
−ΨX (−iα)
−αx
f (x, t)αe
dx
e−αv Vb (dt, dv)
f (Xτ (u) − u,τ (u) − Gτ (u)− ) →
q
x≥0 t≥0
v≥0
Z
Z
α αy
α
+
e dy F (y, )n(d, (ζ) > y) + dH f (0, 0)
q
y≥0 q
E
Z
Z
Z
−ΨX (−iα)
e−αv Vb (dt, dv)
f (x, t)αe−αx dx
=
q
v≥0
Z
Z x≥0 t≥0 Z
α
α
+
eαy ΠL−1 ,H (dt, y + dx)dy + dH f (0, 0)
f (x, t)
q x>0 t≥0
q
y≥0
by (8.18). This proves (8.20) since the integral over {x = 0} in the final expression vanishes.
u
t
From (8.21), a simple calculation shows that the limiting distribution of the overshoot is as
given in (1.4), and
w
Vb
P (u) (τ (u) − Gτ (u)− ∈ dt) −→ q −1 − ΨX (−iα)δ−α
(dt) + αdH δ0 (dt) + K(dt)
where K(dt) is given by (8.17) and
V
δ−α
(dt) =
Z
e−αv Vb (dt, dv).
b
v≥0
Using (8.20) we can calculate the limiting value of an EDPF similar to (8.7); for any β < α
and δ ≥ 0
E (u) eβ(Xτ (u) −u)−δ(τ (u)−Gτ (u)− )
Z
Z Z
−ΨX (−iα)
−(α−β)x
→
αe
dx
e−δt−αv Vb (dt, dv)
q
x≥0
t≥0 v≥0
Z
Z
Z
α
α
+
eβx−δt
eαy ΠL−1 ,H (dt, y + dx)dy + dH f (0, 0)
q x≥0 t≥0
q
y≥0
−αΨX (−iα)
α(κ(δ, −β) − κ(δ, −α))
=
+
q(α − β)b
κ(δ, α)
q(α − β)
by the same calculation as (8.7).
The results of this section, in the convolution equivalent case, can be derived from a path decomposition for the limiting process given in [16]. The main result in [16], Theorem 3.1, makes precise the idea that under $P^{(u)}$ for large $u$, $X$ behaves like an Esscher transform of $X$ up to an independent exponential time $\tau$. At this time the process makes a large jump into a neighbourhood of $u$, and if $W_t = X_{\tau+t} - u$ then
$$
P(W \in dw) = \kappa(0,-\alpha)\int_{z\in\mathbb R} \alpha e^{-\alpha z}\, V(-z)\,dz\; P_z\bigl(X \in dw \mid \tau(0) < \infty\bigr), \quad w \in D,
$$
where we set $V(y) = q^{-1}$ for $y < 0$. Thus $W$ has the law of $X$ conditioned on $\tau(0) < \infty$ and started with initial distribution
$$
P(W_0 \in dz) = \kappa(0,-\alpha)\,\alpha e^{-\alpha z}\, V(-z)\,dz, \quad z \in \mathbb R.
$$
In the Cramér-Lundberg case there is no comparable decomposition for the entire path since
there is no “large jump” at which to do the decomposition. One of the aims of this paper is to
offer an alternative approach by describing the path from the time of the last maximum prior
to first passage until the time of first passage. This allows the limiting distribution of many
variables associated with ruin to be readily calculated.
9  Appendix: Completion of the proof of Proposition 3.1 when X is compound Poisson
For $\varepsilon > 0$, let
$$
X_t^\varepsilon = X_t - \varepsilon t.
$$
If $X$ is compound Poisson then Proposition 3.1 holds for $X^\varepsilon$. The aim is then to take limits as $\varepsilon \to 0$ and check that (3.6) continues to hold in the limit. We begin with an alternative characterization of the constants in (7.2). Recall the notation of (7.1).
Lemma 9.1 Assume 0 is irregular for $(0,\infty)$, then
$$
d_{\hat L^{-1}}\, n(d\epsilon) = P\bigl(X_{[0,\tau(0)]} \in d\epsilon\bigr).
$$
Proof If $s \notin G$ set $\epsilon_s = \Delta$ where $\Delta$ is a cemetery state. Then $\{(t, \epsilon_{L^{-1}_{t-}}) : t \ge 0\}$ is a Poisson point process with characteristic measure $dt \otimes n(d\epsilon)$. By construction, $n$ is proportional to the law of the first excursion, thus
$$
n(d\epsilon) = |n|\, P\bigl(X_{[0,\tau(0)]} \in d\epsilon\bigr).
\tag{9.1}
$$
Now let $\sigma = \inf\{t : \epsilon_{L^{-1}_{t-}} \ne \Delta\}$. Then $\sigma$ is exponentially distributed with parameter $|n|$. On the other hand $\sigma$ is the time of the first jump of $L^{-1}$ and hence is exponential with parameter $p$ given by (2.3). A short calculation using duality, see for example the paragraph following (2.7) in [6], shows that if 0 is irregular for $(0,\infty)$, then
$$
p\, d_{\hat L^{-1}} = 1.
\tag{9.2}
$$
Hence $|n|^{-1} = d_{\hat L^{-1}}$ and the result follows from (9.1). □
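For completeness, the exponentiality of $\sigma$ is just the first-arrival property of a Poisson point process: since the characteristic measure is $dt \otimes n(d\epsilon)$ and $|n| < \infty$ when 0 is irregular for $(0,\infty)$,
$$
P(\sigma > t) = P\bigl(\text{no points of } \{(s, \epsilon_{L^{-1}_{s-}})\} \text{ in } [0,t] \times \{\epsilon \ne \Delta\}\bigr) = e^{-t\, n(\epsilon \ne \Delta)} = e^{-t|n|}.
$$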
Let $n^\varepsilon$ denote the excursion measure of $X^\varepsilon$, with similar notation for all other quantities related to $X^\varepsilon$ or $\hat X^\varepsilon$. To ease the notational complexity we will write
$$
\hat d_\varepsilon = d_{(\hat L^\varepsilon)^{-1}}, \qquad \hat d = d_{\hat L^{-1}}.
$$
Lemma 9.2 Assume 0 is irregular for $(0,\infty)$, then $\hat d_\varepsilon$ is non-decreasing, and for any $\delta \ge 0$,
$$
\hat d_\varepsilon \downarrow \hat d_\delta \quad\text{as } \varepsilon \downarrow \delta.
$$
Proof Clearly, for $0 \le \delta < \varepsilon$, we have $\tau^\delta(0) \le \tau^\varepsilon(0)$ and $\tau^\varepsilon(0) \downarrow \tau^\delta(0)$ as $\varepsilon \downarrow \delta$. Thus
$$
E\bigl(e^{-\tau^\varepsilon(0)};\ \tau^\varepsilon(0) < \infty\bigr) \uparrow E\bigl(e^{-\tau^\delta(0)};\ \tau^\delta(0) < \infty\bigr),
$$
and so from (2.3), $p_\varepsilon \uparrow p_\delta$. Hence by (9.2), $\hat d_\varepsilon \downarrow \hat d_\delta$. □
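The monotone coupling used in this proof is easy to see numerically. Below is a minimal simulation sketch (not taken from the paper): X is a compound Poisson claims process minus a premium drift, and the parameters lam, mu, c are illustrative choices with c > lam*mu so that X_t → −∞ a.s.; on a fixed path, τ^ε(0) is non-decreasing in ε and decreases to τ(0) as ε ↓ 0.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions, not from the paper): claims arrive at
# rate lam with Exp(mean mu) sizes, premium rate c > lam*mu, so X_t -> -infty a.s.
lam, mu, c, horizon = 1.0, 1.0, 1.5, 200.0

# One sample path of the claim arrivals on [0, horizon]
n = rng.poisson(lam * horizon)
jump_times = np.sort(rng.uniform(0.0, horizon, n))
cum_claims = np.cumsum(rng.exponential(mu, n))

def tau_eps(eps):
    """First passage above 0 of X^eps_t = (claims up to t) - (c + eps) t.
    X^eps decreases between jumps and starts at 0, so passage can only occur
    at a jump time; return inf if it does not occur before the horizon."""
    hit = cum_claims - (c + eps) * jump_times > 0.0
    return jump_times[hit][0] if hit.any() else np.inf

for eps in [0.0, 0.01, 0.1, 0.5, 1.0]:
    print(f"eps = {eps:4.2f}   tau^eps(0) = {tau_eps(eps)}")
```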
Proposition 9.1 Assume $X$ is compound Poisson and $f : [0,\infty)^2 \to [0,\infty)$ is continuous with compact support. Then
$$
\int_{t\ge 0}\int_{z\ge 0} f(t,z)\, n^\varepsilon\bigl(\epsilon(t) \in -dz,\ \zeta > t\bigr)\,dt \to \int_{t\ge 0}\int_{z\ge 0} f(t,z)\, n\bigl(\epsilon(t) \in -dz,\ \zeta > t\bigr)\,dt \quad\text{as } \varepsilon \to 0.
$$
Proof Assume $f$ vanishes for $t \ge r$. Then
$$
f\bigl(t, -X_t^\varepsilon\bigr)\, I\bigl(\tau^\varepsilon(0) > t\bigr) \le \|f\|_\infty\, I(t \le r).
\tag{9.3}
$$
Thus, using Lemma 9.1,
$$
\begin{aligned}
\int_{t\ge 0}\int_{z\ge 0} f(t,z)\, n^\varepsilon\bigl(\epsilon(t) \in -dz,\ \zeta > t\bigr)\,dt
 &= \hat d_\varepsilon^{-1} \int_{t\ge 0}\int_{z\ge 0} f(t,z)\, P\bigl(X_t^\varepsilon \in -dz,\ \tau^\varepsilon(0) > t\bigr)\,dt \\
 &= \hat d_\varepsilon^{-1} \int_0^\infty E\bigl(f(t, -X_t^\varepsilon);\ \tau^\varepsilon(0) > t\bigr)\,dt \\
 &\to \hat d^{-1} \int_0^\infty E\bigl(f(t, -X_t);\ \tau(0) > t\bigr)\,dt \\
 &= \int_{t\ge 0}\int_{z\ge 0} f(t,z)\, n\bigl(\epsilon(t) \in -dz,\ \zeta > t\bigr)\,dt
\end{aligned}
$$
by (9.3) and dominated convergence, since $X_t^\varepsilon \to X_t$, $\tau^\varepsilon(0) \to \tau(0)$ and $P(\tau(0) = t) = 0$. □
Proposition 9.2 Assume $X$ is compound Poisson and $f : [0,\infty)^2 \to [0,\infty)$ is continuous with compact support. Then
$$
\int_{t\ge 0}\int_{z\ge 0} f(t,z)\, \hat V^\varepsilon(dt, dz) \to \int_{t\ge 0}\int_{z\ge 0} f(t,z)\, \hat V(dt, dz) \quad\text{as } \varepsilon \to 0.
$$
Proof We will show
$$
(\hat L^\varepsilon)^{-1}_s \to \hat L^{-1}_s, \qquad \hat H^\varepsilon_s \to \hat H_s, \quad\text{for all } s \ge 0 \text{ as } \varepsilon \to 0,
\tag{9.4}
$$
and that the family
$$
f\bigl((\hat L^\varepsilon)^{-1}_s, \hat H^\varepsilon_s\bigr), \quad 0 < \varepsilon \le 1,
\tag{9.5}
$$
is dominated by an integrable function with respect to $P \times ds$. Then
$$
\begin{aligned}
\int_{t\ge 0}\int_{z\ge 0} f(t,z)\, \hat V^\varepsilon(dt, dz)
 &= \int_0^\infty E f\bigl((\hat L^\varepsilon)^{-1}_s, \hat H^\varepsilon_s\bigr)\,ds \\
 &\to \int_0^\infty E f\bigl(\hat L^{-1}_s, \hat H_s\bigr)\,ds \\
 &= \int_{t\ge 0}\int_{z\ge 0} f(t,z)\, \hat V(dt, dz).
\end{aligned}
$$
For $\varepsilon \ge 0$ let $A_\varepsilon = \{s : \hat X^\varepsilon_s = \bar{\hat X}{}^\varepsilon_s\}$. Then for $0 \le \delta < \varepsilon$, $A_\delta \subset A_\varepsilon$. Further, for any $T$, if $\varepsilon$ is sufficiently close to 0, then $A_0 \cap [0,T] = A_\varepsilon \cap [0,T]$. Thus by Theorem 6.8 and Corollary 6.11 of [20],
$$
\hat d_\delta\, \hat L^\delta_t = \int_0^t I_{A_\delta}(s)\,ds \le \int_0^t I_{A_\varepsilon}(s)\,ds = \hat d_\varepsilon\, \hat L^\varepsilon_t, \quad\text{all } 0 \le \delta < \varepsilon,
$$
and
$$
\hat d\, \hat L_t = \hat d_\varepsilon\, \hat L^\varepsilon_t, \quad 0 \le t \le T,
$$
if $\varepsilon$ is sufficiently close to 0. Hence for all $0 \le \delta < \varepsilon$,
$$
(\hat L^\delta)^{-1}_s = \inf\{t : \hat L^\delta_t > s\} \ge \inf\{t : (\hat d_\varepsilon/\hat d_\delta)\hat L^\varepsilon_t > s\} = (\hat L^\varepsilon)^{-1}_{(\hat d_\delta/\hat d_\varepsilon)s}
\tag{9.6}
$$
with equality if $\delta = 0$ and $\varepsilon$ is sufficiently close to 0.
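The inversion step behind (9.6), written out (a sketch):
$$
\hat d_\delta\, \hat L^\delta_t \le \hat d_\varepsilon\, \hat L^\varepsilon_t
 \;\Longrightarrow\;
 \{t : \hat L^\delta_t > s\} \subseteq \{t : (\hat d_\varepsilon/\hat d_\delta)\hat L^\varepsilon_t > s\},
$$
so the infimum of the first set dominates that of the second, and
$$
\inf\{t : (\hat d_\varepsilon/\hat d_\delta)\hat L^\varepsilon_t > s\} = \inf\{t : \hat L^\varepsilon_t > (\hat d_\delta/\hat d_\varepsilon)s\} = (\hat L^\varepsilon)^{-1}_{(\hat d_\delta/\hat d_\varepsilon)s}.
$$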
Fix $s \ge 0$ and assume $\varepsilon$ is sufficiently close to 0 that equality holds in (9.6) with $\delta = 0$. Thus
$$
(\hat L^\varepsilon)^{-1}_s = \hat L^{-1}_{(\hat d_\varepsilon/\hat d)s}.
\tag{9.7}
$$
Since $\hat X^\varepsilon_t = \hat X_t + J_{\varepsilon,t}$ where
$$
0 \le J_{\varepsilon,t} \le \varepsilon t,
\tag{9.8}
$$
it then follows that
$$
\hat H^\varepsilon_s = \hat X^\varepsilon_{(\hat L^\varepsilon)^{-1}_s} = \hat X_{\hat L^{-1}_{(\hat d_\varepsilon/\hat d)s}} + J_{\varepsilon,\, \hat L^{-1}_{(\hat d_\varepsilon/\hat d)s}} = \hat H_{(\hat d_\varepsilon/\hat d)s} + J_{\varepsilon,\, \hat L^{-1}_{(\hat d_\varepsilon/\hat d)s}}.
\tag{9.9}
$$
Hence, using Lemma 9.2, (9.4) follows from (9.7), (9.8) and (9.9).
Now let $0 \le \varepsilon \le 1$. Then by (9.6),
$$
(\hat L^\varepsilon)^{-1}_s \ge (\hat L^1)^{-1}_{(\hat d_\varepsilon/\hat d_1)s}.
$$
Thus by monotonicity of $\hat d_\varepsilon$,
$$
I\bigl((\hat L^\varepsilon)^{-1}_s \le r\bigr) \le I\bigl((\hat L^1)^{-1}_{(\hat d_\varepsilon/\hat d_1)s} \le r\bigr) \le I\bigl((\hat L^1)^{-1}_{(\hat d/\hat d_1)s} \le r\bigr).
$$
Hence if $f$ vanishes for $t \ge r$, then
$$
f\bigl((\hat L^\varepsilon)^{-1}_s, \hat H^\varepsilon_s\bigr) \le \|f\|_\infty\, I\bigl((\hat L^1)^{-1}_{(\hat d/\hat d_1)s} \le r\bigr),
$$
where
$$
E\int_0^\infty I\bigl((\hat L^1)^{-1}_{(\hat d/\hat d_1)s} \le r\bigr)\,ds = (\hat d_1/\hat d)\, E\int_0^\infty I\bigl((\hat L^1)^{-1}_s \le r\bigr)\,ds < \infty,
$$
which proves (9.5). □
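As for the finiteness of the last expectation, a short justification using only the occupation time identity displayed earlier: since $(\hat L^1)^{-1}_s \le r$ precisely when $s < \hat L^1_r$ (up to a single boundary point),
$$
\int_0^\infty I\bigl((\hat L^1)^{-1}_s \le r\bigr)\,ds = \hat L^1_r \le r/\hat d_1,
$$
because $\hat d_1\, \hat L^1_r = \int_0^r I_{A_1}(s)\,ds \le r$; hence the expectation is at most $r/\hat d_1 < \infty$.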
Proof of Proposition 3.1 when X is compound Poisson Assume $X$ is compound Poisson. Since $d_{L^{-1}} = 0$ whenever 0 is irregular for $(0,\infty)$, it follows that $d_{(L^\varepsilon)^{-1}} = d_{L^{-1}} = 0$. Further, (3.6) holds for $X^\varepsilon$. Hence (3.6) for $X$ follows from Propositions 9.1 and 9.2. □
Acknowledgement I would like to thank Ron Doney for his help with parts of Section 3.
References
[1] Asmussen, S. (1982). Conditioned limit theorems relating a random walk to its associate, with applications to risk reserve and the GI/G/1 queue. Adv. in Appl. Probab. 14, 143–170.
[2] Asmussen, S. (2003). Applied Probability and Queues. Applications of Mathematics 51, Springer.
[3] Bertoin, J. (1996). Lévy Processes. Cambridge Univ. Press.
[4] Bertoin, J. and Doney, R. (1994). Cramér's estimate for Lévy processes. Statist. Probab. Letters 21, 363–365.
[5] Bingham, N.H., Goldie, C.M. and Teugels, J.L. (1987). Regular Variation. Cambridge University Press, Cambridge.
[6] Chaumont, L. (2013). On the law of the supremum of Lévy processes. Ann. Probab. 41, 1191–1217.
[7] Cline, D.B.H. (1986). Convolution tails, product tails and domains of attraction. Probability Theory and Related Fields 72, 529–557.
[8] Doney, R.A. (2005). Fluctuation Theory for Lévy Processes. Notes of a course at St Flour, July 2005.
[9] Doney, R.A. and Kyprianou, A. (2006). Overshoots and undershoots of Lévy processes. Ann. Appl. Probab. 16, 91–106.
[10] Eder, I. and Klüppelberg, C. (2009). The first passage event for sums of dependent Lévy processes with applications to insurance risk. Ann. Appl. Probab. 19, 2040–2079.
[11] Embrechts, P. and Goldie, C.M. (1982). On convolution tails. Stoch. Proc. Appl. 13, 263–278.
[12] Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance. Applications of Mathematics 33, Springer.
[13] Greenwood, P. and Pitman, J. (1980). Fluctuation identities for Lévy processes and splitting at the maximum. Adv. Appl. Prob. 12, 893–902.
[14] Griffin, P.S. (2013). Convolution equivalent Lévy processes and first passage times. Ann. Appl. Probab. 23, 1506–1543.
[15] Griffin, P.S. and Maller, R.A. (2011). The time at which a Lévy process creeps. Electron. J. Probab. 16, 2182–2202.
[16] Griffin, P.S. and Maller, R.A. (2012). Path decomposition of ruinous behaviour for a general Lévy insurance risk process. Ann. Appl. Probab. 22, 1411–1449.
[17] Griffin, P.S., Maller, R.A. and van Schaik, K. (2012). Asymptotic distributions of the overshoot and undershoots for the Lévy insurance risk process in the Cramér and convolution equivalent cases. Insurance: Mathematics and Economics 51, 382–392.
[18] Klüppelberg, C. (1989). Subexponential distributions and characterizations of related classes. Probab. Theory Related Fields 82, 259–269.
[19] Klüppelberg, C., Kyprianou, A. and Maller, R. (2004). Ruin probability and overshoots for general Lévy insurance risk processes. Ann. Appl. Probab. 14, 1766–1801.
[20] Kyprianou, A. (2005). Introductory Lectures on Fluctuations of Lévy Processes with Applications. Springer, Berlin Heidelberg New York.
[21] Pakes, A.G. (2004). Convolution equivalence and infinite divisibility. J. Appl. Probab. 41, 407–424.
[22] Pakes, A.G. (2007). Convolution equivalence and infinite divisibility: corrections and corollaries. J. Appl. Probab. 44, 295–305.
[23] Sato, K. (1999). Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge.
[24] Vigon, V. (2002). Votre Lévy rampe-t-il? J. London Math. Soc. 65, 243–256.
[25] Watanabe, T. (2008). Convolution equivalence and distributions of random sums. Probab. Theory Relat. Fields 142, 367–397.