Motion In A Gaussian, Incompressible Flow.

Tomasz Komorowski∗
George Papanicolaou†
Abstract
We prove that the solution of a system of random ordinary differential equations dX(t)/dt = V(t, X(t)), with diffusive scaling X_ε(t) = εX(t/ε²), converges weakly to a Brownian Motion when ε ↓ 0. We assume that V(t, x), t ∈ R, x ∈ R^d, is a d-dimensional, random, incompressible, stationary Gaussian field which has mean zero and decorrelates in finite time.
1 Introduction.
Let us consider a particle undergoing diffusion with a drift caused by an external velocity field V(t, x), t ≥ 0, x ∈ R^d. Its motion is determined by the Itô stochastic differential equation

    dX(t) = V(t, X(t)) dt + σ dB(t),    X(0) = 0,                                   (1)

where B(t), t ≥ 0, denotes the standard d-dimensional Brownian Motion and σ² is the molecular diffusivity of the medium. The particle trajectory X(t), t ≥ 0, is assumed, for simplicity, to start at the origin.
We want the velocity field V to model turbulent, incompressible flows, so we assume that it is random, zero mean, divergence-free and mixing at macroscopically short space and time scales (see e.g. [8], [29], [32]). To describe the long time and long distance behavior of the particle trajectory we introduce macroscopic units in which the time t and the space variable x are of order ε^{-2} and ε^{-1}, respectively, where 0 < ε ≪ 1 is a small scaling parameter. In the new variables the Brownian Motion is expressed by the formula εB(t/ε²), t ≥ 0; its law is then identical to that of the standard Brownian Motion. In the scaled variables the Itô stochastic differential equation for the trajectory is

    dX_ε(t) = (1/ε) V(t/ε², X_ε(t)/ε) dt + σ dB(t),    X_ε(0) = 0.                  (2)

We are interested in the asymptotic behavior of the trajectories {X_ε(t)}_{t≥0}, ε > 0, as ε ↓ 0.
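The rescaling behind (2) can be checked numerically. The sketch below is illustrative and not part of the paper's argument: it integrates the microscopic equation dX/dt = V(t, X) (the σ = 0 case) by the Euler scheme for a toy divergence-free field V(t, x, y) = (cos(y + t), cos(x − t)), a deterministic stand-in for the random field, and verifies that X_ε(t) = εX(t/ε²) coincides with a direct Euler solution of the rescaled equation.

```python
import math

def V(t, x, y):
    # Smooth, divergence-free stand-in for the random field:
    # V = (cos(y + t), cos(x - t)); div V = 0 identically.
    return (math.cos(y + t), math.cos(x - t))

def micro_traj(n_steps, h):
    # Euler scheme for dX/dt = V(t, X), X(0) = 0, in microscopic variables.
    x = y = 0.0
    path = [(x, y)]
    for n in range(n_steps):
        vx, vy = V(n * h, x, y)
        x += h * vx
        y += h * vy
        path.append((x, y))
    return path

def macro_traj(n_steps, h, eps):
    # Euler scheme for the rescaled equation (2) with sigma = 0:
    #   dX_eps/dt = (1/eps) V(t/eps^2, X_eps/eps),
    # using the macroscopic step eps^2 * h so the two time grids line up.
    x = y = 0.0
    path = [(x, y)]
    for m in range(n_steps):
        t_micro = m * h                  # = (m * eps^2 * h) / eps^2
        vx, vy = V(t_micro, x / eps, y / eps)
        x += (eps ** 2 * h) * vx / eps   # = eps * h * vx
        y += (eps ** 2 * h) * vy / eps
        path.append((x, y))
    return path

eps = 0.1
micro = micro_traj(500, 0.01)
macro = macro_traj(500, 0.01, eps)
# The discrete solutions satisfy X_eps(t) = eps * X(t/eps^2) step by step.
err = max(abs(mx - eps * ux) + abs(my - eps * uy)
          for (mx, my), (ux, uy) in zip(macro, micro))
print(err)  # essentially zero (floating-point noise)
```

The agreement is exact for the Euler scheme, by induction on the step index; only rounding error remains.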
In the early twenties, G. I. Taylor (cf. [31]) argued that the trajectory X(t), t ≥ 0, of a particle in a diffusive medium with turbulent advection due to a zero mean random velocity field V(t, x) will behave approximately like a d-dimensional Brownian Motion whose covariance matrix is given by

    ∫_0^{+∞} v_{i,j}(t) dt + σ² δ_{i,j},    i, j = 1, …, d,                         (3)

where

    v_{i,j}(t) = ⟨V_i(t, X(t)) V_j(0, 0) + V_j(t, X(t)) V_i(0, 0)⟩,    i, j = 1, …, d,    (4)

is the symmetric part of the Lagrangian covariance function of the random velocity field, with ⟨·⟩ denoting averaging over all possible realizations of the medium. As noted by Taylor, the diffusivity is increased by advection in a turbulent flow. The main assumption in [31] was the convergence of the improper integrals in (3). This means that rapid decorrelation of the Eulerian velocities V(t, x) is inherited by the Lagrangian velocities V(t, X(t)) of the tracer particle. The mathematical analysis and proof of this fact, under different circumstances regarding V(t, x), is immensely difficult. In this paper we make a contribution towards the understanding of this issue.

∗ Department of Mathematics, Michigan State University, East Lansing MI 48824.
† Department of Mathematics, Stanford University, Stanford CA 94305. Work supported by NSF grant DMS 9308471 and by AFOSR grant F49620-94-1-0436.
American Mathematical Society 1991 subject classifications: Primary 60H25; secondary 62M40. Key words and phrases: random field, mixing condition, weak convergence, diffusion approximation.
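The Lagrangian covariance (4) can be estimated by Monte Carlo. The sketch below is illustrative and not from the paper: it builds a steady two dimensional incompressible field from a random stream function (so div V = 0 by construction; a steady field is chosen only for simplicity, whereas the paper's field also decorrelates in time) and averages V₁(X(t)) V₁(0, 0) over field realizations.

```python
import math, random

def make_field(rng, n_modes=8):
    # Random incompressible 2-D field via a stream function
    #   psi(x) = sum_k a_k cos(k . x + phi_k),  V = (d psi/dy, -d psi/dx).
    modes = []
    for _ in range(n_modes):
        kx, ky = rng.gauss(0, 1), rng.gauss(0, 1)
        a = rng.gauss(0, 1) / math.sqrt(n_modes)
        phi = rng.uniform(0, 2 * math.pi)
        modes.append((kx, ky, a, phi))
    def V(x, y):
        vx = vy = 0.0
        for kx, ky, a, phi in modes:
            s = -a * math.sin(kx * x + ky * y + phi)
            vx += s * ky      #  d psi / dy
            vy -= s * kx      # -d psi / dx
        return vx, vy
    return V

def lagrangian_cov(t, n_real=400, h=0.05):
    # Monte Carlo estimate of v_{1,1}(t) = 2 <V_1(t, X(t)) V_1(0, 0)>
    # (symmetric part, i = j = 1), averaging over field realizations.
    rng = random.Random(0)
    acc = 0.0
    for _ in range(n_real):
        V = make_field(rng)
        x = y = 0.0
        v0 = V(0.0, 0.0)[0]
        for _ in range(int(round(t / h))):   # Euler step along the trajectory
            vx, vy = V(x, y)
            x += h * vx
            y += h * vy
        acc += 2.0 * V(x, y)[0] * v0
    return acc / n_real

print(lagrangian_cov(0.0))   # ~1.0 (Monte Carlo; equals 2 E[V_1(0)^2] here)
print(lagrangian_cov(2.0))
```

With these normalizations E[V₁(0)²] = 1/2 exactly, so the t = 0 value is close to 1; how fast the covariance decays in t depends on the Lagrangian decorrelation, which for a steady two dimensional field can be very slow (the trapping effect discussed later).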
Before stating our results we will review briefly the status of the mathematical theory of turbulent diffusion. We mention first the papers [26], [24], [25], [23], [21] in which, among other things, it is shown that for an ergodic, zero mean velocity field V with sufficiently smooth realizations and suitably restricted power spectra, when the molecular diffusivity is positive, the family of trajectories {X_ε(t)}_{t≥0}, ε > 0, converges weakly to Brownian Motion in dimensions d ≥ 2. Its covariance matrix K = [κ_{i,j}]_{i,j=1,…,d} is, in accordance with Taylor's prediction,

    κ_{i,j} = M E ∫_0^{+∞} [V_i(t, X(t)) V_j(0, 0) + V_j(t, X(t)) V_i(0, 0)] dt + σ² δ_{i,j},    i, j = 1, …, d,    (5)

where X(t), t ≥ 0, is the solution of (1), and M and E denote expectations over the underlying probability spaces for the Brownian Motion B(t), t ≥ 0, and the random vector field V, respectively. The improper integrals of (5) converge in the Cesàro sense, that is, the limits

    lim_{T↑+∞} (1/T) ∫_0^T dτ ∫_0^τ M E[V_i(t, X(t)) V_j(0, 0) + V_j(t, X(t)) V_i(0, 0)] dt

exist and are finite. We should stress that the Cesàro convergence of the improper integrals appearing in (5) follows from the proof of weak convergence and is not an assumption of the theorem. Since σ²I is the intrinsic diffusivity of the medium, we note that K > σ²I (or that K − σ²I is positive definite), so that advection enhances the effective diffusivity K. The proof of the statements made above can be found in [26] and [24] for bounded, time independent velocity fields. For unbounded fields (such as Gaussian fields, for example) the proof can be found in [25], [4] and [13]. Diffusion in time dependent, zero mean fields can be analyzed using an extension of the averaging technique developed there.
The one dimensional case, d = 1, has also been investigated and the results there are different from the higher dimensional ones. This is because there are no nontrivial incompressible velocity fields in one dimension. For a Gaussian "white noise" velocity field typical particle displacements are of order ln²t instead of √t, as is expected in the multidimensional case. This localization phenomenon is discussed in [28, 23, 17, 6] and does not occur when the random velocity field is incompressible and integrable in a suitable sense [13].
Since formula (5) for the effective diffusivity K involves the solution of an Itô stochastic differential equation with a stochastic drift, it is far from explicit. In many applications it is important to consider how K behaves as a function of σ, especially as σ ↓ 0, which is the large Péclet number case where advection is dominant [16]. Although this question is very difficult to deal with analytically, some progress has been made recently by noting that K is a minimum of an energy-like functional, allowing therefore extensive use of variational techniques (see [11] for periodic and [12] for certain random flows). We can go one step further and ask about the limiting behavior of the trajectories of (2), as ε ↓ 0, in the absence of any molecular diffusivity, that is, when σ = 0. The effective diffusivity, if it exists in this case, is purely due to turbulent advection. It is this aspect of the problem of turbulent diffusion that we will address in this paper.
In previous mathematical studies of advection induced diffusion the focus has been mainly on velocity fields which, besides regularity, incompressibility and stationarity, either in time or both in space and time, also have strong mixing properties in t ([20, 18, 19]) and are slowly varying in x. This means that equation (2) (with σ = 0) is now an ordinary differential equation with a stochastic right hand side,

    dX_ε(t)/dt = (1/ε) V(t/ε², X_ε(t)/ε^{1−δ}),    X_ε(0) = 0,                     (6)

and the parameter δ is in the range 0 < δ ≤ 1, with δ = 1 typically. One can prove that the solutions of (6), considered as stochastic processes, converge weakly to Brownian Motion with covariance matrix given by the Kubo formula

    κ_{i,j} = ∫_0^{+∞} E[V_i(t, 0) V_j(0, 0) + V_j(t, 0) V_i(0, 0)] dt,    i, j = 1, …, d.

As before, E denotes the expectation taken over the underlying probability space of the random velocity field V. The convergence of the improper integral follows from the rapid decay of the correlation matrix of the Eulerian velocity field, which is a consequence of the assumed strong mixing properties of the field.
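The Kubo formula can be evaluated directly once the Eulerian time correlation is prescribed. In the sketch below the Gaussian-shaped correlation R(t) is an assumption made only for the example (the paper requires only fast decorrelation); the numerical integral is compared against the closed form.

```python
import math

def R(t, var=0.5, tau=1.0):
    # Assumed Eulerian time correlation of one component at a fixed point:
    # E[V_1(t,0) V_1(0,0)] = var * exp(-(t/tau)^2 / 2).
    return var * math.exp(-(t / tau) ** 2 / 2)

def kubo_diag(h=1e-3, t_max=20.0):
    # kappa_{1,1} = int_0^inf [R(t) + R(t)] dt = 2 int_0^inf R(t) dt,
    # computed by the trapezoidal rule on [0, t_max].
    n = int(t_max / h)
    s = 0.5 * (R(0.0) + R(t_max)) + sum(R(k * h) for k in range(1, n))
    return 2.0 * h * s

exact = 2.0 * 0.5 * 1.0 * math.sqrt(math.pi / 2)   # 2 * var * tau * sqrt(pi/2)
print(kubo_diag(), exact)   # both ~1.2533
```

The quadrature error is O(h²) and the tail beyond t_max is of order e^{-200}, so the two numbers agree to many digits.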
Very little is known when δ = 0, that is, when the velocity field is not slowly varying in the x coordinates. However, it is believed that we again have diffusive behavior, in the sense described above, if the velocity field V(t, x) is sufficiently strongly mixing in time, and that the effective diffusivity is then given by (5). In a series of numerical experiments R. Kraichnan ([22]) observed that the diffusion approximation holds even for zero mean, divergence-free velocity fields V(x) that are independent of t, provided that they are strongly mixing in the spatial variable x and the dimension d of the space R^d is greater than two. For d = 2 there is a "trapping" effect for the advected particles, prohibiting diffusive behavior.

Some theoretical results in this direction have been obtained recently by M. Avellaneda et al. in [3] for specially constructed two dimensional, stationary velocity fields. An extensive exposition of turbulent advection from a physical point of view can be found in [16].
In this paper we assume that the field V(t, x) is stationary both in t and x, has zero mean, is Gaussian, and that its correlation matrix

    R(t, x) = [E[V_i(t, x) V_j(0, 0)]]_{i,j=1,…,d}

has compact support in the t variable. This means that there exists a T > 0 such that

    R(t, x) = 0    for |t| ≥ T, x ∈ R^d.

Such random fields are sometimes called T-dependent (cf. e.g. [9]). We also require that the realizations of V are continuous in t, C¹ smooth in the x variable, and that

    div V(t, x) = Σ_{i=1}^d ∂_{x_i} V_i(t, x) ≡ 0.
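In two dimensions incompressibility is automatic for any field derived from a stream function, V = (∂ψ/∂y, −∂ψ/∂x). A quick finite difference check (illustrative only; the ψ below is an arbitrary smooth choice, not a field from the paper):

```python
import math

def psi(t, x, y):
    # Arbitrary smooth stream function; any psi gives div V = 0 in 2-D.
    return math.sin(x + math.cos(t)) * math.cos(2 * y - t)

def V(t, x, y, h=1e-5):
    # V = (d psi/dy, -d psi/dx), via central differences.
    vx = (psi(t, x, y + h) - psi(t, x, y - h)) / (2 * h)
    vy = -(psi(t, x + h, y) - psi(t, x - h, y)) / (2 * h)
    return vx, vy

def divergence(t, x, y, h=1e-4):
    # div V = d V_1/dx + d V_2/dy, again by central differences.
    vxp, _ = V(t, x + h, y); vxm, _ = V(t, x - h, y)
    _, vyp = V(t, x, y + h); _, vym = V(t, x, y - h)
    return (vxp - vxm) / (2 * h) + (vyp - vym) / (2 * h)

print(abs(divergence(0.3, 1.1, -0.7)))   # ~0: the mixed partials cancel
```

The divergence reduces to ψ_xy − ψ_yx = 0, so the printed value is only finite-difference and rounding noise.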
We prove that for such velocity fields the improper integrals appearing in (5) converge and that the family of processes {X_ε(t)}_{t≥0}, ε > 0, tends weakly to Brownian Motion with covariance matrix given by (5). The precise formulation of the result and an outline of its proof are presented in the next section. It is important to note that the result presented here seems to be unattainable by a simple extension of the path-freezing technique used in the proofs of the diffusion approximation for slowly varying velocity fields in [20, 18, 19].

The paper is organized as follows. In Section 2 we introduce some basic notation, formulate the main result and outline the proof. Sections 3-5 contain some preparatory lemmas for proving weak compactness of the family {X_ε(t)}_{t≥0}, ε > 0. The proof of tightness is given in Section 6, where we also identify the unique weak limit of the family with the help of the martingale uniqueness theorem.
2 Basic Notation, Formulation Of The Main Result And
Outline Of The Proof.
Let (Ω, V, P) be a probability space, E the expectation relative to the probability measure P, A a sub-σ-algebra of V, and X : Ω → R a random variable measurable with respect to V. We denote the conditional expectation of X with respect to A by E[X | A], and the space of all V-measurable, p-integrable random variables by L^p(Ω, V, P). The L^p-norm is

    ‖X‖_{L^p} = {E|X|^p}^{1/p}.

We assume that there is a family of transformations of the probability space,

    τ_{t,x} : Ω → Ω,    t ∈ R, x ∈ R^d,

such that

i) τ_{s,x} τ_{t,y} = τ_{s+t,x+y}, for all s, t ∈ R, x, y ∈ R^d;

ii) τ_{t,x}^{-1}(A) ∈ V, for all t ∈ R, x ∈ R^d, A ∈ V;

iii) P[τ_{t,x}^{-1}(A)] = P[A], for all t ∈ R, x ∈ R^d, A ∈ V;

iv) the map (t, x, ω) ↦ τ_{t,x}(ω) is jointly B_R ⊗ B_{R^d} ⊗ V to V measurable. Here we denote by B_{R^d} the σ-algebra of Borel subsets of R^d.
Suppose now that Ṽ : Ω → R^d, where Ṽ = (Ṽ_1, …, Ṽ_d) and EṼ = 0, is such that the random field

    V(t, x, ω) = Ṽ(τ_{t,x}(ω))                                                     (7)

has all finite dimensional distributions Gaussian. Thus, V(t, x, ω) is a stationary, zero mean, Gaussian random field. We denote by (V_1(t, x), …, V_d(t, x)) its components. Let L²_{a,b} be the closure of the linear span of V_i(t, x), a ≤ t ≤ b, x ∈ R^d, i = 1, …, d, in the L²-norm. We denote by V_{a,b} the σ-algebra generated by all random vectors from L²_{a,b}. Let L²_{a,b,⊥} = L²_{-∞,+∞} ⊖ L²_{a,b}, that is, L²_{a,b,⊥} is the orthogonal complement of L²_{a,b} in L²_{-∞,+∞}. We denote by V⊥_{a,b} the σ-algebra generated by all random vectors belonging to L²_{a,b,⊥}. According to [33], p. 181, Theorems 10.1 and 10.2, V_{a,b} and V⊥_{a,b} are independent. Let V_{a,b}(t, x) be the orthogonal projection of V(t, x) onto L²_{a,b}, that is, each component of V_{a,b} is the projection of the corresponding component of V. We denote by V⊥_{a,b}(t, x) = V(t, x) − V_{a,b}(t, x) the orthogonal complement of V_{a,b}(t, x). Of course V⊥_{a,b}(t, x) is V⊥_{a,b}-measurable, while V_{a,b}(t, x) is V_{a,b}-measurable, for any t and x. The correlation matrix of the field V is defined as

    R(t, x) = [E[V_i(t, x) V_j(0, 0)]]_{i,j=1,…,d}.
We will assume that the Gaussian field V given by (7) satisfies the following hypotheses.

A1) V(t, x) is almost surely continuous in t and C¹ smooth in x. Moreover, for any a ≤ b, all V_{a,b}(t, x) have the same property.

A2) V(t, x) is divergence-free, i.e. div V(t, x) = Σ_{i=1}^d ∂_{x_i} V_i(t, x) ≡ 0.

A3) EV(0, 0) = 0.

A4) R(t, x) is Lipschitz continuous in x and so are the first partials in x of all its entries.

A5) The field V is T-dependent, i.e. there exists T > 0 such that R(t, x) = 0 for all |t| ≥ T and x ∈ R^d.
Remark 1. Clearly A4) implies the continuity of the field in t and x (see e.g. [1]). □

Remark 2. Note that although V_{a,b}(t, x) and V⊥_{a,b}(t, x) are no longer stationary in t, they are stationary in x, for any t fixed. Moreover

    div V_{a,b}(t, x) = div V⊥_{a,b}(t, x) ≡ 0,

i.e. both V_{a,b} and V⊥_{a,b} are divergence-free. □
Remark 3. We note that the field

    W(t, x) = V(t, x) / √(t² + |x|² + 1)

is a.s. bounded. This can be seen by using some well known conditions for the boundedness of a Gaussian field. We recall that for a Gaussian field G(t), where t ∈ T and T is some abstract parameter space, a d-ball with center at t and radius ρ is defined relative to the pseudometric

    d(t₁, t₂) = [E|G(t₁) − G(t₂)|²]^{1/2}.

Let N(ε) be the minimal number of d-balls of radius ε > 0 for the velocity field W needed to cover R^d. It can be verified that

    N(ε) ≤ Kε^{-2d},

where K is a constant independent of ε. It is also clear that for sufficiently large ε, N(ε) = 1. Thus

    ∫_0^{+∞} √(log N(ε)) dε < +∞.

According to Corollary 4.15 of [1] this is all we need to guarantee the a.s. boundedness of W. □
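The entropy integral of Remark 3 can be made concrete: with N(ε) ≤ Kε^{-2d} for small ε and N(ε) = 1 (so log N = 0) once one ball suffices, the integral is finite because √|log ε| is integrable at the origin. A numerical sketch; the constants K, d and the grid are arbitrary choices for illustration:

```python
import math

K, d = 10.0, 2

def log_N(eps):
    # Entropy bound from the remark: log N(eps) <= log K - 2d log eps,
    # truncated at 0 where a single ball already covers everything.
    return max(0.0, math.log(K) - 2 * d * math.log(eps))

def entropy_integral(h=1e-4, eps_max=5.0):
    # Riemann sum for int_0^inf sqrt(log N(eps)) d eps; the integrand
    # vanishes for eps > K^(1/(2d)), so a finite range suffices.
    n = int(eps_max / h)
    return sum(math.sqrt(log_N(k * h)) * h for k in range(1, n + 1))

print(entropy_integral())   # finite, a few units
```

The finiteness of this integral is exactly what Corollary 4.15 of [1] requires.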
Remark 4. Condition A1) is clearly satisfied when the covariance matrix R satisfies the condition

    |R(0, 0) − R(t, x)| + Σ_{i,j=1}^d |R_{ij}(0, 0) − R_{ij}(t, x)| ≤ C / |ln √(t² + |x|²)|^{1+δ},    (8)

where δ > 0 and

    R_{ij}(t, x) = ∂²_{x_i x_j} R(t, x)

for i, j = 1, …, d. Indeed, it is well known (see e.g. [1], p. 62) that (8) implies

    E|V(t, x) − V(t + h, x + k)|² + E|∇_x V(t, x) − ∇_x V(t + h, x + k)|² ≤ C / |ln √(h² + |k|²)|^{1+δ},    (9)

where k = (k₁, …, k_d). Since the projection operator is a contraction in the L²-norm, we can also see that V_{a,b}(t, x) satisfies (9), for any a ≤ b; thus by Theorem 3.4.1 of [1] it is continuous in t and C¹-smooth in x. □
Let us now consider the family of processes {X_ε(t)}_{t≥0}, ε > 0, given by

    dX_ε(t)/dt = (1/ε) V(t/ε², X_ε(t)/ε),    X_ε(0) = 0.                           (10)

Because V(t, x) grows at most linearly, both in t and x, (10) can be solved globally in t, and the processes {X_ε(t)}_{t≥0}, ε > 0, are therefore well defined. Note that each such process has continuous (even C¹ smooth) trajectories, therefore its law is supported in C([0, +∞); R^d), the space of all continuous functions with values in R^d equipped with the standard Fréchet space structure.

Definition 1. We will say that a family of processes {Y_ε(t)}_{t≥0}, ε > 0, having continuous trajectories converges weakly to a Brownian Motion if their laws in C([0, +∞); R^d) are weakly convergent to a Wiener measure over that space.
The main result of this paper can now be formulated as follows.

Theorem 1. Suppose that a Gaussian random field V(t, x) is stationary in t and x (i.e. V(t, x, ω) = Ṽ(τ_{t,x}(ω)) for some Ṽ(ω), as in (7)). Assume that it satisfies conditions A1)-A5). Then:

1° The improper integrals

    b_{i,j} = ∫_0^{+∞} E[V_i(t, X(t)) V_j(0, 0)] dt,    i, j = 1, …, d,

converge. Here X(t) = X_1(t).

2° The processes {X_ε(t)}_{t≥0}, ε > 0, converge weakly, as ε ↓ 0, to a Brownian Motion whose covariance matrix is given by

    κ_{i,j} = b_{i,j} + b_{j,i},    i, j = 1, …, d.                                (11)
At this point we would like to present a brief outline of the proof of Theorem 1. As usual, we will first establish the tightness of the family {X_ε(t)}_{t≥0}, ε > 0. According to [30] it is enough to prove that this family is tight in C([0, L]; R^d), the space of continuous functions on [0, L] with values in R^d, for any L > 0. In order to show this fact we need estimates of the type

    E|X_ε(u) − X_ε(t)|^p |X_ε(t) − X_ε(s)|² ≤ C(u − s)^{1+q},                      (12)

for all ε > 0, 0 ≤ s ≤ t ≤ u ≤ L, and some constants p, q, C > 0. After establishing (12) we can apply some of the classical weak compactness lemmas of Kolmogorov and Chentzov (see [5]) to conclude tightness. Note that

    X_ε(t) = εX(t/ε²),    ε > 0, t ≥ 0.
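The role of a bound such as (12) can be seen already for standard one dimensional Brownian motion, where independence of increments gives E|B(u) − B(t)|² |B(t) − B(s)|² = (u − t)(t − s) ≤ (u − s)², i.e. an estimate of the form (12) with p = q = 1. A Monte Carlo sketch, not taken from the paper:

```python
import math, random

def mc_product_moment(s, t, u, n=20000, seed=1):
    # Estimate E|B(u)-B(t)|^2 |B(t)-B(s)|^2 for standard 1-D Brownian
    # motion; the two increments are independent Gaussians.
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        inc1 = rng.gauss(0.0, math.sqrt(t - s))   # B(t) - B(s)
        inc2 = rng.gauss(0.0, math.sqrt(u - t))   # B(u) - B(t)
        acc += inc1 ** 2 * inc2 ** 2
    return acc / n

est = mc_product_moment(0.0, 0.5, 1.0)
print(est)               # ~0.25 = (u - t)(t - s)
print((1.0 - 0.0) ** 2)  # the (u - s)^{1+q} majorant with q = 1
```

For the advected trajectories of (10) no such independence is available, which is why the estimate requires the Markov chain machinery developed below.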
For brevity we shall use the notation

    V(t; s, x) = V(t, X^{s,x}(t)),

with the obvious extension to the components of the field. Let us also make the convention of suppressing the last two arguments, s and x, when they both vanish. For any nonnegative integer k and i₁, …, i_k ∈ {1, …, d} we shall use the notation

    V_{i₁,…,i_k}(s₁, …, s_k; s, x) = ∏_{p=1}^k V_{i_p}(s_p, X^{s,x}(s_p)).

Again, we shall suppress writing the last two arguments when they are both zero. We now write the equation for the trajectories in integral form,

    X_ε(t) − X_ε(s) = ε ∫_{s/ε²}^{t/ε²} V(ρ) dρ,
and so the left hand side of (12) is

    Σ_{i=1}^d 2ε² ∫_{s/ε²}^{t/ε²} dρ′ ∫_{s/ε²}^{ρ′} E[|X_ε(u) − X_ε(t)|^p V_{i,i}(ρ′, ρ″)] dρ″.    (13)

Since, according to Lemma 1, the field {V(t)}_{t≥0} is stationary, (13) is equal to

    Σ_{i=1}^d 2ε² ∫_{s/ε²}^{t/ε²} dρ′ ∫_{s/ε²}^{ρ′} E[|X_ε(u − ε²ρ″) − X_ε(t − ε²ρ″)|^p V_{i,i}(ρ′ − ρ″, 0)] dρ″ =    (14)

    Σ_{i=1}^d 2ε² ∫_{s/ε²}^{t/ε²} dρ′ ∫_0^{ρ′ − s/ε²} E[|X_ε(u − ε²ρ′ + ε²ρ″) − X_ε(t − ε²ρ′ + ε²ρ″)|^p V_{i,i}(ρ″, 0)] dρ″.
The key observation we make, in Remark 1 at the end of Section 4, is that for any positive integer k the sequence of Lagrangian velocities

    {V(t₁ + NT, ω), …, V(t_k + NT, ω)}_{N≥0}

has law identical with that of the sequence

    {V(t₁, η_N), …, V(t_k, η_N)}_{N≥0},

where {η_N}_{N≥0} is a certain Markov chain of random environments (i.e. the state space of the chain is (Ω, V_{-∞,0})) defined over the probability space (Ω, V⊥_{-∞,0}, P). Now the expectation in (14) can be written as

    E[|X_ε(u − ε²ρ′ + ε²(ρ″ − NT), η_N(ω)) − X_ε(t − ε²ρ′ + ε²(ρ″ − NT), η_N(ω))|^p V_i(ρ″ − NT, η_N(ω)) V_i(0, ω)] =
    E[|X_ε(u − ε²ρ′ + ε²(ρ″ − NT)) − X_ε(t − ε²ρ′ + ε²(ρ″ − NT))|^p V_i(ρ″ − NT) Q^N Ṽ_i],

where N = [ρ″/T] ([·] is the greatest integer function), Q is the transition probability operator of the Markov chain {η_N}_{N≥0}, and Ṽ_i is the i-th component of the random vector Ṽ defining the random field (see (7)). In order to be able to define the chain we will have to develop some tools in Sections 3 and 4 and establish certain technical lemmas about the expectation of a process along the trajectory X(t), t ≥ 0, i.e. f(τ_{t,X(t,ω)}(ω)), conditioned on the σ-algebra V_{-∞,0} representing here the past.

The proof of estimate (12) can thus be reduced to the investigation of the rate of convergence of the orbit {Q^N Ṽ_i}_{N≥0} to 0, i = 1, …, d. We will prove in Lemma 10 that the L¹-norm of Q^N Ṽ_i satisfies

    lim_{N→+∞} N^s ‖Q^N Ṽ_i‖_{L¹} = 0,                                             (15)

for any s > 0. The result in (15) may appear to be too weak to claim (12); yet, combined with the fact that V(t) is a Gaussian random variable for any fixed t ≥ 0, hence

    P[|V(ρ″ − NT, X(ρ″ − NT))| ≥ N] ≤ e^{-CN²}

for some constant C > 0, we will be able to prove in Section 6 that (12) is estimated by C(u − s)^{1+q}, for some q > 0. In this way we will establish the tightness of {X_ε(t)}_{t≥0}, ε > 0. The limit identification will be done by proving that any limiting measure must be a solution of a certain martingale problem. The uniqueness of the limiting measure will then follow from the fact that the martingale problem is well-posed in the sense of [34].
Remark. In what follows we wish to illustrate the idea of the abstract valued Markov chain {η_n}_{n≥0} which we have introduced above and shall develop in Section 4. Consider a space-time stationary velocity field V(t, x, ω) on a probability space (Ω, V, P) with the group of measure preserving transformations τ_{t,x}, t ∈ R, x ∈ R^d. Let us denote by Z₊ the set of all positive integers, let

    Ω̃ = ∏_{n∈Z₊} Ω,    Ṽ = ⊗_{n∈Z₊} V,    P̃ = ⊗_{n∈Z₊} P,

and define

    W(t, x, (ω_n)_{n∈Z₊}) = V(t − NT, x, ω_N),    for NT ≤ t < (N + 1)T.

This field is of course not time stationary (actually it can be made so by randomizing the "switching times" NT), and we do not even assume that it is Gaussian. However, we wish to use this case to shed some light on the main points of the construction we carry out in this article.
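A minimal computational sketch of this switched construction follows; everything in it is hypothetical (the one-mode "copies" merely stand in for independent realizations of V), but it shows how W pastes an independent environment into each time window of length T, so that values in windows at least T apart are independent.

```python
import math, random

T = 1.0

def make_copy(seed):
    # One realization omega_n of the underlying field: here a single
    # random travelling cosine, an illustrative stand-in for V.
    rng = random.Random(seed)
    a, b = rng.gauss(0, 1), rng.gauss(0, 1)
    phi = rng.uniform(0, 2 * math.pi)
    return lambda t, x: a * math.cos(b * x + phi + t)

def W(t, x, master_seed=0):
    # W(t, x, (omega_n)) = V(t - N*T, x, omega_N) for N*T <= t < (N+1)*T:
    # a fresh independent copy is switched in on every window.
    N = int(t // T)
    return make_copy(100003 * master_seed + N)(t - N * T, x)

print(W(0.2, 0.0), W(0.9, 0.0))   # same copy omega_0 within the window
print(W(1.2, 0.0))                # fresh copy omega_1 in the next window

# Values in different windows come from independent zero-mean copies,
# so their empirical correlation over many realizations is near zero.
est = sum(W(0.2, 0.0, s) * W(1.7, 0.0, s) for s in range(2000)) / 2000
print(est)   # ~0
```

This T-dependence by construction is the toy analogue of hypothesis A5).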
Let

    S((ω_n)_{n∈Z₊}) = (ω_{n+1})_{n∈Z₊}

and

    Z((ω_n)_{n∈Z₊}) = S((τ_{0,X(T,ω₀)}(ω_n))_{n∈Z₊}).

Here X(t) denotes the random trajectory generated by V starting at 0 at time t = 0. Let us consider the Markov chain with state space Ω̃, on the probability space (Ω̃, Ṽ, P̃), given by

    η₀((ω_n)_{n∈Z₊}) = (ω_n)_{n∈Z₊},
    η_{m+1}((ω_n)_{n∈Z₊}) = Z(η_m((ω_n)_{n∈Z₊})).

We can easily observe that

    W(t, Y(t, (ω_n)_{n∈Z₊}), (ω_n)_{n∈Z₊}) = W(t − NT, Y(t − NT, η_N((ω_n)_{n∈Z₊})), η_N((ω_n)_{n∈Z₊}))

for all t and N ∈ Z₊. By Y we have denoted the random trajectory of a particle generated by the flow W and such that Y(0) = 0. The transition probability operator Q for the chain {η_m((ω_n)_{n∈Z₊})}_{m∈Z₊}, defined on L¹(Ω̃, Ṽ, P̃), is given by the formula

    Qf((ω_n)_{n∈Z₊}) = ∫ f((ω′_n)_{n∈Z₊}) P(dω₀),                                  (16)

where

    ω′₀ = ω₀,    ω′_n = τ_{0,−X(T,ω₀)}(ω_n)  for n ≥ 1.

Let us observe that

    Q1 = 1,

i.e. Q preserves P̃. Therefore (13) equals, up to a term of order O(ε²),
    Σ_{i=1}^d 2ε² ∫_0^{(t−s)/ε²} dρ′ ∫_0^{ρ′ − s/ε²} E[|Y_ε(u − s) − Y_ε(t − s)|^p W̄_{i,i}(ρ′, ρ″)] dρ″.

Here

    W̄_i(t, (ω_n)_{n∈Z₊}) = W_i(t, Y(t, (ω_n)_{n∈Z₊}), (ω_n)_{n∈Z₊}),    i = 1, …, d,

and the notation concerning multiple products of the above processes can be introduced in analogy with what we have done for V. We also denote

    Y_ε(t) = εY(t/ε²).
Consider only the off-diagonal part of the double integral, i.e. the part where ρ′ − ρ″ > T. We get, again up to a term of order O(ε²),

    Σ_{i=1}^d 2ε² ∫∫_{(t−s)/ε² > ρ′ > ρ″+T} Σ_{n=[ρ′/T]}^{(t−s)/(ε²T)} Σ_{m=0}^{[ρ″/T]} E[|Y_ε(u − s − ε²nT, η_{n−m}) − Y_ε(t − s − ε²nT, η_{n−m})|^p W̄_i(ρ′ − nT, η_{n−m}) W̄_i(ρ″ − mT)] dρ′ dρ″ =    (17)

    Σ_{i=1}^d 2ε² ∫∫_{(t−s)/ε² > ρ′ > ρ″+T} Σ_{n=[ρ′/T]}^{(t−s)/(ε²T)} Σ_{m=0}^{[ρ″/T]} E[|Y_ε(u − s − ε²nT) − Y_ε(t − s − ε²nT)|^p W̄_i(ρ′ − nT) Q^{n−m}(W̄_i(ρ″ − mT))] dρ′ dρ″.

Let us observe that the Lagrangian velocity W̄_i(ρ″ − mT) is of zero mean and Ṽ₀-measurable, where

    Ṽ₀ = V ⊗ 𝒯 ⊗ 𝒯 ⊗ ⋯

and 𝒯 = {∅, Ω} is the trivial σ-algebra. Using (16), the last term in (17) can therefore easily be shown to equal 0 when n − m ≥ 1. We can thus see that for any p > 0

    E[|Y_ε(u) − Y_ε(t)|^p |Y_ε(t) − Y_ε(s)|²] ≤ C(t − s),

which suffices to claim the tightness of the family {Y_ε(t)}_{t≥0}.
3 Lemmas On Stationarity And Conditional Expectations.

The following lemma is a version of a well known fact about random shifts, shown in [27] and [15], where it was derived with the help of the theory of Palm measures. Before formulating it, let us make a convention concerning the terminology. We say that a random vector field W(t, x) on the probability space (Ω, V, P) is of at most linear growth in t and x if for almost every ω ∈ Ω there is a constant K(ω) such that |W(t, x, ω)| ≤ K(ω)(|t| + |x|).

Remark. Note that, according to Remark 3 given after the listing of conditions A1)-A5) in the last section, V(t, x) is of at most linear growth. □

For a random variable Ũ : Ω → R let us denote

    U(t; s, x, ω) = Ũ(τ_{t,Y^{s,x}(t,ω)}(ω)),

with the convention of suppressing both s and x when they are both zero.
Lemma 1. Suppose that W(t, x) = (W_1(t, x), …, W_d(t, x)) is a space-time strictly stationary random d-dimensional vector field on (Ω, V, P), i.e. there exists W̃ so that W(t, x, ω) = W̃(τ_{t,x}(ω)), for all ω, t, x. In addition assume that W has trajectories continuous in t and C¹ smooth in x, that

    div W(t, x) = Σ_{i=1}^d ∂_{x_i} W_i ≡ 0,

and that its growth in t, x is at most linear. Then for any s, x and Ũ ∈ L¹(Ω, V, P) the random process U(t; s, x), t ∈ R, is strictly stationary. Here

    Y^{s,x}(t) = x + ∫_s^t W(τ, Y^{s,x}(τ)) dτ.

Proof. Note that for any t, h ∈ R

    Y^{s,x}(t + h, ω) = Y^{s,x}(h, ω) + Y^{0,0}(t, θ_h(ω)),                        (18)

where the transformation of the probability space

    θ_h : ω ↦ τ_{h,Y^{s,x}(h,ω)}(ω)

preserves the measure P; see e.g. [27], Theorem 3, p. 501, or the results of [15]. We can write that, for any t₁ ≤ ⋯ ≤ t_n and Borel sets A₁, …, A_n ⊂ R^d,

    E 1_{A₁}(U(t₁ + h; s, x)) ⋯ 1_{A_n}(U(t_n + h; s, x)) =
    E 1_{A₁}(U(t₁, θ_h(ω))) ⋯ 1_{A_n}(U(t_n, θ_h(ω))) =
    E 1_{A₁}(U(t₁)) ⋯ 1_{A_n}(U(t_n)). □
In the next lemma we recall Theorem 3 of Port and Stone ([27]).

Lemma 2. Under the assumptions of Lemma 1,

    E Ũ(τ_{0,Y^{s,x}(t,ω)}(ω)) = E Ũ.
The following lemma is a simple consequence of the fact that for Gaussian random variables L²-orthogonality and independence coincide.

Lemma 3. Assume that f ∈ L²(Ω, V_{-∞,+∞}, P) and -∞ ≤ a ≤ b ≤ +∞. Then there exists f̃ ∈ L²(Ω × Ω, V_{a,b} ⊗ V⊥_{a,b}, P ⊗ P) such that

i) f(ω) = f̃(ω, ω);

ii) f and f̃ have the same probability distributions.

Before stating the proof of this lemma let us introduce some additional notation. Denote by C = C(R^{d+1}; R^d) the space of all continuous mappings from R^{d+1} to R^d. For f, g ∈ C,

    D(f, g) = Σ_{n=1}^∞ 2^{-n} sup_{|t|+|x|≤n} |f(t, x) − g(t, x)| / (1 + sup_{|t|+|x|≤n} |f(t, x) − g(t, x)|).

As is well known, D is a metric on C and the metric space (C, D) is a Polish space, i.e. it is separable and complete. By M we denote the smallest σ-algebra containing the sets of the form

    [f : f(t, x) ∈ A],

where (t, x) ∈ R^{d+1} and A ∈ B_{R^d}. It can be shown with no difficulty that M is the Borel σ-algebra of subsets of (C, D).

Proof of Lemma 3. Consider the mappings Φ, Φ_{a,b}, Φ⊥_{a,b} from Ω to C and Ψ from Ω × Ω to C given by the formulas

    Φ(ω) = V(·, ·, ω),
    Φ_{a,b}(ω) = V_{a,b}(·, ·, ω),
    Φ⊥_{a,b}(ω) = V⊥_{a,b}(·, ·, ω),
    Ψ(ω, ω′) = V_{a,b}(·, ·, ω) + V⊥_{a,b}(·, ·, ω′),

where in the last definition the addition is understood as the addition of two vectors in the linear space C. These mappings are V_{-∞,+∞} to M measurable, V_{a,b} to M measurable, V⊥_{a,b} to M measurable, and V_{a,b} ⊗ V⊥_{a,b} to M measurable, respectively.

Suppose that f : Ω → R is a V_{-∞,+∞}-measurable random variable. There exists g : C → R such that f = g ∘ Φ. Set

    f̃(ω, ω′) = g(Ψ(ω, ω′)).

Obviously it fulfills condition i) of the conclusion of the lemma. The fact that f and f̃ have the same probability distributions follows from the observation that the probability distribution laws of the processes

    V(t, x, ω) = Φ(ω)(t, x)

and

    W(t, x, ω, ω′) = Ψ(ω, ω′)(t, x)

are identical. □
The following lemma provides us with a simple formula for the expectation of a random process along the trajectory X(t), t ≥ 0 (the solution of (10) with ε = 1), conditioned on V_{a,b}. Let X^{s,x} be the solution of

    X^{s,x}(t) = x + ∫_s^t V(ρ, X^{s,x}(ρ)) dρ,

with the convention that we suppress the superscripts when s = 0 and x = 0. By {X̃^{b,x}_c(t)}_{t∈R} we denote the stochastic process obtained by solving the following system of O.D.E.-s:

    dX̃^{b,x}_c(t)/dt = V_{a,b}(t + c, X̃^{b,x}_c(t, ω, ω′), ω) + V⊥_{a,b}(t + c, X̃^{b,x}_c(t, ω, ω′), ω′),
    X̃^{b,x}_c(b) = x.                                                             (19)

In addition, when a = -∞ and b = 0 we set

    X̃ = X̃^{0,0}_0.                                                                (20)
Lemma 4. Assume that f ∈ L²(Ω, V_{-∞,+∞}, P). Let f̃ be any random variable which satisfies conditions i) and ii) of Lemma 3. Suppose that -∞ ≤ a ≤ b ≤ +∞ and that c is any real number. Then for any event C that is V⊥_{a,b}-measurable,

    E{f(τ_{0,X^{b,0}(t,τ_{c,0}(ω))}(ω)) 1_C(ω) | V_{a,b}} =
    E_{ω′}{f̃(τ_{0,X̃^{b,0}_c(t,ω,ω′)}(ω), τ_{0,X̃^{b,0}_c(t,ω,ω′)}(ω′)) 1_C(ω′)}.

Here E_{ω′} denotes the expectation taken over the ω′ variable only.

Proof. Let h be fixed. Consider the difference equation

    X^h_{n+1} = X^h_n + V^h_n(X^h_n) h,    X^h_0 = 0,

where V^h_n(x) = V(b + c + nh, x). Define also {X̃^h_n}_{n≥0} via the difference equation

    X̃^h_{n+1} = X̃^h_n + U^h_n(X̃^h_n) h + W^h_n(X̃^h_n) h,    X̃^h_0 = 0,

where

    U^h_n(x) = V_{a,b}(b + c + nh, x),    W^h_n(x) = V⊥_{a,b}(b + c + nh, x).

The following lemma holds.

Lemma 5. For any bounded measurable function φ : R^d → R^d and C ∈ V⊥_{a,b},

    E[φ(X^h_n) 1_C | V_{a,b}] = E_{ω′}[φ(X̃^h_n(ω, ω′)) 1_C(ω′)],    for all n ≥ 0.    (21)
We shall prove this lemma after we first finish the proof of Lemma 4. Suppose that f̃ satisfies conditions i), ii) of the conclusion of Lemma 3. Then

    E[f(τ_{0,X^{b,0}(t,τ_{c,0}(ω))}(ω)) 1_C(ω) | V_{a,b}] =
    E[f̃(τ_{0,X^{b,0}(t,τ_{c,0}(ω))}(ω), τ_{0,X^{b,0}(t,τ_{c,0}(ω))}(ω)) 1_C(ω) | V_{a,b}].    (22)

By a standard approximation argument we may consider f̃(ω, ω′) as being of the form f₁(ω) f₂(ω′), where f₁, f₂ are respectively V_{a,b}-measurable and V⊥_{a,b}-measurable. Then the right hand side of (22) is equal to

    E[f₁(τ_{0,X^{b,0}(t,τ_{c,0}(ω))}(ω)) f₂(τ_{0,X^{b,0}(t,τ_{c,0}(ω))}(ω)) 1_C(ω) | V_{a,b}].

Consider the two random functions

    (x, ω) ↦ F₁(x, ω) = f₁(τ_{0,x}(ω))    and    (x, ω) ↦ F₂(x, ω) = f₂(τ_{0,x}(ω)),

which are respectively B_{R^d} ⊗ V_{a,b}-measurable and B_{R^d} ⊗ V⊥_{a,b}-measurable. We wish to prove that

    E[F₁(X^{b,0}(t, τ_{c,0}(ω)), ω) F₂(X^{b,0}(t, τ_{c,0}(ω)), ω) 1_C(ω) | V_{a,b}] =
    E_{ω′}[F₁(X̃^{b,0}_c(t, ω, ω′), ω) F₂(X̃^{b,0}_c(t, ω, ω′), ω′) 1_C(ω′)].          (23)

Let us consider functions of the form

    F₁(x, ω) = ϕ₁(x) 1_{C₁}(ω),    F₂(x, ω) = ϕ₂(x) 1_{C₂}(ω),                     (24)

where ϕ₁, ϕ₂ : R^d → R₊ are bounded and continuous, R₊ = [0, +∞), and C₁ ∈ V_{a,b}, C₂ ∈ V⊥_{a,b}. It is enough to prove (23) for such functions only; by a standard approximation argument we can then obtain (23) for arbitrary measurable F₁, F₂. Equation (23) will be proven if we can show that for any n ≥ 0 and h ∈ R

    E[ϕ₁(X^h_n) 1_{C₁} ϕ₂(X^h_n) 1_{C₂} 1_C | V_{a,b}] =
    E[ϕ₁(X^h_n) ϕ₂(X^h_n) 1_{C₂} 1_C | V_{a,b}] 1_{C₁} =                           (25)
    E_{ω′}[ϕ₁(X̃^h_n(ω, ω′)) ϕ₂(X̃^h_n(ω, ω′)) 1_{C₂}(ω′) 1_C(ω′)] 1_{C₁}(ω).

Indeed, passing to the limit with h ↓ 0, we see that the functions

    X^h(t) = X^h_n,  for nh ≤ t < (n + 1)h,    and    X̃^h(t) = X̃^h_n,  for nh ≤ t < (n + 1)h,

converge, for all ω, ω′, to X(t) and X̃(t), respectively, uniformly on any compact interval. The first equality in (25) follows from an elementary property of conditional expectations, while the second is a direct application of Lemma 5. □

Hence the only thing remaining to be proven is Lemma 5.
Hence the only thing remaining to be proven is Lemma 5.
Proof of Lemma 5. Formula (21) is obviously true for n = 0. Suppose it holds for
a certain n. We get for n + 1 that
Ef'(Xhn+1 )1C j Va;bg = Ef'(Xhn + Vnh (Xhn )h)1C j Va;bg =
Ef'(Xhn + Uhn (Xhn )h + Wnh (Xhn)h)1C j Va;bg:
The only thing we need yet to prove is therefore that for any continuous function
Rd Rd Rd
! R+
Ef (Xhn ; Uhn(Xhn )h; Wnh(Xhn )h)1C j Va;bg =
E!0 f (Xe hn (!; !0); Uhn(Xe hn(!; !0); !)h; Wnh(Xe hn (!; !0); !0)h)1C (!0)g:
(26)
:
(27)
Actually it is enough to consider only (x; y; z ) = 1 (x) 2(y ) 3 (z ). A standard approximation argument will then yield (26) for any . We need to verify that
Ef 1(Xhn ) 2(Uhn(Xhn )h) 3(Wnh(Xhn )h)1C j Va:bg =
(28)
E!0 f 1(Xe hn(!; !0)) 2(Uhn (Xe hn (!; !0); !)h) 3(Wnh(Xe hn(!; !0); !0)h)1C (!0)g:
Consider now the random functions
2 (x; ! ) = 2 (Uhn (x; ! )h)
and
3 (x; ! ) = 3 (Wnh (x; ! )h):
? -measurable, respectively. Applying again
They are BRd Va;b-measurable and BRd Va:b
an approximation argument we may reduce the verication procedure to the functions of
the form
2 (x; ! ) = e 2 (x)1C2 (! )
and
3 (x; ! ) = e 3 (x)1C3 (! );
? . Substituting those elds to
where e 2 ; e 3 : Rd ! R+ are continuous, C2 2 Va;b , C3 2 Va;b
(28) we get
Ef 1 (Xhn )e 2(Xhn )1C2 e 3(Xhn )1C3 1C j Va;bg =
Ef 1 (Xhn )e 2(Xhn )e 3(Xhn)1C3 1C j Va;bg1C2 =
E!0 f 1(Xe hn (!; !0))e 2(Xe hn (!; !0))e 3(Xe hn (!; !0))1C (!0)1C3 (!0)g1C2 (!):
The last equality holds because we have assumed that Lemma 5 is true for n. Therefore
we have veried (28) for functions specied as above and thus the proof of the lemma can
be concluded with a help of a standard approximation argument2
15
4 The Operator Q And Its Properties.

Throughout this section we shall assume that X̃ has the same meaning as in (20). For fixed ω let Z^t_ω : Ω → Ω be defined by

    Z^t_ω(ω′) = τ_{0,X̃(t,ω,ω′)}(ω′).

Let J^t(·, ω) be the probability measure on (Ω, V⊥_{-∞,0}, P) given by

    J^t(A, ω) = P[(Z^t_ω)^{-1}(A)].

It follows easily from the definition of X̃(t), t ≥ 0, that for any set A ∈ V⊥_{-∞,0} the function ω ↦ J^t(A, ω) is V_{-∞,0}-measurable. Let V^s_{0,T} be the σ-algebra generated by V_{-∞,0}(t, x), 0 ≤ t ≤ T, and let us set

    V^s_{-∞,0} = τ_{T,0}(V^s_{0,T}) ∨ V_{-∞,0}.

For any f ∈ L¹(Ω, V_{-∞,0}, P) such that f ≥ 0, ∫ f dP = 1, we define

    [Qf][A] = ∫ J^T(A, ω) f(ω) P(dω),

which is a probability measure on (Ω, V⊥_{-∞,0}).
Lemma 6. 1) For any f as described above, [Qf] is an absolutely continuous measure with respect to P.
2) The Radon-Nikodym derivative of [Qf] with respect to P, d[Qf]/dP, is V^s_{0,T}-measurable.
3) [Q1] = P, where 1 = 1_Ω.

Proof. 1) Suppose that P[A] = 0 for some A ∈ V⊥_{-∞,0}. Then

    [Qf][A] = ∫ J^T(A, ω) f(ω) P(dω) = ∫ f(ω) P(dω) ∫ 1_A(τ_{0,X̃(T,ω,ω′)}(ω′)) P(dω′).

By Lemma 4 we get that the last expression equals

    ∫ f(ω) E{1_A(τ_{0,X(T,ω)}(ω)) | V_{-∞,0}} P(dω) = ∫ f(ω) 1_A(τ_{0,X(T,ω)}(ω)) P(dω),    (29)

which vanishes, since E 1_A(τ_{0,X(T,ω)}(ω)) = E 1_A = P[A] = 0 by virtue of Lemma 2.

2) Let

    θ_t(ω) = τ_{t,X(t,ω)}(ω),    t ∈ R.

Then the process

    {f(θ_t(ω)) 1_A(τ_{0,X(T,θ_t(ω))}(θ_t(ω)))}_{t∈R}

is stationary by virtue of Lemma 2. The right side of (29) is hence equal to

    ∫ f(θ_{-T}(ω)) 1_A(τ_{-T,0}(ω)) P(dω),                                         (30)

since by (18)

    X(-T, ω) + X(T, θ_{-T}(ω)) = X(0, ω) = 0.

Using the invariance of P with respect to τ_{-T,0}, we get that expression (30) is equal to

    ∫ f(τ_{0,X(-T,τ_{T,0}(ω))}(ω)) 1_A(ω) P(dω),

which by Lemma 4 equals

    ∫∫ f(τ_{0,X̃^{0,0}_T(-T,ω,ω′)}(ω)) 1_A(ω′) P(dω) P(dω′) = ∫_A P(dω′) ∫ f(τ_{0,X̃^{0,0}_T(-T,ω,ω′)}(ω)) P(dω),

where X̃^{0,0}_T is defined by formula (19) for a = -∞, b = 0. Since

    V_{-∞,0}(t + T, x, ω′),    x ∈ R^d, -T ≤ t ≤ 0,

are V^s_{0,T}-measurable, X̃^{0,0}_T(-T, ω, ω′) is jointly V_{-∞,0} ⊗ V^s_{0,T}-measurable. Thus

    ∫ f(τ_{0,X̃^{0,0}_T(-T,ω,ω′)}(ω)) P(dω)

is V^s_{0,T}-measurable, which proves part 2) of the lemma.

3)

    [Q1][A] = ∫ J^T(A, ω) P(dω) = E{1_A(τ_{0,X(T,ω)}(ω))} = E 1_A = P[A].

The next to last equality holds by virtue of Lemma 2. □
For an arbitrary $f$ we define $Qf$ by the following formula:
$$Qf = \frac{d[Qf]}{dP}\circ\theta_{-T,0}.$$
Here $\frac{d[Qf]}{dP}$ is the Radon-Nikodym derivative of $[Qf]$ with respect to $P$ on $(\Omega,\mathcal{V}'_{-\infty,0},P)$. By Lemma 6, part 2),
$$Qf\in L^1(\Omega,\mathcal{V}^0_{-\infty,0},P).$$
The following lemma contains several useful properties of $Q$.

Lemma 7. 1) $Q : L^1(\Omega,\mathcal{V}_{-\infty,0},P)\to L^1(\Omega,\mathcal{V}^0_{-\infty,0},P)$ is a linear operator.
2) $Qf\ge0$ for $f\ge0$.
3) $\int Qf\,dP = \int f\,dP$.
4) $Q\mathbf{1} = \mathbf{1}$.
5) $\|Qf\|_{L^p}\le\|f\|_{L^p}$ for any $f\in L^p(\Omega,\mathcal{V}_{-\infty,0},P)$, $1\le p\le+\infty$.
Proof. 1) and 4) follow directly from Lemma 6, and 2) is obvious.
3) Note that from (29) we get
$$\int Qf\,dP = [Qf][\Omega] = \int f(\omega)\,\mathbf{1}_\Omega\big(\theta_{0,X(T;\omega)}(\omega)\big)\,P(d\omega) = \int f\,dP.$$
5) Properties 2) and 4) imply that $Q$ is a contraction in $L^\infty$, while 2) and 3) imply that it is a contraction in $L^1(\Omega,\mathcal{V}_{-\infty,0},P)$ (cf. [14], p. 75). Therefore, by the Riesz-Thorin convexity theorem (see [10], p. 525), $Q$ is a contraction in any $L^p(\Omega,\mathcal{V}^0_{-\infty,0},P)$ space. $\Box$
The next lemma will be of great use in establishing the Kolmogorov-type estimates leading to the proof of weak compactness.

Lemma 8. Let $p\ge0$, let $N, k$ be positive integers, $s_{k+2}\ge s_{k+1}\ge\cdots\ge s_1\ge NT$ and $1\le i\le d$. Assume that $Y\in L^1(\Omega,\mathcal{V}_{-\infty,0},P)$. Then
$$E\Big[\Big|\int_{s_{k+1}}^{s_{k+2}}V(\varrho)\,d\varrho\Big|^p\,V_{i,\dots,i}(s_1,\dots,s_k)\,Y\Big] = E\Big[\Big|\int_{s_{k+1}-NT}^{s_{k+2}-NT}V(\varrho)\,d\varrho\Big|^p\,V_{i,\dots,i}(s_1-NT,\dots,s_k-NT)\,Q^NY\Big].$$
Proof. Let us observe that
$$E\Big[\Big|\int_{s_{k+1}}^{s_{k+2}}V(\varrho)\,d\varrho\Big|^p\,V_{i,\dots,i}(s_1,\dots,s_k)\,Y\Big] = E\Big\{E\Big[\Big|\int_{s_{k+1}}^{s_{k+2}}V(\varrho)\,d\varrho\Big|^p\,V_{i,\dots,i}(s_1,\dots,s_k)\ \Big|\ \mathcal{V}_{-\infty,0}\Big]\,Y\Big\}. \quad(31)$$
Let us define
$$f(\omega) = \Big|\int_{s_{k+1}}^{s_{k+2}}V(\varrho;T,0;\omega)\,d\varrho\Big|^p\,V_{i,\dots,i}(s_1,\dots,s_k;T,0;\omega)$$
and
$$\tilde f(\omega,\omega') = f(\omega'). \quad(32)$$
Using the fact that
$$X(t;\omega) = X_{T,0}\big(t;\theta_{0,X(T;\omega)}(\omega)\big) + X(T;\omega)$$
we can write that the right-hand side of (31) equals
$$E\Big\{E\Big[\Big|\int_{s_{k+1}}^{s_{k+2}}V\big(\varrho;T,0;\theta_{0,X(T;\omega)}(\omega)\big)\,d\varrho\Big|^p\,V_{i,\dots,i}\big(s_1,\dots,s_k;T,0;\theta_{0,X(T;\omega)}(\omega)\big)\ \Big|\ \mathcal{V}_{-\infty,0}\Big]\,Y\Big\}. \quad(33)$$
Applying Lemma 4 with $t = T$, $b, c = 0$ and $a = -\infty$ we obtain that (33) equals
$$\int Y(\omega)\,P(d\omega)\int\Big|\int_{s_{k+1}}^{s_{k+2}}V\big(\varrho;T,0;\theta_{0,\tilde X(T;\omega,\omega')}(\omega')\big)\,d\varrho\Big|^p\,V_{i,\dots,i}\big(s_1,\dots,s_k;T,0;\theta_{0,\tilde X(T;\omega,\omega')}(\omega')\big)\,P(d\omega'). \quad(34)$$
By the definition of $J^T(d\omega',\omega)$ given at the beginning of this section we can conclude that the last expression in (34) is equal to
$$\int Y(\omega)\,P(d\omega)\int\Big|\int_{s_{k+1}}^{s_{k+2}}V(\varrho;T,0;\omega')\,d\varrho\Big|^p\,V_{i,\dots,i}(s_1,\dots,s_k;T,0;\omega')\,J^T(d\omega',\omega). \quad(35)$$
By virtue of part 2) of Lemma 6 and the definition of the operator $Q$ given after that lemma, the expression in (35) is equal to
$$\int\Big|\int_{s_{k+1}}^{s_{k+2}}\tilde V\big(\varrho-T,X_{T,0}(\varrho;\theta_{-T,0}(\omega'));\omega'\big)\,d\varrho\Big|^p\,\tilde V_i\big(s_k-T,X_{T,0}(s_k;\theta_{-T,0}(\omega'));\omega'\big)\cdots\tilde V_i\big(s_1-T,X_{T,0}(s_1;\theta_{-T,0}(\omega'));\omega'\big)\,QY\,P(d\omega'), \quad(36)$$
where $QY$ is $\mathcal{V}^0_{-\infty,0}$-measurable according to Lemmas 6 and 7. Using the fact that
$$X_{T,0}\big(\varrho;\theta_{-T,0}(\omega')\big) = X(\varrho-T;\omega')$$
we get that (36) equals
$$\int\Big|\int_{s_{k+1}-T}^{s_{k+2}-T}V(\varrho;\omega')\,d\varrho\Big|^p\,V_{i,\dots,i}(s_1-T,\dots,s_k-T;\omega')\,QY(\omega')\,P(d\omega').$$
Repeating the above procedure $N$ times we obtain the desired result. $\Box$
Remark 1. What we have done so far admits an interpretation in terms of a certain Markov chain defined on the abstract space of random environments $\Omega$. Let us consider the random family of maps $Z(\cdot,\omega') : \Omega\to\Omega$, $\omega'\in\Omega$, defined by
$$Z(\omega,\omega') = Z^T_\omega\big(\theta_{-T,0}(\omega')\big).$$
Using the argument employed to prove part 2) of Lemma 6 we note that the map $Z : \Omega\times\Omega\to\Omega$ is $\mathcal{V}_{-\infty,0}\otimes\mathcal{V}'_{-\infty,0}$-measurable, and therefore $\{\omega_n\}_{n\ge0}$, given by
$$\omega_0(\omega') = \omega,\qquad\omega_{n+1}(\omega') = Z(\omega_n(\omega'),\omega'),\quad n\ge0,$$
is a Markov family with values in the abstract state space of random environments $(\Omega,\mathcal{V}_{-\infty,0})$, defined over the probability space $(\Omega,\mathcal{V}'_{-\infty,0},P)$, whose transition operator is $Q$. Lemma 8 can now be formulated as follows.
Lemma 9. Let $p\ge0$, let $N, k$ be positive integers, $s_{k+2}\ge s_{k+1}\ge\cdots\ge s_1\ge NT$ and $1\le i\le d$. Assume that $Y\in L^1(\Omega,\mathcal{V}_{-\infty,0},P)$. Then
$$E\Big[\Big|\int_{s_{k+1}}^{s_{k+2}}V(\varrho)\,d\varrho\Big|^p\,V_{i,\dots,i}(s_1,\dots,s_k)\,Y\Big] = E^\omega E^{\omega'}\Big[\Big|\int_{s_{k+1}-NT}^{s_{k+2}-NT}V\big(\varrho;\omega_N(\omega')\big)\,d\varrho\Big|^p\,V_{i,\dots,i}\big(s_1-NT,\dots,s_k-NT;\omega_N(\omega')\big)\,Y(\omega)\Big].$$
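In the spirit of Remark 1, the identity of Lemma 9 (taken with $p = 0$ and a single observable) has an elementary finite-state counterpart: for a chain with transition matrix $K$ started from its stationary law $\pi$, averaging an observable of the state at time $N$ against a function of the initial state is the same as applying the $N$-th power of the transition operator first. The toy check below (all matrices and constants are illustrative, not objects of the paper) verifies this by enumerating paths:

```python
import numpy as np

# Finite toy version of Remark 1 / Lemma 9 (p = 0, one observable):
#     E[ g(omega_N) Y(omega_0) ] = E[ (K^N g)(omega_0) Y(omega_0) ],
# for a stationary chain with transition matrix K and stationary law pi.
rng = np.random.default_rng(1)
n, N = 4, 3
K = rng.random((n, n)); K /= K.sum(axis=1, keepdims=True)
w, v = np.linalg.eig(K.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))]); pi /= pi.sum()

Y = rng.standard_normal(n)          # function of the initial environment
g = rng.standard_normal(n)          # observable of the environment at time N

# left side: brute-force sum over all length-N paths of the chain
lhs = 0.0
for path in np.ndindex(*(n,) * (N + 1)):
    p = pi[path[0]]
    for a, b in zip(path, path[1:]):
        p *= K[a, b]
    lhs += p * g[path[-1]] * Y[path[0]]

# right side: apply the N-th power of the transition operator to g
rhs = pi @ (Y * (np.linalg.matrix_power(K, N) @ g))
assert abs(lhs - rhs) < 1e-10
```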
5 Rate Of Convergence Of $\{Q^nY\}_{n\ge0}$.

Our main objective in this section is to prove the following lemma.

Lemma 10. Let $Y$ be a random variable belonging to $L^2(\Omega,\mathcal{V}_{-\infty,0},P)$ such that $EY = 0$. Then for any $s>0$ there is a constant $C$, depending only on $s$ and $\|Y\|_{L^2}$, such that
$$\|Q^nY\|_{L^1}\le\frac{C}{n^s}\quad\text{for all } n.$$
Proof. Note that, according to the definition of $[Qf]$, for any $f\ge0$ belonging to $L^1(\Omega,\mathcal{V}_{-\infty,0},P)$ and $A\in\mathcal{V}'_{-\infty,0}$,
$$[Qf][A] = \int_A\frac{d[Qf]}{dP}\,dP = \int J^T(A,\omega)f(\omega)\,P(d\omega). \quad(37)$$
According to [15], p. 149, Sec. 5, the part of $J^T(d\omega',\omega)$ that is absolutely continuous with respect to $P$ has a density given by the formula
$$\int_{\mathbb{R}^d}\frac{\sigma^T_0(dx;\omega,\omega')}{G^T\big(0;\omega,\theta_{0,x}(\omega')\big)}. \quad(38)$$
Here $\sigma^T_x(U;\omega,\omega')$ stands for the cardinality of the set of those $y\in U$ for which
$$\phi_T(y;\omega,\omega') = x,$$
where
$$\phi_t(x;\omega,\omega') = x + \tilde X\big(t;\omega,\theta_{0,x}(\omega')\big),\qquad G^T(x;\omega,\omega') = \det\nabla\phi_T(x;\omega,\omega').$$
Note that $\phi_t$ is at least of $C^1$-class because
$$\begin{cases}\dfrac{d\phi_t(x)}{dt} = V_{-\infty,0}(t,\phi_t(x)-x;\omega) + V_{-\infty,0}(t,\phi_t(x);\omega'),\\ \phi_0(x) = x,\end{cases} \quad(39)$$
and both $V_{-\infty,0}(\cdot\,;\omega)$ and $V_{-\infty,0}(\cdot\,;\omega')$ are $C^1$-smooth in $x$. As a result we know that $\nabla\phi_t(x)$ exists and satisfies
$$\nabla\phi_t(x;\omega,\omega') = \nabla\phi_t\big(0;\omega,\theta_{0,x}(\omega')\big) = I + \nabla\tilde X\big(t;\omega,\theta_{0,x}(\omega')\big).$$
We also have
$$\frac{d}{dt}\nabla\phi_t(0;\omega,\omega') = \nabla_xV_{-\infty,0}(t,\tilde X(t);\omega)\,\big[\nabla\phi_t(0;\omega,\omega') - I\big] + \nabla_xV_{-\infty,0}(t,\tilde X(t);\omega')\,\nabla\phi_t(0;\omega,\omega') \quad(40)$$
and
$$\nabla\phi_0(0;\omega,\omega') = I.$$
The expression (38) is $P(d\omega')$-a.s. finite according to [15] (see formula (13) ibid.). We have therefore
$$J^T(A,\omega)\ge\int_AP(d\omega')\int_{\mathbb{R}^d}\frac{\sigma^T_0(dx;\omega,\omega')}{G^T\big(0;\omega,\theta_{0,x}(\omega')\big)}. \quad(41)$$
Let $\gamma(\varrho)$ be a smooth function, increasing for $\varrho\ge0$, such that
i) $\gamma(-\varrho) = \gamma(\varrho)$;
ii) $\gamma(0) = 0$;
iii) $\gamma'(\varrho)>0$ for $\varrho>0$;
iv) $\gamma(\varrho) = \sqrt{\varrho}$ for $\varrho\ge1$.
Set $\varphi(x) = \gamma(|x|)$. Fix $\alpha\in(1/2,1)$. For any $\delta>0$ let
$$K_n(\delta) = \Big[\omega :\ \sup_{0\le t\le T}\big(|V_{-\infty,0}(t,x)| + |\nabla_xV_{-\infty,0}(t,x)|\big)\le\delta\big(\varphi(x)+\log^\alpha n\big)\ \text{for all } x\in\mathbb{R}^d\Big].$$
Let $V_{-\infty,0}(t,x) = (V^{(1)}_{-\infty,0}(t,x),\dots,V^{(d)}_{-\infty,0}(t,x))$. Consider the processes
$$W^{(i)}_n(t,x) = \frac{V^{(i)}_{-\infty,0}(t,x)}{\varphi(x)+\log^\alpha n},\qquad 0\le t\le T,\ x\in\mathbb{R}^d,\ i = 1,\dots,d.$$
It is easy to see that for any $W^{(i)}_n$ the minimal number $N^{(i)}_n$ of $d$-balls of radius $\varepsilon>0$ needed to cover $\mathbb{R}^d$ satisfies
$$N^{(i)}_n\le K\varepsilon^{-4d},$$
where $K$ does not depend on $\varepsilon$, $n$ and $i$. The Borell-Fernique-Talagrand type estimates of the tail probabilities of Gaussian fields (see e.g. [2], p. 120, Theorem 5.2) imply that there exist constants $C, \sigma>0$, independent of $n$, so that for all $\delta$ sufficiently large and all $n$ we have
$$P[K_n(\delta)^c]\le C\delta^{4d+1}\exp\Big\{-\frac{\delta^2}{8\sigma_n^2}\Big\};$$
here
$$K_n(\delta)^c = \Omega\setminus K_n(\delta),$$
$$\sigma_n^2 = \sup_{0\le t\le T,\,x\in\mathbb{R}^d}E\bigg[\frac{|V_{-\infty,0}(t,x)|^2+|\nabla_xV_{-\infty,0}(t,x)|^2}{(\varphi(x)+\log^\alpha n)^2}\bigg]\le\frac{1}{\log^{2\alpha}n}\sup_{0\le t\le T}E\big[|V_{-\infty,0}(t,0)|^2+|\nabla_xV_{-\infty,0}(t,0)|^2\big] = \frac{C}{\log^{2\alpha}n},$$
for some constant $C$. Hence
$$P[K_n(\delta)^c]\le C_{01}\delta^{4d+1}\exp\{-C_{02}\delta^2\log^{2\alpha}n\}, \quad(42)$$
for all $\delta$ sufficiently large and certain constants $C_{01}, C_{02}$. Let $0<\beta<1$ be such that
$$\frac{\beta\,\big(e^{(1+\beta)T}-1\big)}{\beta+1}<1.$$
In the sequel we will denote $K_n(\beta)$ simply by $K_n$, $n\ge1$. Define
$$L_m = \Big[\omega\in\Omega :\ \sup_{0\le t\le T}\big(|V_{-\infty,0}(t,x)| + |\nabla_xV_{-\infty,0}(t,x)|\big)\le|x| + \log m\ \text{for all } x\in\mathbb{R}^d\Big].$$
The fact that $\lim_{m\to+\infty}P[L_m] = 1$ can be proven precisely in the same way as in the argument used in Remark 3 after conditions A1)-A5) in Section 2. There exists a constant $C$, depending only on $\gamma$, such that $\varphi(x)\le|x| + C$. For $\omega\in K_n$, $\omega'\in L_m$ we get
$$\Big|\frac{d\tilde X(t)}{dt}\Big|\le(1+\beta)|\tilde X(t)| + C + \log^\alpha n + \log m,\qquad\tilde X(0) = 0.$$
Hence by the Gronwall inequality
$$\sup_{0\le t\le T}|\tilde X(t)|\le\frac{e^{(1+\beta)T}-1}{1+\beta}\,\big(C_1+\log^\alpha n+\log m\big). \quad(43)$$
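For completeness, the Gronwall computation behind (43) can be written out in one line. Abbreviating the inhomogeneous term by $A$ (in the situation above, $A = C_1 + \log^\alpha n + \log m$):

```latex
% Gronwall step for (43): from |d\tilde X(t)/dt| \le (1+\beta)|\tilde X(t)| + A,
% \tilde X(0) = 0, to the exponential bound.
\begin{aligned}
\frac{d}{dt}\Big(e^{-(1+\beta)t}\,|\tilde X(t)|\Big)
  &\le e^{-(1+\beta)t}\Big(\Big|\tfrac{d\tilde X(t)}{dt}\Big|-(1+\beta)|\tilde X(t)|\Big)
   \le A\,e^{-(1+\beta)t},\\
|\tilde X(t)|
  &\le e^{(1+\beta)t}\int_0^t A\,e^{-(1+\beta)u}\,du
   = \frac{e^{(1+\beta)t}-1}{1+\beta}\,A,\qquad 0\le t\le T,
\end{aligned}
```

which at $t = T$ gives (43).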
Here $C_1 = C$. Using (40) we get that $|\nabla\phi_0(0)| = 1$ and, for $0\le t\le T$,
$$\frac{d}{dt}|\nabla\phi_t(0)|\le\sup_{0\le t\le T}|\nabla_xV_{-\infty,0}(t,\tilde X(t);\omega)|\,\big[|\nabla\phi_t(0)|+1\big] + \sup_{0\le t\le T}|\nabla_xV_{-\infty,0}(t,\tilde X(t);\omega')|\,|\nabla\phi_t(0)|$$
$$\le\beta\Big(\sup_{0\le t\le T}|\tilde X(t)|+C_1+\log^\alpha n\Big)\big[|\nabla\phi_t(0)|+1\big] + \Big(\sup_{0\le t\le T}|\tilde X(t)|+\log m\Big)|\nabla\phi_t(0)| \quad(44)$$
$$\le e^{(1+\beta)T}\big(C_1+\log^\alpha n+\log m\big)\,|\nabla\phi_t(0)| + \frac{e^{(1+\beta)T}-1}{\beta+1}\big(C_1+\log^\alpha n+\log m\big) + \beta\big(C_1+\log^\alpha n\big),$$
for $0\le t\le T$. The last inequality in (44) follows from estimate (43) of $\sup_{0\le t\le T}|\tilde X(t)|$. Hence by the Gronwall inequality
$$\sup_{0\le t\le T}|\nabla\phi_t(0)|\le2\exp\big\{e^{(1+\beta)T}\big(C+\log^\alpha n+\log m\big)T\big\}\le C_{21}\,e^{C_{22}\log^\alpha n}\,e^{C_{23}\log m},\qquad\omega'\in L_m,\ \omega\in K_n. \quad(45)$$
The next lemma establishes how large the random set $[x : \phi_T(x) = 0]$ is for $\omega'\in L_m$, $\omega\in K_n$. Before formulating the lemma let us introduce the following notation: $B_\varrho(0)$ denotes the ball of radius $\varrho$ centered at $0\in\mathbb{R}^d$.

Lemma 11. For $\omega'\in L_m$, $\omega\in K_n$ the set $[x : \phi_T(x;\omega,\omega') = 0]$ is nonempty. There exists a constant $C_{31}$, independent of $n, m$, such that
$$[x : \phi_T(x) = 0]\subseteq B_{C_{31}(\log n+\log m)}(0).$$
Assuming the above lemma, observe that
$$\bigcup_{|x_1|\le C_{31}(\log n+\log m)}\theta_{0,x_1}(L_m)\subseteq L_{[M]+1},$$
where
$$\log M = (1+C_{31})(\log n+\log m). \quad(46)$$
Indeed, if $\omega'\in L_m$ and $x_1\in B_{C_{31}(\log n+\log m)}(0)$, then
$$\big|V_{-\infty,0}\big(t,x;\theta_{0,x_1}(\omega')\big)\big|\le|x+x_1| + \log m\le|x| + C_{31}(\log n+\log m) + \log m\le|x| + (1+C_{31})(\log n+\log m)\le|x| + \log([M]+1).$$
Hence $\theta_{0,x_1}(\omega')\in L_{[M]+1}$ for the choice of $M$ according to (46).
Therefore we can write
$$\int_{\mathbb{R}^d}\frac{\sigma^T_0(dx;\omega,\omega')}{G^T\big(0;\omega,\theta_{0,x}(\omega')\big)}\ge C_{41}\int_{\mathbb{R}^d}\frac{\sigma^T_0(dx;\omega,\omega')}{\big|\nabla\phi_T\big(0;\omega,\theta_{0,x}(\omega')\big)\big|^d}\ge\frac{C_{42}}{e^{C_{23}\log([M]+1)}\,e^{C_{22}\log^\alpha n}}\ge C_{43}\,e^{-C_{44}\log^\alpha n}\,e^{-C_{45}\log m}. \quad(47)$$
The next-to-last inequality follows from Lemma 11 and estimate (45). Taking into account (41) we see that the left-hand side of (37) is greater than or equal to
$$\int f(\omega)\,P(d\omega)\int_AP(d\omega')\int_{\mathbb{R}^d}\frac{\sigma^T_0(dx;\omega,\omega')}{G^T\big(0;\omega,\theta_{0,x}(\omega')\big)}$$
$$= \sum_{m,n=0}^{+\infty}\int\!\!\int_Af(\omega)\,\mathbf{1}_{K_{n+1}\setminus K_n}(\omega)\,\mathbf{1}_{L_{m+1}\setminus L_m}(\omega')\,P(d\omega)\,P(d\omega')\int_{\mathbb{R}^d}\frac{\sigma^T_0(dx;\omega,\omega')}{G^T\big(0;\omega,\theta_{0,x}(\omega')\big)}. \quad(48)$$
Here $K_0 = L_0 = \emptyset$.
Using (47) we can estimate the right-hand side of (48) from below by
$$C_{51}\sum_{m,n=0}^{+\infty}\int\!\!\int_Af(\omega)\,\mathbf{1}_{K_{n+1}\setminus K_n}(\omega)\,\mathbf{1}_{L_{m+1}\setminus L_m}(\omega')\,e^{-C_{44}\log^\alpha n}\,e^{-C_{45}\log m}\,P(d\omega)\,P(d\omega') = \int_A\Gamma(\omega')\,P(d\omega')\int f(\omega)\Delta(\omega)\,P(d\omega), \quad(49)$$
where
$$\Gamma(\omega') = C_{51}\,e^{-C_{45}\log m}\ \text{on } L_{m+1}\setminus L_m,\qquad\Delta(\omega) = e^{-C_{44}\log^\alpha n}\ \text{on } K_{n+1}\setminus K_n.$$
Note that each $L_m\in\mathcal{V}'_{-\infty,0}$, so $\Gamma$ is $\mathcal{V}'_{-\infty,0}$-measurable. Combining this with (49) and (37) we get
$$\frac{d[Qf]}{dP}(\omega')\ge\Gamma(\omega')\int f\Delta\,dP,\qquad P(d\omega')\text{-a.s.} \quad(50)$$
Hence
$$Qf\ge\Gamma\circ\theta_{-T,0}\int f\Delta\,dP,\qquad P\text{-a.s.}$$
Now denote $Y_n = Q^nY$. Consider the smaller of $\int Y_n^+\,dP$ and $\int Y_n^-\,dP$; say it is the first one. We then have, by (50),
$$\|Y_{n+1}\|_{L^1}\le\|QY_n^+\|_{L^1} + \|QY_n^-\|_{L^1} - 2\int\Gamma\,dP\int Y_n^+\Delta\,dP$$
$$\le\|Y_n\|_{L^1} - C\,e^{-C_{44}\log^\alpha n}\int_{K_n}Y_n^+\,dP\le\|Y_n\|_{L^1} - C\,e^{-C_{44}\log^\alpha n}\Big(\int Y_n^+\,dP - \int_{K_n^c}Y_n^+\,dP\Big). \quad(51)$$
Since $\int Y_n\,dP = 0$ we get $\int Y_n^+\,dP = \int Y_n^-\,dP = \frac12\|Y_n\|_{L^1}$. The Schwarz inequality guarantees that
$$\int_{K_n^c}Y_n^+\,dP\le\sqrt{P[K_n^c]}\,\|Y_n^+\|_{L^2}.$$
The right-hand side of the above inequality is, by (42) and Lemma 7, part 5), less than or equal to
$$C_{61}\,e^{-C_{62}\log^{2\alpha}n}\,\|Y\|_{L^2}.$$
Finally we can conclude that
$$\|Y_{n+1}\|_{L^1}\le\|Y_n\|_{L^1}\big(1 - C_{70}\,e^{-C_{72}\log^\alpha n}\big) + C_{71}\,e^{-C_{73}\log^{2\alpha}n}.$$
The positive constants $C_{70}, C_{71}$ depend only on $\|Y\|_{L^2}$. Iterating this inequality we observe that
$$\|Y_n\|_{L^1}\le C_{71}\sum_{k=1}^ne^{-C_{73}\log^{2\alpha}k}\prod_{p=k+1}^n\big(1 - C_{70}\,e^{-C_{72}\log^\alpha p}\big) + \|Y\|_{L^1}\prod_{p=1}^n\big(1 - C_{70}\,e^{-C_{72}\log^\alpha p}\big). \quad(52)$$
Now the $k$-th term of the sum on the right-hand side of (52) can be estimated by
$$C_{71}\,e^{-C_{73}\log^{2\alpha}k}\exp\Big\{-C_{70}\sum_{p=k+1}^ne^{-C_{72}\log^\alpha p}\Big\}. \quad(53)$$
For $k>[\frac n2]$ we therefore get an estimate of (53) by its first factor, i.e. by
$$\frac{C_{81}}{n^{C_{82}\log^{2\alpha-1}(n/2)}}.$$
For $k<[\frac n2]$,
$$\sum_{p=k+1}^ne^{-C_{72}\log^\alpha p} = \sum_{p=k+1}^n\frac{1}{p^{C_{72}\log^{\alpha-1}p}}\ge n^{1-r},$$
for any $0<r<1$ and $n$ sufficiently large. Therefore in this case (53) can be estimated by $C_{91}\,e^{-C_{71}n^{1-r}}$. Hence our lemma has been completely proven, provided that we prove Lemma 11. $\Box$
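The decay produced by the recursion above can also be observed numerically. The sketch below iterates $y_{n+1} = y_n(1 - C_{70}e^{-C_{72}\log^\alpha n}) + C_{71}e^{-C_{73}\log^{2\alpha}n}$ with illustrative constants (all set to one, $\alpha = 3/4$; these are assumptions for the experiment, not the constants of the proof) and confirms the rapid decay of the iterates:

```python
import math

# Numerical sanity check (illustrative constants only) of the recursion
#   y_{n+1} = y_n (1 - C70 exp(-C72 log^a n)) + C71 exp(-C73 log^{2a} n),
# with 1/2 < a < 1.  Because 2a > 1, the source term is eventually much
# smaller than the damping, and the iterates decay very fast.
a = 0.75
C70 = C71 = C72 = C73 = 1.0

y = 1.0
snapshots = {}
for n in range(1, 50001):
    damp = 1.0 - C70 * math.exp(-C72 * math.log(n) ** a)
    src = C71 * math.exp(-C73 * math.log(n) ** (2 * a))
    y = y * damp + src
    if n in (500, 5000, 50000):
        snapshots[n] = y

assert snapshots[50000] < snapshots[5000] < snapshots[500]
assert snapshots[50000] < 1e-10
```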
Proof of Lemma 11. For $\omega\in K_n$, $\omega'\in L_m$ consider the mapping $F : \mathbb{R}^d\to\mathbb{R}^d$ constructed in the following way. For any $x\in\mathbb{R}^d$ solve, for $0\le t\le T$, the system of O.D.E.-s
$$\begin{cases}\dfrac{d\psi_t(x)}{dt} = V_{-\infty,0}(t,\psi_t(x)-x;\omega) + V_{-\infty,0}(t,\psi_t(x);\omega'),\\ \psi_T(x) = 0,\end{cases} \quad(54)$$
and let us set
$$F(x) = \psi_0(x).$$
Note that
$$\psi_t(x) = \phi_t(x)\ \text{for all } 0\le t\le T\quad\text{iff}\quad F(x) = x,$$
and observe that then $x$ belongs to the set
$$[x : \phi_T(x) = 0].$$
We can write therefore that
$$\Big|\frac{d\psi_t(x)}{dt}\Big|\le\beta|\psi_t(x)-x| + C_1 + \log^\alpha n + |\psi_t(x)| + \log m\le(\beta+1)|\psi_t(x)| + \beta|x| + C_1 + \log^\alpha n + \log m,$$
with $\psi_T(x) = 0$. By the Gronwall inequality
$$|\psi_t(x)|\le\frac{e^{(\beta+1)T}-1}{\beta+1}\big(\beta|x| + C_1 + \log^\alpha n + \log m\big) = \frac{\beta\big(e^{(\beta+1)T}-1\big)}{\beta+1}\,|x| + \frac{e^{(\beta+1)T}-1}{\beta+1}\big(C_1+\log^\alpha n+\log m\big). \quad(55)$$
For
$$R\ge\frac{\dfrac{e^{(\beta+1)T}-1}{\beta+1}\big(C_1+\log^\alpha n+\log m\big)}{1 - \dfrac{\beta\big(e^{(\beta+1)T}-1\big)}{\beta+1}}$$
we can observe that $|x|\le R$ implies $|F(x)|\le R$. Choose therefore $R = C(\log n+\log m)$ with $C$ sufficiently large, independent of $n, m$. By Brouwer's theorem there exists an $x$ such that $F(x) = x$. Then, as we have noted, $[x : \phi_T(x) = 0]$ is not empty. The second part of the lemma follows from the a priori estimate (55). $\Box$
6 Tightness And The Limit Identification.

The first step towards establishing tightness of the family $\{X_\varepsilon(t)\}_{t\ge0}$, $\varepsilon>0$, is to check that for any $L>0$ there is $C>0$ so that for all $0\le s\le t\le L$ and $\varepsilon>0$
$$E|X_\varepsilon(t) - X_\varepsilon(s)|^2\le C(t-s). \quad(56)$$
The left-hand side of (56) equals
$$\varepsilon^2E\Big|X\Big(\frac t{\varepsilon^2}\Big) - X\Big(\frac s{\varepsilon^2}\Big)\Big|^2 = 2\varepsilon^2\sum_{i=1}^d\int_{s/\varepsilon^2}^{t/\varepsilon^2}ds_1\int_{s/\varepsilon^2}^{s_1}E[V_{i,i}(s_1,\varrho)]\,d\varrho = 2\varepsilon^2\sum_{i=1}^d\int_0^{(t-s)/\varepsilon^2}ds_1\int_0^{s_1}E[V_i(s_1-\varrho)\tilde V_i]\,d\varrho$$
$$= 2\varepsilon^2\sum_{i=1}^d\int_0^{(t-s)/\varepsilon^2}ds_1\sum_{k=1}^{[s_1/T]}\int_{s_1-kT}^{s_1-(k-1)T}E[V_i(s_1-\varrho)\tilde V_i]\,d\varrho + 2\varepsilon^2\sum_{i=1}^d\int_0^{(t-s)/\varepsilon^2}ds_1\int_0^{s_1-[s_1/T]T}E[V_i(s_1-\varrho)\tilde V_i]\,d\varrho. \quad(57)$$
Let us notice that the stationarity of $\{V_i(t,X(t))\}_{t\ge0}$ implies that
$$|E[V_i(s_1-\varrho)\tilde V_i]|\le[EV_i^2(s_1-\varrho)]^{1/2}\,\|\tilde V_i\|_{L^2} = \|\tilde V_i\|^2_{L^2}.$$
Therefore the last term on the right-hand side of (57) is of order $(t-s)$. We will be concerned with the first term only. By Lemma 8 it equals
$$2\varepsilon^2\sum_{i=1}^d\int_0^{(t-s)/\varepsilon^2}ds_1\sum_{k=1}^{[s_1/T]}\int_{s_1-kT}^{s_1-(k-1)T}E[V_i(s_1-\varrho-(k-1)T)\,Q^{k-1}\tilde V_i]\,d\varrho. \quad(58)$$
Let
$$W = V_i(s_1-\varrho-(k-1)T).$$
Because the one-dimensional distributions of $W$ are Gaussian, $EW = 0$ and $EW^2 = E\tilde V_i^2$, we get from this and Lemma 10
$$|E(WQ^{k-1}\tilde V_i)|\le\int_{|W|\le k}|WQ^{k-1}\tilde V_i|\,dP + \int_{|W|>k}|WQ^{k-1}\tilde V_i|\,dP$$
$$\le\Big(\int_{|W|>k}W^2\,dP\Big)^{1/2}\|Q^{k-1}\tilde V_i\|_{L^2} + k\,\|Q^{k-1}\tilde V_i\|_{L^1}\le\frac C{k^2},$$
where $C$ is some constant. Hence (58) can be estimated by
$$C\varepsilon^2\sum_{i=1}^d\int_0^{(t-s)/\varepsilon^2}ds_1\sum_{k=1}^{[s_1/T]}\frac1{k^2}\le C'(t-s).$$

Corollary 1. The integrals
$$\int_0^{+\infty}E[V_i(\varrho,X(\varrho))V_j(0,0)]\,d\varrho$$
are convergent for $i, j = 1,\dots,d$.
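As a purely illustrative numerical companion to Corollary 1: for a hypothetical Lagrangian covariance $R(\varrho) = e^{-\varrho}\cos\varrho$ (chosen for convenience, not derived from the field of this paper) the integral over $[0,+\infty)$ converges, and a composite Simpson rule recovers the exact value $1/2$:

```python
import math

# Illustrative check of the kind of convergence asserted in Corollary 1:
# with a rapidly decorrelating covariance R(rho) = exp(-rho) cos(rho),
# the Taylor-Kubo type integral over [0, +infinity) converges; its exact
# value for this particular R is 1/2.
def R(rho):
    return math.exp(-rho) * math.cos(rho)

# composite Simpson rule on [0, 40]; the tail beyond 40 is below exp(-40)
n, b = 100000, 40.0
h = b / n
total = R(0.0) + R(b)
for k in range(1, n):
    total += (4 if k % 2 else 2) * R(k * h)
integral = total * h / 3

assert abs(integral - 0.5) < 1e-8
```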
Let us estimate
$$\varepsilon^2E\big[|X_\varepsilon(u) - X_\varepsilon(t)|^p\,\big(X_i(t/\varepsilon^2) - X_i(s/\varepsilon^2)\big)^2\big], \quad(59)$$
for $0\le s\le t\le u\le L$, $0<p<2$. The expression in (59) equals
$$2\varepsilon^2\int_{s/\varepsilon^2}^{t/\varepsilon^2}ds_1\int_{s/\varepsilon^2}^{s_1}E[|X_\varepsilon(u) - X_\varepsilon(t)|^p\,V_{i,i}(s_1,\varrho)]\,d\varrho = 2\varepsilon^2\int_0^{(t-s)/\varepsilon^2}ds_1\int_0^{s_1}E[|X_\varepsilon(u-s) - X_\varepsilon(t-s)|^p\,V_{i,i}(s_1,\varrho)]\,d\varrho$$
$$= 2\varepsilon^2\int_0^{(t-s)/\varepsilon^2}ds_1\int_0^{s_1}E[|X_\varepsilon(u-s-\varepsilon^2\varrho) - X_\varepsilon(t-s-\varepsilon^2\varrho)|^p\,V_i(s_1-\varrho)\tilde V_i]\,d\varrho.$$
The last two equalities follow from the stationarity of the process
$$\Big\{\Big|\varepsilon\int_{(t-s-\varepsilon^2\varrho)/\varepsilon^2}^{(u-s-\varepsilon^2\varrho)/\varepsilon^2}V(\varrho'+r)\,d\varrho'\Big|^p\,V_{i,i}(s_1+r,\varrho+r)\Big\}_{r\ge0},$$
cf. Lemma 1. The expression in (59) is thus equal to
$$2\varepsilon^2\int_0^{(t-s)/\varepsilon^2}ds_1\sum_{k=1}^{[s_1/T]}\int_{s_1-kT}^{s_1-(k-1)T}E\{|X_\varepsilon(u-s-\varepsilon^2\varrho) - X_\varepsilon(t-s-\varepsilon^2\varrho)|^p\,V_i(s_1-\varrho)\tilde V_i\}\,d\varrho \quad(60)$$
$$= 2\varepsilon^2\int_0^{(t-s)/\varepsilon^2}ds_1\sum_{k=1}^{[s_1/T]}\int_{s_1-kT}^{s_1-(k-1)T}E\{|X_\varepsilon(u-s-\varepsilon^2\varrho-(k-1)\varepsilon^2T) - X_\varepsilon(t-s-\varepsilon^2\varrho-(k-1)\varepsilon^2T)|^p\,V_i(s_1-\varrho-(k-1)T)\,Q^{k-1}\tilde V_i\}\,d\varrho.$$
Let
$$\Gamma_1 = |X_\varepsilon(u-s-\varepsilon^2\varrho-(k-1)\varepsilon^2T) - X_\varepsilon(t-s-\varepsilon^2\varrho-(k-1)\varepsilon^2T)|^p$$
and
$$\Gamma_2 = V_i(s_1-\varrho-(k-1)T).$$
The expectation on the right-hand side of (60) can be estimated as follows:
$$E\,\Gamma_1\Gamma_2Q^{k-1}\tilde V_i = \int_{\Gamma_1\le k^\gamma(u-t)^\delta}\Gamma_1\Gamma_2Q^{k-1}\tilde V_i\,dP + \int_{\Gamma_1>k^\gamma(u-t)^\delta}\Gamma_1\Gamma_2Q^{k-1}\tilde V_i\,dP, \quad(61)$$
where $\gamma, \delta>0$ are to be determined later. Note that
$$\Big|\int_{\Gamma_1\le k^\gamma(u-t)^\delta}\Gamma_1\Gamma_2Q^{k-1}\tilde V_i\,dP\Big|\le k^\gamma(u-t)^\delta\Big\{\int_{|\Gamma_2|>k}|\Gamma_2Q^{k-1}\tilde V_i|\,dP + \int_{|\Gamma_2|\le k}|\Gamma_2Q^{k-1}\tilde V_i|\,dP\Big\} \quad(62)$$
$$\le k^\gamma(u-t)^\delta\Big[\Big(\int_{|\Gamma_2|>k}\Gamma_2^2\,dP\Big)^{1/2}\|Q^{k-1}\tilde V_i\|_{L^2} + k\,\|Q^{k-1}\tilde V_i\|_{L^1}\Big].$$
Using again the fact that $\Gamma_2$ is a Gaussian random variable with zero mean and variance $E\tilde V_i^2$, in combination with Lemma 10 and Lemma 7, part 5), we get that (62) can be estimated from above by $\frac C{k^2}(u-t)^\delta$. The second term on the right-hand side of (61) can be estimated from above as follows:
$$\Big|\int_{\Gamma_1>k^\gamma(u-t)^\delta}\Gamma_1\Gamma_2Q^{k-1}\tilde V_i\,dP\Big|\le\big(P[\Gamma_1>k^\gamma(u-t)^\delta]\big)^{1/\varrho_1}\big(E\,\Gamma_1^{2/p}\big)^{p/2}\big(E\,\Gamma_2^{\varrho_2}\big)^{1/\varrho_2}\,\|Q^{k-1}\tilde V_i\|_{L^{\varrho_3}}, \quad(63)$$
for some $\varrho_1, \varrho_2, \varrho_3>0$. Notice that
$$P[\Gamma_1>k^\gamma(u-t)^\delta]\le\frac1{k^{2\gamma/p}(u-t)^{2\delta/p}}\,E\,\Gamma_1^{2/p}\le\frac C{k^{2\gamma/p}}\,(u-t)^{1-2\delta/p}.$$
Here we used the fact that, according to (56),
$$\big(E\,\Gamma_1^{2/p}\big)^{p/2}\le C(u-t)^{p/2}$$
for some constant $C$. Choosing $\delta$ such that $1-2\delta/p>0$ and $\gamma$ sufficiently large, we get an estimate of (63) from above by $\frac C{k^2}(u-t)^\zeta$, where $\zeta>0$. Combining this with the bound for the first term of (61) obtained before, we get an estimate of the expression on the left-hand side of (60) from above by
$$C\varepsilon^2\int_0^{(t-s)/\varepsilon^2}ds_1\sum_{k=1}^{[s_1/T]}\frac{(u-t)^\zeta}{k^2}\le C'(t-s)(u-t)^\zeta.$$
This proves tightness of $\{X_\varepsilon(t)\}_{0\le t\le L}$, $\varepsilon>0$, in light of the following lemma.
Lemma 12. Suppose that $\{Y_\varepsilon(t)\}_{t\ge0}$, $\varepsilon>0$, is a family of processes with trajectories in $C[0,+\infty)$ such that for any $L>0$ there exist $p, C, \zeta>0$ so that for $0\le s\le t\le u\le L$
$$E|Y_\varepsilon(t) - Y_\varepsilon(s)|^2\,|Y_\varepsilon(u) - Y_\varepsilon(t)|^p\le C(u-s)^{1+\zeta}$$
and $Y_\varepsilon(0) = 0$, $\varepsilon>0$. Then $\{Y_\varepsilon(t)\}_{t\ge0}$, $\varepsilon>0$, is tight.

Proof. It is enough to prove tightness of the above family in $C[0,L]$ for any $L>0$ (see [30]). From the proof of Theorem 15.6, p. 128 of [5], we obtain that for any $\eta, \varrho>0$ there is $\tau>0$ such that $P[w''_{Y_\varepsilon}(\tau)>\eta]<\varrho$. Here
$$w''_{Y_\varepsilon}(\tau) = \sup_{0\le u-s\le\tau}\ \sup_{s\le t\le u}\min\big[|Y_\varepsilon(u) - Y_\varepsilon(t)|,\ |Y_\varepsilon(t) - Y_\varepsilon(s)|\big].$$
One can prove that, because $t\mapsto Y_\varepsilon(t)$ is continuous, $w''_{Y_\varepsilon}(\tau)\le w_{Y_\varepsilon}(\tau)\le4\,w''_{Y_\varepsilon}(\tau)$, where $w_{Y_\varepsilon}(\tau) = \sup_{0\le u-t\le\tau}|Y_\varepsilon(u) - Y_\varepsilon(t)|$. By Theorem 8.2, p. 55 of [5], the family $\{Y_\varepsilon(t)\}_{0\le t\le L}$, $\varepsilon>0$, is tight in $C[0,L]$. $\Box$
Now that we have established that $\{X_\varepsilon(t)\}_{t\ge0}$, $\varepsilon>0$, is weakly compact, we have to prove that there is only one process whose law can be a weak limit of the laws of the processes from the family. The first step in that direction is the following lemma. First let us denote by $(X^1_\varepsilon(t),\dots,X^d_\varepsilon(t))$ the components of the process $\{X_\varepsilon(t)\}_{t\ge0}$, $\varepsilon>0$.

Lemma 13. For any $\gamma\in(0,1)$, any positive integer $M$, any continuous $\Phi : (\mathbb{R}^d)^M\to\mathbb{R}_+$, $0\le s_1\le\cdots\le s_M\le s$ and $i = 1,\dots,d$, there is $C>0$ such that for all $\varepsilon>0$ and $0\le s\le t\le L$ we have
$$\big|E\{(X^i_\varepsilon(t+\varepsilon^\gamma) - X^i_\varepsilon(t))\,\Phi(X_\varepsilon(s_1),\dots,X_\varepsilon(s_M))\}\big|\le C\varepsilon. \quad(64)$$
Proof. The expression under the absolute value in (64) is equal to
$$E\Big\{\Big[\varepsilon\int_{t/\varepsilon^2}^{(t+\varepsilon^\gamma)/\varepsilon^2}V_i(\varrho)\,d\varrho\Big]\,\Phi\Big(\varepsilon\int_0^{s_1/\varepsilon^2}V(\varrho)\,d\varrho,\dots,\varepsilon\int_0^{s_M/\varepsilon^2}V(\varrho)\,d\varrho\Big)\Big\}. \quad(65)$$
Using Lemma 1 we get that (65) equals
$$E\Big[\varepsilon\int_0^{\varepsilon^{\gamma-2}}V_i(\varrho)\,d\varrho\ \Psi\Big], \quad(66)$$
where
$$\Psi = \Phi\Big(\varepsilon\int_{-t/\varepsilon^2}^{(s_1-t)/\varepsilon^2}V(\varrho)\,d\varrho,\dots,\varepsilon\int_{-t/\varepsilon^2}^{(s_M-t)/\varepsilon^2}V(\varrho)\,d\varrho\Big)$$
is $\mathcal{V}_{-\infty,0}$-measurable. But (66) is equal to
$$\varepsilon\sum_{k=1}^{[\varepsilon^{\gamma-2}/T]}\int_{\varepsilon^{\gamma-2}-kT}^{\varepsilon^{\gamma-2}-(k-1)T}E[V_i(\varrho)\,\Psi]\,d\varrho + O(\varepsilon). \quad(67)$$
Using Lemma 8, the fact that $Q\mathbf{1} = \mathbf{1}$, in combination with Lemma 2 and Lemma 4, we get that (67) equals
$$\varepsilon\sum_{k=1}^{[\varepsilon^{\gamma-2}/T]}\int_{\varepsilon^{\gamma-2}-kT}^{\varepsilon^{\gamma-2}-(k-1)T}E\big[V_i(\varrho-(k-1)T)\,Q^{k-1}(\Psi - E\Psi)\big]\,d\varrho + O(\varepsilon).$$
By Lemma 10 and estimates identical with those applied to prove (57) we get the asserted estimate. $\Box$
Lemma 14. Under the assumptions of Lemma 13 the following estimate holds. There exists $C>0$ such that for all $\varepsilon>0$
$$\big|E\big\{\big[(X^i_\varepsilon(t+\varepsilon^\gamma) - X^i_\varepsilon(t))(X^j_\varepsilon(t+\varepsilon^\gamma) - X^j_\varepsilon(t)) - \varepsilon^\gamma a_{ij}\big]\,\Phi(X_\varepsilon(s_1),\dots,X_\varepsilon(s_M))\big\}\big|\le C\varepsilon,$$
for $i, j = 1,\dots,d$. Here $a_{ij}$ is given by (11).
Proof. Note that
$$E\{(X^i_\varepsilon(t+\varepsilon^\gamma) - X^i_\varepsilon(t))(X^j_\varepsilon(t+\varepsilon^\gamma) - X^j_\varepsilon(t))\,\Phi(X_\varepsilon(s_1),\dots,X_\varepsilon(s_M))\}$$
$$= \varepsilon^2E\Big[\int_0^{\varepsilon^{\gamma-2}}d\varrho\int_0^\varrho V_i(\varrho)V_j(\varrho')\,d\varrho'\ \Psi\Big] + \varepsilon^2E\Big[\int_0^{\varepsilon^{\gamma-2}}d\varrho\int_0^\varrho V_j(\varrho)V_i(\varrho')\,d\varrho'\ \Psi\Big], \quad(68)$$
where $\Psi$ is as in the previous lemma. Consider only the first term. We can rewrite it as being equal to
$$\varepsilon^2\int_0^{\varepsilon^{\gamma-2}}d\varrho\int_0^\varrho E[V_i(\varrho)V_j(\varrho')]\,d\varrho'\ E\Psi + \varepsilon^2\int_0^{\varepsilon^{\gamma-2}}d\varrho\int_0^\varrho E[V_i(\varrho)V_j(\varrho')(\Psi - E\Psi)]\,d\varrho'. \quad(69)$$
Using Lemma 1 and the change of variables $\varepsilon^2\varrho\mapsto\varrho$, the first term of (69) can be written as
$$\int_0^{\varepsilon^\gamma}d\varrho\int_0^{\varrho/\varepsilon^2}E[V_i(\varrho')\tilde V_j]\,d\varrho'\ E\Psi. \quad(70)$$
Using a computation identical with the one that led to (58) we get that the difference between (70) and
$$\varepsilon^\gamma\int_0^{+\infty}E[V_i(\varrho,X(\varrho))\tilde V_j]\,d\varrho\ E\Psi$$
is less than or equal to
$$\int_0^{\varepsilon^\gamma}d\varrho\,\Big|\int_{\varrho/\varepsilon^2}^{+\infty}E[V_i(\varrho')\tilde V_j]\,d\varrho'\Big|\le\int_0^{\varepsilon^\gamma}d\varrho\sum_{k=[\varrho/(\varepsilon^2T)]}^{+\infty}\Big|E\int_{\varrho/\varepsilon^2-kT}^{\varrho/\varepsilon^2-(k-1)T}V_i(\varrho'-(k-1)T)\,Q^{k-1}\tilde V_j\,d\varrho'\Big|\le C\int_0^{\varepsilon^\gamma}d\varrho\sum_{k=[\varrho/(\varepsilon^2T)]}^{+\infty}\frac1{k^2}\le C\varepsilon.$$
The second term of (69) is estimated essentially with the help of the same argument as the one used in the proof of the previous lemma. We consider two cases: first, when $\varrho'\ge\varepsilon^{-1}$, by Lemma 10 we get the desired estimate; second, when $\varrho'<\varepsilon^{-1}$, we use the Schwarz inequality and reach the estimate claimed in the lemma. $\Box$
Lemma 15. For any $\gamma\in(0,1)$ and $0<\gamma'<\gamma$ there is a constant $C$ so that
$$E|X_\varepsilon(t+\varepsilon^\gamma) - X_\varepsilon(t)|^4\le C\varepsilon^{2\gamma'}. \quad(71)$$
Proof. The proof of (71) reduces to an estimation of
$$\varepsilon^4\int_0^{\varepsilon^{\gamma-2}}d\varrho_1\int_0^{\varrho_1}d\varrho_2\int_0^{\varrho_2}d\varrho_3\int_0^{\varrho_3}E[V_{i,i,i}(\varrho_2,\varrho_3,\varrho_4)\tilde V_i]\,d\varrho_4. \quad(72)$$
If $\varrho_4\ge\varepsilon^{-\kappa}$, $\kappa>0$, we can use Lemma 10 and estimate in the same way as we have done to obtain estimate (58). For $\varrho_4\le\varepsilon^{-\kappa}$ we get that (72) is equal to
$$\varepsilon^4\int_0^{\varepsilon^{\gamma-2}}d\varrho_1\int_0^{\varrho_1}d\varrho_2\int_0^{\varrho_2}d\varrho_3\int_0^{\varrho_3\wedge\varepsilon^{-\kappa}}E[V_{i,i}(\varrho_2,\varrho_3)W]\,d\varrho_4 + \varepsilon^4\int_0^{\varepsilon^{\gamma-2}}d\varrho_1\int_0^{\varrho_1}d\varrho_2\int_0^{\varrho_2}d\varrho_3\int_0^{\varrho_3\wedge\varepsilon^{-\kappa}}E[V_{i,i}(\varrho_2,\varrho_3)]\,E[V_i(\varrho_4)\tilde V_i]\,d\varrho_4,$$
where
$$W = V_i(\varrho_4)\tilde V_i - E[V_i(\varrho_4)\tilde V_i].$$
The second term above is estimated easily with the help of Lemma 10, as we have done in the proof of (56). The estimation of the first term can be done separately for $\varrho_3-\varrho_4\ge\varepsilon^{-\kappa}$ and then for $\varrho_3-\varrho_4\le\varepsilon^{-\kappa}$. In the first case we estimate with the help of Lemmas 8 and 10 precisely as we have done before. For $\varrho_3-\varrho_4\le\varepsilon^{-\kappa}$ we can estimate the corresponding integral by
$$C\varepsilon^4\,\varepsilon^{2(\gamma-2)}\,\varepsilon^{-2\kappa} = C\varepsilon^{2(\gamma-\kappa)}.\ \Box$$
Let $f\in C^\infty_0(\mathbb{R}^d)$. Consider $\Phi$ as in Lemma 14. Let $t_i = s + i\varepsilon^\gamma$. Then using Taylor's formula we get
$$E[f(X_\varepsilon(t)) - f(X_\varepsilon(s))]\,\Phi = \sum_{i:\,s<t_i<t}E\{([X_\varepsilon(t_{i+1}) - X_\varepsilon(t_i)],\,(\nabla f)(X_\varepsilon(t_i)))\,\Phi\} \quad(73)$$
$$+\ \frac12\sum_{i:\,s<t_i<t}E\{([X_\varepsilon(t_{i+1}) - X_\varepsilon(t_i)]\otimes[X_\varepsilon(t_{i+1}) - X_\varepsilon(t_i)],\,(\nabla\otimes\nabla f)(X_\varepsilon(t_i)))\,\Phi\}$$
$$+\ \frac16\sum_{i:\,s<t_i<t}E\{([X_\varepsilon(t_{i+1}) - X_\varepsilon(t_i)]\otimes[X_\varepsilon(t_{i+1}) - X_\varepsilon(t_i)]\otimes[X_\varepsilon(t_{i+1}) - X_\varepsilon(t_i)],\,(\nabla\otimes\nabla\otimes\nabla f)(\xi_i))\,\Phi\}.$$
Here $\xi_i$ is a point on the segment $[X_\varepsilon(t_i), X_\varepsilon(t_{i+1})]$. The last term of (73) can be estimated by
$$\frac C{\varepsilon^\gamma}\,\big[E|X_\varepsilon(t_{i+1}) - X_\varepsilon(t_i)|^4\big]^{3/4}$$
with the help of the Schwarz inequality. Using Lemma 15 we get that this term is of order of magnitude $C\varepsilon^{3\gamma'/2-\gamma} = o(1)$. The first term is estimated with the help of Lemma 13 by $C\varepsilon^{1-\gamma}$. The second term of (73) can be rewritten as being equal to
$$\frac12\sum_{i:\,s<t_i<t}E\{([X_\varepsilon(t_{i+1}) - X_\varepsilon(t_i)]\otimes[X_\varepsilon(t_{i+1}) - X_\varepsilon(t_i)] - a^\varepsilon_{i,j},\,(\nabla\otimes\nabla f)(X_\varepsilon(t_i)))\,\Phi\} + \frac12\sum_{i:\,s<t_i<t}E\{(a^\varepsilon_{i,j},\,(\nabla\otimes\nabla f)(X_\varepsilon(t_i)))\,\Phi\},$$
where
$$a^\varepsilon_{i,j} = E\{[X_\varepsilon(t_{i+1}) - X_\varepsilon(t_i)]\otimes[X_\varepsilon(t_{i+1}) - X_\varepsilon(t_i)]\}.$$
The first term of the above sum tends to zero faster than $\varepsilon^\gamma$. The latter is equal to
$$a_{ij}\,\varepsilon^\gamma\,E\Phi$$
up to a term of magnitude $o(\varepsilon^\gamma)$.

Summarizing, any weak limit of $\{X_\varepsilon(t)\}_{t\ge0}$, $\varepsilon>0$, must have a law on $C([0,+\infty);\mathbb{R}^d)$ such that for any $f\in C^\infty_0(\mathbb{R}^d)$
$$f(x(t)) - \frac12\sum_{i,j=1}^da_{ij}\int_0^t\partial^2_{ij}f(x(\varrho))\,d\varrho,\qquad t\ge0,$$
is a martingale. This fact, according to [34], identifies the measure uniquely and thus concludes the proof of Theorem 1.
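For the reader's convenience, the standard content of this last step (cf. [34]) can be spelled out: the martingale problem above is exactly that of a centered Brownian motion with covariance matrix $a = (a_{ij})$.

```latex
% Martingale-problem characterization used in the last step (standard, cf. [34]).
\text{For } f\in C_0^\infty(\mathbb{R}^d)\ \text{put}\qquad
\mathcal{L}f \;=\; \frac12\sum_{i,j=1}^d a_{ij}\,\partial^2_{ij}f .
\\[4pt]
f(x(t))-\int_0^t\mathcal{L}f(x(\varrho))\,d\varrho\ \text{is a martingale for every such } f
\;\Longleftrightarrow\;
\{x(t)\}_{t\ge0}\ \text{is a centered Brownian motion with}\ E[x_i(t)x_j(t)] = a_{ij}\,t .
```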
References

[1] R. J. Adler. Geometry of Random Fields. Wiley, New York (1981).
[2] R. J. Adler. An Introduction to Continuity, Extrema and Related Topics for General Gaussian Processes. Inst. of Math. Stat., Hayward, Lecture Notes vol. 12.
[3] M. Avellaneda, F. Elliott Jr., C. Apelian. Trapping, Percolation and Anomalous Diffusion of Particles in a Two-Dimensional Random Field. Journal of Statist. Physics 72, 1227-1304 (1993).
[4] M. Avellaneda, A. J. Majda. An Integral Representation and Bounds on the Effective Diffusivity in Passive Advection by Laminar and Turbulent Flows. Commun. Math. Phys. 138, 339-391 (1991).
[5] P. Billingsley. Convergence of Probability Measures. Wiley, New York (1968).
[6] T. Brox. A One Dimensional Process in a Wiener Medium. Ann. of Prob. 14, 1206-1218 (1986).
[7] R. Carmona, L. Xu. Homogenization for Time Dependent 2-d Incompressible Gaussian Flows. Preprint (1996).
[8] A. Chorin. Vorticity and Turbulence. Springer, Berlin (1994).
[9] P. Doukhan. Mixing. Properties and Examples. Springer-Verlag, Berlin-New York (1994).
[10] N. Dunford, J. T. Schwartz. Linear Operators I. Wiley (1988).
[11] A. Fannjiang, G. C. Papanicolaou. Convection Enhanced Diffusion for Periodic Flows. SIAM J. Appl. Math. 54, 333-408 (1994).
[12] A. Fannjiang, G. C. Papanicolaou. Convection Enhanced Diffusion for Random Flows. Preprint (1996).
[13] A. Fannjiang, G. C. Papanicolaou. Diffusion in Turbulence. Probability Theory and Related Fields 105, 279-334 (1996).
[14] S. Foguel. Ergodic Theory of Markov Processes. Van Nostrand, New York (1969).
[15] D. Geman, J. Horowitz. Random Shifts Which Preserve Measure. Proc. Amer. Math. Soc. 49, 143-150 (1975).
[16] M. B. Isichenko. Percolation, Statistical Topography and Transport in Random Media. Rev. Mod. Phys. 64, 961-1043 (1992).
[17] K. Kawazu, Y. Tamura, H. Tanaka. Limit Theorems for One Dimensional Diffusions and Random Walks in Random Environments. Prob. Theor. and Rel. Fields 80, 501-541 (1989).
[18] H. Kesten, G. C. Papanicolaou. A Limit Theorem for Turbulent Diffusion. Comm. Math. Phys. 65, 97-128 (1979).
[19] R. Z. Khasminskii. A Limit Theorem for Solutions of Differential Equations with a Random Right Hand Side. Theor. Prob. Appl. 11, 390-406 (1966).
[20] T. Komorowski. Diffusion Approximation for the Advection of Particles in a Strongly Turbulent Random Environment. To appear in Ann. of Prob.
[21] S. M. Kozlov. The Method of Averaging and Walks in Inhomogeneous Environments. Russian Math. Surveys 40, 73-145 (1985).
[22] R. Kraichnan. Diffusion in a Random Velocity Field. Phys. Fluids 13, 22-31 (1970).
[23] S. Molchanov. Lectures on Random Media. In: P. Bernard, ed., Lectures on Probability Theory, Ecole d'Ete de Probabilites de Saint-Flour XXII, pp. 242-411. Springer (1992), Lecture Notes in Mathematics 1581.
[24] H. Osada. Homogenization of Diffusion Processes with Random Stationary Coefficients. In: Proc. of the 4-th Japan-USSR Symposium on Probability Theory, pp. 507-517. Springer (1982), Lecture Notes in Mathematics 1021.
[25] K. Oelschlager. Homogenization of a Diffusion Process in a Divergence Free Random Field. Ann. of Prob. 16, 1084-1126 (1988).
[26] G. C. Papanicolaou, S. R. S. Varadhan. Boundary Value Problems with Rapidly Oscillating Random Coefficients. In: J. Fritz, J. L. Lebowitz, eds., Random Fields, Coll. Math. Soc. Janos Bolyai 27, pp. 835-873. North Holland (1982).
[27] S. C. Port, C. Stone. Random Measures and Their Application to Motion in an Incompressible Fluid. J. Appl. Prob. 13, 499-506 (1976).
[28] Y. G. Sinai. The Limiting Behavior of a One Dimensional Walk in a Random Medium. Theory Prob. Appl. 27, 256-268 (1982).
[29] M. M. Stanisic. The Mathematical Theory of Turbulence. 2nd edition, Springer-Verlag (1988).
[30] C. J. Stone. Weak Convergence of Stochastic Processes Defined on Semi-infinite Time Intervals. Proc. Amer. Math. Soc. 14, 694-696 (1963).
[31] G. I. Taylor. Diffusion by Continuous Movements. Proc. London Math. Soc. Ser. 2, 20, 196-211 (1923).
[32] A. S. Monin, A. M. Yaglom. Statistical Fluid Mechanics of Turbulence, Vols. I, II. MIT Press, Cambridge (1971), (1975).
[33] Yu. A. Rozanov. Stationary Random Processes. Holden-Day (1969).
[34] D. Stroock, S. R. S. Varadhan. Multidimensional Diffusion Processes. Springer-Verlag, Berlin-Heidelberg-New York (1979).