Large Deviations for Quantum Spin probabilities at temperature zero
Artur O. Lopes, Jairo K. Mengue and Joana Mohr
Inst. Mat - UFRGS, Brazil
May 6, 2015
Abstract
We consider certain self-adjoint observables for the KMS state associated to the Hamiltonian H = σx ⊗ σx over the quantum spin lattice C2 ⊗ C2 ⊗ C2 ⊗ .... For a fixed observable of the form L ⊗ L ⊗ L ⊗ ..., where L : C2 → C2, and for zero temperature, one obtains a naturally defined stationary probability µ on the Bernoulli space {1, 2}N. This probability is not Markov. Nevertheless, for such a probability µ we show that a Large Deviation Principle holds for a certain class of functions. The result is derived by exhibiting the explicit form of the free energy, which is differentiable.
We show that the natural probability µ at temperature zero is invariant for the shift but is not mixing. We also prove that µ is not a Gibbs state for a continuous normalized potential.
1 Introduction
We consider the KMS state associated to the Hamiltonian H = σx ⊗ σx
over the quantum spin lattice C2 ⊗ C2 ⊗ C2 ⊗ ... and we will estimate the
probabilities of the eigenvalues of a fixed general observable which is in the
form L ⊗ L ⊗ L ⊗ ..., where L : C2 → C2 . The (infinite) quantum spin lattice
is the set
$$ (\mathbb{C}^d)^{\mathbb{N}} = \mathbb{C}^d \otimes \mathbb{C}^d \otimes \cdots \otimes \mathbb{C}^d \otimes \cdots, $$
with a natural metric structure (see [5]).
In order to understand this problem (for each fixed L) it is natural to
define a probability µL on the Bernoulli space {1, 2}N. Even though the Hamiltonian we consider here is very particular, we get a large class of probabilities
µL . This is so because we consider a general L (the mild restriction is given
by Assumption A in section 3). We can ask about the ergodic properties of
each of these µL . We will address here the problem at temperature zero.
The paper [14] is the main motivation for the present work. We recall
the general setting of that paper. For a fixed value d, we will consider a
complex self-adjoint operator H depending on two variables, H : (Cd ⊗ Cd) → (Cd ⊗ Cd). As H depends just on two coordinates, for practical purposes all definitions will be on sets of the form
$$ (\mathbb{C}^d)^{n} = \mathbb{C}^d \otimes \mathbb{C}^d \otimes \cdots \otimes \mathbb{C}^d. $$
The set of linear operators acting on Cd will be denoted by Md. We will call ω = ωn : Md ⊗ Md ⊗ ... ⊗ Md → C a C*-dynamical state if ωn(I⊗n) = 1 and ωn(a) ≥ 0 whenever a is a non-negative element of the tensor product. It follows that ωn(L1 ⊗ L2 ⊗ ... ⊗ Ln) is a positive number if all the Lj are positive, j = 1, 2, ..., n.
The state w that we will consider here is of the following form: take β > 0 and H : (Cd ⊗ Cd) → (Cd ⊗ Cd) self-adjoint. For fixed n we will consider Hn : (Cd)⊗n → (Cd)⊗n, where
$$ H_n = \sum_{j=0}^{n-2} I^{\otimes j} \otimes H \otimes I^{\otimes (n-j-2)}. $$
Take
$$ \rho_\omega = \rho_{\omega_{H,\beta,n}} = \frac{1}{\mathrm{Tr}(e^{-\beta H_n})}\, e^{-\beta H_n}. $$
Finally, we define
$$ \omega_{H,\beta,n}(L_1 \otimes L_2 \otimes \cdots \otimes L_n) = \frac{1}{\mathrm{Tr}(e^{-\beta H_n})}\, \mathrm{Tr}\big[\, e^{-\beta H_n}\, (L_1 \otimes L_2 \otimes \cdots \otimes L_n)\,\big] = \mathrm{Tr}\,(\rho_\omega\, L_1 \otimes L_2 \otimes \cdots \otimes L_n). \qquad (1) $$
The Pauli matrices are
$$ \sigma^x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma^y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \qquad \text{and} \qquad \sigma^z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. $$
A Quantum Ising Chain is defined by a Hamiltonian of the form
$$ H = -J\,(\sigma_1^x \otimes \sigma_2^x) - h\,(\sigma_1^z \otimes I), $$
where σi^x, for example, means that the Pauli matrix σ^x acts at position i of the tensor product. The associated n-Hamiltonian is (we use below a not completely rigorous notation)
$$ H_n = \sum_{i=1}^{n} \big[\, -J\,(\sigma_i^x \otimes \sigma_{i+1}^x) - h\,(\sigma_i^z \otimes I)\, \big]. $$
In the case h = 0 we will say that the Quantum Ising Chain has no magnetic
term. We will consider the Quantum Ising Chain with no magnetic term.
Consider a self-adjoint operator L : Cd → Cd and let λ1, λ2, ..., λd denote its real eigenvalues. We suppose that ψj, j = 1, 2, ..., d, is an orthonormal basis of eigenvectors of L.
We consider the following observable:
L⊗ n = (L ⊗ L ⊗ .... ⊗ L) : (Cd ⊗ Cd ⊗ .... ⊗ Cd ) → (Cd ⊗ Cd ⊗ .... ⊗ Cd ).
Then,
( ψj1 ⊗ ψj2 ⊗ .... ⊗ ψjk ⊗ .... ⊗ ψjn )
is an eigenvector of L⊗ n associated to the eigenvalue λj1 λj2 ... λjk ... λjn . Any
eigenvalue of L⊗ n is of this form.
In the generic case all the possible eigenvalues λj1, λj2, ..., λjk, ..., λjn are different and all possible products are also different. In this case there is a bijection between the strings λj1, λj2, ..., λjk, ..., λjn and the eigenvalues of the operator acting on the tensor product; but this is not an essential issue.
The values obtained by physical measuring (associated to the observable
L⊗ n ) in the finite dimensional Quantum Mechanics setting are (of course)
the eigenvalues of L⊗ n . The relevant information is the probability of each
possible outcome event of measuring L⊗ n .
Given β > 0, H : Cd ⊗ Cd → Cd ⊗ Cd and a fixed L : Cd → Cd, we can
define a probability µ = µβ on Ω = {λ1 , λ2 , ..., λd }N . This will be done by
defining µ on cylinders of size n. That is, for each n, we will define µβ,n over
Ωn = {λ1 , λ2 , ..., λd }n and then we use Kolmogorov extension theorem to get
µβ on Ω. The zero temperature case is concerned with the limit of µβ , when
β → ∞.
More precisely, for fixed n, we consider the C ∗ -state w = wn given in (1),
and we denote by Pψj : Cd → Cd the orthogonal projection on the subspace
generated by ψj. Then, we define the probability µ on Ωn = {λ1, λ2, ..., λd}n
by
µ( λj1 , λj2 , ..., λjn ) = w( Pψj1 ⊗ Pψj2 ⊗ ... ⊗ Pψjn ).
With this information one can get the probability of each possible outcome of a measurement of L⊗n. Indeed, for fixed n, as the eigenvalues of L⊗n are of the form λj1 λj2 ... λjk ... λjn, we just collect the n-strings λj1, λj2, ..., λjn which produce each specific value and add their individual probabilities.
If the eigenvalues of L are the numbers 1 and −1, then the possible eigenvalues of L⊗n are also just the values 1 and −1. This is the case we consider here. More precisely, we will consider the Quantum Spin Chain with no magnetic term, H = σ1x ⊗ σ2x, and a general L : C2 → C2 (whose eigenvalues are only 1 and −1). We are mainly interested in the zero temperature limit case.
In the case where L is a two by two complex self-adjoint matrix, this µ can be considered as defined on the Bernoulli space {1, 2}N if we identify λ1 with 1 and λ2 with 2.
A natural question is: what ergodic properties hold for such a probability µ on {1, 2}N? Is it stationary for the action of the shift σ? Is it a Gibbs state for some continuous normalized potential? Does a Large Deviation Principle (L.D.P. for short) hold for a certain class of functions? We refer the reader to [7], [8], [6] and [10] for several results on the topic of Large Deviations for Quantum Spin Systems.
Given A : {1, 2}N → R, is it true that there exists M > 0 such that for fixed small ϵ
$$ \mu\Big\{\, z \ \text{such that}\ \frac{1}{n}\sum_{j=0}^{n-1} A(\sigma^j(z)) - \int A(z)\, d\mu(z) \ge \epsilon \,\Big\} \le e^{-M n}\,? $$
This requires analyzing
$$ \lim_{n\to\infty} \frac{1}{n}\, \log\, \mu\Big\{\, z \ \text{such that}\ \frac{1}{n}\sum_{j=0}^{n-1} A(\sigma^j(z)) - \int A(z)\, d\mu(z) \ge \epsilon \,\Big\}, $$
or, more generally, given a subset B of R, estimating
$$ \lim_{n\to\infty} \frac{1}{n}\, \log\, \mu\Big\{\, z \ \text{such that}\ \frac{1}{n}\sum_{j=0}^{n-1} A(\sigma^j(z)) \in B \,\Big\}. $$
We say that a large deviation principle holds for the probability µ and the function A if there exists a lower-semicontinuous function I : R → R such that for all intervals B we get
$$ \lim_{n\to\infty} \frac{1}{n}\, \log\, \mu\Big\{\, z \ \text{such that}\ \frac{1}{n}\sum_{j=0}^{n-1} A(\sigma^j(z)) \in B \,\Big\} = \inf_{s\in B} I(s). $$
A more precise formulation requires distinguishing whether B is an open or a closed set (see [3]).
Denote, for t ∈ R,
$$ c(t) = \lim_{n\to\infty} \frac{1}{n}\, \log \int e^{t\,(\, A(z)+A(\sigma(z))+A(\sigma^2(z))+\cdots+A(\sigma^{n-1}(z))\, )}\, d\mu(z). $$
It is a classical result that if c is differentiable then a Large Deviation Principle holds for µ and A (see [3]). In this case I is the Legendre transform of c. We will show an L.D.P. for µ in the case where A depends only on the first coordinate.
We would like to thank A. Quas for some comments which helped us with Theorem 13. We also want to thank C. A. Moreira for helpful conversations.
2 The associated probability µ when H has no magnetic term
As a matter of notation we will use the expression σx instead of σ^x (as defined before).
When H = σx ⊗ σx we want to compute U = e^{iβH} for any complex β. The relation σx ◦ σx = I is very helpful.
Note that
$$ U = e^{i\beta\,\sigma_x\otimes\sigma_x} = \sum_{j=0}^{\infty} \frac{(i\beta)^j}{j!}\,(\sigma_x\otimes\sigma_x)^j = \sum_{j=0}^{\infty} \frac{(i\beta)^j}{j!}\,(\sigma_x^j\otimes\sigma_x^j) = \cos(\beta)\,(I\otimes I) + i\,\sin(\beta)\,(\sigma_x\otimes\sigma_x). $$
In Quantum Statistical Mechanics it is natural to take β imaginary, that is, to replace β by iβ with β real. We now want to compute B := e^{−βH} for real β. For β real we get
$$ B = e^{-\beta\,\sigma_x\otimes\sigma_x} = \cos(i\beta)\,(I\otimes I) + i\,\sin(i\beta)\,(\sigma_x\otimes\sigma_x) = \cosh(\beta)\,(I\otimes I) - \sinh(\beta)\,(\sigma_x\otimes\sigma_x). $$
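As a quick numerical sanity check (not part of the original argument), one can compare the matrix exponential computed numerically with the closed form above; the small Python sketch below assumes only numpy and scipy.

\begin{verbatim}
# Check that exp(-beta * sigma_x (x) sigma_x) equals
# cosh(beta) (I(x)I) - sinh(beta) (sigma_x (x) sigma_x),
# i.e. cos(i beta)(I(x)I) + i sin(i beta)(sigma_x (x) sigma_x) for real beta.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
beta = 0.7  # arbitrary test value

lhs = expm(-beta * np.kron(sx, sx))
rhs = np.cosh(beta) * np.kron(I2, I2) - np.sinh(beta) * np.kron(sx, sx)
print(np.allclose(lhs, rhs))  # expected: True
\end{verbatim}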
Let us define Bn : (C2)⊗n → (C2)⊗n by Bn = e^{−βHn}. In order to help the reader we will present some examples of the general calculation. As a particular case, note that σx ⊗ σx ⊗ I ⊗ I commutes with I ⊗ σx ⊗ σx ⊗ I, etc. In this case we get
$$ B_4 = e^{-\beta\,[\,(\sigma_x\otimes\sigma_x\otimes I\otimes I)+(I\otimes\sigma_x\otimes\sigma_x\otimes I)+(I\otimes I\otimes\sigma_x\otimes\sigma_x)\,]} = e^{-\beta(\sigma_x\otimes\sigma_x\otimes I\otimes I)} \circ e^{-\beta(I\otimes\sigma_x\otimes\sigma_x\otimes I)} \circ e^{-\beta(I\otimes I\otimes\sigma_x\otimes\sigma_x)} $$
$$ = [\,\cos(i\beta)\,(I\otimes I\otimes I\otimes I) + i\,\sin(i\beta)\,(\sigma_x\otimes\sigma_x\otimes I\otimes I)\,] \circ [\,\cos(i\beta)\,(I\otimes I\otimes I\otimes I) + i\,\sin(i\beta)\,(I\otimes\sigma_x\otimes\sigma_x\otimes I)\,] \circ [\,\cos(i\beta)\,(I\otimes I\otimes I\otimes I) + i\,\sin(i\beta)\,(I\otimes I\otimes\sigma_x\otimes\sigma_x)\,]. \qquad (2) $$
In this particular case, if λ denote the eigenvalues of a certain L : C2 → C2, then
$$ \mu_{\beta,4}(\lambda_{j_1}, \lambda_{j_2}, \lambda_{j_3}, \lambda_{j_4}) = w(\,P_{\psi_{j_1}}\otimes P_{\psi_{j_2}}\otimes P_{\psi_{j_3}}\otimes P_{\psi_{j_4}}\,), $$
where w(·) = c4 Tr(e^{−βH4} ·) and c4 = 1/Tr(e^{−βH4}).
For general n one can show that
$$ \mu_{\beta,n}(\lambda_{j_1}, \lambda_{j_2}, \ldots, \lambda_{j_n}) = c_n\, \mathrm{Tr}\big[\, e^{-\beta\,[\,(\sigma_x\otimes\sigma_x\otimes I\cdots I)+(I\otimes\sigma_x\otimes\sigma_x\cdots I)+\cdots+(I\cdots I\otimes\sigma_x\otimes\sigma_x)\,]}\, (P_{\psi_{j_1}}\otimes P_{\psi_{j_2}}\otimes\cdots\otimes P_{\psi_{j_n}})\,\big] $$
$$ = c_n\, \mathrm{Tr}\Big[\, \big(\cos(i\beta)\, I^{\otimes n} + i\,\sin(i\beta)\,(\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \big(\cos(i\beta)\, I^{\otimes n} + i\,\sin(i\beta)\,(I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(\cos(i\beta)\, I^{\otimes n} + i\,\sin(i\beta)\,(I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (P_{\psi_{j_1}}\otimes P_{\psi_{j_2}}\otimes\cdots\otimes P_{\psi_{j_n}})\,\Big], \qquad (3) $$
where c_n = 1/Tr(e^{−βHn}).
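A minimal numerical sketch of the construction in (3) is given below; the matrix L is an arbitrary self-adjoint example chosen only for illustration (it is not the observable of Assumption A in Section 3), and the code simply evaluates µβ,n(j1, ..., jn) = cn Tr[e^{−βHn}(Pj1 ⊗ ... ⊗ Pjn)] for a small chain.

\begin{verbatim}
import itertools
import numpy as np
from scipy.linalg import expm

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def kron_list(mats):
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

def H_n(n):
    # H_n = sum_j I^{(x)j} (x) (sigma_x (x) sigma_x) (x) I^{(x)(n-j-2)}
    H = np.zeros((2**n, 2**n))
    for j in range(n - 1):
        H += kron_list([I2]*j + [sx, sx] + [I2]*(n - j - 2))
    return H

def mu_beta_n(L, n, beta):
    # mu_{beta,n}(j_1,...,j_n) = c_n Tr[ exp(-beta H_n) (P_{j_1} (x) ... (x) P_{j_n}) ]
    vals, vecs = np.linalg.eigh(L)                       # eigenvectors psi_1, psi_2 of L
    P = [np.outer(vecs[:, k], vecs[:, k]) for k in range(2)]
    B = expm(-beta * H_n(n))
    c = 1.0 / np.trace(B)
    return {js: c * np.trace(B @ kron_list([P[j] for j in js]))
            for js in itertools.product(range(2), repeat=n)}

L = np.array([[0.3, 0.8], [0.8, -0.3]])                  # illustrative self-adjoint L
mu = mu_beta_n(L, n=4, beta=2.0)
print(sum(mu.values()))                                  # expected: 1.0 (up to rounding)
\end{verbatim}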
3 The associated probability µ in the zero temperature limit case
As a matter of notation we use the expression σz instead of σ^z.
We will first present a simple case in order to get a clear picture of the setting for a general L. Suppose L = σz. In this case µ will be the measure of maximal entropy.
Let us take for instance the case n = 4. In this way we have to consider
the observable C = σz ⊗ σz ⊗ σz ⊗ σz .
The eigenvalues of σz are 1 and −1 which are associated respectively to
the eigenvectors e1 = (1, 0) and e2 = (0, 1). We associate 1 to 1 and 2 to −1.
The eigenvectors of C are of the form
ej1 ⊗ ej2 ⊗ ej3 ⊗ ej4 ,
where, (j1 , j2 , j3 , j4 ) ∈ {1, 2}4 . The possible eigenvalues are 1 and −1.
We denote by P1 : C2 → C2 the projection on e1 and by P2 : C2 → C2 the projection on e2. In this way
$$ P_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \qquad\text{and}\qquad P_2 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}. $$
The probability µ of the element (j1, j2, j3, j4) is
$$ \mu_{\beta,4}(j_1, j_2, j_3, j_4) = w(P_{j_1}\otimes P_{j_2}\otimes P_{j_3}\otimes P_{j_4}) = c_4\, \mathrm{Tr}\,(B_4\,[\,P_{j_1}\otimes P_{j_2}\otimes P_{j_3}\otimes P_{j_4}\,]), $$
where c4 = 1/Tr(e^{−βH4}).
Remember that Tr(A1 ⊗ A2 ⊗ A3 ⊗ A4) = Tr(A1) Tr(A2) Tr(A3) Tr(A4) and note that
$$ \sigma_x P_1 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} $$
has trace 0. Moreover,
$$ \sigma_x P_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} $$
has trace 0.
In this way
Tr [ (I⊗I⊗σx ⊗σx )(Pj1 ⊗Pj2 ⊗Pj3 ⊗Pj4 ) ] = Tr(Pj1 ⊗Pj2 ⊗σx (Pj3 )⊗σx (Pj4 )) = 0.
Using a similar reasoning for (I ⊗ σx ⊗ σx ⊗ I), etc... we finally get that
w( Pj1 ⊗Pj2 ⊗Pj3 ⊗Pj4 ) = c4 Tr [ e−βH4 (Pj1 ⊗Pj2 ⊗Pj3 ⊗Pj4 ) ] = c4 [ cos(i β) ]3 .
Therefore µβ,4 (j1 , j2 , j3 , j4 ) has the same value for any (j1 , j2 , j3 , j4 ) ∈ {1, 2}4 .
It is quite easy to extend the above for estimating µβ,n (j1 , j2 , j3 , ..., jn ),
n ∈ N. In this way we get that µβ is the independent probability on {1, 2}N ,
with P (1) = p1 = 1/2 and P (2) = p2 = 1/2.
Now we will consider a richer and more interesting example. We will define µ on {1, 2}N via another general observable.
Assumption A: We consider self-adjoint operators (observables) of the form
$$ L = \begin{pmatrix} \cos^2(\theta) - \sin^2(\theta) & 2\cos(\theta)\sin(\theta) \\ 2\cos(\theta)\sin(\theta) & \sin^2(\theta) - \cos^2(\theta) \end{pmatrix}, $$
where θ ∈ (0, π/2), and the corresponding associated observable
$$ C_n = C = \underbrace{L \otimes L \otimes \cdots \otimes L}_{n}. $$
The eigenvalues of L are λ1 = 1 and λ2 = −1, associated respectively to the unitary eigenvectors v1 = (cos(θ), sin(θ)) ∈ C2 and v2 = (− sin(θ), cos(θ)) ∈ C2, which are orthogonal.
The case θ = 0 corresponds to L = σz and the case θ = π/2 corresponds to L = −σz. We exclude these cases from now on.
We associate 1 to λ1 and 2 to λ2 . The eigenvectors of C are of the form
vj1 ⊗ vj2 ⊗ vj3 ⊗ vj4 ⊗ ... ⊗ vjn ,
where, (j1 , j2 , j3 , j4 , ..., jn ) ∈ {1, 2}n .
We denote by P1 : C2 → C2 the projection on v1 and by P2 : C2 → C2 the projection on v2. In this way
$$ P_1 = \begin{pmatrix} \cos^2(\theta) & \cos(\theta)\sin(\theta) \\ \cos(\theta)\sin(\theta) & \sin^2(\theta) \end{pmatrix} \qquad\text{and}\qquad P_2 = \begin{pmatrix} \sin^2(\theta) & -\cos(\theta)\sin(\theta) \\ -\cos(\theta)\sin(\theta) & \cos^2(\theta) \end{pmatrix}. $$
Note that Tr P1 = Tr P2 = 1. Moreover
$$ \sigma_x P_1 = \begin{pmatrix} \cos(\theta)\sin(\theta) & \sin^2(\theta) \\ \cos^2(\theta) & \cos(\theta)\sin(\theta) \end{pmatrix} $$
has trace β1 := sin(2θ) ∈ R and
$$ \sigma_x P_2 = \begin{pmatrix} -\cos(\theta)\sin(\theta) & \cos^2(\theta) \\ \sin^2(\theta) & -\cos(\theta)\sin(\theta) \end{pmatrix} $$
has trace β2 := − sin(2θ) ∈ R. Therefore, Tr(σx P2) = β2 = −β1. Note that if θ ≠ π/4 then β1, β2 both have modulus smaller than 1. The case θ = π/4 produces results that differ essentially from those of the other parameters.
The probability of the cylinder element [j1, j2, j3, j4, ..., jn] is given by µβ,n(j1, j2, j3, j4, ..., jn); see expression (3).
We point out that the normalization factor is c_n = 1/Tr(B_n) = 1/(cos^{n−1}(iβ) 2^n). Indeed,
$$ \mathrm{Tr}(B_n) = \mathrm{Tr}\big[\, e^{-\beta\,[\,(\sigma_x\otimes\sigma_x\otimes I\cdots I)+(I\otimes\sigma_x\otimes\sigma_x\cdots I)+\cdots+(I\cdots I\otimes\sigma_x\otimes\sigma_x)\,]}\, (I\otimes\cdots\otimes I)\,\big] $$
$$ = \mathrm{Tr}\Big[\, \big(\cos(i\beta)\, I^{\otimes n} + i\,\sin(i\beta)\,(\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \big(\cos(i\beta)\, I^{\otimes n} + i\,\sin(i\beta)\,(I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(\cos(i\beta)\, I^{\otimes n} + i\,\sin(i\beta)\,(I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (I\otimes I\otimes\cdots\otimes I)\,\Big], $$
which, once expanded, is a sum of 2^{n−1} terms. Only the term cos^{n−1}(iβ) (I ⊗ ... ⊗ I) does not contain a product of σx. As Tr(σx I) = Tr(σx) = 0, any term other than cos^{n−1}(iβ) (I ⊗ I ⊗ I ⊗ ... ⊗ I) produces a null trace. Moreover Tr(I) = 2, therefore Tr(Bn) = cos^{n−1}(iβ) 2^n.
Using equation (3) we have that
$$ \mu_{\beta,n}(\lambda_{j_1}, \lambda_{j_2}, \ldots, \lambda_{j_n}) = \frac{1}{\cos^{n-1}(i\beta)\,2^n}\, \mathrm{Tr}\Big[\, \big(\cos(i\beta)\, I^{\otimes n} + i\,\sin(i\beta)\,(\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \big(\cos(i\beta)\, I^{\otimes n} + i\,\sin(i\beta)\,(I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(\cos(i\beta)\, I^{\otimes n} + i\,\sin(i\beta)\,(I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (P_{j_1}\otimes P_{j_2}\otimes\cdots\otimes P_{j_n})\,\Big]. $$
Using the relation i sin(iβ) = − cos(iβ) + e^{−β}, we get, when β is large,
$$ \mu_{\beta,n}(\lambda_{j_1}, \lambda_{j_2}, \ldots, \lambda_{j_n}) \sim \frac{1}{\cos^{n-1}(i\beta)\,2^n}\, \mathrm{Tr}\Big[\, \big(\cos(i\beta)\, I^{\otimes n} - \cos(i\beta)\,(\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \big(\cos(i\beta)\, I^{\otimes n} - \cos(i\beta)\,(I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(\cos(i\beta)\, I^{\otimes n} - \cos(i\beta)\,(I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (P_{j_1}\otimes P_{j_2}\otimes\cdots\otimes P_{j_n})\,\Big]. $$
Therefore, when β → ∞, the limit measure µn satisfies
$$ \mu_n(\lambda_{j_1}, \lambda_{j_2}, \ldots, \lambda_{j_n}) = \frac{1}{2^n}\, \mathrm{Tr}\Big[\, \big(I^{\otimes n} - (\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \big(I^{\otimes n} - (I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(I^{\otimes n} - (I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (P_{j_1}\otimes P_{j_2}\otimes\cdots\otimes P_{j_n})\,\Big]. \qquad (4) $$
For example, as Tr(σx P_{j_i}) = β_{j_i},
$$ \mu_2(j_1, j_2) = \frac{1}{2^2}\, \mathrm{Tr}\big[\,(I\otimes I - \sigma_x\otimes\sigma_x)\circ(P_{j_1}\otimes P_{j_2})\,\big] = \frac{1}{2^2}\, \mathrm{Tr}\big[\, P_{j_1}\otimes P_{j_2} - \sigma_x P_{j_1}\otimes\sigma_x P_{j_2}\,\big] = \frac{1}{2^2}\big[\,1 - \beta_{j_1}\beta_{j_2}\,\big]. $$
Note that if θ ≠ π/4, the number 1 − β_{j_1} β_{j_2} is always positive because |β1| < 1 and |β2| < 1. Similarly,
$$ \mu_3(j_1, j_2, j_3) = \frac{1}{2^3}\, \mathrm{Tr}\Big[\, \big((I\otimes I\otimes I) - (\sigma_x\otimes\sigma_x\otimes I)\big) \circ \big((I\otimes I\otimes I) - (I\otimes\sigma_x\otimes\sigma_x)\big) \circ (P_{j_1}\otimes P_{j_2}\otimes P_{j_3})\,\Big] = \frac{1}{2^3}\big[\,1 - \beta_{j_1}\beta_{j_2} - \beta_{j_2}\beta_{j_3} + \beta_{j_1}\beta_{j_3}\,\big]. $$
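These two closed forms can be checked numerically against formula (4); the sketch below (with an arbitrary θ ≠ π/4, chosen only for illustration) builds the projections P1, P2 and the products of factors (I − ... σx ⊗ σx ...) explicitly.

\begin{verbatim}
import itertools
import numpy as np

theta = 0.4                                   # any theta in (0, pi/2), theta != pi/4
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
v = [np.array([np.cos(theta), np.sin(theta)]),
     np.array([-np.sin(theta), np.cos(theta)])]
P = [np.outer(w, w) for w in v]               # projections P_1, P_2
b = [np.trace(sx @ p) for p in P]             # beta_1 = sin(2 theta), beta_2 = -beta_1

def kron_list(mats):
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

def mu_n(js):                                  # formula (4)
    n = len(js)
    M = np.eye(2**n)
    for i in range(n - 1):                     # product of (I - I(x)..(x)sx(x)sx(x)..(x)I)
        M = M @ (np.eye(2**n) - kron_list([I2]*i + [sx, sx] + [I2]*(n - i - 2)))
    return np.trace(M @ kron_list([P[j] for j in js])) / 2**n

for (j1, j2) in itertools.product(range(2), repeat=2):
    assert np.isclose(mu_n((j1, j2)), (1 - b[j1]*b[j2]) / 4)
for (j1, j2, j3) in itertools.product(range(2), repeat=3):
    assert np.isclose(mu_n((j1, j2, j3)),
                      (1 - b[j1]*b[j2] - b[j2]*b[j3] + b[j1]*b[j3]) / 8)
print("closed forms for mu_2 and mu_3 agree with (4)")
\end{verbatim}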
Proposition 1.
$$ \mu_{n+1}(1, j_2, j_3, \ldots, j_n) + \mu_{n+1}(2, j_2, j_3, \ldots, j_n) = \mu_n(j_2, j_3, \ldots, j_n). $$
Proof: Indeed, using (4) and the fact that P1 ⊗ Pj1 ⊗ Pj2 ⊗ ... ⊗ Pjn + P2 ⊗ Pj1 ⊗ Pj2 ⊗ ... ⊗ Pjn = I ⊗ Pj1 ⊗ Pj2 ⊗ ... ⊗ Pjn, we get
$$ \mu_{n+1}(1, j_1, j_2, \ldots, j_n) + \mu_{n+1}(2, j_1, j_2, \ldots, j_n) = \frac{1}{2^{n+1}}\, \mathrm{Tr}\Big[\, \big(I^{\otimes(n+1)} - (\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \big(I^{\otimes(n+1)} - (I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(I^{\otimes(n+1)} - (I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (I\otimes P_{j_1}\otimes P_{j_2}\otimes\cdots\otimes P_{j_n})\,\Big]. $$
As Tr(σx) = 0,
$$ \frac{1}{2^{n+1}}\, \mathrm{Tr}\Big[\, -(\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I) \circ \big(I^{\otimes(n+1)} - (I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(I^{\otimes(n+1)} - (I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (I\otimes P_{j_1}\otimes P_{j_2}\otimes\cdots\otimes P_{j_n})\,\Big] = 0. $$
Note also that, for any B1, ..., Bn, we have Tr(I ⊗ B1 ⊗ ... ⊗ Bn) = 2 Tr(B1 ⊗ ... ⊗ Bn). Then
$$ \mu_{n+1}(1, j_1, j_2, \ldots, j_n) + \mu_{n+1}(2, j_1, j_2, \ldots, j_n) = \frac{1}{2^{n+1}}\, \mathrm{Tr}\Big[\, I^{\otimes(n+1)} \circ \big(I^{\otimes(n+1)} - (I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(I^{\otimes(n+1)} - (I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (I\otimes P_{j_1}\otimes P_{j_2}\otimes\cdots\otimes P_{j_n})\,\Big] $$
$$ = \frac{1}{2^{n+1}}\, \mathrm{Tr}\Big[\, \big(I^{\otimes(n+1)} - (I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(I^{\otimes(n+1)} - (I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (I\otimes P_{j_1}\otimes P_{j_2}\otimes\cdots\otimes P_{j_n})\,\Big] $$
$$ = \frac{1}{2^{n}}\, \mathrm{Tr}\Big[\, \big(I^{\otimes n} - (\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(I^{\otimes n} - (I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (P_{j_1}\otimes P_{j_2}\otimes\cdots\otimes P_{j_n})\,\Big] = \mu_n(j_1, j_2, \ldots, j_n). $$
It follows from the commutativity of the composition of the several terms
in expression (4) that
$$ \mu_n(j_1, j_2, j_3, j_4, \ldots, j_n) = \mu_n(j_n, j_{n-1}, j_{n-2}, \ldots, j_2, j_1). \qquad (5) $$
This sequence µn , n ∈ N, will define a probability µ on the Bernoulli
space. Indeed, the probabilities µn and µn+1 are compatible in the sense that
µn (j1 , j2 , j3 , .., jn ) = µn+1 (j1 , j2 , j3 , .., jn , 1) + µn+1 (j1 , j2 , j3 , .., jn , 2).
This follows from (5) and Proposition 1.
In this way, by the Kolmogorov extension theorem, we get a probability µ on the Bernoulli space {1, 2}N which is stationary for the shift σ (see Proposition 1).
Corollary 2. The probability µ is stationary.
One can easily show that such µ is not a Markov probability. Moreover,
the Stochastic Process associated to µ is not a two step Markov Chain (see
section 7).
4 A Large Deviation Principle for µ
We want to show how to compute µ recursively on cylinders.
Theorem 3. For any n > 0 we have that
$$ \mu(k, j_1, j_2, \ldots, j_n) = \frac{\mu(j_1, j_2, \ldots, j_n)}{2} - \frac{\beta_k \beta_{j_1}}{2^2}\, \mu(j_2, \ldots, j_n) + \frac{\beta_k \beta_{j_2}}{2^3}\, \mu(j_3, j_4, \ldots, j_n) - \frac{\beta_k \beta_{j_3}}{2^4}\, \mu(j_4, j_5, \ldots, j_n) + \cdots + \frac{(-1)^{n-2}\, \beta_k \beta_{j_{n-2}}}{2^{n-1}}\, \mu(j_{n-1}, j_n) + \frac{(-1)^{n-1}\, \beta_k \beta_{j_{n-1}}}{2^{n+1}} + \frac{(-1)^{n}\, \beta_k \beta_{j_n}}{2^{n+1}}. \qquad (6) $$
Proof: By equation (4) we have
$$ \mu(k, j_1, j_2, \ldots, j_n) = \frac{1}{2^{n+1}}\, \mathrm{Tr}\Big[\, \big(I^{\otimes(n+1)} - (\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \big(I^{\otimes(n+1)} - (I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(I^{\otimes(n+1)} - (I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (P_k\otimes P_{j_1}\otimes P_{j_2}\otimes\cdots\otimes P_{j_n})\,\Big]. $$
Expanding the first factor, this equals
$$ \frac{1}{2^{n+1}}\, \mathrm{Tr}\Big[\, I^{\otimes(n+1)} \circ \big(I^{\otimes(n+1)} - (I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(I^{\otimes(n+1)} - (I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (P_k\otimes P_{j_1}\otimes\cdots\otimes P_{j_n})\,\Big] + \frac{1}{2^{n+1}}\, \mathrm{Tr}\Big[\, -(\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I) \circ \big(I^{\otimes(n+1)} - (I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(I^{\otimes(n+1)} - (I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (P_k\otimes P_{j_1}\otimes\cdots\otimes P_{j_n})\,\Big] $$
$$ = \frac{1}{2}\,\mu(j_1, j_2, \ldots, j_n) - \frac{1}{2^{n+1}}\, \mathrm{Tr}\Big[\, \big(I^{\otimes(n+1)} - (I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \big(I^{\otimes(n+1)} - (I\otimes I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(I^{\otimes(n+1)} - (I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (\sigma_x P_k\otimes \sigma_x P_{j_1}\otimes P_{j_2}\otimes\cdots\otimes P_{j_n})\,\Big]. $$
Expanding the first factor of the remaining trace, the identity part gives $-\frac{\beta_k\beta_{j_1}}{2^2}\,\mu(j_2,\ldots,j_n)$ (here one uses Tr(σx Pk) = βk and Tr(σx Pj1) = βj1), while the other part gives
$$ +\ \frac{1}{2^{n+1}}\, \mathrm{Tr}\Big[\, \big(I^{\otimes(n+1)} - (I\otimes I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(I^{\otimes(n+1)} - (I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (\sigma_x P_k\otimes P_{j_1}\otimes \sigma_x P_{j_2}\otimes\cdots\otimes P_{j_n})\,\Big]. $$
Using similar computations,
$$ \mu(k, j_1, \ldots, j_n) = \frac{1}{2}\,\mu(j_1, \ldots, j_n) - \frac{\beta_k\beta_{j_1}}{4}\,\mu(j_2, \ldots, j_n) + \frac{\beta_k\beta_{j_2}}{8}\,\mu(j_3, \ldots, j_n) - \frac{1}{2^{n+1}}\, \mathrm{Tr}\Big[\, \big(I^{\otimes(n+1)} - (I\otimes I\otimes I\otimes\sigma_x\otimes\sigma_x\otimes I\otimes\cdots\otimes I)\big) \circ \cdots \circ \big(I^{\otimes(n+1)} - (I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (\sigma_x P_k\otimes P_{j_1}\otimes P_{j_2}\otimes \sigma_x P_{j_3}\otimes\cdots\otimes P_{j_n})\,\Big] $$
$$ = \cdots = \frac{1}{2}\,\mu(j_1, \ldots, j_n) - \frac{\beta_k\beta_{j_1}}{2^2}\,\mu(j_2, \ldots, j_n) + \frac{\beta_k\beta_{j_2}}{2^3}\,\mu(j_3, \ldots, j_n) - \cdots + \frac{(-1)^{n-2}\,\beta_k\beta_{j_{n-2}}}{2^{n-1}}\,\mu(j_{n-1}, j_n) + \frac{(-1)^{n-1}}{2^{n+1}}\, \mathrm{Tr}\Big[\, \big(I^{\otimes(n+1)} - (I\otimes I\otimes\cdots\otimes I\otimes\sigma_x\otimes\sigma_x)\big) \circ (\sigma_x P_k\otimes P_{j_1}\otimes\cdots\otimes \sigma_x P_{j_{n-1}}\otimes P_{j_n})\,\Big] $$
$$ = \frac{1}{2}\,\mu(j_1, j_2, \ldots, j_n) - \frac{\beta_k\beta_{j_1}}{2^2}\,\mu(j_2, \ldots, j_n) + \frac{\beta_k\beta_{j_2}}{2^3}\,\mu(j_3, \ldots, j_n) - \cdots + \frac{(-1)^{n-2}\,\beta_k\beta_{j_{n-2}}}{2^{n-1}}\,\mu(j_{n-1}, j_n) + \frac{(-1)^{n-1}\,\beta_k\beta_{j_{n-1}}}{2^{n+1}} + \frac{(-1)^{n}\,\beta_k\beta_{j_n}}{2^{n+1}}. $$
Some examples of direct computations:
$$ \mu(k, j_1, j_2) = \frac{\mu(j_1, j_2)}{2} - \frac{\beta_k\beta_{j_1}}{8} + \frac{\beta_k\beta_{j_2}}{8}, $$
$$ \mu(k, j_1, j_2, j_3) = \frac{\mu(j_1, j_2, j_3)}{2} - \frac{\beta_k\beta_{j_1}}{4}\,\mu(j_2, j_3) + \frac{\beta_k\beta_{j_2}}{16} - \frac{\beta_k\beta_{j_3}}{16}, $$
$$ \mu(k, j_1, j_2, j_3, j_4) = \frac{\mu(j_1, j_2, j_3, j_4)}{2} - \frac{\beta_k\beta_{j_1}}{4}\,\mu(j_2, j_3, j_4) + \frac{\beta_k\beta_{j_2}}{8}\,\mu(j_3, j_4) - \frac{\beta_k\beta_{j_3}}{32} + \frac{\beta_k\beta_{j_4}}{32}. $$
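Starting from the base cases µ(j) = 1/2 and µ(j1, j2) = (1 − βj1 βj2)/4, the recursion (6) determines µ on every cylinder. The self-contained sketch below (symbols 1, 2 encoded as 0, 1; θ an arbitrary illustrative value) builds µ this way and verifies the symmetry (5) and the compatibility of Proposition 1 on short cylinders.

\begin{verbatim}
import itertools, math
from functools import lru_cache

theta = 0.4
b = [math.sin(2*theta), -math.sin(2*theta)]   # beta_1, beta_2

@lru_cache(maxsize=None)
def mu(js):                                   # recursion (6) with its base cases
    n = len(js)
    if n == 1:
        return 0.5
    if n == 2:
        return (1 - b[js[0]]*b[js[1]]) / 4
    k, rest = js[0], js[1:]
    m = len(rest)
    total = mu(rest) / 2
    for i in range(1, m - 1):
        total += (-1)**i * b[k]*b[rest[i-1]] * mu(rest[i:]) / 2**(i+1)
    total += (-1)**(m-1) * b[k]*b[rest[m-2]] / 2**(m+1)
    total += (-1)**m * b[k]*b[rest[m-1]] / 2**(m+1)
    return total

for js in itertools.product(range(2), repeat=5):
    assert abs(mu(js) - mu(js[::-1])) < 1e-12                    # symmetry (5)
    assert abs(mu(js) - mu((0,) + js) - mu((1,) + js)) < 1e-12   # Proposition 1
print("mu built from (6) satisfies (5) and Proposition 1 on length-5 cylinders")
\end{verbatim}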
Suppose A : {1, 2} → R is a function and we are interested in sums of the form A(x0) + A(x1) + A(x2) + ... + A(xn), n ∈ N, where x = (x0, x1, x2, ..., xn, ...).
For the purpose of future use in Large Deviations we are interested, for t > 0, in
$$ Q_n(t) = \int e^{t\,(A(x_0)+A(x_1)+\cdots+A(x_n))}\, d\mu(x) = \sum_{j_0}\sum_{j_1}\cdots\sum_{j_n} e^{t\,(A(j_0)+A(j_1)+\cdots+A(j_n))}\, \mu(j_0, j_1, \ldots, j_n). $$
Denote $\alpha(t) = \sum_{j_k} \beta_{j_k}\, e^{t A(j_k)}$ and $\delta(t) = \sum_{j_k} e^{t A(j_k)}$. Note that |α(t)| < |δ(t)|.
Theorem 4. For any n we have that
$$ Q_n(t) = \frac{1}{2}\,\delta(t)\, Q_{n-1}(t) - \frac{1}{4}\,\alpha(t)^2\, Q_{n-2}(t) + \frac{1}{8}\,\delta(t)\,\alpha(t)^2\, Q_{n-3}(t) - \frac{1}{16}\,\delta(t)^2\,\alpha(t)^2\, Q_{n-4}(t) + \frac{1}{32}\,\delta(t)^3\,\alpha(t)^2\, Q_{n-5}(t) - \cdots + \frac{(-1)^{n-1}}{2^{n-2}}\,\delta(t)^{n-4}\,\alpha(t)^2\, Q_2(t) + \frac{(-1)^{n}}{2^{n-1}}\,\delta(t)^{n-3}\,\alpha(t)^2\, Q_1(t). \qquad (7) $$
As an example we will compute Q3(t). From the equation
$$ \mu(j_0, j_1, j_2, j_3) = \frac{\mu(j_1, j_2, j_3)}{2} - \frac{\beta_{j_0}\beta_{j_1}}{4}\,\mu(j_2, j_3) + \frac{\beta_{j_0}\beta_{j_2}}{16} - \frac{\beta_{j_0}\beta_{j_3}}{16} $$
we get
$$ Q_3(t) = \sum_{j_0}\sum_{j_1}\sum_{j_2}\sum_{j_3} e^{t\,(A(j_0)+A(j_1)+A(j_2)+A(j_3))}\, \mu(j_0, j_1, j_2, j_3) $$
$$ = \frac{1}{2}\Big[\sum_{j_0} e^{t A(j_0)}\Big] \sum_{j_1}\sum_{j_2}\sum_{j_3} e^{t\,(A(j_1)+A(j_2)+A(j_3))}\, \mu(j_1, j_2, j_3) - \frac{1}{4}\Big[\sum_{j_0} e^{t A(j_0)}\beta_{j_0}\Big]\Big[\sum_{j_1} e^{t A(j_1)}\beta_{j_1}\Big] \sum_{j_2, j_3} e^{t\,(A(j_2)+A(j_3))}\, \mu(j_2, j_3) $$
$$ + \frac{1}{16} \sum_{j_0}\sum_{j_1}\sum_{j_2}\sum_{j_3} e^{t\,(A(j_0)+A(j_1)+A(j_2)+A(j_3))}\, \beta_{j_0}\beta_{j_2} - \frac{1}{16} \sum_{j_0}\sum_{j_1}\sum_{j_2}\sum_{j_3} e^{t\,(A(j_0)+A(j_1)+A(j_2)+A(j_3))}\, \beta_{j_0}\beta_{j_3} $$
$$ = \frac{1}{2}\,\delta(t)\, Q_2(t) - \frac{1}{4}\,\alpha(t)^2\, Q_1(t). $$
Proof of Theorem 4: By definition
$$ Q_n(t) = \sum_{j_0}\sum_{j_1}\cdots\sum_{j_n} e^{t\,(A(j_0)+A(j_1)+\cdots+A(j_n))}\, \mu(j_0, j_1, \ldots, j_n). $$
From equation (6),
$$ \mu(j_0, j_1, j_2, \ldots, j_n) = \frac{\mu(j_1, j_2, \ldots, j_n)}{2} - \frac{\beta_{j_0}\beta_{j_1}}{2^2}\,\mu(j_2, \ldots, j_n) + \frac{\beta_{j_0}\beta_{j_2}}{2^3}\,\mu(j_3, j_4, \ldots, j_n) - \frac{\beta_{j_0}\beta_{j_3}}{2^4}\,\mu(j_4, j_5, \ldots, j_n) + \cdots + \frac{(-1)^{n-2}\,\beta_{j_0}\beta_{j_{n-2}}}{2^{n-1}}\,\mu(j_{n-1}, j_n) + \frac{(-1)^{n-1}\,\beta_{j_0}\beta_{j_{n-1}}}{2^{n+1}} + \frac{(-1)^{n}\,\beta_{j_0}\beta_{j_n}}{2^{n+1}}. $$
Then
$$ Q_n(t) = \frac{1}{2} \sum_{j_0} e^{t A(j_0)} \sum_{j_1, \ldots, j_n} e^{t\,(A(j_1)+\cdots+A(j_n))}\, \mu(j_1, \ldots, j_n) - \frac{1}{2^2} \sum_{j_0} e^{t A(j_0)}\beta_{j_0} \sum_{j_1} e^{t A(j_1)}\beta_{j_1} \sum_{j_2, \ldots, j_n} e^{t\,(A(j_2)+\cdots+A(j_n))}\, \mu(j_2, \ldots, j_n) $$
$$ + \frac{1}{2^3} \sum_{j_0} e^{t A(j_0)}\beta_{j_0} \sum_{j_1} e^{t A(j_1)} \sum_{j_2} e^{t A(j_2)}\beta_{j_2} \sum_{j_3, \ldots, j_n} e^{t\,(A(j_3)+\cdots+A(j_n))}\, \mu(j_3, \ldots, j_n) - \cdots + \frac{(-1)^{n}}{2^{n+1}} \sum_{j_0} e^{t A(j_0)}\beta_{j_0} \sum_{j_n} e^{t A(j_n)}\beta_{j_n} \sum_{j_1} e^{t A(j_1)} \cdots \sum_{j_{n-1}} e^{t A(j_{n-1})}. $$
Following the definitions of α(t) and δ(t), this equals
$$ \frac{1}{2}\,\delta(t)\, Q_{n-1}(t) - \frac{1}{4}\,\alpha(t)^2\, Q_{n-2}(t) + \frac{1}{8}\,\delta(t)\,\alpha(t)^2\, Q_{n-3}(t) - \frac{1}{16}\,\delta(t)^2\,\alpha(t)^2\, Q_{n-4}(t) + \frac{1}{32}\,\delta(t)^3\,\alpha(t)^2\, Q_{n-5}(t) - \cdots + \frac{(-1)^{n-1}}{2^{n-2}}\,\delta(t)^{n-4}\,\alpha(t)^2\, Q_2(t) + \frac{(-1)^{n}}{2^{n-1}}\,\delta(t)^{n-3}\,\alpha(t)^2\, Q_1(t) + \frac{(-1)^{n-1}}{2^{n+1}}\,\alpha(t)^2\,\delta(t)^{n-2} + \frac{(-1)^{n}}{2^{n+1}}\,\alpha(t)^2\,\delta(t)^{n-2} $$
$$ = \frac{1}{2}\,\delta(t)\, Q_{n-1}(t) - \frac{1}{4}\,\alpha(t)^2\, Q_{n-2}(t) + \frac{1}{8}\,\delta(t)\,\alpha(t)^2\, Q_{n-3}(t) - \frac{1}{16}\,\delta(t)^2\,\alpha(t)^2\, Q_{n-4}(t) + \cdots + \frac{(-1)^{n-1}}{2^{n-2}}\,\delta(t)^{n-4}\,\alpha(t)^2\, Q_2(t) + \frac{(-1)^{n}}{2^{n-1}}\,\delta(t)^{n-3}\,\alpha(t)^2\, Q_1(t), $$
since the last two terms cancel.
Proposition 5.
$$ Q_{n+2}(t) = \frac{\delta(t)^2 - \alpha(t)^2}{4}\, Q_n(t). $$
We get as a corollary that Qn(t) grows like $\big(\tfrac{\sqrt{\delta(t)^2-\alpha(t)^2}}{2}\big)^n$. In this way, for each fixed t,
$$ c(t) = \lim_{n\to\infty} \frac{1}{n}\,\log Q_n(t) = \log \frac{\sqrt{\delta(t)^2 - \alpha(t)^2}}{2} = \frac{1}{2}\,\log\Big( \Big[\sum_{j_0} e^{t A(j_0)}\Big]^2 - \Big[\sum_{j_0} \beta_{j_0}\, e^{t A(j_0)}\Big]^2 \Big) - \log 2, \qquad (8) $$
which is clearly differentiable in t.
Proof (of the proposition). We have that
$$ Q_n(t) = \frac{1}{2}\,\delta(t)\, Q_{n-1}(t) - \frac{1}{4}\,\alpha(t)^2\, Q_{n-2}(t) + \frac{1}{8}\,\delta(t)\,\alpha(t)^2\, Q_{n-3}(t) - \frac{1}{16}\,\delta(t)^2\,\alpha(t)^2\, Q_{n-4}(t) + \frac{1}{32}\,\delta(t)^3\,\alpha(t)^2\, Q_{n-5}(t) - \cdots + \frac{(-1)^{n-1}}{2^{n-2}}\,\delta(t)^{n-4}\,\alpha(t)^2\, Q_2(t) + \frac{(-1)^{n}}{2^{n-1}}\,\delta(t)^{n-3}\,\alpha(t)^2\, Q_1(t), $$
and, using the same formula applied to Q_{n-1}(t),
$$ Q_{n-1}(t) = \frac{1}{2}\,\delta(t)\, Q_{n-2}(t) - \frac{1}{4}\,\alpha(t)^2\, Q_{n-3}(t) + \frac{1}{8}\,\delta(t)\,\alpha(t)^2\, Q_{n-4}(t) - \frac{1}{16}\,\delta(t)^2\,\alpha(t)^2\, Q_{n-5}(t) + \frac{1}{32}\,\delta(t)^3\,\alpha(t)^2\, Q_{n-6}(t) - \cdots + \frac{(-1)^{n-2}}{2^{n-3}}\,\delta(t)^{n-5}\,\alpha(t)^2\, Q_2(t) + \frac{(-1)^{n-1}}{2^{n-2}}\,\delta(t)^{n-4}\,\alpha(t)^2\, Q_1(t). $$
If we replace Q_{n-1} in the first equation by the right hand side of the second equation we get
$$ Q_n(t) = \frac{\delta(t)^2 - \alpha(t)^2}{4}\, Q_{n-2}(t). $$
Consider
$$ c(t) = \lim_{n\to\infty} \frac{1}{n}\,\log \Big( \int e^{t\,(A(x_0)+A(x_1)+\cdots+A(x_n))}\, d\mu(x) \Big). $$
c(t) is sometimes called the free energy at the point t for the probability µ and the classical observable A (see [3]).
As we said before, if c(t) is differentiable for all t ∈ R, then a Large Deviation Principle holds (see [3]) for the sums
$$ \frac{1}{n+1}\,\big( A(x_0)+A(x_1)+\cdots+A(x_n) \big) = \frac{1}{n+1}\,\big( A(x)+A(\sigma(x))+\cdots+A(\sigma^n(x)) \big). $$
We have just shown that an L.D.P. holds for such a class of A, because we exhibited the explicit form of c and it is differentiable (see (8)).
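A numerical illustration of this conclusion (with θ, t and the values of A chosen arbitrarily) is sketched below: Qn(t) is computed by brute force from the recursion (6), and the ratio Q_{n+2}/Q_n and the quantity (1/n) log Qn(t) are compared with Proposition 5 and with the explicit free energy of formula (8).

\begin{verbatim}
import itertools, math
from functools import lru_cache

theta, t = 0.4, 0.7
A = {0: 1.3, 1: -0.2}                          # A depends only on the first symbol
b = [math.sin(2*theta), -math.sin(2*theta)]    # beta_1, beta_2

@lru_cache(maxsize=None)
def mu(js):                                    # recursion (6) with its base cases
    n = len(js)
    if n == 1:
        return 0.5
    if n == 2:
        return (1 - b[js[0]]*b[js[1]]) / 4
    k, rest = js[0], js[1:]
    m = len(rest)
    total = mu(rest) / 2
    for i in range(1, m - 1):
        total += (-1)**i * b[k]*b[rest[i-1]] * mu(rest[i:]) / 2**(i+1)
    total += (-1)**(m-1) * b[k]*b[rest[m-2]] / 2**(m+1)
    total += (-1)**m * b[k]*b[rest[m-1]] / 2**(m+1)
    return total

def Q(n):                                      # Q_n(t): sum over cylinders of length n+1
    return sum(math.exp(t*sum(A[j] for j in js)) * mu(js)
               for js in itertools.product(range(2), repeat=n+1))

delta = sum(math.exp(t*A[j]) for j in (0, 1))
alpha = sum(b[j]*math.exp(t*A[j]) for j in (0, 1))
c_explicit = math.log(math.sqrt(delta**2 - alpha**2) / 2)   # formula (8)

print(Q(8)/Q(6), (delta**2 - alpha**2)/4)      # Proposition 5: the two numbers agree
print(math.log(Q(12))/12, c_explicit)          # (1/n) log Q_n(t) approaches c(t)
\end{verbatim}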
5 µ is not mixing
We will show that the probability µ is not mixing.
Theorem 6. For any n ≥ 2 we have that
$$ \sum_{j_2, \ldots, j_n = 1}^{2} \mu(a, b, j_2, \ldots, j_n, c, d) = \mu(a, b)\,\mu(c, d) + \frac{(-1)^n\,(\beta_b - \beta_a)(\beta_c - \beta_d)}{2^4}. $$
Proof: Using Theorem 3 we get
$$ \mu(a, b, j_2, \ldots, j_n, c, d) = \frac{\mu(b, j_2, \ldots, j_n, c, d)}{2} - \frac{\beta_a\beta_b}{2^2}\,\mu(j_2, \ldots, j_n, c, d) + \frac{\beta_a\beta_{j_2}}{2^3}\,\mu(j_3, j_4, \ldots, j_n, c, d) - \frac{\beta_a\beta_{j_3}}{2^4}\,\mu(j_4, j_5, \ldots, j_n, c, d) + \cdots + \frac{(-1)^{n}\,\beta_a\beta_{j_n}}{2^{n+1}}\,\mu(c, d) + \frac{(-1)^{n+1}\,\beta_a\,(\beta_c - \beta_d)}{2^{n+3}}. $$
Note that
$$ \sum_{j_2=1}^{2} \frac{\beta_a\beta_{j_2}}{2^3}\,\mu(j_3, j_4, \ldots, j_n, c, d) = 0, $$
as β2 = −β1 and j2 only appears in βj2. In general, for k = 2, ..., n,
$$ \sum_{j_k=1}^{2} \beta_{j_k}\,(\ldots) = 0. $$
Therefore we separate all the terms that contain some βjk:
$$ \mu(a, b, j_2, \ldots, j_n, c, d) = \frac{\mu(b, j_2, \ldots, j_n, c, d)}{2} - \frac{\beta_a\beta_b}{2^2}\,\mu(j_2, \ldots, j_n, c, d) + \frac{(-1)^{n+1}\,\beta_a\,(\beta_c - \beta_d)}{2^{n+3}} + \sum_{k=2}^{n}\beta_{j_k}\,(\ldots) $$
$$ = \frac{1}{2}\Big[\, \frac{\mu(j_2, \ldots, j_n, c, d)}{2} - \frac{\beta_b\beta_{j_2}}{2^2}\,\mu(j_3, \ldots, j_n, c, d) + \frac{\beta_b\beta_{j_3}}{2^3}\,\mu(j_4, \ldots, j_n, c, d) - \frac{\beta_b\beta_{j_4}}{2^4}\,\mu(j_5, \ldots, j_n, c, d) + \cdots + \frac{(-1)^{n-1}\,\beta_b\beta_{j_n}}{2^{n}}\,\mu(c, d) + \frac{(-1)^{n}\,\beta_b\,(\beta_c - \beta_d)}{2^{n+2}} \,\Big] - \frac{\beta_a\beta_b}{2^2}\,\mu(j_2, \ldots, j_n, c, d) + \frac{(-1)^{n+1}\,\beta_a\,(\beta_c - \beta_d)}{2^{n+3}} + \sum_{k=2}^{n}\beta_{j_k}\,(\ldots) $$
$$ = \frac{\mu(j_2, \ldots, j_n, c, d)}{2^2} - \frac{\beta_a\beta_b}{2^2}\,\mu(j_2, \ldots, j_n, c, d) + \frac{(-1)^{n}\,\beta_b\,(\beta_c - \beta_d)}{2^{n+3}} + \frac{(-1)^{n+1}\,\beta_a\,(\beta_c - \beta_d)}{2^{n+3}} + \sum_{k=2}^{n}\beta_{j_k}\,(\ldots) $$
$$ = \mu(a, b)\,\mu(j_2, \ldots, j_n, c, d) + \frac{(-1)^{n}\,(\beta_b - \beta_a)(\beta_c - \beta_d)}{2^{n+3}} + \sum_{k=2}^{n}\beta_{j_k}\,(\ldots). $$
We know that µ is stationary, so
$$ \sum_{j_2, \ldots, j_n} \mu(j_2, \ldots, j_n, c, d) = \mu(c, d). $$
Summing the last expression over j2, ..., jn, the terms containing some βjk vanish and the constant term appears 2^{n−1} times, once for each choice of (j2, ..., jn), which gives the factor 2^{n−1}/2^{n+3} = 1/2^4. It follows that
$$ \sum_{j_2, \ldots, j_n} \mu(a, b, j_2, \ldots, j_n, c, d) = \mu(a, b)\,\mu(c, d) + \frac{(-1)^{n}\,(\beta_b - \beta_a)(\beta_c - \beta_d)}{2^4}. $$
From the above, for (a, b) = (1, 2) and (c, d) = (2, 1), we get that
$$ \lim_{n\to\infty} \mu\big(\, \sigma^{-2n}(c, d) \cap (a, b)\, \big) \ne \mu(a, b)\, \mu(c, d), $$
and this shows that µ is not mixing (see [15] or [12]). This also implies that for some continuous functions there is no decay of correlations to 0.
6 µ is not Gibbs

Now we investigate another kind of question.
Consider a generic element x = (k0 , k1 , k2 , ..., kn , ...) ∈ {1, 2}N .
Expression (6) can be written as
$$ \mu(k_n, k_{n-1}, k_{n-2}, \ldots, k_1, k_0) = \frac{1}{2}\,\mu(k_{n-1}, k_{n-2}, \ldots, k_1, k_0) - \frac{\beta_{k_n}\beta_{k_{n-1}}}{4}\,\mu(k_{n-2}, k_{n-3}, \ldots, k_1, k_0) + \frac{\beta_{k_n}\beta_{k_{n-2}}}{8}\,\mu(k_{n-3}, k_{n-4}, \ldots, k_1, k_0) - \frac{\beta_{k_n}\beta_{k_{n-3}}}{16}\,\mu(k_{n-4}, k_{n-5}, \ldots, k_1, k_0) + \cdots + (-1)^{n-2}\,\frac{\beta_{k_n}\beta_{k_2}}{2^{n-1}}\,\mu(k_1, k_0) + (-1)^{n-1}\,\frac{\beta_{k_n}\beta_{k_1}}{2^{n+1}} + (-1)^{n}\,\frac{\beta_{k_n}\beta_{k_0}}{2^{n+1}}. $$
Therefore, from (5) we get that
$$ \mu(k_0, k_1, k_2, \ldots, k_{n-1}, k_n) = \frac{1}{2}\,\mu(k_0, k_1, k_2, \ldots, k_{n-2}, k_{n-1}) - \frac{\beta_{k_n}\beta_{k_{n-1}}}{4}\,\mu(k_0, k_1, k_2, \ldots, k_{n-3}, k_{n-2}) + \frac{\beta_{k_n}\beta_{k_{n-2}}}{8}\,\mu(k_0, k_1, k_2, \ldots, k_{n-4}, k_{n-3}) - \frac{\beta_{k_n}\beta_{k_{n-3}}}{16}\,\mu(k_0, k_1, k_2, \ldots, k_{n-5}, k_{n-4}) + \cdots + (-1)^{n-2}\,\frac{\beta_{k_n}\beta_{k_2}}{2^{n-1}}\,\mu(k_0, k_1) + (-1)^{n-1}\,\frac{\beta_{k_n}\beta_{k_1}}{2^{n+1}} + (-1)^{n}\,\frac{\beta_{k_n}\beta_{k_0}}{2^{n+1}}. \qquad (9) $$
Note that
$$ \frac{\mu(k_0, k_1, k_2, \ldots, k_{n-1}, k_n)}{\beta_{k_n}} = \frac{\mu(k_0, k_1, k_2, \ldots, k_{n-2}, k_{n-1})}{2\,\beta_{k_n}} - \frac{\beta_{k_{n-1}}}{4}\,\mu(k_0, k_1, k_2, \ldots, k_{n-3}, k_{n-2}) + \frac{\beta_{k_{n-2}}}{8}\,\mu(k_0, k_1, k_2, \ldots, k_{n-4}, k_{n-3}) - \frac{\beta_{k_{n-3}}}{16}\,\mu(k_0, k_1, k_2, \ldots, k_{n-5}, k_{n-4}) + \cdots + (-1)^{n-2}\,\frac{\beta_{k_2}}{2^{n-1}}\,\mu(k_0, k_1) + (-1)^{n-1}\,\frac{\beta_{k_1}}{2^{n+1}} + (-1)^{n}\,\frac{\beta_{k_0}}{2^{n+1}}. \qquad (10) $$
For example,
$$ \frac{\mu(k_0, k_1, k_2, k_3)}{\beta_{k_3}} = \frac{\mu(k_0, k_1, k_2)}{2\,\beta_{k_3}} - \frac{\beta_{k_2}}{4}\,\mu(k_0, k_1) + \frac{\beta_{k_1}}{16} - \frac{\beta_{k_0}}{16}, $$
and
$$ \frac{2\,\mu(k_0, k_1, k_2, k_3, k_4)}{\beta_{k_4}} = \frac{\mu(k_0, k_1, k_2, k_3)}{\beta_{k_4}} - \frac{\beta_{k_3}}{2}\,\mu(k_0, k_1, k_2) + \frac{\beta_{k_2}}{4}\,\mu(k_0, k_1) - \frac{\beta_{k_1}}{16} + \frac{\beta_{k_0}}{16}. $$
If we add these two last equations we get
$$ 2\,\mu(k_0, \ldots, k_4) = \Big(1 - \frac{\beta_{k_4}}{\beta_{k_3}}\Big)\,\mu(k_0, \ldots, k_3) + \beta_{k_4}\Big(\frac{1}{2\,\beta_{k_3}} - \frac{\beta_{k_3}}{2}\Big)\,\mu(k_0, k_1, k_2). $$
This can be generalized for each n as below.
Proposition 7.
$$ 2\,\mu(k_0, k_1, \ldots, k_n) = \Big(1 - \frac{\beta_{k_n}}{\beta_{k_{n-1}}}\Big)\,\mu(k_0, \ldots, k_{n-1}) + \beta_{k_n}\Big(\frac{1}{2\,\beta_{k_{n-1}}} - \frac{\beta_{k_{n-1}}}{2}\Big)\,\mu(k_0, \ldots, k_{n-2}). $$
We can also use the symmetric expression (5) in order to write:
Proposition 8.
$$ \mu(k_0, k_1, \ldots, k_n) = \frac{1}{2}\Big(1 - \frac{\beta_{k_0}}{\beta_{k_1}}\Big)\,\mu(k_1, \ldots, k_n) + \frac{\beta_{k_0}}{2}\Big(\frac{1}{2\,\beta_{k_1}} - \frac{\beta_{k_1}}{2}\Big)\,\mu(k_2, \ldots, k_n). $$
Example: We recall that β1 = sin(2θ) and β2 = −β1. In the particular case θ = π/4 we have β1 = 1, β2 = −1 and, using Proposition 7,
$$ \mu(k_0, k_1, \ldots, k_n) = \frac{1}{2}\Big(1 - \frac{\beta_{k_n}}{\beta_{k_{n-1}}}\Big)\,\mu(k_0, \ldots, k_{n-1}). $$
If kn = kn−1 then µ(k0, k1, ..., kn) = 0. We conclude that µ is supported on the periodic orbit of the point (1, 2, 1, 2, 1, 2, ...).
We denote
$$ a(k_0, k_1) = \frac{1}{2}\Big(1 - \frac{\beta_{k_0}}{\beta_{k_1}}\Big), \qquad b(k_0, k_1) = \frac{\beta_{k_0}}{4}\Big(\frac{1}{\beta_{k_1}} - \beta_{k_1}\Big), $$
where k0, k1 ∈ {1, 2}, and
$$ \gamma = \frac{1}{4}\,\big(1 - \beta_1^2\big) = \mu(1, 1). $$
From the above proposition,
$$ \mu(k_0, k_1, \ldots, k_n) = a(k_0, k_1)\,\mu(k_1, \ldots, k_n) + b(k_0, k_1)\,\mu(k_2, \ldots, k_n). \qquad (11) $$
The possible values of a(k0, k1) and b(k0, k1) are:
a) if k0 = k1, then a(k0, k1) = 0 and 0 < b(k0, k1) = ¼(1 − β1²) = γ < ¼;
b) if k0 ≠ k1, then a(k0, k1) = 1 and −¼ < b(k0, k1) = ¼(β1² − 1) = −γ < 0.
Proposition 9. Suppose θ ≠ π/4. Then µ is positive on cylinder sets.
Proof. The probabilities of cylinders of size 1, 2 and 3 are not zero. Suppose that µ(x1, ..., xn) > 0 for any cylinder set of size n. As µ is stationary,
$$ \mu(x_0, \ldots, x_n) = \mu(1, x_0, x_1, \ldots, x_n) + \mu(2, x_0, x_1, \ldots, x_n) \ge \mu(x_0, x_0, x_1, \ldots, x_n) = a(x_0, x_0)\,\mu(x_0, \ldots, x_n) + b(x_0, x_0)\,\mu(x_1, \ldots, x_n) = \gamma\,\mu(x_1, \ldots, x_n) > 0. $$
Then the result follows by induction.
We also get a corollary about the Jacobian J of µ. Define, for x = (x0, x1, x2, ...),
$$ J(x) = \lim_{n\to\infty} \frac{\mu(x_0, \ldots, x_n)}{\mu(x_1, \ldots, x_n)}, $$
if the limit exists. It is known that J is well defined for µ-a.e. x and can be seen as the Radon-Nikodym derivative of µ over the inverse branches of σ (see [11], [15] or [12]).
From (11),
$$ \frac{\mu(k_0, k_1, \ldots, k_n)}{\mu(k_1, \ldots, k_n)} = a(k_0, k_1) + b(k_0, k_1)\, \frac{\mu(k_2, \ldots, k_n)}{\mu(k_1, \ldots, k_n)}, \qquad (12) $$
and then we get the following.
Corollary 10.
$$ J(k_0, k_1, k_2, \ldots) = a(k_0, k_1) + b(k_0, k_1)\, \frac{1}{J(k_1, k_2, k_3, \ldots)}. \qquad (13) $$
According to (13), J : {1, 2}N → R is an unknown function which satisfies the property
$$ J(k_0, k_1, k_2, \ldots) = a(k_0, k_1) + b(k_0, k_1)\, \frac{1}{J(k_1, k_2, k_3, \ldots)} = a(k_0, k_1) + b(k_0, k_1)\, \cfrac{1}{a(k_1, k_2) + b(k_1, k_2)\, \cfrac{1}{J(k_2, k_3, k_4, \ldots)}}. $$
Remark: We have that γ ≤ J ≤ 1 − γ. Indeed, if we denote J^n(x) := µ(x0, ..., xn)/µ(x1, ..., xn), x = (x0, x1, ...), then from (12) we get
$$ J^n(x_0, x_1, x_2, \ldots) = a(x_0, x_1) + b(x_0, x_1)\, \frac{1}{J^{n-1}(x_1, x_2, x_3, \ldots)}. $$
It follows that
$$ 1 \ge J^{n+1}(x_0, x_0, x_1, x_2, x_3, \ldots) = \gamma\, \frac{1}{J^{n}(x_0, x_1, x_2, x_3, \ldots)}, \qquad (14) $$
and then J^n(x0, x1, ...) ≥ γ for any n and x = (x0, x1, ...). As the sum J^n(1, x2, x3, ...) + J^n(2, x2, x3, ...) = 1, we also get J^n ≤ 1 − γ.
Proposition 11. J is not defined at 1∞ = (1, 1, 1, 1, 1, ...).
Proof: From (12) we get
$$ \frac{\mu(k_0, k_1, \ldots, k_n)}{\mu(k_1, \ldots, k_n)} = a(k_0, k_1) + b(k_0, k_1)\, \frac{\mu(k_2, \ldots, k_n)}{\mu(k_1, \ldots, k_n)}, \qquad (15) $$
where a(1, 1) = 0 and 0 < b(1, 1) = γ < 1, and then
$$ \frac{\mu(\underbrace{1, \ldots, 1}_{n+1})}{\mu(\underbrace{1, \ldots, 1}_{n})} = \gamma\, \frac{\mu(\underbrace{1, \ldots, 1}_{n-1})}{\mu(\underbrace{1, \ldots, 1}_{n})}. $$
Note that µ(1, 1)/µ(1) = 2γ. By induction,
$$ \frac{\mu(\underbrace{1, \ldots, 1}_{n+1})}{\mu(\underbrace{1, \ldots, 1}_{n})} = \frac{1}{2} \ \text{ for } n \text{ even}, \qquad \frac{\mu(\underbrace{1, \ldots, 1}_{n+1})}{\mu(\underbrace{1, \ldots, 1}_{n})} = 2\gamma \ \text{ for } n \text{ odd}. $$
Then J(1∞) does not exist.
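The oscillation behind Proposition 11 is easy to reproduce numerically: along the all-ones cylinders, expression (11) with a(1,1) = 0 and b(1,1) = γ reduces to µ(1^{n+1}) = γ µ(1^{n−1}), so the successive ratios alternate between 2γ and 1/2. A short sketch (β1 chosen arbitrarily in (0, 1), i.e. θ ≠ π/4):

\begin{verbatim}
beta1 = 0.6                      # sin(2 theta) for some theta != pi/4
gamma = (1 - beta1**2) / 4

m = [0.5, gamma]                 # m[n-1] = mu(1,...,1) with n ones: mu(1) = 1/2, mu(1,1) = gamma
for n in range(2, 12):
    m.append(gamma * m[n - 2])   # mu(1^{n+1}) = gamma * mu(1^{n-1})

ratios = [m[n + 1] / m[n] for n in range(10)]
print(ratios)                    # alternates 2*gamma, 1/2, 2*gamma, ... so J(1^infty) is undefined
\end{verbatim}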
One can consider on Ω = {1, 2}N the metric d(x, y) = α^{−n}, where n is the first index at which x = (x0, x1, ...) and y = (y0, y1, ...) disagree, and α > 1. Let h(µ) be the Kolmogorov-Sinai entropy of µ.
We will use the following definition of Gibbs state:
Definition 12. Suppose f is a continuous potential f : Ω = {1, 2}N → R such that for all x ∈ Ω we have Σ_{σ(y)=x} e^{f(y)} = 1. Then, if ν is a σ-invariant probability such that
$$ \sup_{\rho}\Big\{\, h(\rho) + \int f\, d\rho \,\Big\} = h(\nu) + \int f\, d\nu, $$
we say that ν is a Gibbs state for f.
Such probabilities are also called g-measures. There exists a continuous function f as above which has more than one Gibbs state (see [13] and the references therein).
Theorem 13. µ is not a Gibbs state.
Proof: Suppose by contradiction that µ is Gibbs for a continuous function f. First note that
$$ \sup_{\rho}\Big\{\, h(\rho) + \int f\, d\rho \,\Big\} = 0. $$
Indeed, it is known that h(ρ) = −∫ log Jρ dρ, and then it follows from Lemma 3.3 and Lemma 3.4 in [11] that
$$ \sup_{\rho}\Big\{\, h(\rho) + \int f\, d\rho \,\Big\} \le 0. $$
By hypothesis h(µ) + ∫ f dµ = 0. This shows the claim.
Denote by Lf the Ruelle operator for f (see [11]). Note that Lf(1) = 1. It follows from Proposition 3.4 in [11] that µ is a fixed point of the dual L*f (only the fact that f is continuous is used). This means that for any function φ we have
$$ \int L_f(\varphi)\, d\mu = \int \varphi\, d\mu. $$
Taking φ = I_{(i_0, i_1, \ldots, i_n)} we get that L_f(φ)(x) = I_{(i_1, i_2, \ldots, i_n)}(x)\, e^{f(i_0 x)}. In this way we get
$$ \mu(i_0, i_1, \ldots, i_n) = \int_{(i_1, i_2, \ldots, i_n)} e^{f(i_0 x)}\, d\mu(x), $$
and hence
$$ \frac{\mu(i_0, i_1, \ldots, i_n)}{\mu(i_1, i_2, \ldots, i_n)} = \frac{\int_{(i_1, i_2, \ldots, i_n)} e^{f(i_0 x)}\, d\mu(x)}{\mu(i_1, i_2, \ldots, i_n)}. $$
The right hand side is an average of values of e^f over decreasing cylinder sets. Since e^f is assumed to be a continuous function, this converges, uniformly in the sequence (i0, i1, ..., in, ...), to e^{f(i0, i1, ..., in, ...)}. Then, for any i ∈ {1, 2} and any x = (i, x1, ..., xn, ...), the limit
$$ \lim_{n\to\infty} \frac{\mu(i, x_1, \ldots, x_n)}{\mu(x_1, \ldots, x_n)} = e^{f(i, x_1, \ldots, x_n, \ldots)} $$
exists. But this property is not true for x = 1∞, as we just showed above (see Proposition 11).
We will make some final remarks about J.
Definition 14. Given (k0, k1, k2, ..., kn, ...), we denote by J̃ the continued fraction expression
$$ \tilde{J}(k_0, k_1, k_2, \ldots, k_n, \ldots) = a(k_0, k_1) + b(k_0, k_1)\, \cfrac{1}{a(k_1, k_2) + b(k_1, k_2)\, \cfrac{1}{a(k_2, k_3) + b(k_2, k_3)\, \cfrac{1}{\ \ddots}}}, \qquad (16) $$
if this converges, that is, if the limit of the finite truncations exists.
In this sense J̃ has an expression as a continued fraction (see expression (b) on page 2 of [4] for the general setting). It is easy to see that the function J̃, where defined, is a solution of (13). We conjecture that J = J̃ for µ-almost every point.
7 µ is not a two step Markov Process
Suppose by contradiction that µ is a two step Markov measure. Denote by
π11 , π12 , π21 , π22
the initial probabilities.
The transitions are of the form P(a1 ,a2 ),a3 , a1 , a2 , a3 ∈ {1, 2}. In this case
for a1 , a2 fixed P(a1 ,a2 ),1 + P(a1 ,a2 ),2 = 1.
Therefore,
µ(a1 , a2 , a3 ) = πa1 ,a2 P(a1 ,a2 ),a3 ,
µ(a1 , a2 , a3 , a4 ) = πa1 ,a2 P(a1 ,a2 ),a3 P(a2 ,a3 ),a4 ,
and so on.
Using only the data of the cylinders of order 3, that is, the values µ(a1, a2, a3) obtained before, we get
$$ \pi_{11} = \pi_{11}\, P_{(11)1} + \pi_{11}\, P_{(11)2} = \mu(1, 1, 1) + \mu(1, 1, 2) = \frac{1 - \beta^2}{4}. $$
Moreover,
$$ \frac{1 - \beta^2}{8} = \mu(1, 1, 1) = \pi_{11}\, P_{(11)1} = \frac{1 - \beta^2}{4}\, P_{(11)1}. $$
Therefore P_{(1,1),1} = 1/2. In this way we get
$$ \mu(1, 1, 1, 1) = \pi_{11}\cdot P_{(1,1),1}\cdot P_{(1,1),1} = \frac{1 - \beta^2}{16}. $$
This contradicts Theorem 3, because by the theorem
$$ \mu(1, 1, 1, 1) = \frac{\mu(1, 1, 1)}{2} - \frac{\beta^2}{4}\,\mu(1, 1) + \frac{\beta^2}{16} - \frac{\beta^2}{16} = \frac{1 - \beta^2}{16} - \frac{\beta^2}{4}\,\frac{1 - \beta^2}{4}. $$
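The gap between the two values is easy to see numerically; the sketch below (β arbitrary in (0, 1)) just evaluates both closed forms.

\begin{verbatim}
# The two-step Markov ansatz forces mu(1,1,1,1) = (1 - beta^2)/16, while Theorem 3
# gives (1 - beta^2)/16 - beta^2 (1 - beta^2)/16; they differ whenever beta != 0.
beta = 0.6
markov_value   = (1 - beta**2) / 16
theorem3_value = (1 - beta**2) / 16 - beta**2 * (1 - beta**2) / 16
print(markov_value, theorem3_value)   # different, hence mu is not a two-step Markov measure
\end{verbatim}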
References

[1] H. Araki, Gibbs states of a one dimensional quantum lattice, Comm. Math. Phys. 14, pp. 120-157 (1969)

[2] O. Bratteli and D. Robinson, Operator Algebras and Quantum Statistical Mechanics, Vol. 1, Springer (2010)

[3] R. Ellis, Entropy, Large Deviations, and Statistical Mechanics, Springer Verlag (2005)

[4] H. Wall, Analytic Theory of Continued Fractions, Chelsea Publishing (1967)

[5] D. Evans, Quantum Symmetries on Operator Algebras, Oxford Press (1998)

[6] F. Hiai, M. Mosonyi and T. Ogawa, Large deviations and Chernoff bound for certain correlated states on a spin chain, J. Math. Phys. 48 (2007), no. 12, 123301, 19 pp.

[7] J. L. Lebowitz, M. Lenci and H. Spohn, Large deviations for ideal quantum systems, J. Math. Phys. 41: 1224-1243 (2000)

[8] M. Lenci and L. Rey-Bellet, Large deviations in quantum lattice systems: one-phase region, J. Stat. Phys. 119 (2005), no. 3-4, 715-746

[9] J. Parkinson and D. Farnell, An Introduction to Quantum Spin Systems, Springer Verlag (2010)

[10] Y. Ogata, Large deviations in quantum spin chains, Comm. Math. Phys. 296 (2010), no. 1, 35-68

[11] W. Parry and M. Pollicott, Zeta functions and the periodic orbit structure of hyperbolic dynamics, Asterisque 187-188 (1990)

[12] M. Pollicott and M. Yuri, Dynamical Systems and Ergodic Theory, Academic Press (1998)

[13] A. Quas, Non-ergodicity for C¹ expanding maps and g-measures, Ergodic Theory Dynam. Systems 16, no. 3, 531-543 (1996)

[14] W. De Roeck, C. Maes, K. Netočný and M. Schütz, Locality and nonlocality of classical restrictions of quantum spin systems with applications to quantum large deviations and entanglement, Arxiv (2013)

[15] R. Mañé, Ergodic Theory and Differentiable Dynamics, Springer Verlag (2011)
