UNIVERSITÀ DEGLI STUDI DELL’AQUILA
Facoltà di Scienze Matematiche Fisiche e Naturali
Dottorato di Ricerca in Fisica - XX Ciclo
Neutrino event analysis in the OPERA experiment:
trigger confirmation and vertex location with
nuclear emulsion automatic scanning
Coordinator
Prof. Guido Visconti
Candidate
Natalia Di Marco
Tutor
Prof. Flavio Cavanna
Advisor
Prof. Piero Monacelli
January 2008
To Enrica and Luca
Contents

Introduction

1 Neutrino physics
  1.1 Neutrino masses and mixing
    1.1.1 Dirac-Majorana mass term
    1.1.2 The see-saw mechanism
    1.1.3 Three-neutrino mixing
  1.2 Neutrino oscillation theory
    1.2.1 Phenomenology of three neutrino mixing
    1.2.2 Neutrino oscillations in matter

2 Neutrino oscillation experiments
  2.1 Solar neutrinos
    2.1.1 Solar neutrino experiments
    2.1.2 The KamLAND experiment
  2.2 Atmospheric neutrinos
    2.2.1 Atmospheric neutrino experiments
    2.2.2 Long baseline experiments
    2.2.3 The reactor experiment Chooz
  2.3 Short baseline experiments
  2.4 The global oscillation picture: known unknowns
  2.5 Future prospects

3 The OPERA experiment
  3.1 The CNGS beam
  3.2 The OPERA detector
    3.2.1 Target section
    3.2.2 Muon spectrometers
  3.3 Operation mode
  3.4 Physics performances
    3.4.1 τ detection and signal efficiency
    3.4.2 Background estimation
    3.4.3 Sensitivity to νµ → ντ oscillation
    3.4.4 Search for the sub-leading νµ → νe oscillation
  3.5 PEANUT: Petit Exposure At NeUTrino beamline
    3.5.1 The NuMI beam
    3.5.2 The PEANUT detector

4 Nuclear Emulsions
  4.1 Basic properties
    4.1.1 The latent image formation
    4.1.2 The development process
  4.2 Characteristics of OPERA emulsions
    4.2.1 The refreshing procedure at Tono mine
    4.2.2 Distortions and shrinkage

5 The ESS and the LNGS Scanning Station
  5.1 The Japanese S-UTS
  5.2 The design of the European Scanning System
  5.3 Hardware components
    5.3.1 Mechanics
    5.3.2 Optical system
    5.3.3 The acquisition system
  5.4 The on-line acquisition software
    5.4.1 Image processing
    5.4.2 Tracking
  5.5 The off-line track reconstruction
    5.5.1 Base-track reconstruction
    5.5.2 Plate intercalibration and particle tracking
  5.6 LNGS scanning station and ESS performances

6 Search for neutrino events
  6.1 Analysis scheme
    6.1.1 SFT Predictions
    6.1.2 Doublet analysis
    6.1.3 SFT-CS Matching
    6.1.4 Scan Back and Total Scan
  6.2 Analysis of brick BL056
  6.3 Analysis of brick BL045
  6.4 Vertex reconstruction
  6.5 Data - Monte Carlo comparison
  6.6 Conclusions

Conclusions

Bibliography
Introduction
Over the last decades, research in the field of neutrino physics has significantly improved our knowledge of neutrino properties.
In the Standard Model, neutrinos are classified as massless, left-handed particles. Nevertheless, both the solar neutrino deficit, first observed by
Davis in the late sixties, and the atmospheric neutrino anomaly, observed and confirmed by several experiments using different neutrino
sources such as SK, KamLAND and K2K, now find a natural explanation in neutrino oscillation theory. Consequently, an extension of the Standard
Model to accommodate a neutrino mass term (an indication of physics
beyond the Standard Model) is unavoidable.
Although much experimental evidence supports the
νµ → ντ solution for the atmospheric neutrino oscillation channel, direct evidence of ντ appearance is still missing. The OPERA experiment aims at
measuring ντ appearance in an almost pure νµ beam produced at the CERN SPS,
732 km from the detector. The ντ appearance signal is detected through the
measurement of the decay daughter particles of the τ lepton produced in CC ντ interactions. Since the short-lived τ has an average decay length of ∼1 mm,
micrometric detection resolution is needed. For this purpose the OPERA detector, placed in hall "C" of the Gran Sasso National Underground Laboratories,
requires a large amount of nuclear emulsions, the tracking detector with the highest spatial resolution.
The basic unit of the OPERA detector is based on the concept of the ECC (Emulsion Cloud Chamber), a modular structure composed of passive material (lead)
and nuclear emulsion sheets. Electronic detectors complete the target section of
the apparatus, while muon identification is performed by means of two magnetised
spectrometers. The ECCs are arranged in a compact structure called a "brick",
composed of 57 emulsion films, 300 µm thick, alternated with 56 lead layers, 1 mm thick.
About 154750 bricks will be produced, for a total sensitive mass of ∼1.3 kton.
In order to cope with the analysis of the large number of emulsion sheets
related to neutrino interactions, a new generation of fast automatic optical microscopes was developed. The long R&D carried out by the European component of
the OPERA collaboration gave rise to the European Scanning System (ESS), with
a scanning speed one order of magnitude larger (∼20 cm²/h) than that of the systems employed in past experiments. Several test beam exposures were performed
in order to evaluate the efficiency, purity, instrumental background and speed of the
automatic microscopes.
In view of the first OPERA run in October 2007, and in order to test and optimise the vertex finding and reconstruction chain, a new test beam exposure was
carried out: some OPERA-like bricks were exposed in 2005 to the NuMI νµ beam.
The analysis scheme for the nuclear emulsions exposed in this so-called PEANUT test traces the OPERA one, from the electronic trigger confirmation in
nuclear emulsions up to the localisation of the interaction vertex and the study of
the scattering topology. The work presented in this thesis focuses on the analysis
of CC neutrino interactions recorded in PEANUT bricks, performed at the Gran
Sasso National Laboratory Scanning Station.
The thesis work is organised as follows. A summary of the main neutrino theory properties and a review of the recent experimental results are the subjects of
Chapters 1 and 2, respectively.
In Chapter 3 a detailed description of the OPERA detector, together with the
physics performances of the experiment, is presented. The PEANUT exposure
test is also described.
The properties of the nuclear emulsions employed in OPERA are summarised
in Chapter 4, while in Chapter 5 a detailed description of the European Scanning
System (ESS) is presented, together with a study of the performances of the
automatic microscopes installed at the LNGS Scanning Station.
Finally, in Chapter 6 the methods and the results of the analysis of two PEANUT
bricks are presented.
Chapter 1
Neutrino physics
The study of neutrino properties has represented, for decades now, a fundamental
field of particle physics.
The existence of neutrinos was postulated by W. Pauli in 1930 as an attempt
to explain the continuous spectrum of β-decay [1]:
“I have hit upon a desperate remedy to save the exchange theorem of
statistics and the law of conservation of energy. Namely, the possibility that there could exist in the nuclei electrically neutral particles,
that I wish to call neutrons, which have spin 1/2 and obey the exclusion principle and which further differ from light quanta in that
they do not travel with the velocity of light. The mass of the neutrons
should be of the same order of magnitude as the electron mass and in
any event not larger than 0.01 proton masses . . . ”
The estimate of the cross-section was suggested by the old idea that particles
emitted in β-decay were previously bound in the parent nucleus (as happens in
α-decay), rather than created in the decay process. In a 1934 paper containing
"speculations too remote from reality" (and therefore rejected by the journal Nature), Fermi overcame this misconception and introduced a new energy scale (the
"Fermi" or electroweak scale) in the context of a model capable of predicting neutrino couplings in terms of β-decay lifetimes. Following a joke by Amaldi, the
new particle was renamed neutrino after the true neutron had been identified
by Chadwick in 1932. Neutrinos were finally directly observed by Cowan and
Reines in 1956 in a nuclear reactor experiment, and found to be left-handed in
1958 [2].
Later it was established that there were two different types of neutrino, one
associated with the electron and one with the muon. A muon neutrino beam was
made using π → µνµ decays. The νµ interacted in a target producing muons
and not electrons, νµ + n → µ⁻ + p [3]. These experiments, along with many
others, have experimentally established that νe and νµ are the neutral partners of
the electron and muon, respectively, and helped to shape our understanding of
weak interactions in the Standard Model (SM).
During the sixties and seventies, electron and muon neutrinos of high energy
were used to probe the composition of nucleons. The experiments gave evidence
for quarks and established their properties.
In 1970, Glashow, Iliopoulos and Maiani made the hypothesis of the existence of a second quark family, which should correspond to the second family of
leptons; this hypothesis was confirmed by experiments at the end of 1974.
In 1973 neutral currents (neutrino interactions with matter in which the neutrino is
not transformed into another particle such as a muon or electron) were discovered at
CERN and confirmed at Fermilab.
In 1977 the b quark, a quark of the third quark family, was discovered
at Fermilab, almost at the same time as Martin Perl discovered the τ lepton at
SLAC. The corresponding neutrino ντ was finally observed experimentally only
in 2001 at Fermilab by the DONUT experiment [4].
A complete knowledge of weak interactions came after the discoveries of the
W and Z bosons in 1983; in 1989 the study of the Z boson width made it possible to
show that only three lepton families (and hence three types of neutrinos) exist [5].
Precision confirmations of the validity of the SM at low and high energy were
experimentally given in the 90s at LEP.
The work that led to the first evidence for a neutrino anomaly was done by
Bahcall et al. (who predicted the solar νe flux) and by Davis et al. (who, using a
technique suggested by Pontecorvo, measured since 1968 a νe flux smaller than
the predicted one) [6]. Despite significant efforts, until a few years ago it was not
clear whether there was a solar neutrino problem or a neutrino solar problem. Phenomenologists pointed out a few clean signals possibly produced by oscillations, but
could not tell which ones were large enough to be detected. Only in 2002 were two of
these signals discovered. The SNO solar experiment found evidence
for νµ,τ appearance, and the KamLAND experiment confirmed the solar anomaly
by discovering the disappearance of ν̄e from terrestrial (Japanese) reactors. In the meantime, analyzing atmospheric neutrinos, originally regarded as background for
proton decay searches, the Japanese Super-Kamiokande experiment established in 1998 a second neutrino anomaly, confirmed around 2004 by K2K, the first
long baseline neutrino beam experiment [8].
Since then, the high energy physics community has increasingly turned toward the search
for physics beyond the SM, in particular for a non-zero neutrino mass.
1.1 Neutrino masses and mixing
In the 1960s, on the basis of the knowledge available at that time on the existing
elementary particles and their properties, the Standard Model was formulated.
Following the so-called two-component theory of Landau [12], Lee and Yang
[13], and Salam [14], neutrinos were thought to be massless and were described
by left-handed Weyl spinors. This description was reproduced in the Standard
Model of Glashow [9], Weinberg [10] and Salam [11] by assuming the non-existence
of right-handed neutrino fields, which would be necessary in order to generate Dirac
neutrino masses with the same Higgs mechanism that generates the Dirac masses
of quarks and charged leptons.
As will be discussed in Chapter 2, in recent years neutrino experiments have
shown convincing evidence of the existence of neutrino oscillations, which are a
consequence of neutrino masses and mixing. Therefore, the SM should be revised
in order to take neutrino masses into account.
Considering for simplicity only one neutrino field ν, the standard Higgs mechanism generates the Dirac mass term

L_D = −m_D ν̄ν = −m_D (ν̄_R ν_L + ν̄_L ν_R)    (1.1)

with m_D = yv/√2, where y is a dimensionless Yukawa coupling coefficient and v/√2 is the Vacuum Expectation Value of the Higgs field. ν_L and ν_R are, respectively, the chiral left-handed and right-handed components of the neutrino field.

Unfortunately, the generation of Dirac neutrino masses through the standard Higgs mechanism is not able to explain naturally why neutrinos are more than five orders of magnitude lighter than the electron, which is the lightest of the other elementary particles (the neutrino masses are experimentally constrained below about 1-2 eV). That is, there is no explanation of why the neutrino Yukawa coupling coefficients are more than five orders of magnitude smaller than the Yukawa coupling coefficients of quarks and charged leptons.

In 1937 Majorana [15] discovered that a massive neutral fermion such as a neutrino can be described by a spinor ψ with only two independent components, imposing the so-called Majorana condition

ψ = ψ^c    (1.2)

where ψ^c = C ψ̄^T = C γ⁰ ψ* is the charge-conjugated field and C is the charge-conjugation matrix.

Decomposing Eq. 1.2 into left-handed and right-handed components, ψ_L + ψ_R = ψ^c_L + ψ^c_R, and acting on both members of the equation with the right-handed projector operator P_R, we obtain

ψ_R = ψ^c_L    (1.3)
Thus, the right-handed component ψ_R of the Majorana neutrino field ψ is not independent, but is obtained from the left-handed component ψ_L through charge conjugation, and the Majorana field can be written as

ψ = ψ_L + ψ^c_L    (1.4)

This field depends only on the two independent components of ψ_L. Therefore the Majorana mass term can be written as

L_M = −(1/2) m_M (ψ̄^c_L ψ_L + ψ̄_L ψ^c_L)    (1.5)
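As a quick numerical sanity check of Eqs. 1.2-1.4 (an illustration, not part of the thesis analysis), the construction ψ = ψ_L + ψ^c_L can be verified with explicit gamma matrices. The sketch below assumes the Dirac representation, where charge conjugation reduces to ψ^c = iγ²ψ*:

```python
import numpy as np

# Dirac-representation matrices needed for the check
I2 = np.eye(2)
s2 = np.array([[0, -1j], [1j, 0]])                       # Pauli sigma_2
g2 = np.block([[np.zeros((2, 2)), s2], [-s2, np.zeros((2, 2))]])
g5 = np.block([[np.zeros((2, 2)), I2], [I2, np.zeros((2, 2))]])

PL = 0.5 * (np.eye(4) - g5)   # left-chirality projector
PR = 0.5 * (np.eye(4) + g5)   # right-chirality projector

def charge_conj(psi):
    # in the Dirac representation psi^c = C psibar^T reduces to i g2 psi*
    return 1j * g2 @ np.conj(psi)

# an arbitrary left-handed spinor
psiL = PL @ np.array([1.0 + 2j, 0.5, -1j, 3.0])

# the Majorana field of Eq. 1.4
psi = psiL + charge_conj(psiL)

# it satisfies the Majorana condition psi = psi^c (Eq. 1.2) ...
assert np.allclose(psi, charge_conj(psi))
# ... and its right-handed component is (psi_L)^c (Eq. 1.3)
assert np.allclose(PR @ psi, charge_conj(psiL))
```

The two assertions confirm that the field built from ψ_L alone is self-conjugate and that its right-handed part is entirely determined by ψ_L.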
1.1.1 Dirac-Majorana mass term
In addition to the Dirac mass term (Eq. 1.1), if both the chiral left-handed and right-handed fields exist and are independent, the Majorana mass terms for ν_L and ν_R are also allowed:

L^L_M = −(1/2) m_L (ν̄^c_L ν_L + ν̄_L ν^c_L),    L^R_M = −(1/2) m_R (ν̄^c_R ν_R + ν̄_R ν^c_R)    (1.6)

Then the total Dirac+Majorana mass term can be written as

L^{D+M} = −(1/2) ( ν̄^c_L  ν̄_R ) ( m_L  m_D ; m_D  m_R ) ( ν_L ; ν^c_R ) + H.c.    (1.7)
Since the chiral fields νL and νR are coupled by the Dirac mass term, it is plain that
they do not have a definite mass. In order to find the fields with definite masses
it is necessary to diagonalize the mass matrix in Eq. 1.7. For this purpose, it is
convenient to write the Dirac+Majorana mass term in the matrix form
L^{D+M} = −(1/2) N̄^c_L M N_L + H.c.    (1.8)

where

M = ( m_L  m_D ; m_D  m_R ),    N_L = ( ν_L ; ν^c_R )    (1.9)
The column matrix NL is left-handed, because it contains left-handed fields, and
can be written as
N_L = U n_L,    with    n_L = ( ν_1L ; ν_2L )    (1.10)
where U is the unitary mixing matrix (U† = U⁻¹) and n_L is the column matrix of the left-handed components of the massive neutrino fields. The Dirac+Majorana mass term is diagonalized by requiring that
U^T M U = ( m_1  0 ; 0  m_2 )    (1.11)
with m_k real and positive for k = 1, 2. Let us consider the simplest case of a real mass matrix M. Since the values of m_L and m_R can be chosen real and positive by an appropriate choice of phase of the chiral fields ν_L and ν_R, the mass matrix M is real if m_D is real. In this case, the mixing matrix U can be written as
U = O ρ    (1.12)

where O is an orthogonal matrix and ρ is a diagonal matrix of phases:

O = ( cos θ  sin θ ; −sin θ  cos θ ),    ρ = ( ρ_1  0 ; 0  ρ_2 )    (1.13)

with |ρ_k|² = 1. The orthogonal matrix O is chosen in order to have

O^T M O = ( m′_1  0 ; 0  m′_2 )    (1.14)

leading to

tan 2θ = 2m_D / (m_R − m_L),    m′_{2,1} = (1/2) [ m_L + m_R ± √( (m_L − m_R)² + 4m_D² ) ]    (1.15)

Since m_L and m_R have been chosen positive, m′_2 is always positive, while m′_1 is negative if m_D² > m_L m_R. Therefore, since Eq. 1.11 can be written as

U^T M U = ρ^T O^T M O ρ = ( ρ_1² m′_1  0 ; 0  ρ_2² m′_2 )    (1.16)

we always have ρ_2² = 1, and ρ_1² = 1 if m′_1 ≥ 0 or ρ_1² = −1 if m′_1 < 0.
Therefore, the diagonalized Dirac+Majorana mass term being a sum of Majorana mass terms for the massive Majorana neutrino fields ν_k = ν_kL + ν^c_kL (k = 1, 2),

L^{D+M} = −(1/2) Σ_{k=1,2} m_k ν̄^c_kL ν_kL + H.c.,    (1.17)

the two massive neutrinos are Majorana particles.
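The diagonalization of Eqs. 1.11-1.16 is easy to check numerically. The sketch below uses arbitrary illustrative values of m_L, m_R, m_D (chosen so that m′_1 < 0 and the phase ρ_1² = −1 is needed); it is not tied to any physical fit:

```python
import numpy as np

# Illustrative (arbitrary) mass parameters with mD^2 > mL*mR,
# so m'_1 comes out negative and rho_1^2 = -1 is required.
mL, mR, mD = 0.0, 10.0, 1.0
M = np.array([[mL, mD], [mD, mR]])

# mixing angle and eigenvalues from Eq. 1.15
theta = 0.5 * np.arctan2(2 * mD, mR - mL)
disc = np.sqrt((mL - mR) ** 2 + 4 * mD ** 2)
m2p = 0.5 * (mL + mR + disc)   # m'_2 (positive)
m1p = 0.5 * (mL + mR - disc)   # m'_1 (negative here)

O = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# Eq. 1.14: O^T M O is diagonal with entries m'_1, m'_2
assert np.allclose(O.T @ M @ O, np.diag([m1p, m2p]), atol=1e-12)

# Eq. 1.16: the phase rho_1 = i (rho_1^2 = -1) makes both masses positive
rho = np.diag([1j, 1.0])
U = O @ rho
assert np.allclose(U.T @ M @ U, np.diag([-m1p, m2p]), atol=1e-12)
```

Both assertions pass, confirming that U^T M U gives two real positive masses −m′_1 and m′_2.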
1.1.2 The see-saw mechanism
It can be demonstrated that the Dirac+Majorana mass term leads to maximal mixing (θ = π/4) if m_L = m_R, or to so-called pseudo-Dirac neutrinos if m_L and m_R are much smaller than |m_D| [16]. However, the most interesting case is the so-called "see-saw" mechanism [17], which is obtained considering m_L = 0 and |m_D| ≪ m_R. In this case

m_1 ≃ m_D²/m_R ≪ |m_D|,    m_2 ≃ m_R,    tan θ ≃ m_D/m_R ≪ 1,    ρ_1² = −1    (1.18)
Being suppressed by the small ratio m_D/m_R, it follows from Eq. 1.18 that m_1 is much
smaller than m_D. Since m_2 is of order m_R, a very heavy ν_2 corresponds to a very
light ν_1, as in a see-saw. Since m_D is a Dirac mass, presumably generated with
the standard Higgs mechanism, its value is expected to be of the same order as
the mass of a quark or the charged fermion in the same generation as the neutrino
we are considering. Hence, the see-saw naturally explains the suppression of m_1
with respect to m_D, providing the most plausible explanation of the smallness of
neutrino masses.
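As an order-of-magnitude illustration of Eq. 1.18 (the scales below are assumptions, not measured values): taking m_D at the electroweak scale and m_R near a GUT scale gives a light mass well below the eV bound quoted above:

```python
# see-saw estimate m1 ~ mD^2 / mR (Eq. 1.18), with illustrative scales:
# mD of the order of the top-quark mass, mR near a GUT scale (assumptions)
mD_GeV = 174.0       # assumed Dirac mass, electroweak scale
mR_GeV = 1.0e15      # assumed heavy Majorana scale
m1_eV = (mD_GeV ** 2 / mR_GeV) * 1.0e9   # GeV -> eV

print(f"m1 ~ {m1_eV:.3f} eV")   # prints: m1 ~ 0.030 eV, well below the ~1-2 eV bound
```

With these inputs the suppression m_D/m_R ≈ 2·10⁻¹³ turns an electroweak-scale Dirac mass into a sub-eV neutrino mass.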
The small value of the mixing angle θ in Eq. 1.18 implies that ν_1L ≃ −ν_L and
ν_2L ≃ ν^c_R. This means that the light neutrino ν_1 is the only neutrino participating
in weak interactions, while the heavy neutrino ν_2 is practically decoupled from
interactions with matter.
As it happens in the general case of a Dirac+Majorana mass term, another
important consequence of the see-saw mechanism is that massive neutrinos are
Majorana particles. This is a very important indication that strongly encourages
the search for the Majorana nature of neutrinos (mainly performed through the
search for neutrinoless double-β decay, see section 2.5).
The see-saw mechanism is based on the two assumptions m_L = 0 and |m_D| ≪ m_R.
The first one is a consequence of the gauge symmetries of the Standard Model; in
fact νL belongs to a weak isodoublet of the Standard Model:
L_L = ( ν_L ; l_L )    (1.19)

Since ν_L has third component of the weak isospin I_3 = 1/2, the combination ν̄^c_L ν_L = −ν_L^T C† ν_L in the Majorana mass term in Eq. 1.6 has I_3 = 1 and belongs to a triplet. Since in the Standard Model there is no Higgs triplet that could couple to ν̄^c_L ν_L in order to form a Lagrangian term invariant under an SU(2)_L transformation of the Standard Model gauge group, a Majorana mass term for ν_L is forbidden.
On the other hand, m_D is allowed in the Standard Model, because it is generated through the standard Higgs mechanism, and m_R is also allowed, because ν_R
and ν̄^c_R ν_R are singlets of the Standard Model gauge symmetries. Hence, quite unexpectedly, we have an extended Standard Model with massive neutrinos that are
Majorana particles and in which the smallness of neutrino masses can be naturally
explained through the see-saw mechanism.
The only assumption which remains unexplained in this scenario is the heaviness of m_R with respect to m_D. This assumption cannot be motivated in the framework of the Standard Model, but if we believe that the Standard Model is a theory
that describes the world only at low energies, it is quite natural to expect that the
mass mR is generated at ultra-high energy by the symmetry breaking of the theory
beyond the Standard Model. Hence, it is plausible that the value of m_R is many
orders of magnitude larger than the scale of the electroweak symmetry breaking
and of mD , as required for the working of the see-saw mechanism [18].
1.1.3 Three-neutrino mixing
In the previous sections we considered the existence of only one neutrino, but
it is well known from a large variety of experimental data that there are three
neutrinos that participate in weak interactions: ν_e, ν_µ, ν_τ. In particular, from the
precise measurement of the invisible width of the Z boson, produced by the decays
Z → Σ_α ν_α ν̄_α, we also know that the number of active flavor neutrinos is exactly
three [19], excluding the possibility of the existence of additional active flavor
neutrinos lighter than m_Z/2.
The active flavor neutrinos take part in the charged-current (CC) and neutral-current (NC) weak interaction Lagrangians

L^CC_I = −(g/2√2) j^CC_ρ W^ρ + H.c.,    with    j^CC_ρ = 2 Σ_{α=e,µ,τ} ν̄_αL γ_ρ α_L    (1.20)

L^NC_I = −(g/2cos θ_W) j^NC_ρ Z^ρ,    with    j^NC_ρ = Σ_{α=e,µ,τ} ν̄_αL γ_ρ ν_αL    (1.21)

where j^CC_ρ and j^NC_ρ are, respectively, the charged and neutral leptonic currents, θ_W is the weak mixing angle (sin² θ_W ≃ 0.23) and g = e/sin θ_W (e is the positron electric charge).
The Dirac+Majorana mass term, given by Eq. 1.7, considering three left-handed chiral fields ν_eL, ν_µL, ν_τL (that describe the three active flavor neutrinos) and three corresponding right-handed chiral fields ν_s1R, ν_s2R, ν_s3R (that describe three sterile neutrinos, which do not take part in weak interactions), can be written as

L_D = −Σ_{s,β} ν̄_sR M^D_{sβ} ν_βL + H.c.    (1.22)

L^L_M = −(1/2) Σ_{α,β} ν̄^c_αL M^L_{αβ} ν_βL + H.c.    (1.23)

L^R_M = −(1/2) Σ_{s,s′} ν̄_sR M^R_{ss′} ν^c_s′R + H.c.    (1.24)

where M^D is a complex matrix and M^L, M^R are symmetric complex matrices.
Then, following the procedure illustrated in section 1.1.1, the Dirac+Majorana
mass term can be written as the one in Eq. 1.8, with the column matrix of left-handed fields

N_L = ( ν_L ; ν^c_R ),    with    ν_L = ( ν_eL ; ν_µL ; ν_τL )    and    ν^c_R = ( ν^c_s1R ; ν^c_s2R ; ν^c_s3R )    (1.25)

and the 6 × 6 mass matrix

M = ( M^L  (M^D)^T ; M^D  M^R )    (1.26)
The matrix M is diagonalised by a unitary transformation analogous to the one in Eq. 1.10:

N_L = V n_L    with    n_L = ( ν_1L ; … ; ν_6L )    (1.27)

where V is the unitary 6 × 6 mixing matrix and the ν_kL are the left-handed components of the massive neutrino fields. The mixing matrix V is determined by the diagonalization relation

V^T M V = diag(m_1, …, m_6)    (1.28)

with m_k real and positive for k = 1, …, 6.
Therefore the Dirac+Majorana mass term can be written as

L^{D+M} = −(1/2) Σ_{k=1}^{6} m_k ν̄^c_kL ν_kL + H.c.    (1.29)

which is a sum of Majorana mass terms for the massive Majorana neutrino fields ν_k = ν_kL + ν^c_kL (k = 1, …, 6). Hence, as we have already seen in the case of one neutrino generation, a Dirac+Majorana mass term implies that massive neutrinos are Majorana particles. The mixing relation 1.27 can be written as
ν_αL = Σ_{k=1}^{6} V_αk ν_kL    (α = e, µ, τ),        ν^c_sR = Σ_{k=1}^{6} V_sk ν_kL    (s = s1, s2, s3)    (1.30)
which shows that active and sterile neutrinos are linear combinations of the same
massive neutrino fields. This means that in general active-sterile oscillations are
possible.
The so-called "see-saw" mechanism, which explains the smallness of
the light neutrino masses, can also be applied in this case. Assuming that M^L = 0
and that the eigenvalues of M^R are much larger than those of M^D (as expected if the Majorana mass term 1.24 for the sterile neutrinos is generated at a very high energy scale characteristic of the theory beyond the Standard Model), the mixing matrix V can be written as

V = W U    (1.31)

where both W and U are unitary matrices, and W provides an approximate block-diagonalization of the mass matrix M at leading order in an expansion in powers of (M^R)⁻¹ M^D:

W^T M W ≃ ( M_light  0 ; 0  M_heavy )    (1.32)
It can be demonstrated that the two 3 × 3 mass matrices M_light and M_heavy are given by

M_light ≃ −(M^D)^T (M^R)⁻¹ M^D,    M_heavy ≃ M^R    (1.33)
Therefore, the see-saw mechanism is implemented by the suppression of M_light with respect to M^D by the small factor (M^D)^T (M^R)⁻¹. For the low-energy phenomenology it is sufficient to consider only the light 3 × 3 mass matrix M_light, which is diagonalized by the 3 × 3 upper-left submatrix of U, that we call U, such that

U^T M_light U = diag(m_1, m_2, m_3)    (1.34)
where m_1, m_2, m_3 are the three light neutrino mass eigenvalues. Neglecting the small mixing with the heavy sector, the effective mixing of the active flavor neutrinos relevant for the low-energy phenomenology is given by

ν_αL = Σ_{k=1}^{3} U_αk ν_kL    (α = e, µ, τ)    (1.35)
where ν_1L, ν_2L, ν_3L are the left-handed components of the three light massive Majorana neutrino fields. This scenario, called "three-neutrino mixing", can accommodate the experimental evidence of neutrino oscillations in solar and atmospheric
neutrino experiments.
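The block-diagonalization of Eqs. 1.26 and 1.33 can be verified numerically. The matrices below are arbitrary illustrative choices (M^L = 0, M^R symmetric and much heavier than M^D), not physical inputs:

```python
import numpy as np

# Illustrative Dirac block ~ O(1) and heavy Majorana block ~ O(1e5)
MD = np.array([[1.0, 0.2, 0.1],
               [0.1, 0.9, 0.3],
               [0.2, 0.1, 1.1]])
MR = np.array([[2.0, 0.3, 0.1],
               [0.3, 3.0, 0.2],
               [0.1, 0.2, 4.0]]) * 1e5    # symmetric, heavy
ML = np.zeros((3, 3))                      # see-saw assumption M^L = 0

# full 6x6 Dirac+Majorana mass matrix, Eq. 1.26
M = np.block([[ML, MD.T], [MD, MR]])

# exact masses: absolute eigenvalues of the real symmetric matrix M
masses = np.sort(np.abs(np.linalg.eigvalsh(M)))

# see-saw approximation, Eq. 1.33 (M_light is symmetric)
Mlight = -MD.T @ np.linalg.inv(MR) @ MD
light = np.sort(np.abs(np.linalg.eigvalsh(Mlight)))

# the three lightest exact masses agree with the see-saw prediction
assert np.allclose(masses[:3], light, rtol=1e-3)
```

The three small eigenvalues of the full 6 × 6 matrix reproduce those of −(M^D)^T (M^R)⁻¹ M^D, while the three heavy ones stay at the M^R scale.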
The 3 × 3 unitary mixing matrix U can be parameterized in terms of 3² = 9 parameters, which can be divided into 3 mixing angles and 6 phases. However, only 3 phases are physical. This can be seen by considering the charged-current Lagrangian 1.20¹, which can be written as

L^CC_I = −(g/√2) Σ_{α=e,µ,τ} Σ_{k=1}^{3} ᾱ_L γ^ρ U_αk ν_kL W†_ρ + H.c.    (1.36)

¹Unitary mixing has no effect on the neutral-current weak interaction Lagrangian, which is diagonal in the massive neutrino fields (GIM mechanism).
in terms of the light massive neutrino fields ν_k (k = 1, 2, 3). Three of the six
phases in U can be eliminated by rephasing the charged lepton fields e, µ, τ, whose
phases are arbitrary because all other terms in the Lagrangian are invariant under
such changes of phase ( [20], [21], [22], [23]). On the other hand, the phases of the
Majorana massive neutrino fields cannot be changed, because the Majorana mass
terms in Eq. 1.29 are not invariant² under rephasing of ν_kL. Therefore, the number
of physical phases in the mixing matrix U is three, and it can be shown that two
of these phases can be factorized in a diagonal matrix of phases on the right of U.
These two phases are usually called "Majorana phases", because they appear only
if the massive neutrinos are Majorana particles (if the massive neutrinos are Dirac
particles these two phases can be eliminated by rephasing the massive neutrino
fields, since a Dirac mass term is invariant under rephasing of the fields). The
third phase is usually called the "Dirac phase", because it is present also if the massive
neutrinos are Dirac particles, being analogous to the phase in the quark mixing
matrix. These complex phases in the mixing matrix generate violations of the CP
symmetry ( [24], [25], [26], [27], [28], [16]).
The most common parameterization of the mixing matrix is

U = R_23 W_13 R_12 D(λ_21, λ_31)
  = ( 1  0  0 ; 0  c_23  s_23 ; 0  −s_23  c_23 )
    × ( c_13  0  s_13 e^{−iϕ13} ; 0  1  0 ; −s_13 e^{iϕ13}  0  c_13 )
    × ( c_12  s_12  0 ; −s_12  c_12  0 ; 0  0  1 )
    × diag(1, e^{iλ21}, e^{iλ31})
  = ( c_12 c_13    s_12 c_13    s_13 e^{−iϕ13} ;
      −s_12 c_23 − c_12 s_23 s_13 e^{iϕ13}    c_12 c_23 − s_12 s_23 s_13 e^{iϕ13}    s_23 c_13 ;
      s_12 s_23 − c_12 c_23 s_13 e^{iϕ13}    −c_12 s_23 − s_12 c_23 s_13 e^{iϕ13}    c_23 c_13 )
    × diag(1, e^{iλ21}, e^{iλ31})    (1.37)

with c_ij = cos θ_ij, s_ij = sin θ_ij, where θ_12, θ_23, θ_13 are the three mixing angles, ϕ_13 is the Dirac phase, and λ_21 and λ_31 are the Majorana phases. In Eq. 1.37, R_ij is a real rotation in the i−j plane, W_13 is a complex rotation in the 1−3 plane and D(λ_21, λ_31) is the diagonal matrix with the Majorana phases.
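The parameterization 1.37 can be checked numerically, for instance by verifying that U is unitary and that the product of factors reproduces the explicit first row. The angle and phase values below are arbitrary illustrations, not measured mixing parameters:

```python
import numpy as np

# illustrative angle/phase values (assumptions, not fit results)
th12, th23, th13 = 0.59, 0.79, 0.15
phi13, lam21, lam31 = 1.0, 0.3, 0.7

c12, s12 = np.cos(th12), np.sin(th12)
c23, s23 = np.cos(th23), np.sin(th23)
c13, s13 = np.cos(th13), np.sin(th13)

R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
W13 = np.array([[c13, 0, s13 * np.exp(-1j * phi13)],
                [0, 1, 0],
                [-s13 * np.exp(1j * phi13), 0, c13]])
R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)
D = np.diag([1, np.exp(1j * lam21), np.exp(1j * lam31)])

# Eq. 1.37: U = R23 W13 R12 D
U = R23 @ W13 @ R12 @ D

# U is unitary: U U^dagger = 1
assert np.allclose(U @ U.conj().T, np.eye(3))

# first row of the explicit form: (c12 c13, s12 c13, s13 e^{-i phi13}) D
row0 = np.array([c12 * c13, s12 * c13, s13 * np.exp(-1j * phi13)]) @ D
assert np.allclose(U[0], row0)
```

Note that the Majorana phase matrix D sits on the right, so it rescales columns by pure phases and drops out of any |U_αk|².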
Let us finally remark that, although in the case of Majorana neutrinos there is no difference between neutrinos and antineutrinos and one should only distinguish between states with positive and negative helicity, it is a common convention to call neutrino a particle created together with a positively charged lepton and having almost exactly negative helicity, and antineutrino a particle created together with a negatively charged lepton and having almost exactly positive helicity. This convention follows from the fact that Dirac neutrinos are created together with a positively charged lepton and almost exactly negative helicity, and Dirac antineutrinos are created together with a negatively charged lepton and almost exactly positive helicity.

²In Field Theory, Noether's theorem establishes that invariance of the Lagrangian under a global change of phase of the fields corresponds to the conservation of a quantum number: lepton number L for leptons and baryon number B for quarks. The non-invariance of the Majorana mass term in Eq. 1.29 under rephasing of ν_kL implies the violation of lepton number conservation. Indeed, a Majorana mass term induces |ΔL| = 2 processes such as neutrinoless double-β decay.
1.2 Neutrino oscillation theory
Consider a neutrino beam created in a charged-current interaction together with an antilepton of flavor α. As discussed in section 1.1.3, by definition the neutrino created is called ν_α. In general this is not a physical particle, but rather a superposition of the physical fields ν_k with masses m_k. Therefore, the normalized state describing a neutrino with flavor α is
|ν_α⟩ = Σ_{k=1}^{3} U*_αk |ν_k⟩    (1.38)
This state describes the neutrino at the production point at the production time. The state describing the neutrino at detection, after a time T and a distance L of propagation in vacuum, is obtained by acting on |ν_α⟩ with the space-time translation operator exp(−iÊT + iP̂L), where Ê and P̂ are the energy and momentum operators, respectively. The resulting state is
|ν_α(L, T)⟩ = Σ_{k=1}^{3} U*_αk e^{−iE_k T + ip_k L} |ν_k⟩    (1.39)

where E_k and p_k are, respectively, the energy and momentum of the massive neutrino ν_k.
Inverting Eq. 1.35, it can be demonstrated that, at detection, the state is a
superposition of different neutrino flavors:
\[ |\nu_\alpha(L,T)\rangle = \sum_{\beta=e,\mu,\tau} \left( \sum_{k=1}^{3} U_{\alpha k}^{*}\, e^{-iE_kT + ip_kL}\, U_{\beta k} \right) |\nu_\beta\rangle \qquad (1.40) \]
The coefficient of |νβ⟩ is the amplitude of να → νβ transitions; the transition probability
can therefore be obtained as follows:
\[ P_{\nu_\alpha\to\nu_\beta}(L,T) = |\langle\nu_\beta|\nu_\alpha(L,T)\rangle|^2 = \left| \sum_{k=1}^{3} U_{\alpha k}^{*}\, e^{-iE_kT + ip_kL}\, U_{\beta k} \right|^2 \qquad (1.41) \]
The transition probability 1.41 depends on the space and time of neutrino propagation, but in real experiments the propagation time is not measured. Therefore it
is necessary to connect the propagation time to the propagation distance.
In order to obtain an expression for the transition probability depending only
on the known distance between neutrino source and detector we can use the approximation:
\[ E_k T - p_k L \simeq (E_k - p_k)L = \frac{E_k^2 - p_k^2}{E_k + p_k}\,L = \frac{m_k^2}{E_k + p_k}\,L \simeq \frac{m_k^2}{2E}\,L \qquad (1.42) \]
where E is the neutrino energy in the massless limit. This approximation for the
phase of the neutrino oscillation amplitude is very important, because it shows
that the phase of ultrarelativistic neutrinos depends only on the ratio $m_k^2 L/E$ and
not on the specific values of $E_k$ and $p_k$, which in general depend on the specific
characteristics of the production process. The resulting oscillation probability is,
therefore, valid in general, regardless of the production process.
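The quality of this approximation can be checked numerically. The sketch below (a Python illustration with hypothetical values $m_k = 0.05$ eV and $E = 1$ GeV) compares the exact phase factor $(E_k - p_k)L$ of Eq. 1.42 with its ultrarelativistic approximation $m_k^2 L/2E$, using high-precision arithmetic to avoid cancellation:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50   # enough digits to resolve the tiny difference

# Hypothetical values: m_k = 0.05 eV, E ~ 1 GeV (ultrarelativistic).
m = Decimal("0.05")            # mass of the k-th state, eV
E = Decimal("1e9")             # its energy, eV
p = (E * E - m * m).sqrt()     # its momentum, eV
L = Decimal(1)                 # propagation distance (drops out of the ratio)

exact  = (E - p) * L           # exact phase factor (E_k - p_k) L of eq. 1.42
approx = m * m / (2 * E) * L   # ultrarelativistic approximation m_k^2 L / 2E

rel_err = abs(exact - approx) / exact
print(rel_err)                 # of order m^2 / 4E^2, roughly 6e-22 here
```

The relative error is of order $(m_k/E)^2$, so for any realistic neutrino mass and energy the approximation is essentially exact.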
With the approximation 1.42, the transition probability in space can be written
as

\[ P_{\nu_\alpha\to\nu_\beta}(L) = \Bigl| \sum_{k} U_{\alpha k}^{*}\, e^{-im_k^2 L/2E}\, U_{\beta k} \Bigr|^2 = \sum_{k} |U_{\alpha k}|^2 |U_{\beta k}|^2 + 2\,\mathrm{Re} \sum_{k>j} U_{\alpha k}^{*}\, U_{\beta k}\, U_{\alpha j}\, U_{\beta j}^{*} \exp\Bigl(-i\,\frac{\Delta m_{kj}^2 L}{2E}\Bigr) \qquad (1.43) \]

where $\Delta m_{kj}^2 \equiv m_k^2 - m_j^2$. Equation 1.43 shows that the constants of nature that
determine neutrino oscillations are the elements of the mixing matrix and the differences of the squares of the neutrino masses. Different experiments are characterized by different neutrino energy E and different source-detector distance L.
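Eq. 1.43 is straightforward to evaluate numerically. The sketch below (Python, with the standard parameterization of the mixing matrix and purely illustrative parameter values) computes $P(\nu_\alpha\to\nu_\beta)$ directly from the amplitude and checks that unitarity of U makes the probabilities over all final flavors sum to one:

```python
import cmath
import math

def mixing_matrix(th12, th23, th13, delta=0.0):
    """3x3 mixing matrix in the standard parameterization (cf. eq. 1.37)."""
    s12, c12 = math.sin(th12), math.cos(th12)
    s23, c23 = math.sin(th23), math.cos(th23)
    s13, c13 = math.sin(th13), math.cos(th13)
    e = cmath.exp(1j * delta)
    return [
        [c12 * c13,                        s12 * c13,                        s13 / e],
        [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e,  s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e,  -c12 * s23 - s12 * c23 * s13 * e, c23 * c13],
    ]

def prob(U, a, b, m2, L_km, E_GeV):
    """P(nu_a -> nu_b) from eq. 1.43; m2 = list of squared masses in eV^2."""
    # phase m_k^2 L / 2E in practical units: 2 * 1.267 * m_k^2[eV^2] L[km] / E[GeV]
    amp = sum(U[a][k].conjugate() * U[b][k] *
              cmath.exp(-2j * 1.267 * m2[k] * L_km / E_GeV) for k in range(3))
    return abs(amp) ** 2

# Illustrative oscillation parameters (squared masses in eV^2, delta = 0):
U = mixing_matrix(0.59, math.pi / 4, 0.1)
m2 = [0.0, 8.0e-5, 2.5e-3]

probs = [prob(U, 1, b, m2, 730.0, 1.5) for b in range(3)]  # nu_mu -> nu_e, nu_mu, nu_tau
print(probs, sum(probs))   # the three probabilities sum to 1 (unitarity)
```

Because U is unitary, the sum over β of the probabilities is exactly 1 for any baseline and energy, which is a useful sanity check on any oscillation code.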
In the simplest case of two-neutrino mixing between να, νβ there is only one
squared-mass difference $\Delta m^2 \equiv \Delta m_{21}^2 \equiv m_2^2 - m_1^2$ and the mixing matrix can be
parameterized in terms of one mixing angle:
\[ U = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \qquad (1.44) \]
The resulting transition probability between different flavors can be written as
\[ P_{\nu_\alpha\to\nu_\beta}(L) = \sin^2 2\theta\, \sin^2\!\left(\frac{\Delta m^2 L}{4E}\right) \qquad (1.45) \]
This expression is historically very important, because the data of neutrino oscillation experiments have always been analyzed, as a first approximation, in the
two-neutrino mixing framework using Eq. 1.45. The two-neutrino transition probability can also be written as
\[ P_{\nu_\alpha\to\nu_\beta}(L) = \sin^2 2\theta\, \sin^2\!\left(1.27\, \frac{(\Delta m^2/\mathrm{eV}^2)(L/\mathrm{km})}{(E/\mathrm{GeV})}\right) \qquad (1.46) \]
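The numerical factor 1.27 in Eq. 1.46 is nothing but $\Delta m^2 L/4E$ with $\hbar c$ restored in the stated units. A short Python sketch (the value of $\hbar c$ is the PDG one; the oscillation parameters below are illustrative):

```python
import math

# The phase Dm2 L / 4E with Dm2 in eV^2, L in km and E in GeV:
hbar_c = 197.3269804e-9          # eV * m (PDG value of hbar*c)
coef = 1e3 / (4 * hbar_c * 1e9)  # converts km -> m and GeV -> eV
print(round(coef, 3))            # 1.267, the "1.27" of eq. 1.46

def p_two_flavor(dm2_eV2, L_km, E_GeV, sin2_2theta):
    """Two-flavor transition probability, eq. 1.46."""
    return sin2_2theta * math.sin(coef * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative values at the atmospheric scale, maximal mixing:
print(p_two_flavor(2.5e-3, 730.0, 1.5, 1.0))  # near-maximal for these values
```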
1.2.1 Phenomenology of three neutrino mixing
The neutrino-oscillation explanation of the recent solar and atmospheric neutrino experimental results can be accommodated in the framework of
the three-neutrino mixing illustrated in section 1.1.3.
In the mixing matrix U (Eq. 1.37), the mixing angle θ12 is associated with
the solar neutrino oscillations, and the masses m1 and m2 are separated by the
smaller interval $\Delta m^2_{sun}$ (we shall assume, by convention, that m2 > m1), while m3
is separated from the 1, 2 pair by the larger interval $\Delta m^2_{atm}$ and can be either
lighter or heavier than m1 and m2. The situation where m3 > m2 is called "normal
hierarchy", while the "inverted hierarchy" has m3 < m1.
The transition probability expressed in Eq. 1.43 can be simplified in several cases of practical importance. Using the empirical evidence that $\Delta m^2_{atm} \gg \Delta m^2_{sun}$, and considering distances comparable to the atmospheric neutrino oscillation length, only three parameters are relevant at the lowest order of approximation: the angles θ23, θ13 and $\Delta_{atm} \equiv \Delta m^2_{atm} L/4E_\nu$. However, corrections of
first order in $\Delta_{sun} \equiv \Delta m^2_{sun} L/4E_\nu$ should also be considered, while some of the terms
with $\Delta_{sun}$ are further reduced by the presence of the empirically small $\sin^2 2\theta_{13}$.
Therefore we can write the following transition probabilities:
\[
\begin{aligned}
P(\nu_\mu \to \nu_\tau) \approx{}& \cos^4\theta_{13}\, \sin^2 2\theta_{23}\, \sin^2\Delta_{atm} \\
&- \Delta_{sun}\, \cos^2\theta_{13}\, \sin^2 2\theta_{23}\, (\cos^2\theta_{12} - \sin^2\theta_{13}\sin^2\theta_{12})\, \sin 2\Delta_{atm} \\
&- \Delta_{sun}\, \cos\delta\, \cos\theta_{13}\, \sin 2\theta_{12}\, \sin 2\theta_{13}\, \sin 2\theta_{23}\, \cos 2\theta_{23}\, \sin 2\Delta_{atm}/2 \\
&+ \Delta_{sun}\, \sin\delta\, \cos\theta_{13}\, \sin 2\theta_{12}\, \sin 2\theta_{13}\, \sin 2\theta_{23}\, \sin^2\Delta_{atm}
\end{aligned} \qquad (1.47)
\]

\[
\begin{aligned}
P(\nu_\mu \to \nu_e) \approx{}& \sin^2 2\theta_{13}\, \sin^2\theta_{23}\, \sin^2\Delta_{atm} \\
&- \Delta_{sun}\, \sin^2\theta_{23}\, \sin^2\theta_{12}\, \sin^2 2\theta_{13}\, \sin 2\Delta_{atm} \\
&+ \Delta_{sun}\, \cos\delta\, \cos\theta_{13}\, \sin 2\theta_{13}\, \sin 2\theta_{23}\, \sin 2\theta_{12}\, \sin 2\Delta_{atm}/2 \\
&- \Delta_{sun}\, \sin\delta\, \cos\theta_{13}\, \sin 2\theta_{12}\, \sin 2\theta_{13}\, \sin 2\theta_{23}\, \sin^2\Delta_{atm}
\end{aligned} \qquad (1.48)
\]
where δ is the CP phase.
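Keeping only the leading ($\Delta_{sun}$-independent) terms of Eqs. 1.47-1.48, a quick numerical comparison with illustrative parameter values shows why νµ → ντ dominates while νµ → νe is suppressed by the small $\sin^2 2\theta_{13}$:

```python
import math

# Leading terms of eqs. 1.47-1.48 with illustrative parameter values
# (maximal theta_23, small theta_13, at the first atmospheric maximum):
th23, th13 = math.pi / 4, 0.1
D_atm = math.pi / 2            # Delta_atm = Dm2_atm L / 4E at oscillation maximum

p_mu_tau = math.cos(th13) ** 4 * math.sin(2 * th23) ** 2 * math.sin(D_atm) ** 2
p_mu_e   = math.sin(2 * th13) ** 2 * math.sin(th23) ** 2 * math.sin(D_atm) ** 2

print(p_mu_tau)   # ~0.98: the dominant channel
print(p_mu_e)     # ~0.02: suppressed by the small sin^2(2 theta_13)
```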
1.2.2 Neutrino oscillations in matter
The presence of matter between the neutrino source and the detector can modify
the oscillation pattern of traveling neutrinos, due to their coherent forward scattering from particles in the medium. This is true even if, as in the Standard Model, forward scattering of neutrinos from other particles does not by itself change neutrino flavor. The
flavor change in matter is known as the Mikheyev-Smirnov-Wolfenstein (MSW)
effect [29] and arises from an interplay between flavor-conserving neutrino-matter interactions and neutrino mass and mixing.
In the two-neutrino approximation, a ν can be described by a column vector in
flavor space

\[ \begin{pmatrix} a_e(t) \\ a_\mu(t) \end{pmatrix} \qquad (1.49) \]

where $a_e(t)$ is the amplitude for the neutrino to be a νe at time t, and similarly for
the other flavor.
The neutrino propagation through matter can be described, to a good approximation, via a Schrödinger equation in which the Hamiltonian H is a 2 × 2
matrix that acts on this column vector. If the neutrino is traveling in vacuum, the
mixing is described by the vacuum mixing matrix
\[ U_V = \begin{pmatrix} \cos\theta_V & \sin\theta_V \\ -\sin\theta_V & \cos\theta_V \end{pmatrix} \qquad (1.50) \]
in which θV is the mixing angle in vacuum. Then the Hamiltonian H in vacuum
is
\[ H_V = \frac{\Delta m_V^2}{4E} \begin{pmatrix} -\cos 2\theta_V & \sin 2\theta_V \\ \sin 2\theta_V & \cos 2\theta_V \end{pmatrix} \qquad (1.51) \]
where $\Delta m_V^2 \equiv m_2^2 - m_1^2$ and E is the neutrino energy. In matter, in order to take into
account the contribution of the W-exchange-induced coherent forward scattering of
νe from ambient electrons, an interaction energy V has to be added to the νe → νe element of H.
The V term can be written as
\[ V = \sqrt{2}\, G_F N_e \qquad (1.52) \]
where $G_F$ is the Fermi constant and $N_e$ is the number of electrons per unit volume.
Thus, the 2 × 2 Hamiltonian in matter is

\[ H = \frac{\Delta m_V^2}{4E} \begin{pmatrix} -\cos 2\theta_V & \sin 2\theta_V \\ \sin 2\theta_V & \cos 2\theta_V \end{pmatrix} + \begin{pmatrix} V & 0 \\ 0 & 0 \end{pmatrix} \qquad (1.53) \]
Adding to this H the multiple −V/2 of the identity, we may rewrite it as
\[ H = \frac{\Delta m_M^2}{4E} \begin{pmatrix} -\cos 2\theta_M & \sin 2\theta_M \\ \sin 2\theta_M & \cos 2\theta_M \end{pmatrix} \qquad (1.54) \]
where $\Delta m_M^2 = \Delta m_V^2 \sqrt{\sin^2 2\theta_V + (\cos 2\theta_V - x)^2}$ is the effective mass splitting in
matter, and
\[ \sin^2 2\theta_M = \frac{\sin^2 2\theta_V}{\sin^2 2\theta_V + (\cos 2\theta_V - x)^2} \qquad (1.55) \]
is the effective mixing angle in matter. The factor

\[ x \equiv \frac{V}{\Delta m_V^2/2E} \qquad (1.56) \]
is a dimensionless measure of the relative importance of the interaction with matter in the behavior of the neutrino.
If the matter traversed by the neutrino is of constant density, then the Hamiltonian (Eq. 1.54) is a position-independent constant: its analytical expression is
the same as that of the vacuum Hamiltonian, except that the vacuum mass splitting and
mixing angle are replaced by their values in matter. As a result, the oscillation
probability is given by Eq. 1.45, but with the mass splitting and mixing angle replaced by their values in matter. The latter can differ greatly from
their vacuum counterparts. A striking example is the case where the vacuum mixing angle θV is very small but x ≈ cos 2θV. Then, as we see from Eq. 1.55,
sin² 2θM ≈ 1: the interaction with matter has turned a very small mixing angle into
a maximal one.
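The resonance just described can be checked numerically against Eqs. 1.55-1.56. In the sketch below (illustrative values; V is set by hand to the resonance condition rather than computed from a realistic density profile) a tiny vacuum mixing becomes maximal in matter:

```python
import math

def matter_params(dm2_V, th_V, E, V):
    """Effective splitting and mixing in matter, eqs. 1.54-1.56 (sketch)."""
    x = V / (dm2_V / (2 * E))                        # eq. 1.56
    s2 = math.sin(2 * th_V) ** 2
    c2 = math.cos(2 * th_V)
    dm2_M = dm2_V * math.sqrt(s2 + (c2 - x) ** 2)    # effective splitting
    sin2_2thM = s2 / (s2 + (c2 - x) ** 2)            # eq. 1.55
    return dm2_M, sin2_2thM

# A small vacuum mixing angle, with V tuned by hand to the resonance
# condition x = cos(2 theta_V) (illustrative values, arbitrary units for E):
th_V, dm2_V, E = 0.05, 8e-5, 1.0
V_res = math.cos(2 * th_V) * dm2_V / (2 * E)
dm2_M, s22 = matter_params(dm2_V, th_V, E, V_res)

print(math.sin(2 * th_V) ** 2)   # vacuum sin^2(2 theta) ~ 0.01: tiny mixing
print(s22)                       # 1.0: maximal effective mixing in matter
```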
Chapter 2
Neutrino oscillation experiments
Historically, the disappearance of solar νe gave the first signal of a neutrino anomaly,
which was therefore named the "solar anomaly": as will be discussed in the next section,
the Cl rate measured in the Homestake experiment was found to be about 3 times
lower than the predicted value, suggesting an intriguing discrepancy between a
pioneering experiment and supposedly sufficiently accurate solar models. In 1972
Pontecorvo commented:
“It starts to be really interesting! It would be nice if all this will end
with something unexpected from the point of view of particle physics.
Unfortunately, it will not be easy to demonstrate this, even if nature
works that way”
About 15 years were necessary for a second experiment, and 30 for finally establishing solar oscillations. In the meanwhile, several experiments also demonstrated
an atmospheric neutrino anomaly.
In this chapter we review the main results of the oscillation experiments
connected with the existing model-independent evidence in favor of oscillations of solar and atmospheric neutrinos, and the interpretation of the experimental
data in the framework of three-neutrino mixing.
2.1 Solar neutrinos
Solar neutrinos are produced by nuclear fusion processes in the core of the sun,
which yield exclusively electron neutrinos. The expected spectral composition of
solar neutrinos is shown in Fig. 2.1 as a function of neutrino energy. The low-energy
pp neutrinos are the most abundant and, since they arise from the reactions
that are responsible for most of the energy output of the sun, the predicted flux
of these neutrinos is constrained very precisely (±2%) by the solar luminosity.
Figure 2.1: The predicted unoscillated spectrum dΦ/dE ν of solar neutrinos, together
with the energy thresholds of the experiments performed so far and with the best-fit oscillation survival probability Pee (Eν ) (dashed line).
The higher energy neutrinos are more accessible experimentally, but the fluxes
are known with a larger uncertainty.
2.1.1 Solar neutrino experiments
At the end of the 1960s the radiochemical Homestake experiment [30] began the
observation of solar neutrinos through the charged-current reaction

\[ \nu_e + {}^{37}\mathrm{Cl} \to {}^{37}\mathrm{Ar} + e^- \qquad (2.1) \]

with a threshold $E^{th}_{Cl} = 0.814$ MeV, which allows the observation mainly of ⁷Be and ⁸B
neutrinos produced, respectively, in the reactions $e^- + {}^{7}\mathrm{Be} \to {}^{7}\mathrm{Li} + \nu_e$ (E = 0.8631
MeV) and ${}^{8}\mathrm{B} \to {}^{8}\mathrm{Be}^* + e^+ + \nu_e$ ($E \lesssim 15$ MeV) of the thermonuclear pp cycle that
produces energy in the core of the sun.
The Homestake experiment is called "radiochemical" because the ³⁷Ar atoms
were extracted every ∼35 days, through chemical methods, from the detector tank containing 615 tons of tetrachloroethylene (C₂Cl₄), and counted in small proportional counters which detect the Auger electrons produced in the electron capture
of ³⁷Ar. Like all solar neutrino detectors, the Homestake tank was located deep
underground (1478 m) in order to have a good shielding from cosmic-ray muons.
The Homestake experiment detected solar electron neutrinos for about 30 years,
measuring a flux which is about one third of the one predicted by the Standard
Solar Model (SSM):
\[ \frac{\phi_{Cl}^{Hom}}{\phi_{Cl}^{SSM}} = 0.34 \pm 0.03 \qquad (2.2) \]
This deficit was called ”the solar neutrino problem”.
The next radiochemical experiments, SAGE and Gallex/GNO [31] (located in the Baksan and Gran Sasso underground laboratories, in the Soviet
Union and in Italy respectively), employed the reaction $\nu_e + {}^{71}\mathrm{Ga} \to {}^{71}\mathrm{Ge} + e^-$, which has the
lowest threshold reached so far, $E_{\nu_e} > 0.233$ MeV. As a consequence, more than
half of the νe-induced events are generated by pp neutrinos (Fig. 2.1). Their total
flux can be reliably estimated from the solar luminosity and can be predicted
by solar models with 1% error. After a half-life of 16.5 days, the inverse β-decay
of ⁷¹Ge produces observable Auger electrons and X-rays with the typical L-peak
and K-peak energy distributions, giving two different signals used to infer the flux
of solar νe. The combined results of the three Gallium experiments confirm the
solar neutrino problem:
\[ \frac{\phi_{Ga}}{\phi_{Ga}^{SSM}} = 0.56 \pm 0.03 \qquad (2.3) \]
The solar neutrino anomaly was also confirmed in the late 1980s by the real-time water Cherenkov Kamiokande experiment [32] (3000 tons of water, 1000 m
underground), which observed solar neutrinos through the elastic scattering (ES)
reaction ν + e⁻ → ν + e⁻. This reaction is mainly sensitive to electron neutrinos, whose
cross section is about six times larger than that of muon and tau neutrinos. The experiment is called "real-time" because the Cherenkov light produced
in water by the recoil electron in the ES reaction is observed in real time. The
solar neutrino signal is separated statistically from the background using the fact
that the recoil electron preserves the directionality of the incoming neutrino. The
energy threshold of the Kamiokande experiment was 6.75 MeV, allowing only
the detection of ⁸B neutrinos. After 1995 the Kamiokande experiment was
replaced by the bigger Super-Kamiokande experiment [33] (50 ktons of water,
1000 m underground), which has measured with high accuracy the flux of solar ⁸B
neutrinos with an energy threshold of 4.75 MeV, obtaining:
\[ \frac{\phi_{ES}^{SK}}{\phi_{ES}^{SSM}} = 0.451 \pm 0.005 \qquad (2.4) \]
Although it was difficult to doubt the Standard Solar Model, which was
well tested by helioseismological measurements, and it was difficult to explain
with astrophysical mechanisms the different suppression of solar νe's observed in different experiments, a definitive model-independent proof that the solar neutrino
problem is due to neutrino physics was lacking until the real-time heavy-water
Cherenkov detector SNO [34]. With respect to the Super-Kamiokande experiment, the crucial improvement is that SNO employs 1 kton of heavy water
rather than ordinary water, so that neutrinos can interact in different ways, allowing the νe and νµ,ντ fluxes to be measured separately: in this sense SNO is the first solar neutrino
appearance experiment. SNO observes solar ⁸B neutrinos through the interactions
1. ES: νe,µ,τ + e⁻ → νe,µ,τ + e⁻.
As in SK, νe,µ,τ can be detected (but not distinguished) thanks to CC and
NC scattering on electrons;

2. CC: νe + d → p + p + e⁻.
Only νe can interact through the CC reaction. SNO sees the scattered electron and
measures its direction and energy;

3. NC: ν + d → p + n + ν.
All active neutrinos can break deuterons. The cross section is equal for all
flavours and has an $E_\nu > 2.2$ MeV threshold. About one third of the neutrons
are captured by deuterons and give a 6.25 MeV γ ray: observing the photo-peak, SNO can detect n with ∼15% efficiency. Adding salt allowed tagging of
the n with an enhanced ∼45% efficiency, because neutron capture by ³⁵Cl
produces multiple γ rays.
Several handles allow ES, CC and NC events to be discriminated. ES events
are less interesting and can be subtracted since, unlike CC and NC events,
they are forward peaked. CC/NC discrimination was performed in different
ways before (phase 2) and after (phase 3) adding salt to the heavy water: in phase
2 SNO mostly discriminated CC from NC events through their energy spectra: NC
events produce a γ ray of known average energy (almost always smaller than 9
MeV), while the spectrum of CC events can be computed knowing the spectrum of ⁸B
neutrinos (oscillations only give a minor distortion). Phase 2 SNO data imply
\[ \phi_{\nu_e} = 1.76 \pm 0.06\,(\mathrm{stat}) \pm 0.09\,(\mathrm{syst}) \times 10^6\ \mathrm{cm^{-2}\,s^{-1}} \]
\[ \phi_{\nu_{e,\mu,\tau}} = 5.09 \pm 0.44\,(\mathrm{stat}) \pm 0.46\,(\mathrm{syst}) \times 10^6\ \mathrm{cm^{-2}\,s^{-1}} \qquad (2.5) \]
The total flux agrees with the value predicted by solar models, and the reduced νe
flux gives a 5σ evidence for νe → νµ,τ transitions.
Figure 2.2: Best-fit regions at 90, 99 and 99.73% CL obtained fitting solar ν data (red
dashed contours); reactor ν̄ data, which do not distinguish θ from π/2 − θ (blue dotted contours); all data (shaded region). Dots indicate the best-fit points [8].

After adding salt, SNO could statistically discriminate events from the pattern
of photomultiplier tube hits: NC events produce multiple γ rays and consequently
a more isotropic Cherenkov light than the single e⁻ produced in CC and ES scatterings. Phase 3 SNO data imply
\[ \phi_{\nu_e} = 1.59 \pm 0.08\,(\mathrm{stat}) \pm 0.08\,(\mathrm{syst}) \times 10^6\ \mathrm{cm^{-2}\,s^{-1}} \]
\[ \phi_{\nu_{e,\mu,\tau}} = 5.21 \pm 0.27\,(\mathrm{stat}) \pm 0.38\,(\mathrm{syst}) \times 10^6\ \mathrm{cm^{-2}\,s^{-1}} \qquad (2.6) \]

giving a more accurate and independent measurement of the total νe,µ,τ flux. SNO
finds $\phi_{\nu_e}/\phi_{\nu_{e,\mu,\tau}} < 1/2$, which can be explained by oscillations enhanced by matter effects. In a subsequent phase, by adding ³He, SNO will be able to tag
NC events on an event-by-event basis, detecting neutrons via the scattering
$n + {}^{3}\mathrm{He} \to p + {}^{3}\mathrm{H}$: proportional counters allow both the p and the ³H to be seen. SNO (like SK)
can also search for energy-dependent or time-dependent effects. The day/night
asymmetry of the νe flux is found to be

\[ A^{CC}_{DN} = 7.0 \pm 5.1\% \qquad (2.7) \]
assuming zero day/night asymmetry in the νe,µ,τ flux (the direct measurement of
this asymmetry is consistent with zero up to a ∼ 15% uncertainty).
2.1.2 The Kamland experiment
The result of the global analysis of all solar neutrino data in terms of the simplest
hypothesis of two-neutrino oscillations favors the so-called Large Mixing Angle
Figure 2.3: Left: the $E_{vis} = E_{\bar\nu} + m_e$ energy spectrum measured by KamLAND. Right:
history of reactor experiments and reduction in the reactor ν̄e flux as predicted at 1, 2, 3σ
by a global oscillation fit of solar data.
(LMA) region, as shown in Fig. 2.2. A spectacular proof of the correctness of the
LMA region has been obtained by the KamLAND long-baseline ν̄e disappearance
experiment.
KamLAND [35] is a scintillator detector composed of 1 kton of liquid
scintillator (the number of protons, 8.6 × 10³¹, is about 200 times larger than in
CHOOZ, see section 2.2.3) contained in a spherical balloon surrounded by inert oil that shields external radiation. KamLAND detects ν̄e emitted by terrestrial (mainly Japanese) reactors using the ν̄e + p → e⁺ + n reaction. The detector can see both the positron and the 2.2 MeV γ ray from neutron capture
on a proton. By requiring their delayed coincidence, being located underground
and having achieved sufficient radio-purity, KamLAND reactor data are almost
background-free. As illustrated in Fig. 2.3, KamLAND only analyzes ν̄e events
with $E_{vis} = E_{e^+} + m_e > 2.6$ MeV (i.e. $E_\nu > 3.4$ MeV) in order to avoid a poorly
predictable background of ν̄e generated by radioactive elements inside the Earth.
Above this energy threshold KamLAND should detect, in the absence of oscillations,
about 500 events per kton·yr, depending on the operating conditions of the reactors.
Thanks to previous reactor experiments, the unoscillated ν̄e flux is known with
∼3% uncertainty.
The KamLAND efficiency is about 90%. Using as fiducial volume only an inner fraction of the detector (known with 4.7% uncertainty), the 2004 data showed
258 events instead of the 365 ± 24 expected in the absence of oscillations. This
gives a 4σ evidence for a 68.6 ± 4.4(stat) ± 4.5(syst)% reduction in the ν̄e rate. As
illustrated in Fig. 2.3, this is consistent with expectations from solar data.
More importantly, KamLAND data allow testing whether the ν̄e survival probability depends on the neutrino energy as predicted by oscillations. In fact, KamLAND can measure the positron energy with a resolution $\sigma_E/E = 7.5\%/\sqrt{E/\mathrm{MeV}}$,
and the neutrino energy is then directly determined by $E_{\bar\nu} \approx E_{e^+} + m_n - m_p$. Present
KamLAND spectral data (Fig. 2.3) give a 3σ indication for oscillation dips:
the first one at $E_{vis} \sim 7$ MeV (where statistics is poor) and the second one at
$E_{vis} \sim 4$ MeV. Taking into account the average baseline L ∼ 180 km, this second
dip fixes $\Delta m^2_{sun} \approx 8 \times 10^{-5}$ eV². The global fit of Fig. 2.2 shows that $\Delta m^2_{sun}$ is
presently dominantly fixed by KamLAND data, which however still allow 3 different best-fit regions: the oscillation dip most likely identified as the second one
could instead be the first or the third one. Solar data help in resolving this ambiguity and dominantly fix the solar mixing angle $\theta_{sun}$ (which is more precisely
measured by SNO).
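The quoted value of $\Delta m^2_{sun}$ can be recovered with back-of-the-envelope arithmetic from Eq. 1.46: if the dip at $E_{vis} \sim 4$ MeV is indeed the second one, the oscillation phase there equals 3π/2. A short sketch (illustrative numbers; the precise value obviously requires a full spectral fit):

```python
import math

# Second survival-probability dip: 1.27 * Dm2 * L / E = 3*pi/2 (eq. 1.46).
L_km = 180.0                   # average reactor baseline quoted in the text
E_GeV = (4.0 + 0.8) * 1e-3     # E_nu ~ E_vis + 0.8 MeV at the second dip
dm2_sun = (3 * math.pi / 2) * E_GeV / (1.27 * L_km)
print(dm2_sun)   # ~1e-4 eV^2: the right order of magnitude for Dm2_sun
```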
2.2 Atmospheric neutrinos
Atmospheric neutrinos are generated by collisions of primary cosmic rays, composed mainly of H and He nuclei yielding respectively ∼82% and ∼12% of the nucleons; heavier nuclei constitute the remaining fraction. The production process
can be schematised in 3 steps:
2. Charged pions decay promptly, generating muons and muonic neutrinos:

\[ \pi^+ \to \mu^+ \nu_\mu \qquad \pi^- \to \mu^- \bar\nu_\mu \qquad (2.8) \]

The total flux of νµ, ν̄µ neutrinos is about 0.1 cm⁻² s⁻¹ at $E_\nu \sim$ GeV, with
a ∼20% error (mostly due to the uncertainty in the flux of cosmic rays
and in their hadronic interactions). At higher energy the flux $d\phi/d\ln E_\nu$
approximately decreases as $E_\nu^{-2\pm0.05}$. The few kaons decay like pions, except
that K → π e⁺ νe decays are not entirely negligible.
3. The muons produced by π decays travel a distance

\[ d \approx c\tau_\mu\gamma_\mu \approx 1\,\mathrm{km}\ \frac{E_\mu}{0.3\,\mathrm{GeV}} \qquad (2.9) \]

where $\tau_\mu$ is the muon lifetime and $\gamma_\mu = E_\mu/m_\mu$ is the relativistic dilation
factor. If all muons could decay,

\[ \mu^+ \to e^+ \nu_e \bar\nu_\mu \qquad \mu^- \to e^- \bar\nu_e \nu_\mu \qquad (2.10) \]
Figure 2.4: Flux of atmospheric neutrinos in absence of oscillations.
one would obtain a flux of νµ and νe in the proportion 2 : 1, with comparable energies, above ∼100 MeV. However, muons with energy above a few GeV
typically collide with the Earth before decaying, so that at higher energy the
νµ : νe ratio is larger than 2.
The fluxes predicted by detailed computations are shown in Fig. 2.4, at the SK location,
averaged over zenith angle and ignoring oscillations.
2.2.1 Atmospheric neutrino experiments
The traditional way that has been followed for testing the atmospheric neutrino
flux calculation is to measure the ratio of ratios
\[ R \equiv \frac{[N(\nu_\mu + \bar\nu_\mu)/N(\nu_e + \bar\nu_e)]_{data}}{[N(\nu_\mu + \bar\nu_\mu)/N(\nu_e + \bar\nu_e)]_{theo}} \qquad (2.11) \]
If nothing happens to neutrinos on their way to the detector R should be equal to
one.
Atmospheric neutrinos are observed through high-energy charged-current interactions in which the flavor, direction and energy of the neutrino are strongly
correlated with the measured flavor, direction and energy of the produced charged
lepton.
In 1988 the Kamiokande [36] and IMB [37] experiments measured a ratio of
ratios significantly lower than one.
Also the Soudan-2 experiment [38] observed a value of R significantly lower
than one (R = 0.69 ± 0.12), and the MACRO experiment [39] measured a disappearance of upward-going muons.
Although the data of the above experiments point to an atmospheric
neutrino anomaly, probably due to neutrino oscillations, they are not completely
model-independent.
The breakthrough in atmospheric neutrino research occurred in 1998, when
the Super-Kamiokande Collaboration [40] discovered the up-down asymmetry of
high-energy events generated by atmospheric νµ's, providing a model-independent
proof of atmospheric νµ disappearance. The Super-Kamiokande I data are shown
in Fig. 2.5 (1489 days of data taking, terminated by an accident). The "multi-GeV µ + PC" data sample shows that a neutrino anomaly is present even without
relying on our knowledge of atmospheric neutrino fluxes. The crucial point is that,
since the Earth is to a good approximation a sphere, in the absence of oscillations the neutrino rate would
be up/down symmetric, i.e. it would depend only on |cos θ|. The dN/d cos θν spectrum
would be flat, if one could ignore that horizontal muons have more time to decay
freely before hitting the Earth, while vertical muons cross the atmosphere along
the shortest path. This effect produces the peak at cos θν ∼ 0 visible in Fig. 2.5b.
While the zenith-angle distribution of µ events is clearly asymmetric, e-like
events show no asymmetry. The flux of upward-going muons is about two times
lower than the flux of downward-going muons. Therefore the data can be interpreted
assuming that nothing happens to νe and that νµ oscillate into ντ (or into sterile νs).
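This interpretation can be illustrated with Eq. 1.46 and typical numbers (all illustrative): down-going νµ travel only L ∼ 20 km and barely oscillate, while up-going νµ cross the Earth, L ∼ 12800 km, and their survival probability averages to about one half over the energy spectrum:

```python
import math

def survival(L_km, E_GeV, dm2=2.5e-3, sin2_2th=1.0):
    """Two-flavor nu_mu survival probability, eq. 1.46."""
    return 1.0 - sin2_2th * math.sin(1.27 * dm2 * L_km / E_GeV) ** 2

p_down = survival(20.0, 1.0)   # phase << 1: essentially no disappearance

# Up-going: average over a band of energies, since the phase varies rapidly
energies = [0.5 + 0.05 * i for i in range(31)]   # 0.5 to 2.0 GeV
p_up = sum(survival(12800.0, E) for E in energies) / len(energies)

print(p_down, p_up)   # ~1 for down-going, ~0.5 for up-going
```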
A global fit (performed including the results of the atmospheric neutrino experiments discussed in the next section) gives the best-fit values shown in Fig. 2.6.
2.2.2 Long baseline experiments
Long Baseline experiments employs an artificial neutrino source to study the atmospheric neutrino anomaly.
In the K2K experiment, an artificial long-baseline νµ pulsed beam is sent from
KEK to the SK detector, located L = 250 km away in the Kamioka mine. Since
the beam is pulsed, SK can discriminate atmospheric νµ from KEK νµ, both detected using charged-current scattering on nucleons, as previously discussed. The
neutrino beam was produced by colliding a total of 9 × 10¹⁹ accelerated protons
on a target; a magnetic field is used to collect and focus the resulting π⁺, obtaining
from their decays a 98% pure νµ beam with an average energy of $E_\nu \sim 1.3$ GeV.
The baseline L and the energy $E_\nu$ have been chosen such that

• $\Delta m^2_{atm} L/E_\nu \sim 1$, in order to sit around the first oscillation dip;

• $E_\nu \sim m_p$, in order to have large opening angles between the incoming neutrino and the scattered µ: $\theta_{\mu\nu} \sim 1$.
Figure 2.5: The main SK data: number of e± (red) and of µ± (blue) events as function of
direction of scattered lepton. The horizontal axis is cos θ, the cosine of the zenith angle
ranging between -1 (vertically up-going events) and +1 (vertically down-going events).
The right panel shows high-energy through-going muons, only measured in the up direction. The crosses are the data and their errors, the thin lines are the best-fit oscillation
expectation, and thick lines are the no-oscillation expectation: these are roughly up/down
symmetric. Data in the multi-GeV muon samples are very clearly asymmetric, while data
in the electron samples (in red) are compatible with no oscillations.
Since the direction of the incoming neutrino is known (unlike in the case of atmospheric neutrinos), by measuring $E_\mu$ and $\theta_{\mu\nu}$ SK can reconstruct the neutrino energy

\[ E_\nu = \frac{m_N E_l - m_\mu^2/2}{m_N - E_l + p_l \cos\theta_{\mu\nu}} \qquad (2.12) \]
having assumed that νµn → µp is the dominant reaction. Since the neutrino flux
and the νµN cross section are not precisely computable, small detectors (mainly a
1 kton water Cherenkov detector and fine-grained systems) have been built close to the neutrino source
at KEK, so that oscillations can be seen by comparing SK data with the near detectors. νµ → ντ oscillations at the atmospheric frequency should give an energy-dependent deficit of events in the far detector.
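Eq. 2.12 is straightforward to implement. The sketch below (Python, approximate masses in GeV; $E_l$ and $p_l$ are the muon energy and momentum) shows how the reconstructed $E_\nu$ grows with the scattering angle at fixed muon energy:

```python
import math

m_N, m_mu = 0.939, 0.1057   # nucleon and muon masses, GeV (approximate)

def e_nu(E_l, theta_munu):
    """Reconstructed neutrino energy for quasi-elastic nu_mu n -> mu p, eq. 2.12."""
    p_l = math.sqrt(E_l ** 2 - m_mu ** 2)   # muon momentum
    return (m_N * E_l - m_mu ** 2 / 2) / (m_N - E_l + p_l * math.cos(theta_munu))

print(e_nu(1.0, 0.0))   # forward muon
print(e_nu(1.0, 0.5))   # wider angle -> larger reconstructed E_nu
```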
The 2004 K2K results, shown in Fig. 2.6, are consistent with the expectations based on SK atmospheric data and contain a 4σ indication for oscillations.
Concerning the total rate, one expects in the absence of oscillations 151 ± 12 fully
contained events in the SK fiducial volume (the uncertainty is mainly due to the
far/near extrapolation and to the error on the fiducial volume); SK detected 107
events of this kind. In view of the poorer statistics, the atmospheric mixing angle
is determined much more precisely by SK than by K2K.
The most important K2K result is the energy spectrum: K2K is competitive in
the determination of $\Delta m^2_{atm}$ because, unlike SK, K2K can reconstruct the neutrino
energy, and its data show a hint of the spectral distortion characteristic of oscillations.
As a consequence K2K suggests a few different local best-fit values of $\Delta m^2_{atm}$, and
the global best fit lies in the region suggested by SK (Fig. 2.6b) [41].
Figure 2.6: The left panel shows the K2K data, and the expectation in absence of oscillations. The right panel shows the best-fit ranges at 90% CL from SK, K2K and NuMi.
The running NuMI experiment [42] is similar to K2K: a dominantly νµ pulsed
beam is sent from Fermilab to the MINOS detector, located 735 km away. Thanks
to the longer baseline, the NuMI neutrino beam (see section 3.5.1) has a larger
mean energy (around a few GeV) than K2K. A near detector, functionally identical to the far detector, allows the non-oscillation rate to be predicted. Both detectors
consist of magnetized steel plates alternating with scintillator strips. The far detector
has a 5.4 kton mass and a magnetic field B ∼ 1.2 T: this allows particles to be discriminated
from anti-particles, and NC from CC scatterings. First
results indicate that 92 events have been observed at energies lower than 10 GeV,
showing a 5σ deficit with respect to the 177 ± 11 events expected in the absence of
oscillations. Like K2K data, NuMI data also contain a hint of the spectral distortion predicted by oscillations, and point to a best-fit region similar to the
K2K one (see Fig. 2.6). First NuMI data (combined with the SK measurement of the atmospheric mixing angle) provide the best single measurement
of the mass splitting, $\Delta m^2_{atm} = (2.7 \pm 0.4) \times 10^{-3}$ eV². Future NuMI data should
reduce the uncertainty on $\Delta m^2_{atm}$ by a factor of a few, and achieve a sensitivity to θ23
slightly worse than SK and a sensitivity to νµ → νe slightly better than CHOOZ
(see section 2.2.3).
2.2.3 The reactor experiment Chooz
CHOOZ was a long-baseline reactor ν̄e disappearance experiment [43] which did
not observe any disappearance of electron antineutrinos at a distance of about 1 km
from the source. In spite of this negative result, the CHOOZ experiment is very
Figure 2.7: Left: Allowed region obtained from the analysis of Super-Kamiokande atmospheric and K2K data in terms of νµ → ντ oscillations. Right: CHOOZ exclusion
curves confronted with the Kamiokande allowed regions [18].
important, because it shows that the oscillations of electron neutrinos at the atmospheric ∆m² scale are small or zero. This constraint is particularly important in
the framework of three-neutrino mixing.
The CHOOZ detector consisted of 5 tons of liquid scintillator in which antineutrinos were detected through the inverse β-decay reaction $\bar\nu_e + p \to n + e^+$, with a
threshold $E_{th} = 1.8$ MeV.
The ratio of the observed and expected numbers of events in the CHOOZ experiment is

\[ \frac{N_{observed}^{CHOOZ}}{N_{expected}^{CHOOZ}} = 1.01 \pm 0.04 \qquad (2.13) \]
showing no indication of any electron antineutrino disappearance.
The right panel in Fig. 2.7 shows the CHOOZ exclusion curves confronted
with the Kamiokande allowed regions for νµ → νe transitions. The area on the
right of the exclusion curves is excluded. Since the Kamiokande allowed region lies in the excluded area, the disappearance of muon neutrinos observed in
Kamiokande (and IMB, Super-Kamiokande, Soudan-2 and MACRO) cannot be
due to νµ → νe transitions. Indeed, νµ → νe transitions are also disfavored by
Super-Kamiokande data, which prefer the νµ → ντ channel.
The results of the CHOOZ experiment have been confirmed, albeit with lower
accuracy, by the Palo Verde experiment [44].
Experiment   Channels
Bugey        ν̄e → ν̄e
CDHS         νµ → νµ, ν̄µ → ν̄µ
CCFR         νµ → νµ, νµ → νe, νe → ντ, νe → νe (ν and ν̄)
LSND         νµ → νe, ν̄µ → ν̄e
KARMEN       ν̄µ → ν̄e
NOMAD        νµ → νe, νµ → ντ, νe → ντ
CHORUS       νµ → ντ, νe → ντ
NuTeV        νµ → νe, ν̄µ → ν̄e

Table 2.1: Short-baseline (SBL) experiments whose data give the most stringent constraints on different oscillation channels.
2.3 Short baseline experiments
The SBL experiments whose data give the most stringent constraints on the different oscillation channels are listed in table 2.1. All the SBL experiments in
table 2.1 did not observe any indication of neutrino oscillations, except the LSND
experiment [45], which has presented evidence for νµ → νe oscillations at the
∆m2 ∼ 1 eV2 scale; this result could be accommodated together with solar and
atmospheric neutrino oscillations in the framework of four-neutrino mixing, in
which there are three light active neutrinos and one light sterile neutrino. However, the global fit of recent data in terms of four-neutrino mixing is not good [49],
disfavoring such possibility. Also, a large part of the region in the sin 2 θ-∆m2 plane
allowed by LSND has been excluded by the results of other experiments which are
sensitive to similar values of the neutrino oscillation parameters (KARMEN [46],
CCFR [47], NOMAD [48]).
The recent MiniBooNE experiment, running at Fermilab, was motivated by the
LSND results. MiniBooNE is located 541 m from the front of the target of the
Fermilab neutrino beam. The detector is a spherical tank of inner radius 610 cm
filled with 800 tons of pure mineral oil (CH2 ); charged particles passing through
the oil can emit both directional Cherenkov light and isotropic scintillation light.
An optical barrier separates the detector into two regions, an inner volume with a
radius of 575 cm and an outer volume 35 cm thick. The optical barrier supports
1280 equally-spaced inward-facing photomultiplier tubes (PMTs). An additional
240 tubes are mounted in the outer volume, which acts as a veto shield, detecting
particles entering or leaving the detector.
The MiniBooNE collaboration has recently provided first results [50]: as shown in fig. 2.8, the LSND 90% CL allowed region is excluded at the 90% CL.
Figure 2.8: The MiniBooNE 90% CL limit and sensitivity (dashed curve) for events with 475 < EνQE < 3000 MeV within a two-neutrino oscillation model. Also shown is the limit from the boosted decision tree analysis (thin solid curve) for events with 300 < EνQE < 3000 MeV. The shaded areas show the 90% and 99% CL allowed regions from the LSND experiment [50].
While there is a presently unexplained discrepancy, with data lying above background at low energy, there is excellent agreement between data and prediction in the oscillation analysis region. If the oscillations of neutrinos and antineutrinos are the same, this result excludes two-neutrino appearance-only oscillations as an explanation of the LSND anomaly at 98% CL.
2.4 The global oscillation picture: known unknowns
As discussed in the previous sections, the ∆m² responsible for the atmospheric anomaly is larger than the one responsible for the solar anomaly. Therefore we identify:
|∆m²13| ∼ |∆m²23| = ∆m²atm ∼ (2.5 ± 0.2) × 10⁻³ eV²,
∆m²12 = ∆m²sun ∼ (8.0 ± 0.3) × 10⁻⁵ eV²    (2.14)
A positive ∆m²23 means that the neutrinos separated by the atmospheric mass splitting are heavier than those separated by the solar mass splitting: this is usually named "normal hierarchy". At the moment this cannot be distinguished from the opposite case, usually named "inverted hierarchy".
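Since oscillation data fix only the two splittings in eq. (2.14) and not the absolute scale, the full spectrum follows from the (unknown) lightest mass. The mapping can be sketched in Python (an illustrative sketch; function and variable names are mine, central values from eq. 2.14):

```python
import math

DM2_SUN = 8.0e-5   # eV^2, solar splitting (eq. 2.14)
DM2_ATM = 2.5e-3   # eV^2, atmospheric splitting (eq. 2.14)

def mass_spectrum(m_lightest, hierarchy="normal"):
    """Return (m1, m2, m3) in eV for a given lightest neutrino mass."""
    if hierarchy == "normal":            # m1 < m2 << m3
        m1 = m_lightest
        m2 = math.sqrt(m1**2 + DM2_SUN)
        m3 = math.sqrt(m2**2 + DM2_ATM)
    else:                                # inverted: m3 << m1 < m2
        m3 = m_lightest
        m2 = math.sqrt(m3**2 + DM2_ATM)
        m1 = math.sqrt(m2**2 - DM2_SUN)
    return m1, m2, m3
```

For a vanishing lightest mass the heaviest state is ∼ 0.05 eV in both orderings, which is why oscillations alone cannot resolve the hierarchy.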
Oscillation Parameter        Central Value                       99% CL range
solar mass splitting         ∆m²12 = (8.0 ± 0.3) × 10⁻⁵ eV²      (7.2 ÷ 8.9) × 10⁻⁵ eV²
atmospheric mass splitting   |∆m²23| = (2.5 ± 0.2) × 10⁻³ eV²    (2.1 ÷ 3.1) × 10⁻³ eV²
solar mixing angle           tan²θ12 = 0.45 ± 0.05               30° < θ12 < 38°
atmospheric mixing angle     sin²2θ23 = 1.02 ± 0.04              36° < θ23 < 54°
"CHOOZ" mixing angle         sin²2θ13 = 0 ± 0.05                 θ13 < 10°
Table 2.2: Summary of present information on neutrino masses and mixings from oscillation data.
As explained in section 1.1.3, the neutrino mixing matrix contains 3 mixing angles: two of them (θ23 and θ13) produce oscillations at the larger atmospheric frequency, one of them (θ12) gives rise to oscillations at the smaller solar frequency. Solar data require a large mixing angle. The CHOOZ constraint implies that νe can only be slightly involved in atmospheric oscillations, and SK finds that atmospheric data can be explained by νµ → ντ oscillations with large mixing angle.
These considerations single out the global solution
θ23 = θatm ∼ 45°,  θ12 = θsun ∼ 30°,  θ13 ≲ 10°,  δ unknown    (2.15)
Nothing is known about the CP-violating phase δ.
If θ13 = 0, the solar and atmospheric anomalies depend on different sets of parameters and there is no interplay between them. A θ13 ≠ 0 would affect both solar and atmospheric data. Both data sets provide some upper bound on θ13, preferring θ13 = 0. The strongest bound on θ13 is directly provided by the CHOOZ experiment. In conclusion, a summary of present information on neutrino masses and mixings from oscillation data is given in table 2.2.
2.5 Future prospects
Neutrino beam experiments are considered the main next step of oscillation studies. K2K (in Japan), NuMI (in the USA) and CNGS (in Europe) are the first long-baseline projects. While K2K and NuMI are disappearance experiments, the CNGS project employs a higher Eν, somewhat above the τ production threshold, with the goal of directly confirming the νµ → ντ character of atmospheric oscillations by detecting a few τ appearance events. The OPERA experiment will be described in detail in chapter 3.
However, the evolution of neutrino physics demands new schemes to produce
intense, collimated and pure neutrino beams. New possibilities have been studied
in the last few years: neutrino beams from a Neutrino Factory, Beta-Beams and
Super-Beams. The current Neutrino Factory concept implies the production, collection, and storage of muons to produce very intense beams of muon and electron
neutrinos with equal fluxes through the decays 2.10. Research and development
addressing the feasibility of a Neutrino Factory are currently in progress. The
Beta-Beam concept is based on the acceleration and storage of radioactive ions.
The β-decay of these radioactive ions can produce a very intense beam of electron
neutrinos or antineutrinos with perfectly known energy spectrum.
A next-generation neutrino oscillation experiment using reactor antineutrinos could give important information on the size of the mixing angle θ13. Reactor
experiments can give a clean measure of the mixing angle without ambiguities
associated with the size of the other mixing angles, matter effects, and effects due
to CP violation.
However, the search for |U13| and for CP violation in the lepton sector does not cover all the open questions in today's neutrino physics. Several fundamental characteristics of neutrinos are still unknown: among them, the Dirac or Majorana nature of neutrinos, the absolute scale of neutrino masses, the distinction between the normal and inverted schemes, and the electromagnetic properties of neutrinos. One of the most important questions in today's neutrino physics, i.e. whether neutrinos are massive Majorana particles, can be answered only if neutrinoless double-beta decay is observed.
Chapter 3
The OPERA experiment
The OPERA (Oscillation Project with Emulsion-tRacking Apparatus) [51] experiment is motivated by recent results on the atmospheric neutrino anomaly.
The aim of the experiment is the direct observation of ντ appearance in an almost pure νµ beam (the CNGS neutrino beam). The detector is located at the Gran Sasso Underground Laboratory, at a distance of 732 km from CERN, where a facility producing muon neutrinos has been realised.
Looking for the direct observation of νµ → ντ appearance, OPERA will constitute a milestone in the study of neutrino oscillations.
3.1 The CNGS beam
The CNGS neutrino beam [52] was designed and optimized for the study of
νµ → ντ oscillations in appearance mode, by maximizing the number of charged
current (CC) ντ interactions at the LNGS site.
A 400 GeV proton beam is extracted from the CERN SPS in 10.5 µs short pulses with a design intensity of 2.4 × 10¹³ protons on target (p.o.t.) per pulse. The proton beam is transported through the transfer line TT41 to the CNGS target T40. The target consists of a series of thin graphite rods. Secondary pions and kaons of positive charge produced in the target are focused into a parallel beam by a system of two magnetic lenses, called horn and reflector (Fig. 3.1).
Figure 3.1: Sketch of the CNGS components at the SPS of CERN
A 1000 m long decay-pipe allows the pions and kaons to decay into muon neutrinos and muons. The remaining hadrons (protons, pions, kaons, ...) are absorbed by an iron beam-dump. The signals induced by muons (from π and K meson decays) in two arrays of silicon detectors placed in the hadron stopper are used for the on-line monitoring and the tuning of the beam (steering of the proton beam on target, horn and reflector alignment, etc.). The separation of the two arrays, 67 m of passive material equivalent to 25 m of iron, allows a rough measurement of the muon energy spectrum and of the beam angular distribution. Further downstream the muons are absorbed in the rock while the neutrinos continue to travel toward Gran Sasso.
When the neutrino beam reaches Gran Sasso, 732 km from CERN, its diameter is calculated to be of the order of two kilometres. Due to the Earth's curvature, neutrinos from CERN enter the LNGS halls with an angle of about 3 degrees with respect to the horizontal plane.
The average neutrino energy at the LNGS location is ∼ 17 GeV. The ν̄µ contamination is ∼ 4%, the νe and ν̄e contaminations are lower than 1%, while the number of prompt ντ from Ds decay is negligible. The average L/Eν ratio is 43 km GeV⁻¹.
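The oscillation probability implied by these numbers can be checked with the standard two-flavour appearance formula (an illustrative sketch; the exact expected rate requires averaging over the full beam spectrum):

```python
import math

def p_mu_tau(dm2_ev2, sin2_2theta, l_over_e):
    """Two-flavour appearance probability,
    P = sin^2(2θ) · sin^2(1.267 · Δm²[eV²] · L/E [km/GeV])."""
    return sin2_2theta * math.sin(1.267 * dm2_ev2 * l_over_e) ** 2

# At the CNGS mean L/E of 43 km/GeV, with Δm² = 2.5e-3 eV² and full mixing,
# the probability is only ~2%: OPERA sits far below the oscillation maximum,
# trading oscillation probability for τ detectability at high Eν.
p = p_mu_tau(2.5e-3, 1.0, 43.0)
```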
Assuming a CNGS beam intensity of 4.5 × 10¹⁹ p.o.t. per year and a five year run, about 22000 CC plus neutral current (NC) neutrino events will be collected by OPERA from interactions in the lead-emulsion target. Out of them, 67 (152) CC ντ interactions are expected for ∆m² = 2 × 10⁻³ eV² (3 × 10⁻³ eV²) and sin²2θ23 = 1. Taking into account the overall τ detection efficiency, the experiment should gather 10 ÷ 15 signal events with a background of less than one event.
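The scaling between the two quoted event counts can be verified with a back-of-the-envelope check (not the full simulation used by the collaboration): far below the oscillation maximum, P ≈ (1.267 ∆m² L/E)², so the expected ντ CC rate grows quadratically with ∆m².

```python
# In the small-phase regime the ντ CC count scales as (Δm²)²;
# this reproduces the 67 → 152 events quoted above.
n_at_2e3 = 67                                   # events at Δm² = 2e-3 eV²
n_at_3e3 = n_at_2e3 * (3.0e-3 / 2.0e-3) ** 2    # ≈ 151 events at 3e-3 eV²
```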
3.2 The OPERA detector
The OPERA experiment is designed starting from the ECC concept, which combines in one cell (Fig. 3.3) the high precision tracking capabilities of the nuclear emulsions and the large target mass given by the lead plates. By piling up a series of cells in a sandwich-like structure one obtains a brick (Fig. 3.4), which constitutes the basic detector unit.
Figure 3.2: Schematic drawing of the OPERA experiment
The OPERA apparatus (Fig. 3.2) consists of 2 identical parts called super-modules (SMs). Each super-module consists of ∼ 77375 lead/emulsion bricks arranged in 29 target planes; each brick wall is followed by two scintillator planes with an effective granularity of 2.6 × 2.6 cm².
These planes serve as trigger devices and allow selecting the brick containing a neutrino interaction. A muon spectrometer at the downstream end of each SM makes it possible to measure the muon charge and momentum. A large size anti-coincidence detector placed in front of the first SM makes it possible to veto (or tag) interactions occurring in the material and in the rock upstream of the target.
The construction of the experiment started in Spring 2003. The first instrumented magnet was completed in May 2004 together with the first half of the
target support structure. The second magnet was completed in the beginning of
2005. In Spring 2006 all scintillator planes were installed. The production of the
ECC bricks started in October 2006 with the aim of completing half target for the
high-intensity run of October 2007.
3.2.1 Target section
The target section of each super-module is composed of 29 walls (58 in total); each wall contains a layer called brick wall and a layer called Target Tracker (TT) wall.
A brick wall contains ∼ 2668 bricks, for a total of 154750 bricks in the whole apparatus. The brick support structure is designed so that bricks can be inserted into or extracted from the sides of the walls by an automated manipulator (BMS).
An R&D collaboration between the Fuji Company and the Nagoya University
Figure 3.3: Schematic structure of an ECC cell in the OPERA experiment. The τ
decay kink is reconstructed in space by using four track segments in the emulsion
films.
group allowed the large scale production of the emulsion films needed for the
experiment (more than 9 million individual films) fulfilling the requirements of
uniformity of response and of production, time stability, sensitivity, schedule and
cost [53]. The main peculiarity of the emulsion films used in high energy physics
compared to normal photographic films is the relatively large thickness of the
sensitive layers (∼ 44 µm) placed on both sides of a 205 µm thick plastic base.
A target brick (ECC) consists of 56 lead plates of 1 mm thickness and 57
emulsion films. The plate material is a lead alloy with a small calcium content
to improve its mechanical properties. The transverse dimensions of a brick are
12.7 × 10.2 cm2 and the thickness along the beam direction is 7.5 cm (about 10
radiation lengths). The weight is 8.3 kg.
The dimensions of the bricks are determined by conflicting requirements: the mass of the bricks selected and removed for analysis should represent a small fraction of the total target mass; on the other hand, the brick transverse dimensions should be substantially larger than the uncertainties in the interaction vertex position predicted by the electronic trackers. The brick thickness in units of radiation lengths is large enough to allow electron identification through electromagnetic showering and momentum measurement by multiple Coulomb scattering, following tracks in consecutive cells. An efficient electron identification
requires about 3 ÷ 4 X0 and the multiple scattering measurement requires ∼ 5 X0.
Figure 3.4: Photograph of an OPERA brick delivered by the BAM. The picture shows the CS box attached to the brick at its downstream side.
With a 10 X0 brick thickness, for half of the events such measurements can be done within the same brick where the interaction took place, without the need to follow tracks into downstream bricks.
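The quoted 10 X0 depth follows directly from the plate count (a sketch assuming the standard radiation length of lead, X0 ≈ 5.6 mm; the emulsion films and plastic base contribute comparatively little):

```python
X0_PB_MM = 5.6        # radiation length of lead, mm
N_PLATES = 56         # 1 mm thick lead plates per brick
PLATE_MM = 1.0

# 56 mm of lead ≈ 10 radiation lengths along the beam direction
brick_depth_x0 = N_PLATES * PLATE_MM / X0_PB_MM
```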
The construction of more than 150000 bricks for the neutrino target is accomplished by an automatic machine, the Brick Assembly Machine (BAM), operating underground in order to minimize the number of background tracks from cosmic rays and environmental radiation. Two Brick Manipulating Systems (BMS) on the lateral sides of the detector position the bricks in the target walls and also extract those bricks containing neutrino interactions.
The need for adequate spatial resolution for high brick finding efficiency and for good calorimetric measurement of the events, as well as the requirement of covering large surfaces (∼ 6000 m²), impose strong requirements on the Target Tracker (TT). Therefore, the cost-effective technology of scintillating strips with wavelength-shifting fiber readout was adopted.
The polystyrene scintillator strips are 6.86 m long, 10.6 mm thick and 26.3
mm wide. A groove in the center of the strip houses the 1 mm diameter fiber.
Multi-anode, 64-pixel photomultipliers are placed at both ends of the fibers. A
basic unit of the TT called module consists of 64 strips glued together. One plane
of 4 modules of horizontal strips and one of 4 modules of vertical strips form a
scintillator wall providing X-Y track information (Fig. 3.5).
Figure 3.5: Schematic view of the target tracker wall.
Simulations have shown that a transverse segmentation below the adopted dimensions for the scintillator strips does not significantly improve the physics performance, in particular the brick finding efficiency. The energy resolution is that expected from a calorimetric sampling, ∆E/E ∼ 0.65/√E(GeV) + 0.16. During the run, muons generated in the interactions of CNGS neutrinos in the cavern rock ("rock muons"), cosmic-ray muons, radioactive sources and light injection systems will be used to calibrate the system.
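The sampling parametrization quoted above can be evaluated directly (an illustrative sketch; the function name is mine):

```python
import math

def tt_energy_resolution(e_gev):
    """Fractional hadronic energy resolution of the Target Tracker,
    ΔE/E = 0.65/sqrt(E[GeV]) + 0.16 (stochastic + constant term)."""
    return 0.65 / math.sqrt(e_gev) + 0.16
```

At 1 GeV the resolution is ∼ 81%, improving toward the 16% constant term at high energy, which is adequate for brick finding but not for precision calorimetry.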
The selection of the brick containing the neutrino interaction vertex is performed by combining different algorithms based on the observed transverse and
longitudinal event profiles as well as on the presence of individual reconstructed
tracks. As an illustration, Fig. 3.6 shows the longitudinal profile of a simulated ντ event with a muonic decay in a projected view.
In order to reduce the emulsion scanning load the use of Changeable Sheets,
successfully applied in the CERN CHORUS experiment [54], was extended to
OPERA. CS doublets are attached to the downstream face of each brick and can
be removed without opening the brick (Fig.3.4). Charged particles from a neutrino
interaction in the brick cross the CS and produce a trigger in the TT scintillators.
Following this trigger the brick is extracted and the CS developed and analyzed in
the scanning facility at LNGS (see chapter 5). The information of the CS is used
for a precise prediction of the position of the tracks in the most downstream films
of the brick, hence guiding the so-called scan-back vertex-finding procedure (see
chapter 6).
Figure 3.6: Display of a simulated τ → µ event in the OPERA target. The beam
comes from the left of the figure. The primary vertex occurs in the third brick wall.
Each wall of bricks is followed by a TT plane. These planes are oriented along
the X and Y directions, perpendicular to the beam. The muon track corresponds
to the longest track escaping on the right of the figure.
3.2.2 Muon Spectrometers
Muon spectrometers [55] are conceived to perform muon identification and charge
measurement which are needed for the study of the muonic τ-decay channel and
for the suppression of the background from the decay of charmed particles, featuring the same topology (see section 3.4.1).
Each muon spectrometer (Fig. 3.7) consists of a dipolar magnet made of two
iron arms for a total weight of 990 ton. The measured magnetic field intensity is
1.55 T. The two arms are interleaved with vertical, 8 m long drift-tube planes (PT)
for the precise measurement of the muon-track bending. Planes of Resistive Plate Chambers (RPCs) are inserted between the iron plates of the arms, providing a
coarse tracking inside the magnet, range measurement of the stopping particles
and a calorimetric analysis of hadrons.
In order to measure the muon momenta and determine their sign with high
accuracy, the Precision Tracker (PT) is built of thin walled aluminum tubes with
38 mm outer diameter and 8 m length [56]. Each of the ∼ 10000 tubes has a
central sense wire of 45 µm diameter. They can provide a spatial resolution better
than 300 µm. Each spectrometer is equipped with six fourfold layers of tubes.
RPCs [57] identify penetrating muons and measure their charge and momentum in an independent way with respect to the PT. They consist of electrode plates
made of 2 mm thick plastic laminate of high resistivity painted with graphite. Induced pulses are collected on two pickup strip planes made of copper strips glued
on plastic foils placed on each side of the detector. The number of individual RPCs
Figure 3.7: Top view of an OPERA spectrometer
is 924 for a total detector area of 3080 m2 . The total number of digital channels
is about 25000, one for each of the 2.6 cm (vertical) and 3.5 cm (horizontal) wide
strips.
In order to solve ambiguities in the track spatial-reconstruction each of the two
drift-tube planes of the PT upstream of the dipole magnet is complemented by an
RPC plane with two 42.6◦ crossed strip-layers called XPCs. RPCs and XPCs give
a precise timing signal to the PTs.
Finally, a detector made of glass RPCs is placed in front of the first Super
Module, acting as a veto system for interactions occurring in the upstream rock
[58], [59].
3.3 Operation mode
With the CNGS beam on, OPERA will run in a rather complex mode.
The low data rate from neutrino interaction events is correlated with the CNGS beam spill. The synchronization with the spill is done offline via GPS. The detector remains sensitive during the inter-spill time and runs in a trigger-less mode. Events detected out of the beam spill (cosmic-ray muons,
background from environmental radioactivity, dark counts) are used for monitoring. The global DAQ is built as a standard Ethernet network whose 1147 nodes are the Ethernet Controller Mezzanines plugged on controller boards interfaced to the specific front-end electronics of each sub-detector.
Figure 3.8: τ decay length distribution, obtained assuming the CNGS energy spectrum and oscillation parameters coming from atmospheric neutrino experiments.
A general 10 ns clock synchronized with the local GPS is distributed to all mezzanines in order to attach a time stamp to each data block. The event building is performed by sorting the individual sub-detector data by their time stamps.
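The time-stamp-based event building described above can be sketched as follows (a toy illustration with invented data blocks and an invented coincidence window, not the actual OPERA DAQ code):

```python
from itertools import chain

# Hypothetical data blocks: (timestamp in 10 ns clock ticks, sub-detector, payload)
tt_blocks  = [(100, "TT",  "hitA"), (505, "TT",  "hitC")]
rpc_blocks = [(102, "RPC", "hitB"), (900, "RPC", "hitD")]

def build_events(block_lists, window_ticks=50):
    """Sort all sub-detector blocks by time stamp and group blocks
    closer than `window_ticks` into a single event."""
    blocks = sorted(chain(*block_lists))     # global time ordering
    events, current = [], []
    for b in blocks:
        if current and b[0] - current[-1][0] > window_ticks:
            events.append(current)           # gap too large: close the event
            current = []
        current.append(b)
    if current:
        events.append(current)
    return events

events = build_events([tt_blocks, rpc_blocks])
# hitA and hitB are merged into one event; hitC and hitD stand alone
```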
As already explained above, upon the event trigger the electronic detectors provide the trigger for brick extraction and a probability map on the preceding brick walls. The brick located in the most probable area is extracted by the BMS and the interface CS doublet is detached, exposed to a set of X-ray reference marks, then developed. In the meanwhile the brick is stored underground, waiting for the CS scanning feedback. If the CSs do not confirm the interaction, the brick is equipped with two new emulsion sheets and placed back in the detector.
If the CSs confirm the presence of the interaction, the brick is transported to the outside laboratories and exposed to cosmic rays inside a dedicated "pit", where it is shielded by 40 cm of iron to minimize the electron component at mountain altitude [60]. Penetrating cosmic-ray muons will allow the sheet-to-sheet alignment with sub-micrometric precision (see chapter 5).
Subsequently, the brick is exposed to a set of X-ray reference marks (which provide a reference system during the scanning), disassembled, and each emulsion is labeled with an ID number. The films are developed with an automatic system in parallel processing chains and dispatched to the scanning labs.
The expected number of bricks extracted per running-day with the full target installed and CNGS nominal intensity is about 22. The large emulsion surface to be scanned requires fast automatic microscopes continuously running at a speed of ∼ 20 cm² of film surface per hour. This requirement has been met after R&D studies conducted with two different approaches, by some of the European groups of the Collaboration (ESS) [61] and by the Japanese groups (S-UTS) [62].
The European Scanning System (ESS) will be described in chapter 5.
3.4 Physics performances
3.4.1 τ detection and signal efficiency
The signal of the occurrence of νµ → ντ oscillation is the charged current interaction of ντ's in the detector target (ντ N → τ⁻ X). The reaction is identified by the detection of the short-lived τ lepton. The τ decay channels investigated by OPERA, with their branching ratios (BR), are the electron, muon and hadron channels:
τ⁻ → e⁻ ν̄e ντ       BR 17.8%
τ⁻ → µ⁻ ν̄µ ντ       BR 17.7%
τ⁻ → h⁻ ντ (nπ⁰)    BR 49.5%
For the typical τ energies expected with the CNGS beam one obtains the decay
length distribution shown in Fig. 3.8, with an average decay length of ∼ 450 µm.
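The quoted average decay length is consistent with the relativistic flight length L = γβcτ (a sketch using the PDG τ mass and cτ ≈ 87 µm; the 9 GeV example energy is my assumption of a typical value, not a number from the text):

```python
import math

M_TAU_GEV = 1.777    # τ mass
CTAU_UM = 87.0       # τ cτ in micrometres (PDG value)

def mean_decay_length_um(e_tau_gev):
    """Mean τ flight length, L = γβ·cτ = (p/m)·cτ."""
    p = math.sqrt(e_tau_gev**2 - M_TAU_GEV**2)
    return (p / M_TAU_GEV) * CTAU_UM

# a τ of ~9 GeV flies ~430 µm before decaying, i.e. within one 1 mm lead plate
```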
The τ decays inside the ECCs are classified in two categories: long and short
decays. Short decays correspond to the case where the τ decays in the same
lead plate where the neutrino interaction occurred. The τ candidates are selected
on the basis of the impact parameter (IP) of the τ daughter track with respect
to the interaction vertex (IP > 5-20 µm). This is applied only for the electron
and muon channels, since in the hadronic channel the background coming from
hadron reinteractions dominates. In the long τ decays, the decay occurs in the
first or second downstream lead plate. τ candidates are selected on the basis of the
detection of a reasonably large kink angle between the τ and the daughter track
(θkink > 20 mrad).
The analysis of the τ → e channel benefits from the dense brick structure given
by the cell design, which allows the electron identification through its showering
in the downstream cells.
For the muonic decay mode the presence of the penetrating (often isolated)
muon track crossing the whole detector structure allows an easier vertex finding.
The potential background from large-angle scattering of muons produced in νµ CC interactions can be reduced to a tolerable level by applying cuts on the kink angle and on the muon transverse momentum at the decay vertex.
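The role of the transverse-momentum cut can be illustrated with the simple kink kinematics pT = p·sin θkink (an illustrative sketch; the example momenta are mine, not OPERA cut values):

```python
import math

def daughter_pt_gev(p_daughter_gev, kink_mrad):
    """Transverse momentum of the decay daughter with respect to the
    τ candidate direction: pT = p · sin(θ_kink)."""
    return p_daughter_gev * math.sin(kink_mrad * 1e-3)

# a 5 GeV daughter at a 100 mrad kink carries pT ≈ 0.5 GeV, whereas a muon
# scattered at the 20 mrad threshold gives only ≈ 0.1 GeV: a pT cut thus
# separates genuine τ decays from large-angle muon scattering.
```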
           DIS long   QE long   DIS short   Overall
τ → e      2.7 %      2.3 %     1.3 %       3.4 %
τ → µ      2.4 %      2.5 %     0.7 %       2.8 %
τ → h      2.8 %      3.5 %     —           2.9 %
Total      8.0 %      8.3 %     1.3 %       9.1 %
Table 3.1: τ detection efficiencies (including branching ratios) for the OPERA experiment. Overall efficiencies are weighted sums over DIS and QE events.
Hadronic decay modes have the largest branching ratio but are affected by background due to hadron interactions: one of the primary hadrons can interact in the first lead plates and mimic the decay of the τ. Strong kinematical cuts will be used to reduce this background.
An important tool for background reduction is the determination of the transverse momentum of the daughter particle with respect to the direction of the τ
track candidate. For τ → e decays the ECC technique is well suited to identify
electrons and to determine their energy by measuring the density of track segments associated to their showering in the brick. For charged hadrons and muons,
the momentum is deduced from the measurement of the multiple scattering in the
lead plates. The muon momentum is also measured by the electronic detectors in
a large fraction of cases.
The overall detection efficiency (including branching ratios), estimated by
evaluating the efficiency related to the various steps of data reconstruction (i.e.
trigger and brick finding, vertex finding, decay detection and kinematical analysis) is reported in Tab. 3.1.
3.4.2 Background estimation
The background evaluation has been performed by means of a full simulation
which includes the beam properties, the physics processes and the detector structure. Background sources are:
• Prompt ντ production in the primary proton target and in the beam dump. Prompt ντ originate from the decay of τ's produced in the CNGS target by the decay of Ds mesons. The rate of ντ production from the interaction of 400 GeV/c protons in a Be target and in the downstream beam dump has been evaluated in [63], [64] for the CERN Wide Band Beam. These results have been scaled down according to the features of the CNGS beam and the distance of the experiment from the source. Following the method of [63], we expect O(10⁻⁶) × NCC ντ interactions, where NCC is the total number of νµ CC events collected. If one also takes into account the detection efficiency and the fact that the experiment will integrate O(10⁴) events, the contribution to the background is completely negligible.
• One-prong decay of charmed particles. Charmed particles are produced in CC and NC neutrino interactions through the reactions:
νµ N → c µ X      (3.1)
νµ N → c c̄ µ X    (3.2)
νµ N → c c̄ νµ X   (3.3)
Charmed mesons have masses and lifetimes similar to those of the τ lepton. The above processes may thus constitute a background to the oscillation signal if one fails to detect the primary muon in reaction 3.1, the charm partner in reaction 3.3, or both (charm and muon) in 3.2. The most relevant source is given by single charm production, i.e. the first reaction.
• Background from π0 and prompt electrons.
In addition to charm production, other sources must be considered as possible background for τ → e long decay: kink-like events from scattering of
primary electrons produced in νe CC interactions and pion charge exchange
process (π− p → π0 n) in νµ NC interactions.
• Large angle muon scattering.
Muons produced in νµ CC events and undergoing a scattering in the lead
plate following the vertex plate could mimic a muonic τ decay.
• Hadronic reinteractions. The last source of background, important for all the decay channels, is due to the reinteraction of hadrons produced in νµ NC and in νµ CC interactions without any visible activity at the interaction vertex. Hadronic reinteractions constitute a background for the hadronic channel if they occur in νµ NC events or in νµ CC events with the muon not identified. They also constitute an important source of background for the muonic τ decay channel: in νµ NC events a hadron may be misidentified as a muon, or a genuine µ identified in the electronic detectors may be mismatched to a hadron track in the emulsions. Finally, hadronic reinteractions could also be a source of background for the electronic decay channel. This happens when, in a νµ NC interaction (or in a νµ CC interaction with the muon undetected), a hadron from the primary vertex, after having suffered a large scattering in the first or second downstream lead plate, is misidentified as an electron.
Channel   ∆m² = 2.5 × 10⁻³ eV²   ∆m² = 3.0 × 10⁻³ eV²   Background
τ → e     3.5                    5.0                    0.17
τ → µ     2.9                    4.2                    0.17
τ → h     3.1                    4.4                    0.24
τ → 3h    0.9                    1.3                    0.17
Total     10.4                   15.0                   0.76
Table 3.2: Summary of the expected numbers of τ events per decay channel in 5 years of operation and for different ∆m², assuming the nominal CNGS beam intensity.
The contribution of the above sources to the total background depends on the actual decay channel.
The total background from the sources discussed, assuming the nominal beam intensity, is estimated to be less than ∼ 0.8 events in 5 years of OPERA operation with a fiducial mass of 1.3 kton.
3.4.3 Sensitivity to νµ → ντ oscillation
The OPERA performance after 5 years of running with the nominal beam intensity (4.5 × 10¹⁹ p.o.t./year) is summarized in Tab. 3.2: the number of expected signal events from νµ → ντ oscillations is given as a function of the studied channel for two different values of ∆m² at full mixing. Fig. 3.9 shows the discovery probability as a function of ∆m².
Fig. 3.10 shows the sensitivity of the OPERA experiment to νµ → ντ oscillations together with the region allowed by the past atmospheric neutrino experiments: the OPERA sensitivity completely covers the allowed region.
3.4.4 Search for the sub-leading νµ → νe oscillation
As already discussed in chapter 2, sub-dominant νµ → νe oscillations at the "atmospheric scale" are driven by the mixing angle θ13. The angle is constrained by reactor experiments to be small [43], [44].
Because of its very good electron identification, OPERA is also sensitive to νµ → νe oscillations. Together with the νµ → ντ appearance search, this measurement also makes it possible to perform an analysis of neutrino oscillations with three-flavour mixing.
The analysis is based on a search for an excess of νe CC events at low neutrino
energies. The main background comes from the electron neutrino contamination
Figure 3.9: OPERA discovery probability vs ∆m2 .
Figure 3.10: The OPERA sensitivity to νµ → ντ oscillations.
θ13   sin²2θ13   νe CC signal   τ → e   νµ CC   νµ NC   νe CC beam
9°    0.095      9.3            4.5     1.0     5.2     18
8°    0.076      7.4            4.5     1.0     5.2     18
7°    0.058      5.8            4.6     1.0     5.2     18
5°    0.030      3.0            4.6     1.0     5.2     18
3°    0.011      1.2            4.7     1.0     5.2     18
Table 3.3: Expected number of signal and background events for OPERA assuming 5 years of data taking with the nominal CNGS beam and oscillation parameters ∆m²23 = 2.5 × 10⁻³ eV², θ23 = 45° and θ13 ∈ [3° ÷ 9°].
present in the beam, which is relatively small compared to the dominant νµ component (νe/νµ = 0.8%). The systematic error associated with the νe contamination plays an important role in the oscillation search, the statistical fluctuation (∼ 5%) of this component being the irreducible limiting factor. Other sources of background are the electronic τ decay, the decay of neutral pions produced in NC interactions, and νµ events with the primary muon not identified and with another track mimicking an electron.
The OPERA νµ → νe search looks for neutrino interactions with a candidate electron from the primary vertex with an energy larger than 1 GeV (to cut the soft γ component) and a visible energy smaller than 20 GeV (to reduce the background due to the prompt component). Moreover, a cut on the number of grains associated with the track of the candidate electron is also applied. The latter has a strong impact on the reduction of the background from νµ CC and νµ NC events and allows a softer cut on the electron energy. Finally, a cut on the missing pT of the event is applied (pT < 1.5 GeV) to further reduce the NC contamination and suppress the τ → e background.
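The kinematical part of this selection can be sketched as a simple filter (hypothetical event dictionaries and function name; the grain-count cut is omitted since it acts on emulsion microtrack data):

```python
def passes_nue_selection(evt):
    """Sketch of the kinematical νe CC selection described in the text.
    `evt` is a hypothetical dict of reconstructed quantities in GeV."""
    return (evt["e_electron"] > 1.0      # reject the soft gamma component
            and evt["e_visible"] < 20.0  # reduce the prompt beam-νe background
            and evt["pt_miss"] < 1.5)    # reduce NC and τ→e contamination

candidate = {"e_electron": 4.2, "e_visible": 12.0, "pt_miss": 0.6}
nc_like   = {"e_electron": 0.7, "e_visible": 8.0,  "pt_miss": 2.1}
```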
The expected number of signal and background events for OPERA assuming 5 years of data taking with the nominal CNGS beam is given in Tab. 3.3 for different values of θ13. The 90% confidence level limit for the OPERA experiment is sin²2θ13 < 0.06.
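For comparison with the CHOOZ bound of table 2.2, this limit can be converted into an angle with a one-line check:

```python
import math

# sin²2θ13 < 0.06  ⇒  θ13 < ½·arcsin(√0.06) ≈ 7.1°,
# i.e. somewhat tighter than the θ13 < 10° range quoted in table 2.2.
theta13_limit_deg = math.degrees(0.5 * math.asin(math.sqrt(0.06)))
```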
An increase of sensitivity to νµ → νe oscillation can be obtained by fitting the kinematical distributions of the selected events. By fitting simultaneously the Evis, Ee and pTmiss distributions, the exclusion plot at 90% C.L. shown in Fig. 3.11 is obtained under the assumption θ23 = 45° [66].
The OPERA experiment
Figure 3.11: OPERA sensitivity to the parameter θ13 at 90% C.L. in a three-family
mixing scenario, in the presence of νµ → ντ oscillations with θ23 = 45◦ . The sensitivity
with the higher intensity beam (×1.5) is also given (dotted line).
3.5 PEANUT: Petit Exposure At NeUTrino beamline
The OPERA experiment will start data taking in October 2007. During the last years
a long R&D program has been carried out in order to study several aspects of
emulsion scanning and analysis: in particular, exposure tests to pion beams
have been performed, mainly in order to check the ESS performance in terms of
efficiency and track reconstruction capability (see Chapter 5).
In order to study and validate the OPERA analysis scheme, the collaboration decided to expose in 2005 several OPERA-like bricks to the NuMI neutrino
beam in the MINOS Near Detector (ND) hall at Fermilab, near Chicago. The PEANUT
test was conceived to reproduce the framework of the OPERA detector: electronic detectors provide the hint to search for the track inside an emulsion doublet
(henceforth called the CS doublet); the tracks confirmed in the CS are
followed into the brick up to the neutrino interaction point.
The main purpose of the PEANUT analysis is to test and optimize the vertex
finding chain. At the same time, the study of neutrino interactions (which can be performed in PEANUT thanks to the high number of recorded events) is a subject
Figure 3.12: The three available energy configurations of the NuMI beam.
of interest for the neutrino community in general; in addition, since the mean energy of the NuMI beam is lower than that of the CNGS, this test allows the characterization of
the OPERA performance in the low neutrino energy region. This exposure is also useful
for the MINOS collaboration, as an additional input to understand the initial
composition of the neutrino beam.
3.5.1 The NuMI beam
The primary beam system for the NuMI facility consists of the extraction and
transport of 120 GeV primary protons from the Main Injector to the NuMI target. The extracted protons are focused and bent strongly downward by a string
of quadrupoles and bending magnets so that they enter the pre-target hall. For
conventional construction reasons the pre-target and target halls are located in the
dolomite rock formation, requiring that the initial trajectory be bent down more
than is actually required to aim the neutrino beam to Soudan. Another set of bending
magnets brings the protons to the correct pitch of 58 mrad for a zero targeting
angle beam directed toward the experiment. The size and angular dispersion of
the proton beam are controlled by a final set of quadrupoles and are matched to
the diameter of the production target.
Figure 3.13: The NuMI beam layout.
Protons that strike the target produce short-lived hadrons that are focused
towards the neutrino experimental areas; as the hadrons travel through a long
pipe, a fraction of them decay to neutrinos and muons. The target is sufficiently
long for most of the primary protons from the Main Injector to interact, but
shaped so that secondary interactions of the π’s and K’s are minimized and energy
absorption is low. This is achieved with a target that is long and thin, allowing
secondary particles to escape through the sides.
The focusing is performed by a set of two magnetic horns. These devices are
shaped in such a way that, when a pulse of current passes through them, a magnetic field is generated which focuses particles in the desired momentum range
over a wide range of production angles. The average meson energy is selected by
adjusting the locations of the second horn and target with respect to the first horn.
This allows the energy of the meson beam (and therefore of the neutrino
beam) to be varied during the course of the experiment. Three configurations of target and horn
spacings were defined: the low-energy (LE), medium-energy (ME) and high-energy (HE) beams (Fig. 3.12). The higher-energy beams yield larger numbers of
neutrino interactions, as the cross section is higher. During the PEANUT exposure
the NuMI beam was in the LE configuration.
The particles selected by the focusing horns (mainly pions with a small component of kaons and non-interacting protons) are then allowed to propagate down
an evacuated beam pipe (decay tunnel), 1 m in radius and 675 m long, placed in
a tunnel pointing downward towards Soudan. While traversing the beam pipe, a
fraction of the mesons decay, yielding forward-going neutrinos. A hadron absorber
(consisting of a water-cooled aluminum central core surrounded by steel) is placed
at the end of the decay pipe to remove the residual flux of protons and mesons,
followed by a set of beam monitoring detectors, while the 240 meters of dolomite
rock between the end of the hadron absorber and the near detector are sufficient to
stop all muons coming from the decay pipe (Fig. 3.13).
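As a rough cross-check of the decay-pipe design, one can estimate the fraction of pions that decay before reaching the absorber. This back-of-the-envelope sketch uses the PDG values for the charged-pion mass and cτ; the 10 GeV momentum is an illustrative assumption, not a quoted NuMI beam parameter.

```python
import math

# Fraction of pions decaying in the 675 m NuMI decay pipe.
# c*tau(pi+) = 7.80 m and m(pi+) = 0.1396 GeV are PDG values.
def decay_fraction(p_gev, pipe_m=675.0, ctau_m=7.80, mass_gev=0.1396):
    gamma_beta = p_gev / mass_gev        # p/(m c) for a relativistic pion
    decay_length = gamma_beta * ctau_m   # lab-frame mean decay length
    return 1.0 - math.exp(-pipe_m / decay_length)

print(f"{decay_fraction(10.0):.0%} of 10 GeV pions decay in the pipe")
```

The mean decay length grows linearly with momentum, which is why only "a fraction of the mesons decay" for the stiffer part of the spectrum.
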
3.5.2 The PEANUT detector
The PEANUT detector has been conceived to reproduce, on a reduced scale, the
OPERA detector. The apparatus, shown in Fig. 3.14, consists of 4 structures
called ”mini-walls”, each housing a matrix of 3×4 OPERA-like bricks, for a total
of 48 bricks.
The first, second and third mini-walls are followed by 2 planes of scintillating fiber
trackers (SFT), while the fourth wall is followed by 4 SFT planes. Each SFT plane
is 0.56×0.56 m2 and is composed of horizontal and vertical 500 µm diameter
fibers providing x-y track information (see Fig. 3.14). Between the second and third
walls and after the last one, a plane of 45◦ oriented fibers (called the U and V plane,
respectively) completes the electronic target section. The SFT planes (the same
used in the DONUT experiment [4]) are read out by image intensifiers and
CCD cameras. A chain of high voltages ranging from 9 to 20 kV is applied continuously to the
image intensifiers. The data from the CCD cameras are read out at the
end of the beam spill by a local PC and stored on disk.
The bricks used for the PEANUT exposure were made of OPERA Tono-refreshed
emulsion films, sent by plane from Japan to Chicago. The bricks were assembled
at Fermilab using a manual version of the BAM (Brick Assembly Machine).
Unavoidably, during the flight the films accumulate cosmic ray radiation: for
this reason the emulsions were shipped in vacuum-packed boxes in an order that
henceforth will be called the ”transportation order” and assembled at Fermilab in the
opposite order (the ”assembly or exposure order”, see Fig. 3.15). This makes it possible to tag
tracks recorded during the flight, which constitute an undesirable background for
the neutrino event analysis.
As shown in Fig. 3.15, PEANUT bricks are composed of 57 emulsion sheets:
55 of them are interleaved with passive material plates, while the first two (located in the downstream direction) are placed in contact. This configuration
reproduces the Changeable Sheet doublet (CS) of the real OPERA bricks.
A total of 160 bricks have been produced: for 135 of them lead plates have
been used as passive material, in order to achieve the best performance for the
momentum measurement obtained through multiple scattering and for the electron
identification, and to reproduce the configuration of the OPERA bricks. However, since the
MINOS detector target material is iron, 35 iron ECC bricks have also been
realised, to check the number of shower tracks and emission angles for neutrino-iron
interactions. With iron, the momentum measurement accuracy and the electron identification efficiency decrease, due to the radiation length being about 3 times longer than in lead.
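The impact of the longer radiation length can be quantified with the standard Highland formula for the RMS multiple-scattering angle. This is a hedged sketch: the 1 mm plate thickness and the 1 GeV/c momentum are illustrative assumptions, while the X0 values for lead (0.56 cm) and iron (1.76 cm) are the usual PDG numbers.

```python
import math

# Highland formula: theta0 = (13.6 MeV / (beta c p)) * sqrt(x/X0)
#                            * (1 + 0.038 ln(x/X0)),
# for a singly charged, relativistic (beta ~ 1) particle.
def theta0_mrad(p_gev, x_cm, X0_cm):
    t = x_cm / X0_cm  # thickness in radiation lengths
    return (13.6e-3 / p_gev) * math.sqrt(t) * (1 + 0.038 * math.log(t)) * 1e3

# RMS scattering angle of a 1 GeV/c track in a 1 mm plate:
print(f"lead: {theta0_mrad(1.0, 0.1, 0.56):.1f} mrad")
print(f"iron: {theta0_mrad(1.0, 0.1, 1.76):.1f} mrad")
```

A lead plate scatters roughly twice as strongly as an iron one of the same thickness, which is why the multiple-scattering momentum measurement loses accuracy with the iron bricks.
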
The target (Fig. 3.16) was positioned in the MINOS Near Detector hall (Fig.
3.17) and the data taking started in September 2005. Unlike in the OPERA experiment, the bricks are not removed after a trigger from the electronic detectors, but
left in the apparatus for a period ranging from a few to 100 days of beam exposure. They are then exposed to cosmic rays in order to perform the plate-to-plate
alignment, unpacked, marked with optical reference marks and developed using
the standard procedure. Finally, the emulsions are shared among the laboratories of
the collaboration for the analysis.
Figure 3.14: Layout of the PEANUT detector.
Figure 3.15: Left: an illustration of the film order during transportation.
Right: the piling order.
3.5 PEANUT: Petit Exposure At NeUTrino beamline
Figure 3.16: A picture of the PEANUT apparatus.
Figure 3.17: The PEANUT detector in the MINOS Near Hall at Fermilab.
Chapter 4
Nuclear Emulsions
In 1896, H. Becquerel observed for the first time the blackening of photographic plates
accidentally left in contact with uranium salts. This event can be considered the
starting point of the use of photographic emulsions in particle physics.
Since then, the development of the nuclear emulsion method for recording
high-energy charged particles involved many physicists worldwide during the
1930s: in 1937 Blau and Wambacher reported the first observation of an interaction in emulsions exposed to cosmic rays [67], and in 1947, thanks to the new
”concentrated” emulsions produced by the industrial chemist C. Waller, the pion
was discovered by observing the π → µ → e decay chain [68].
In the 1960s accelerators started to replace cosmic rays as sources of high-energy particles, and fast-response detectors, such as counters and spark chambers, started to replace cloud chambers and nuclear emulsions.
Nevertheless, nuclear emulsions were not abandoned, because of their unique peculiarities: they are very sensitive and allow particle tracks to be resolved to less than
1 µm, making them the ideal device to detect short-lived particles.
In fact, nuclear emulsions are still successfully used nowadays, especially in
neutrino experiments: they were employed in experiments like WA17 at CERN
[69], aiming at the search for charmed particles in neutrino charged current interactions, or E531 at Fermilab [70], aiming at the measurement of charmed particle
lifetimes in neutrino interactions, or WA75 at CERN [71], searching for beauty
particle production by a 350 GeV/c π− beam.
Furthermore, the use of nuclear emulsions allowed the first (and still unique)
detection of ντ neutrinos by the DONUT collaboration [4].
The technique of nuclear emulsions has found a large scale application in the
target of the CHORUS experiment [54], in which the automatic scanning of a large
sample of events has first been applied. This technique has been further improved
in OPERA (see Chapter 5) leading to the much larger scale of the OPERA target.
4.1 Basic properties
Emulsions used in particle physics are usually a mixture of silver halide microcrystals (typically bromides, AgBr) and a gelatin consisting mainly of a variable
quantity of water, a small amount of glycerol and possibly other organic substances.
The energy released by ionizing particles to the crystals produces a latent image which is almost stable in time. After development, followed by fixing and
washing to remove the undeveloped crystals, the gelatin becomes transparent and
with a microscope the paths of charged particles that penetrated the emulsion are
visible as trails of minute dark silver grains.
Nuclear emulsions are very similar to standard photographic emulsions, but
with some peculiar features: in nuclear emulsions silver halide crystals are very
uniform in size and sensitivity; the silver to gelatin ratio is much higher than in a
conventional emulsion; there are very few crystals that may be developed without
exposure to a charged particle; furthermore the film thickness is larger.
The sensitivity of the emulsions depends on the size of the silver halide crystals; large grains are more sensitive to ionizing radiation than small ones. Usually, a low sensitivity emulsion is used to detect low-energy particles, as there is
plenty of energy available to free electrons. However, a more sensitive emulsion
is required to detect high-energy particles as they deposit little energy along their
tracks.
The size of the microcrystals in the OPERA emulsions is ∼ 0.2 µm and is
well controlled by the current industrial technologies developed for photographic
films. A minimum ionizing particle (mip) yields ' 30 grains per 100 µm.
4.1.1 The latent image formation
As already seen above, the property of the crystal that makes it developable is
called the latent image: silver halide crystals of an emulsion absorb energy when
excited by light or charged particles. This absorption sensitizes the crystals in such
a way that, under the action of a chemical reducing agent, the conversion of the halide
to metallic silver proceeds more rapidly than in a non-irradiated crystal.
Most current theories of latent-image formation are modifications of the mechanism proposed by R. W. Gurney and N. F. Mott in 1938.
When solid silver bromide is formed, as in the preparation of a photographic
emulsion, each silver atom gives up one orbital electron to a bromine atom. The
silver atoms, lacking one negative charge, have an effective positive charge and are
known as silver ions (Ag+ ). The bromine atoms, on the other hand, have gained
an electron - a negative charge - and have become bromine ions (Br − ). A crystal
of silver bromide is a regular cubical array of silver and bromide ions.
A crystal of silver bromide in a photographic emulsion is not perfect; a number
of imperfections are always present. First, within the crystal, there are silver ions
that do not occupy the ”lattice position”, but rather are in the spaces between.
These are known as interstitial silver ions. The number of the interstitial silver
ions is small compared to the total number of silver ions in the crystal. In addition,
there are distortions of the uniform crystal structure. These may be ”foreign”
molecules, within or on the crystal, produced by reactions with the components of
the gelatin, or distortions or dislocations of the regular array of ions. These may
be classed together and called ”latent-image sites”.
The Gurney-Mott theory envisions latent-image formation as a two-stage process. When a photon of light of energy greater than a certain minimum value is
absorbed in a silver bromide crystal, it releases an electron from a bromide ion
(Br− ). The ion, having lost its excess negative charge, is changed to a bromine
atom. The liberated electron is free to wander about the crystal, where it may
encounter a latent image site and be ”trapped” there, giving the latent-image site
a negative electrical charge. This first stage of latent-image formation, involving transfer of electrical charges by means of moving electrons, is the electronic
conduction stage.
The negatively charged trap can then attract an interstitial silver ion because
the silver ion is charged positively. When such an interstitial ion reaches a negatively charged trap, its charge is neutralized, an atom of silver is deposited at the
trap, and the trap is ”reset”. This second stage of the Gurney-Mott mechanism
is called the ionic conduction stage, since electrical charge is transferred through
the crystal by the movement of ions. The whole cycle can recur several times at
a single trap, each cycle involving absorption of one photon and addition of one
silver atom to the aggregate.
In this way, the absorption of energy in an excited crystal of silver halide leads
to the concentration of a few silver atoms into an aggregate which can act as a
development center, i.e. a latent image.
The formation and preservation of the latent image depend on external conditions such as temperature, humidity and pressure. As temperature and humidity
increase, the sensitivity decreases and the latent image is less stable (fading).
Fading can be artificially induced in order to erase the image of unwanted tracks
accumulated before the exposure (refreshing). Moreover, under particular conditions it
is possible to refresh emulsions without spoiling their sensitivity.
4.1.2 The development process
The development procedure is a many-step chemical treatment in a darkroom,
allowing the reduction of silver ions to metallic silver: in this way the latent image
in an emulsion is made visible.
The reducing solution, the developer, is a chemical agent that completely reduces those crystals containing a latent image center, while leaving all the others unchanged.
An important parameter of the development process is the developing time.
It should be long enough for those crystals with a latent image center to be reduced completely, but not so long that unexposed crystals are developed. In fact,
a certain number of crystals will be developed even if they do not contain a development center. These grains, when developed, constitute what is known as fog or
background.
Developing products may be divided into two main groups, depending on the
source of silver ions for reduction. The first group is known as physical developing agents: in these products silver ions are provided from the solution in the
form of a soluble complex; they are deposited on the latent image center and are
reduced to metallic silver. This produces spherical grains, the precise shape of
which is affected by the pH of the solution. The second group are the chemical
developing agents: in this case, silver ions are provided from the silver halide
crystal containing the latent image center. The action of a chemical developer
produces a mass of filaments bearing little resemblance to the original crystal. If
silver halide solvents such as sulphite are present in a chemical developer, an opportunity exists for some physical development to occur. In this case, the filaments
in the processed plate will be shorter and thicker.
During the development process, to control the development time exactly, one
has to take into account two important parameters: temperature and pH. Chemical
development, like many other chemical reactions, depends on temperature. In
general, development occurs more rapidly at higher temperatures, while below
10◦ C it virtually stops. For this reason it is important to keep the
processing temperature constant during development, otherwise it is not possible to assess the correct development time. Furthermore, the developer maintains
a given activity only within a narrow pH range. In general, the less alkaline the environment, the less active the developer; for this reason, an acid stop bath at the end of the
development is often recommended. This immediately stops the
process and precisely controls the development time.
The developers usually used to process nuclear emulsions are combined chemical and physical agents, in which sulphite and bromide act as solvents of the silver halide. Complex silver ions are reduced to metallic silver that precipitates in the gelatin, forming
the silver grains. This causes physical development of the grains and fog.
A successful development is one in which the reaction in the grains containing the
desired latent image proceeds faster than the development of the fog.
The speed of the development often changes owing to the presence of sulphite in the
developer. The sulphite is required in the development procedure
because it tends to prevent oxidation of the developing agent by dissolved oxygen
Figure 4.1: Top: photograph of the cross section of a machine-coated emulsion
film taken by an electron microscope. Diluted emulsion layers of 43 µm thickness
are coated on both sides of a 205 µm thick triacetate base. Bottom: enlarged view
of the top emulsion layer. A thin (∼ 1 µm) protective film (gelatin) is placed over
the emulsion layer at the same time of coating.
from the air.
After the development, a fixing procedure must be performed in order to remove
all the residual silver halides. If left in the emulsion, these would
slowly induce browning and a progressive degradation of the image. The
fixing agents most widely used are sodium or ammonium thiosulphate, which form
thiosulphate complexes with the silver halide. Silver thiosulphate is soluble in
water and so may be removed from the emulsions by washing with water. This is
density                      ρ = 2.71 g/cm3
radiation length             X0 = 5.5 cm
(dE/dx)mip                   1.55 MeV/g/cm2 or 37 keV/100 µm
nuclear collision length     λT = 33 cm
nuclear interaction length   λI = 51 cm
Table 4.1: Physics properties of OPERA emulsion film.
the reason why at the end of the fixation process the emulsions must be washed very
thoroughly. If the washing step is not done correctly, any residual thiosulphate can break down,
producing silver sulphide, which is brown and can obscure the image.
During fixing and washing the emulsion can suffer distortions, because at that
stage it is soft and fragile; another source of distortion is the drying procedure,
but this can be better controlled by the use of alcohol-glycerin baths.
4.2 Characteristics of OPERA emulsions
In the OPERA experiment, the largest amount of nuclear emulsion ever used has
been employed for the detector target: ∼ 9 million emulsion films have
been produced.
The emulsions used in past experiments were poured by hand, following
standard procedures developed over many years of experience. The same procedure
applied to OPERA would be prohibitively time consuming. To solve this problem,
an R&D project has been carried out by Nagoya University and the Fuji Film company; after several tests a suitable procedure was established, and the OPERA
emulsion films were produced on commercial photographic film production lines.
As opposed to hand-made films, the automatic production allows a precise
control of the film thickness, as in the case of commercial photographic films. The
measurement of the emulsion layer thickness after development shows a distribution with σ ∼ 1.3 µm.
Fig. 4.1 shows an electron microscope photograph of the cross section of an
OPERA emulsion film. Two emulsion layers of 43 µm are coated on both sides of
a 205 µm thick triacetate base. A thin (∼ 1 µm) protective film (gelatin) is placed
over both emulsion layers. This prevents the occurrence of black or grey patterns
on the emulsion surface. These patterns, frequently emerging in the case of hand-poured plates, are due to silver chemically deposited during the development. The
removal of these stains had been the most time-consuming task in the emulsion
pre-processing for the experiments performed so far. By means of the protective
coating, surface cleaning is no longer needed and the pre-processing procedure
Figure 4.2: Crystal diameter distribution of the Fuji emulsion films. The distribution is centered around 0.20 µm.
becomes compatible with the daily handling of thousands of emulsion films, as
in the case of OPERA. In addition, the presence of a thin protective layer allows
direct contact with the lead plates, otherwise chemical reactions could happen
between the lead plates and the silver halides contained in the emulsion.
In Fig. 4.2, the crystal diameter distribution in the emulsion layer is shown: the
distribution is rather uniform with a peak at 0.20 µm. The currently achieved grain
density of the machine-coated emulsion films is 30 grains/100 µm. As already
seen in section 4.1.2, the so-called emulsion fog is due to accidentally developed
grains (Fig. 4.3). In the OPERA emulsions the fog has to be kept at the level of ≤ 5
fog grains /1000 µm3 . This can be achieved by applying a moderate development
to the emulsion films, still keeping a sensitivity of ∼ 30 grains/100 µm, as shown
in Fig. 4.4.
The physics properties of the OPERA emulsions are listed in Table 4.1.
4.2.1 The refreshing procedure at Tono mine
As seen in the previous section, the production of the OPERA emulsions was
committed to the Fuji Photo Film company in Japan. The emulsions are then shipped
to Gran Sasso for brick assembly.
From the moment of production, cosmic rays and ambient radioactivity produce latent
track images on the emulsion sheets, because of their continuous sensitivity. These
Figure 4.3: Photograph of a minimum ionising particle (mip) recorded in an emulsion layer. The grain density is defined as the number of grains per 100 µm track;
the fog density as the number of fog grains per 1000 µm3 .
tracks constitute an unwanted background. In order to erase this background one
can take advantage of the fading effect discussed in section 4.1.1. In fact, the
latent image of a particle track gradually fades after exposure and this effect, when
accelerated, can be used to erase tracks. This procedure is known as refreshing.
The refreshing proceeds through the following oxidation reaction:
4Ag + O2 + 2H2 O → 4Ag+ + 4OH−
By regulating temperature and humidity it is possible to control the speed of the
process. The best environmental conditions for a successful refreshing are RH
≈ 90-95% and T ≈ 20-29◦ C. Temperature and humidity must be strictly monitored
during the procedure to avoid an increase of fog and to preserve the
emulsion sensitivity.
Refreshing has been widely known since the beginning of nuclear emulsion research, but the large amount of film used in the OPERA experiment has
required a considerable effort and intense R&D in order to guarantee stability, reliability
and reproducibility. To reach this goal the Nagoya University group has built a
refreshing facility at the Tono mine [72] in Japan, designing and producing several
refreshing units that work in parallel; they have reached a final speed of 150000
Figure 4.4: Time dependence of the developed grain density and fog density. Conditions are: amidol developer at 20◦ C. A development time from 20 to 25 minutes
gives satisfactory results.
refreshed films/week.
Each refresh unit (Fig. 4.5) is a stainless steel chamber in which a water supply
at the base provides humidity; air circulation is provided by a fan, and several
holes in the stainless steel walls guarantee a constant circulation speed. A refresh
room contains up to 14 chambers and its temperature is kept constant at 27◦ C;
warm air can circulate through the chambers, and the humidification is independently
tuned in each unit by its own water supply. The emulsions lie on plastic holders
specifically designed so as not to disturb the air circulation and to avoid
direct contact between the films.
The full refreshing cycle has a duration of one week and is realized in three
phases. In the preliminary pre-humidification phase (24 hours long) the films are
stored in the chambers at 27◦ C and low humidity (≈ 60% RH). The air circulation
is kept very fast and there must be a strong regeneration of the air inside the chamber,
to prevent a poisoning gas, emitted by the emulsions at high temperature, from producing a fog increase and a sensitivity reduction. In the refreshing step, which lasts three
days, the emulsions are kept at RH ≈ 85-99% and T ≈ 26-29◦ C. In this phase the air
flows only inside the chamber and there is no air regeneration. Finally, during the
drying phase, the films are gradually conditioned to 20◦ C and 50% RH [73]. In
Figure 4.5: Sketch of the Tono mine underground refreshing facility. On the right, a drawing
of a refresh unit.
this operation, three days long, the films remain in the chamber, but there is no water
supply and the air circulation and regeneration are very fast.
After drying, the films are extracted from the chambers, packed under vacuum
in stacks of 9 ECC basic units and stored underground until the shipment to
Europe.
A special treatment is reserved for those films intended to be packed as CS
doublets: in this case a second refreshing procedure is performed in the Gran Sasso
underground refreshing facility.
4.2.2 Distortions and shrinkage
After the development process, two effects have to be taken into account to ensure
good resolution measurements: distortion and shrinkage.
Distortion is a phenomenon which shifts the position of the recorded trajectories in the emulsion layer, because of stresses accumulated in the gelatin layer. In
hand-made emulsion plates, shifts of several µm are frequently observed, caused
by non-uniform drying during plate production. The simplest form of general distortion is a uniform shear: straight tracks remain rectilinear, but their direction and
length change by an amount which depends on the magnitude and direction of the
shear. A more serious source of error is the differential shear of the emulsion,
in which both the magnitude and direction of the shear change with depth. Such a
distortion changes the track of an energetic particle from a line into a curve.
A typical distortion map measured in an OPERA emulsion is shown in Fig.
Figure 4.6: A typical distortion map of an OPERA nuclear emulsion.
4.6. The arrows indicate the distortion direction, and the absolute value of the
distortion is given by the length of the arrow. The average value of the measured
distortions is ∼ 5 mrad. The use of double-sided emulsions coated on a plastic
support plate improves the angular resolution to a level of 2 mrad, because the
track direction can be defined by the two points near the support plate, which are
practically free of distortion.
The shrinkage effect is due to a reduction of the thickness of the emulsion
sheet after the development process: as we have seen previously, during processing
some materials are added to the volume of the emulsion to replace the
silver halide dissolved by the fixer; overall, this leads to a reduction of the thickness
of the emulsion layer.
The shrinkage factor is defined as the ratio between the thickness of
the emulsion before and after the development. This factor is taken into account
by the tracking algorithm (the measured micro-track slopes must be multiplied by
this factor to obtain the real values). This effect is sketched in Figure 4.7.
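The correction described here amounts to a single multiplication of the measured slope by the shrinkage factor; a minimal sketch follows, in which the thickness values of the example are illustrative, not measured OPERA numbers.

```python
# Sketch of the shrinkage correction: the measured micro-track slope is
# multiplied by the shrinkage factor dz/dz' (thickness before / after
# development). The example layer thicknesses are assumed values.
def correct_slope(measured_slope, thickness_before_um, thickness_after_um):
    shrinkage_factor = thickness_before_um / thickness_after_um
    return measured_slope * shrinkage_factor

# A slope of 0.10 measured in a layer shrunk from 43 um to 21.5 um
# corresponds to a real slope of 0.20.
print(correct_slope(0.10, 43.0, 21.5))
```
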
Figure 4.7: The shrinkage effect: the measured track slope ∆z′/∆x does not coincide with
the real slope ∆z/∆x. The shrinkage correction is obtained by multiplying the measured
slope by the shrinkage factor ∆z/∆z′.
Chapter 5
The ESS and the LNGS Scanning
Station
The use of nuclear emulsion as a charged particle recording device has allowed,
since the 1930s, big improvements in particle and nuclear physics.
Nevertheless, the amount of emulsion used in the early experiments
was relatively small, making manual measurements feasible. Significant improvements in the emulsion technique and the development of fast automated scanning
systems during the last two decades have made the use of nuclear emulsions possible in large scale detectors such as the OPERA experiment. In fact, with the CNGS
neutrino beam at its nominal intensity, ∼ 22 selected neutrino interactions per
day are expected. Therefore, ∼ 1000 emulsion sheets per day must be (partially)
scanned in order to find the vertex and analyze the event. In total, ∼ 4000 cm2 per
day (∼ 200 cm2 per brick) have to be analyzed with sub-micrometric precision
over 5 years of data taking (≳ 20000 neutrino interactions).
In order to analyze the events in ”real” time and, for some decay topologies,
remove other ECC bricks for a more refined kinematical analysis, a very fast automatic scanning system is needed to cope with the daily analysis of the large number of emulsion sheets associated with neutrino interactions. Taking into account
the need to have a reasonable number of microscopes (∼ 1 microscope/brick/day),
the minimum required scanning speed is about 20 cm2 /h per emulsion layer (44
µm thick). This corresponds to an increase in speed of more than one order of magnitude with respect to past systems.
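The arithmetic behind these requirements can be checked directly: 22 bricks at ∼ 200 cm2 each give ∼ 4400 cm2 per day (consistent with the rounded ∼ 4000 cm2 quoted above), and one brick then takes about 10 hours at the minimum speed, i.e. roughly one microscope per brick per day.

```python
# Cross-check of the scanning-load arithmetic quoted in the text.
events_per_day = 22          # expected selected neutrino interactions per day
area_per_brick_cm2 = 200     # cm^2 (partially) scanned per brick
speed_cm2_per_h = 20         # minimum scanning speed per emulsion layer

area_per_day = events_per_day * area_per_brick_cm2     # total cm^2 per day
hours_per_brick = area_per_brick_cm2 / speed_cm2_per_h # scanning time per brick
print(f"{area_per_day} cm^2/day, {hours_per_brick:.0f} h per brick "
      "-> roughly one microscope per brick per day")
```
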
For this purpose new automatic fast microscopes have been developed: the
European Scanning System (ESS) [61] and the S-UTS in Japan [74].
5.1 The Japanese S-UTS
The automation of emulsion scanning was pioneered by the group of Nagoya University (Japan); the first automatic system, called Track Selector (TS), was used for the DONUT and CHORUS experiments.
The Track Selector was designed to detect tracks with predicted angles in the
field of view of a CCD camera. The track detection algorithm is simple: 16 tomographic images of (e.g.) 100 µm thick emulsion layers are taken and digitised.
Each image is shifted horizontally with respect to the first one, so that the predicted tracks become perpendicular to the emulsion surface. Tracks are identified by
superimposing the sixteen shifted digitised images.
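The shift-and-superimpose principle can be sketched as follows (a minimal illustration, not the Nagoya implementation; `track_selector_response` is a hypothetical name, `dz` is the vertical spacing between tomographic images in pixel units, and `(sx, sy)` are the predicted slopes):

```python
import numpy as np

def track_selector_response(images, dz, sx, sy):
    """Shift-and-sum track detection in the spirit of the Track Selector.

    Each binarised tomographic image is shifted horizontally so that a
    track with predicted slopes (sx, sy) becomes perpendicular to the
    emulsion surface; summing the shifted images then produces a high
    value at the position where such a track crosses the field of view.
    """
    acc = np.zeros(images[0].shape)
    for k, img in enumerate(images):
        shift_x = int(round(k * dz * sx))
        shift_y = int(round(k * dz * sy))
        # roll each layer back along the predicted track direction
        acc += np.roll(np.roll(img, -shift_y, axis=0), -shift_x, axis=1)
    return acc
```

A peak in the accumulated image at the level of the number of layers signals a track at the predicted angle.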
The basic TS tracking principle was retained in the improved versions, the New Track Selector (NTS) and the Ultra Track Selector (UTS), which take advantage of several image processors working in parallel. The maximum scanning
speed was ∼2 cm2 /h.
The succeeding generation of the UTS system is the so-called Super-UTS,
developed to reach the speed needed for OPERA scanning. The key features of
the S-UTS are the high speed camera with 3 kHz frame rate and a piezo-controlled
displacement of the objective lens, synchronized to a continuous stage motion in
order to avoid "go-stop" motion of the microscope stage while taking images. The system uses Field Programmable Gate Arrays (FPGAs), fast memory and a grabber board
connected to the CCD camera (512 × 512 pixel).
European groups followed a different approach, initiated by the Salerno group
with the SySal system for the CHORUS experiment [60]. With this approach,
called multi-track system, all tracks in each field of view are reconstructed regardless of their slope.
5.2 The design of the European Scanning System
The ESS has been specifically optimized for the scanning of thin emulsions exposed to perpendicularly impinging particles. The goals are high scanning speed,
sub-micron precision, high tracking efficiency and low instrumental background.
The system uses a software-based approach for data processing. This approach
has proven extremely flexible and effective, since new algorithms can be easily
tested and the integration of commercial components has been possible. Therefore, the system can be quickly upgraded as technological improvements become
available.
The main components of the ESS microscope shown in Fig. 5.1 are:
• a high quality, rigid and vibration free support table holding the components
in a fixed position;
• a motor driven scanning stage for horizontal (XY) motion;
• a granite arm which acts as an optical stand;
• a motor driven stage mounted vertically (Z) on the granite arm for focusing;
• optics;
• a digital camera for image grabbing, mounted on the vertical stage and connected to a vision processor;
• an illumination system located below the scanning table.
The emulsion sheet is placed on a glass plate (emulsion holder) and its flatness
is guaranteed by a vacuum system which holds the emulsion at a fixed position
during the scanning.
By adjusting the focal plane of the objective lens through the whole emulsion thickness, a sequence of equally spaced tomographic images of each field of view is taken, processed and analysed in order to recognise aligned clusters of dark pixels (grains) produced by charged particles along their trajectories.
The three-dimensional structure of a track in an emulsion layer (micro-track)
is reconstructed by combining clusters belonging to images at different levels and
searching for geometrical alignments (Fig. 5.2a). Each microtrack pair is connected across the plastic base to form the base-track (Fig. 5.2b). This strongly
reduces the instrumental background due to fake combinatorial alignments, thus
significantly improving the signal to noise ratio, and increases the precision of
track angle reconstruction by minimising distortion effects.
The ESS microscope has been designed according to the following specifications:
• high-speed computer-controlled precision mechanics for both horizontal
and vertical stages with sub-micron accuracy able to move from one field of
view to the next in less than 0.1 s;
• optical system from standard microscopes, customized to observe the OPERA
emulsion sheets which have two emulsion layers on both sides of a plastic
support for a total thickness of ∼ 300 µm;
• high-resolution camera interfaced with a high-speed frame grabber and a
vision processor able to grab and process images at rates > 350 frames per
second (fps).
The ESS is based on commercial hardware components, either off-the-shelf or developed in collaboration with specialized companies. The following sections give an overview of the main hardware components.
Figure 5.1: The European Scanning System (ESS) microscope.
5.3 Hardware components
5.3.1 Mechanics
Horizontal and vertical stages
The scanning table and the vertical stage have been developed in collaboration
with the Micos company by modifying commercial products; they are equipped
with "Vexta NanoStep RFK Series 5-Phase Microstepping System" stepping motors produced by the Oriental Motor company.
The motors are driven by a 4-axis ”FlexMotion PCI-7344” board provided by
National Instruments and inserted into the host PC.
The scanning table is a Micos "MS-8" model with a 20.5 cm range
in both directions. The coordinates are read out by two linear encoders with a
resolution of 0.1 µm. External optical limit switches are mounted on each axis
and manually set.
The motion of the horizontal stage (maximum speed, acceleration, deceleration, ...) was set in order to minimize the time needed to move from one field of view to the next (typically ∼ 350 µm). The total displacement time is given by the sum of the rise time, i.e. the time to first reach the "target point", and the settling time, i.e. the time needed for the oscillations to be damped to a predefined acceptable level. In our working conditions, the settling time is long enough to damp the oscillations down to ± 0.2 µm, a value smaller than one image
Figure 5.2: (a) Micro-track reconstruction in one emulsion layer by combining clusters belonging to images at different levels. (b) Micro-track connections
across the plastic base to form base tracks.
pixel (0.3 µm).
From the tests performed, we can conclude that the X displacement can be safely considered finished within ∼ 100 ms, while the time needed for Y displacements is larger (∼ 140 ms) due to the scanning table design: Y movements involve the whole table, while X movements involve only a lighter part of it. Therefore, the scanning procedure minimizes the number of Y displacements. Moreover, the repeatability in reaching a commanded position has been evaluated, giving a distribution with an RMS < 0.1 µm.
The vertical stage used by the ESS is the Micos "LS-110" model. It is equipped with a linear encoder, with resolution 0.05 µm, and limit switches. During
data taking, the vertical stage moves at constant speed calculated by taking into
account the camera frame rate, the number of desired frames and the emulsion
thickness (44 µm). With a frame rate of about 400 frames/s and 15 levels per
emulsion layer, each image is acquired at a vertical distance of about 3 µm; the
resulting speed is about 1150 µm/s; the time needed to scan an emulsion layer is
about 55 ms (including the time for acceleration, deceleration and synchronization
with the host).
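The quoted Z-stage figures follow directly from the frame rate and the layer thickness, as this short computation (an illustrative check, not production code) shows:

```python
frame_rate = 400.0   # camera frames per second
levels = 15          # images taken per emulsion layer
thickness = 44.0     # emulsion layer thickness, µm

dz = thickness / levels      # vertical spacing between frames, µm
speed = dz * frame_rate      # constant Z-stage speed while grabbing, µm/s

# dz ≈ 2.9 µm and speed ≈ 1173 µm/s, consistent with the quoted
# ~3 µm spacing and ~1150 µm/s; the ~55 ms per layer additionally
# includes acceleration, deceleration and host synchronization.
```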
Thus, the time for a cycle is obtained by adding the time for the horizontal displacement (which includes the time the vertical stage takes to reach its starting position) and the time needed for the data acquisition in Z. The insertion of a synchronization time of a few milliseconds before and after the frame grabbing leads to a ∼ 170 ms cycle time. This value is adequate to reach the requested scanning
speed of 20 cm2 /h.
5.3.2 Optical system
Objective
Only a few objectives on the market completely fulfill all the severe requirements of the ESS. In fact, the performance of the objective must meet the requirements of sub-micron resolution, the need to focus at different Z depths and a magnification of a few pixels per micron.
An objective is characterized by the numerical aperture (N.A.), the working
distance (W.D.) and the magnification (M). Moreover, an objective is designed to
operate (or not) in an oil-immersion set-up.
The N.A. defines the ultimate image resolution (the minimal distance between
two points seen as separate) that can be achieved by the objective. Since submicron resolution is needed, the objective is required to have N.A. > 0.8 [75].
Moreover, given the overall thickness of the emulsion layers and of the plastic
support (44 + 205 + 44 ) µm, a W.D. > 0.3 mm is required.
The objective magnification depends on the image sensor size, because an image with at least a few pixels per micron is needed. In the case of 20 mm wide megapixel sensors (see Section 5.3.3), an objective with M > 40 is needed. However, the magnification should not be much larger, in order not to reduce the microscope speed.
When the system scans the bottom emulsion layer, the whole plastic support
and the top emulsion layer lay between the objective front lens and the focal plane,
for a total thickness of 0.3 mm. For the scanning of the top emulsion layer there
is no intermediate medium. The main effect of changing an intermediate medium
thickness is to overcorrect or undercorrect the spherical aberration [75]. An oil-immersion objective is the best choice, since the oil, the emulsion and the plastic
support have the same refractive index (∼ 1.5) and therefore the optical path is
almost homogeneous.
To meet all these requirements, the choice was the Nikon CFI Plan Achromat 50× oil objective (N.A. = 0.9, W.D. = 0.4 mm), used in an infinity-corrected system with a tube lens housed in its trinocular tube.
Illumination
To obtain the Koehler configuration [75] a transmitted illumination system, placed
below the scanning table, was designed and developed jointly with Nikon-Italy.
The light comes from a tungsten halogen lamp with a computer controlled
power supply. The image of the lamp filament is focused by a lens (collector) on
the aperture diaphragm of a condenser which concentrates the light into a cone that
illuminates the emulsion sheet. A second diaphragm (field diaphragm) is adjusted
to prevent emulsion illumination (and also heating) outside the field of view. The
condenser numerical aperture should match that of the objective in order to have
a wide illumination cone and an optimal optical resolution.
The final choice was a Nikon achromatic condenser with N.A. = 0.8 and W.D.
= 4.6 mm.
In order to obtain an illumination as uniform as possible over the entire field
of view and to maximize the optical resolution, a green filter and a frosted glass
diffuser can be inserted into the light path.
The emulsion holder and the alignment
One of the main features of the ESS is the angular resolution of a few mrad achieved in the reconstruction of particle tracks. Therefore, the systematic error introduced in the angular measurement by the non-planarity of the glass window (which holds the emulsion) and by misalignments between the optical components and the mechanical stage has to be kept well below 1 mrad.
The glass window, equipped with a vacuum system to keep the emulsion
steady during the scanning, is 4 mm thick (this is compatible with the condenser
working distance). It has a thickness tolerance of less than 10 µm per 10 cm length
and its deviation from the parallelism is smaller than 1 mrad; the flatness is of a
few fringes per inch (∼ 0.5 µm per 1 cm). A 1 mm wide groove in the glass along
the emulsion edge is connected to a vacuum pump.
The stages and the optical axis are aligned with respect to the glass window
(used as a reference plane). Using a digital micrometric comparator the angles
β and α in Fig. 5.3a between the glass window and the horizontal and vertical
motion directions are adjusted with an accuracy ≤ 0.1 mrad. The "right angle bracket" in Fig. 5.3a is aligned using an autocollimator and the final alignment of
the optical axis is ≤ 0.4 mrad (angle γ in Fig. 5.3a). All the optical components
shown in Fig. 5.4 are aligned using a centering telescope.
Fig. 5.3b shows the distribution of the difference between measured
and reference track slopes for emulsions vertically exposed to a 10 GeV π-beam.
The reference slopes have been obtained by averaging the 2 slopes before and
after a 180◦ horizontal rotation of the emulsion sheet; the residual mean value of
0.5 mrad is a good estimate of the systematic angular uncertainty arising from
possible misalignments.
Figure 5.3: (a) The horizontal and vertical motion directions and the optical axis
are aligned with reference to the glass window. The angles α, β and γ are measured using digital comparators and an autocollimator. (b) The distribution of the
difference between measured and reference track slopes. The reference slopes
have been obtained by averaging the 2 slopes measured before and after a 180 ◦
horizontal rotation of the emulsion sheet; the residual mean value of 0.5 mrad is a
good estimate of the systematic uncertainty arising from possible misalignments.
5.3.3 The acquisition system
Camera and grain image
The goal of 20 cm2 /h scanning speed requires a frame acquisition time < 4 ms and
megapixel resolutions.
For this purpose the ESS is equipped with a Mikrotron MC1310 high-speed
megapixel CMOS camera with Full Camera Link interface. Its image sensor is the
Micron MT9M413 which delivers up to 10-bit monochrome 1280 × 1024 images
at over 500 frames per second. The sensor size is 20 mm (along the diagonal) and
its pixels are 12×12 µm2 large.
The optical system and the CMOS camera provide a suitable grain image acquisition in terms of stability, photometric dynamics and resolution. The sensor
size, the objective magnification and the setup conditions give a field of view of
about 390×310 µm2 and image pixels of about 0.3×0.3 µm2 . Consequently, the
image of a focused grain is ∼ 10 pixels large.
The on-line processing board
The frame grabber and the image processor are integrated in the same board, a
Matrox Odyssey Xpro, specifically designed to perform on board image process-
Figure 5.4: Schematic layout of the ESS microscope optical system.
ing. The on board processor is a Motorola G4 PowerPC supported by a Matrox custom parallel processor specifically designed to quickly perform local and
point-to-point operations. It is equipped with a 1 GB DDR SDRAM memory; the
internal I/O bandwidth can achieve over 4 GB per second transfer rate, while the
external rate reaches 1 GB per second. A Full Camera Link connection allows an
acquisition rate from the camera of up to 680 MB/s.
At present, a camera frame rate of 377 fps and 8-bit grey level images are
used corresponding to an acquisition rate of 471 MB/s. By acquiring 15 frames
per 44 µm emulsion layer, an acquisition time of about 40 ms is needed for each
field of view. Considering a synchronization time of 15 ms, a mean time of ∼
90 ms for the field of view change, a field of view of about 390×310 µm2 and a superimposition between contiguous fields of 30 µm, a scanning speed of about 22 cm2/h is obtained. The effective scanning speed is a bit lower (∼ 20 cm2/h)
because sometimes the microscope has to scan the full sheet thickness to find the
emulsion surfaces (focusing).
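The scanning-speed budget above can be checked with a short back-of-the-envelope computation (illustrative only; the variable names are mine and the inputs are the rounded figures quoted in the text):

```python
fps = 377.0                  # camera frame rate, frames/s
frames_per_view = 15         # images per 44 µm emulsion layer
t_grab = frames_per_view / fps          # ≈ 0.040 s acquisition
t_sync = 0.015                          # s, synchronization
t_move = 0.090                          # s, mean field-of-view change
t_cycle = t_grab + t_sync + t_move      # ≈ 0.145 s per field of view

fov_x_um, fov_y_um = 390.0, 310.0       # field of view, µm
overlap_um = 30.0                       # superimposition between fields
net_area_cm2 = (fov_x_um - overlap_um) * (fov_y_um - overlap_um) * 1e-8

speed_cm2_per_h = net_area_cm2 / t_cycle * 3600.0
# of order 25 cm^2/h with these rounded inputs, in line with the
# quoted ~22 cm^2/h nominal and ~20 cm^2/h effective speeds.
```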
Once grabbed, each image is analyzed using the technique described in section
5.4.
5.4 The on-line acquisition software
Dedicated software to grab and process the images of nuclear emulsions was developed. The on-line DAQ program is written in the object-oriented C++ language under the Microsoft Visual C++ environment, as a standard Windows application with a user-friendly interface. It is based on a modular
structure where each object carries out a well defined task (i.e. image grabbing,
track pattern recognition and fitting, data I/O handling, etc...). Each object has a
corresponding parameter window for configuration setting.
For each field of view the program grabs several images at different depths
(local tomography), recognizes black spots (clusters of dark pixels) in each image,
selects possible track grains, reconstructs 3D sequences of aligned grains, then
extracts a set of relevant parameters for each sequence.
The implementation of a synchronous data taking scheme is relatively simple,
but the use of available resources would not be optimized: every step could act as
a bottleneck; furthermore, the CPU would be idle while waiting for the cycle to
be completed. The significant improvement in terms of speed achieved with the
ESS has been obtained after the implementation of an asynchronous data taking
scheme that allows parallel execution of several tasks. Execution proceeds along
four independent threads that are synchronized at the end of each cycle. In this
way the vertical axis moves through the emulsion without stopping and the storage
of images is not synchronized with the end of its grabbing. All the saved frames
are then ready to be processed.
5.4.1 Image processing
In order to obtain a high grain detection efficiency and an adequate rejection power
of background clusters, a complex image treatment is needed.
Once grabbed, images are digitised and converted to a grey scale of 256 levels (where 0 is black and 255 is white). CMOS sensors are faster than other technologies, but normally have a higher noise level: in the conversion from the analogue
output to digital grey level, each pixel has its own pedestal. This produces a sand
effect on the image that must be minimised. Spots on the sensor surface can
make some pixels blind, thus mimicking grains in fixed positions on all layers. A
flat-field subtraction technique has been implemented to equalise pixel response
with a pedestal map that is prepared at machine set-up time and is applied to every
image before processing. The map should be re-computed from time to time to account for camera aging and dust accumulation on the sensor surface.
The procedure to enhance dark spots on a light background is based on the
Point Spread Function (PSF) and on the application of a Finite Impulse Response
(FIR) filter [61].
The PSF (φ(x, y, z)) gives the 3D distribution of the light intensity due to a
point-like obstacle placed at (0,0) on the focal plane z = 0. A real image is the
convolution of the PSF with the real object distribution. The resulting gray level
is obtained integrating over the target volume, the product of the obstacle density
ρ, the flux of light I and the function φ.
To enhance the contrast between focused and unfocused grains, a 2D FIR filter
is applied to each image. The Kernel of the filter is a matrix that can be changed
by the operator in order to obtain the best response of the system in terms of
efficiency and background rejection. After some tests done in the Gran Sasso
scanning station (see section 5.6) the original 6×6 kernel was substituted by the
following 5×5 matrix:

⎛  2   4   4   4   2 ⎞
⎜  4   0  −8   0   4 ⎟
⎜  4  −8  24  −8   4 ⎟
⎜  4   0  −8   0   4 ⎟
⎝  2   4   4   4   2 ⎠
to take into account the smaller grain size and the increased background level observed in new OPERA emulsions with respect to the standard sample. The filter convolution is a local operation: the output value of the pixel at a specific coordinate
is a weighted sum of the input values of the neighbourhood pixels, the weights are
given by the filter kernel. The convolution extends the original 255-values grey
level scale to a wider one, making the shape of the background flat.
The next step in image processing is binarisation: pixels with values that exceed a threshold are classified as black, the remaining ones as white. Due to residual uncorrected aberrations and to variations in the field of view illumination, the PSF and the light flux can change from point to point; as a consequence, the filter response can vary inside the field of view and it is not convenient
to apply a fixed threshold to binarise the whole image. Since the point-to-point
variation of the filter response is a reproducible feature of the microscope, it can
be accounted for by applying a threshold map (equalization procedure).
The results of the three processes described above are shown in Fig. 5.5.
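As an illustration, the 5×5 kernel and the equalised binarisation can be implemented with a plain 2D convolution and a per-pixel threshold map (a sketch assuming zero padding at the borders; `fir_filter` and `binarise` are hypothetical names, not the ESS code):

```python
import numpy as np

KERNEL = np.array([[ 2,  4,  4,  4,  2],
                   [ 4,  0, -8,  0,  4],
                   [ 4, -8, 24, -8,  4],
                   [ 4,  0, -8,  0,  4],
                   [ 2,  4,  4,  4,  2]])

def fir_filter(image):
    """Apply the 5x5 FIR kernel to a grey-level image (zero-padded).

    The kernel is symmetric, so convolution and correlation coincide;
    the output range extends beyond the original 0..255 scale."""
    h, w = image.shape
    padded = np.zeros((h + 4, w + 4), dtype=np.int64)
    padded[2:-2, 2:-2] = image
    out = np.zeros((h, w), dtype=np.int64)
    for dy in range(5):
        for dx in range(5):
            out += KERNEL[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def binarise(filtered, threshold_map):
    """Equalised binarisation: a per-pixel threshold map absorbs the
    reproducible point-to-point variation of the filter response."""
    return filtered > threshold_map
```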
At each cycle, binarised images are transferred to the host PC memory. They
are processed by a fast algorithm and adjacent pixels above threshold are grouped
together to form clusters. For each cluster, the area and the position of its centre
of gravity are saved. A cut on the cluster area helps to discard background due to
the noise in the camera signal.
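The clustering step can be sketched as a standard connected-component search (an illustrative version with 4-connectivity; the actual on-line algorithm is optimised for speed):

```python
def find_clusters(binary):
    """Group adjacent above-threshold pixels into clusters and return,
    for each cluster, its area and centre of gravity (row, column)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    clusters = []
    for y0 in range(h):
        for x0 in range(w):
            if binary[y0][x0] and not seen[y0][x0]:
                stack, pixels = [(y0, x0)], []
                seen[y0][x0] = True
                while stack:  # flood fill over 4-connected neighbours
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                area = len(pixels)
                cy = sum(p[0] for p in pixels) / area
                cx = sum(p[1] for p in pixels) / area
                clusters.append((area, cy, cx))
    return clusters
```

A cut on the returned area then discards the small clusters produced by camera noise, as described above.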
Figure 5.5: Image processing: grabbed image, convolution and threshold.
5.4.2 Tracking
Normally, ∼2000 clusters, almost all due to random background, are found in one
field of view. By applying quality cuts based on shape and size, about 60% of
them are typically selected and used for tracking (grains).
The tracking consists of two main algorithms: track recognition and track fitting. In the first phase, the algorithm recognizes a geometrically aligned array of grains as a track; the track-fitting algorithm then performs a linear fit of the cluster positions and evaluates the track slopes. Intercepts are given on the surface between emulsion and base.
The basic idea of the tracking algorithm is that a track is a straight sequence
of grains lying in different levels. If two grains belonging to a real track are
measured in two non-adjacent levels, the pair is used as a track hint: other grains
of the same track must lie along the line between the two. For our purposes,
the algorithm must take less than 100 ms to examine 20000 grains. Checking all possible pairs would result in an enormous combinatorial load, thus two tests are applied to filter good track hints. An angular acceptance of tan θ < 1 (θ is the angle between the track direction and the vertical direction) is commonly used in order to reject track slopes that are not physically interesting. Moreover, with an emulsion sensitivity for m.i.p. tracks of 30 grains/100 µm, the number of grains of a track in each of the two 44 µm-thick emulsion layers of OPERA sheets is distributed according to Poisson statistics with an average of about 13 grains; some trigger levels are therefore defined (Fig. 5.6): if none of these levels has a grain along the predicted line, the track search along that line stops immediately.
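The two hint filters just described, the angular acceptance and the trigger-level check, can be sketched as follows (illustrative only; grains are `(level, x, y)` tuples with coordinates in the same units as `dz_per_level`, and all names are hypothetical):

```python
def hint_passes(g1, g2, dz_per_level):
    """Angular acceptance for a track hint made of two grains.

    dz_per_level is the vertical spacing between tomographic levels;
    hints with tan(theta) >= 1 are rejected, as in the text."""
    l1, x1, y1 = g1
    l2, x2, y2 = g2
    dz = abs(l2 - l1) * dz_per_level
    sx, sy = (x2 - x1) / dz, (y2 - y1) / dz
    return (sx * sx + sy * sy) ** 0.5 < 1.0

def confirm_hint(g1, g2, trigger_levels, grains_by_level, tol):
    """Keep a hint only if at least one trigger level has a grain
    within tol of the straight line through g1 and g2 (a sketch of
    the early-stop trigger described in the text)."""
    l1, x1, y1 = g1
    l2, x2, y2 = g2
    for lvl in trigger_levels:
        t = (lvl - l1) / (l2 - l1)
        xp, yp = x1 + t * (x2 - x1), y1 + t * (y2 - y1)
        if any(abs(x - xp) < tol and abs(y - yp) < tol
               for x, y in grains_by_level.get(lvl, [])):
            return True
    return False
```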
Figure 5.6: A track hint consisting of two grains in levels 1 and 6 is shown; if
the hint is confirmed in at least one of the internal trigger levels, the tracking
procedure is applied to all levels.
Once all clusters have been found, a bidimensional linear fit is performed and
spurious clusters are removed from the tracks. If the number of grains that form
the track is greater than a minimum number (six or seven grains), the track is
saved in the output file. As already seen, a sequence of grains measured in one 44
µm-thick emulsion layer will be referred to as a micro-track.
After micro-track selection, multiple reconstructions are filtered out: if a grain
belongs to two or more micro-tracks, only the track with the highest number of
grains is retained. Moreover, due to shadowing, it may happen that a fully independent set of replicated grains appears with fit parameters very similar to those
of another micro-track. The final pass of tracking accounts for this effect and
removes track duplicates.
A more reliable track reconstruction is obtained by connecting micro-tracks
across the plastic base to form base-tracks, as shown in Fig. 5.7. Usually the
base-track linking is performed in the off-line procedure described in the next
section.
5.5 The off-line track reconstruction
As already explained above, the so-called base-tracks are formed (Fig. 5.7) by
connecting micro-tracks across the plastic support. This strongly reduces the instrumental background due to fake combinatorial alignments, thus significantly
Figure 5.7: Micro-track connection across the plastic base.
improving the signal to noise ratio, and increases the precision of track angle reconstruction by minimising distorsion effects.
After collecting these base-tracks in a series of emulsion films, all the films
are aligned, with a procedure called intercalibration, and track reconstruction (i.e.
connecting base-tracks between films) is performed.
The off-line reconstruction tool used to perform track finding is FEDRA (Framework for Emulsion Data Reconstruction and Analysis), an object-oriented C++ tool developed within the ROOT framework [76].
5.5.1 Base-track reconstruction
The base-track reconstruction is performed by projecting micro-track pairs across
the plastic base and searching for an agreement within given slope and position
tolerances. The micro-track slopes are used only to define the angular agreement,
while the base-track is defined by joining the points of intersection of the micro-tracks with the measured surface of the plastic base.
A micro-track is defined by a series of aligned clusters. The depth in emulsion of a cluster, which is the digitized image of a grain, is randomly distributed
and is affected by the vertical resolution of the microscope (∼ 2.5 µm). So, the
micro-track resolution, defined as the angular difference between a micro-track
and a base-track, is affected by this value. Since the points used for the base-track
definition lie on the surface between the emulsion and the plastic base, they are almost unaffected by distortion effects: the base-track has an angular resolution
Figure 5.8: Angular (top) and position (bottom) micro-track resolution in one (X)
projection and for two different angles: -0.1 rad (left) and 0.4 rad (right).
approximately one order of magnitude better than that of the micro-tracks. Nevertheless, a good micro-track resolution helps to keep low the background due to chance matches. Fig. 5.8 shows the micro-track resolution obtained both in angle and in position.
For each pair of micro-tracks satisfying the position and slope cuts, a χ is calculated as

χ = (1/2) [ (S_x^t − S_x^B)²/σ_x² + (S_x^b − S_x^B)²/σ_x² + (S_y^t − S_y^B)²/σ_y² + (S_y^b − S_y^B)²/σ_y² ]^(1/2)    (5.1)

where S_x and S_y are, respectively, the x and y slopes and σ_x, σ_y are the micro-track angular resolutions. The superscript t (b) refers to the top (bottom) micro-track, while B refers to the base-track.
The linking of micro-tracks is usually done in iterations, the first of which are used for the emulsion shrinkage correction (see section 4.2.2) and for data quality checks. This permits a significant improvement of the signal/noise separation and of the base-track resolution.
Fig. 5.9 shows the χ distribution versus the number of grains belonging to a base-track. Two populations emerge from the sample: one with large χ values and a number of grains clearly incompatible with the expected Poisson distribution
Figure 5.9: Rejection of fake base-tracks based on both the slope agreement with
the two micro-tracks (χ, see text) and the number of grains: the cut represented
by the line is applied.
(fake base-tracks, top-left), the other one with small χ value and a number of
grains well within the Poissonian expectations (bottom-right). The cut represented
by the line
χ < α × N + β        (5.2)
where N is the number of grains, is applied to remove fake base-tracks.
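Equations (5.1) and (5.2) can be transcribed almost directly (a sketch; the function and parameter names are illustrative, not FEDRA's API):

```python
def base_track_chi(s_top, s_bot, s_base, sigma):
    """chi of a micro-track pair against candidate base-track slopes,
    as in eq. (5.1); s_* are (Sx, Sy) slope pairs for the top and
    bottom micro-tracks and for the base-track, and sigma is
    (sigma_x, sigma_y), the micro-track angular resolutions."""
    terms = (((s_top[0] - s_base[0]) / sigma[0]) ** 2 +
             ((s_bot[0] - s_base[0]) / sigma[0]) ** 2 +
             ((s_top[1] - s_base[1]) / sigma[1]) ** 2 +
             ((s_bot[1] - s_base[1]) / sigma[1]) ** 2)
    return 0.5 * terms ** 0.5

def is_fake(chi, n_grains, alpha, beta):
    """Cut of eq. (5.2): base-tracks with chi >= alpha*N + beta
    are rejected as fakes."""
    return chi >= alpha * n_grains + beta
```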
5.5.2 Plate intercalibration and particle tracking
In order to define a global reference system, prior to track reconstruction a set of affine transformations (shift, rotation and expansion) relating track coordinates in consecutive films has to be computed, to account for relative misalignments and deformations. The mechanical accuracy of film piling in brick assembly is indeed 50 ÷ 100 µm.
The emulsion plate intercalibration is done by subdividing the scanned area into several cells and performing, for each cell, a pattern recognition between the base-tracks of two consecutive emulsion films. One of the two patterns is fixed and the other is shifted several times; the translation with the maximum number of track coincidences is chosen. After this procedure, the algorithm connects the base-tracks of the two plates, and with this sample of tracks the affine transformation is
Figure 5.10: Angular (left plot) and position (right plot) base-track resolution as a
function of the measured angles.
calculated as:

( x_abs )   ( a_11  a_12 ) ( x_stage )   ( b_1 )
(       ) = (            ) (         ) + (     )    (5.3)
( y_abs )   ( a_21  a_22 ) ( y_stage )   ( b_2 )

where x_stage, y_stage are the single-film track coordinates and x_abs, y_abs are the corresponding aligned ones.
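The affine parameters can be estimated from the matched base-track pairs by a least-squares fit, for example as follows (an illustrative sketch, not the FEDRA implementation; `fit_affine` is a hypothetical name):

```python
import numpy as np

def fit_affine(stage_xy, abs_xy):
    """Least-squares estimate of the affine transformation of eq. (5.3)
    from matched base-track coordinates.

    stage_xy, abs_xy : (N, 2) arrays of single-film and aligned
    coordinates. Returns the 2x2 matrix A and the translation b."""
    stage_xy = np.asarray(stage_xy, dtype=float)
    abs_xy = np.asarray(abs_xy, dtype=float)
    # design matrix [x  y  1]; one least-squares solve per output axis
    design = np.hstack([stage_xy, np.ones((len(stage_xy), 1))])
    params, *_ = np.linalg.lstsq(design, abs_xy, rcond=None)
    return params[:2].T, params[2]   # A (2x2), b (2,)
```

With enough matched tracks per cell, the fit absorbs the shift, rotation and expansion components at once.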
By applying the above procedure to each consecutive pair of films, relative displacements can be reduced from the level of a few µm down to less than 1 µm. To achieve such precision a minimum track density is needed, but the cosmic ray and neutrino fluxes to which the experiment is exposed are very low and the induced track density in the emulsions is not high enough. One therefore needs to expose the selected bricks to high-momentum cosmic rays, which serve as reference tracks for the precise film alignment. For this purpose, a dedicated pit has been excavated at the external site of the LNGS, suitably shielded by an iron cover for the suppression of the electromagnetic component of cosmic rays [60]. After the cosmic ray
exposure, the density of passing-through tracks should be low enough in order not
to spoil the topological and kinematical reconstruction of neutrino events; on the
other hand, the scanning time is a critical issue and needs to be minimised. Typically, a density of the order of a few tracks/mm2 and scanning surfaces of several
mm2 are a reasonable compromise between these two conflicting requirements.
Once all plates are aligned, the track reconstruction algorithm follows all the
measured base-tracks of an emulsion film to the upstream and downstream ones to
Figure 5.11: A picture of the LNGS Scanning Station
reconstruct volume-tracks. The track-finding procedure consists of three main steps. The first operation consists in finding all couples of adjacent base-tracks; long chains of segments, without missing plates, are thus formed. These chains serve as triggers to start the Kalman Filter (KF) procedure for track fitting and following. The third step is track propagation, taking into account the possibility of losing segments in one or more plates (usually a gap of 3 consecutive plates is allowed). The main criterion for track/segment acceptance is the probability given by the KF. The resolution of the system and the effects of multiple scattering are taken into account in the probability and fit calculation [76].
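The chain-building step that seeds the Kalman-filter fit can be sketched in a few lines. This is a minimal sketch, not the FEDRA implementation: the data layout, the plate spacing and the tolerance values are illustrative assumptions.

```python
def adjacent(bt1, bt2, dz=1300.0, pos_tol=10.0, slope_tol=0.02):
    # bt2 (next plate) is 'adjacent' to bt1 if it lies close to the
    # straight-line extrapolation of bt1 over the plate spacing dz (micron);
    # tolerances here are illustrative, not the real acceptance values
    return (abs(bt2["x"] - (bt1["x"] + bt1["sx"] * dz)) < pos_tol and
            abs(bt2["y"] - (bt1["y"] + bt1["sy"] * dz)) < pos_tol and
            abs(bt2["sx"] - bt1["sx"]) < slope_tol and
            abs(bt2["sy"] - bt1["sy"]) < slope_tol)

def build_chains(plates):
    # plates: one list of base-tracks per consecutive film; returns gap-free
    # chains of adjacent segments (the 'triggers' that start the KF fit);
    # gaps are handled later, in the propagation step
    chains = [[bt] for bt in plates[0]]
    for plate in plates[1:]:
        chains = [chain + [bt] for chain in chains
                  for bt in plate if adjacent(chain[-1], bt)]
    return chains
```

The propagation step would then extend these seeds across up to 3 missing plates, accepting or rejecting segments on the KF probability.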
Fig. 5.10 (left plot) shows the angular difference between the base-tracks belonging to a given volume-track and the volume-track itself, as a function of the measured angle: the resulting base-track angular resolution ranges from 2.5 mrad for θ = 0.1 rad to 7.3 mrad for θ = 0.4 rad and depends on the angle according to the empirical relation

σ(θ) = σ(0)(1 + 4 · θ)   (5.4)

The base-track position resolution, shown in Fig. 5.10 (right plot), is given by the intrinsic resolution (σr = σ(θ) × d, where d is the distance between two consecutive plates) plus the sheet-to-sheet alignment accuracy (∼ 1 µm).
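Eq. 5.4 and the position term can be evaluated directly. In this sketch σ(0) and the plate spacing d are free parameters (the values used in the test are illustrative, not fitted), and the two position terms are combined by simple addition, following the wording of the text.

```python
def sigma_theta(theta, sigma0):
    # Eq. 5.4: empirical angular resolution, sigma(theta) = sigma(0)*(1 + 4*theta)
    return sigma0 * (1.0 + 4.0 * theta)

def sigma_position(theta, sigma0, d=1300.0, alignment=1.0):
    # intrinsic term sigma_r = sigma(theta) * d (d = plate spacing, micron),
    # plus the ~1 micron sheet-to-sheet alignment accuracy
    return sigma_theta(theta, sigma0) * d + alignment
```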
5.6 LNGS scanning station and ESS performances
Starting from 2004, a scanning station (see Fig. 5.11) equipped with 6 ESS was set up at the LNGS, with the task of scanning the European fraction of the Changeable Sheets during the OPERA runs.
In order to evaluate the scanning system efficiency a pion beam exposure of
double-refreshed (CS-like) films was performed at CERN PS-T7 in July 2006:
Figure 5.12: Base-track angular distribution of pion beam exposure.
Figure 5.13: Base-track reconstruction efficiency as a function of the base-track angle.
32 emulsion sheets, assembled in a lead-less brick, were exposed to a π− beam of average energy ∼ 7 GeV; the brick was tilted at 4 different angles in one projection (-0.3, -0.1, 0.2, 0.4 rad) in order to study the angular dependence of the system performance. Fig. 5.12 shows the angular distribution of reconstructed base-tracks.
In order to evaluate the base-track reconstruction efficiency, a 2 × 2 cm2 area was scanned on 5 emulsion sheets and, following the procedure illustrated in the previous section, the volume-track reconstruction was performed. For each emulsion sheet, the efficiency was defined as the number of passing-through tracks measured in that sheet divided by the total number of passing-through tracks.
The obtained efficiencies and their errors are shown in Fig. 5.13 as a function of the spatial base-track angle θ. The behaviour is driven by the number of clusters belonging to the track. The average efficiency is around 90% and corresponds to a micro-track finding efficiency of about 95%.
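The per-sheet efficiency estimate can be sketched as follows. The set-based representation of a volume-track (the set of sheets where it was measured) and the binomial error formula are assumptions of this sketch.

```python
import math

def basetrack_efficiency(volume_tracks, sheet):
    # per-sheet efficiency: fraction of passing-through volume-tracks with
    # a measured base-track in 'sheet'; each volume-track is given as the
    # set of sheet numbers where it was actually measured
    found = sum(1 for t in volume_tracks if sheet in t)
    eps = found / len(volume_tracks)
    err = math.sqrt(eps * (1.0 - eps) / len(volume_tracks))  # binomial error
    return eps, err
```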
Chapter 6
Search for neutrino events
In this chapter we review the main results obtained in the analysis of OPERA-like bricks exposed to the NuMI beam.
As already explained in section 3.5, the PEANUT exposure was designed mainly to test the chain of neutrino event reconstruction in OPERA. The exposure setup, as well as the analysis procedure of PEANUT events, follows the features and guidelines of the OPERA experiment.
In the following sections the analysis scheme and the obtained results are presented.
6.1 Analysis scheme
As discussed in section 3.3, during the OPERA runs the TT walls will furnish the trigger for the brick extraction. If the trigger is confirmed in the Changeable Sheet doublet, the brick is developed and analysed looking for the neutrino interaction. The CS scanning strategy was defined in several tests [77] performed at the LNGS Scanning Station: the current policy1 is to perform a so-called "general scan" of 5 × 5 cm2 around the TT prediction on both CS foils for CC events, while for NC events a general scan of the whole surface is foreseen.
In the PEANUT exposure test, the brick is not removed after a trigger from the Scintillator Fiber Trackers (see section 3.5.2), but left in the apparatus until a predetermined exposure to the beam is reached: charged-particle tracks, both passing through and created inside the brick (i.e. candidate CC neutrino interactions), are uniformly distributed over the whole surface of the emulsion. For this reason, to select the tracks to be followed upstream when looking for the vertex position, a general scan of the CS doublet surface is performed: the tracks reconstructed on
1 Further tests are in progress, analysing Changeable Sheets extracted during the OPERA run of October 2007.
Brick    Wall    ν exposure (days)    CR exposure (days)    PoT (E+17)    SFT predictions
BL056     3          21.3                  0.5                135.03           2502
BL045     2          20.9                  0.08               138.34           2254
Table 6.1: Summary of the main exposure info related to the two analysed bricks.
the doublet are compared with the SFT predictions and, as will be explained in the next sections, only those in good agreement both in position and angle are selected.
Once the sample of tracks belonging to candidate neutrino interactions has been determined, the PEANUT analysis scheme traces the OPERA one: an automated procedure, called scan-back, is performed in order to follow each track plate by plate inside the brick. When the track fades, a "stopping point" is defined and the total scan procedure is applied in order to validate the vertex localisation and to study its topology (see section 6.1.4).
6.1.1 SFT Predictions
For each brick exposed to the NuMI beam, the information about the exposure, the position of the brick inside the apparatus and the tracks hitting the brick is stored in the OPERA Data-Base. In table 6.1 the main information about the two bricks analysed in this thesis is summarised.
The SFT tracks used in the analysis are all those reconstructed by the SFT and classified as 3D tracks: they must have at least one hit belonging to the U or V plane (see section 3.5.2). In addition, since the "first-wall" is the first upstream wall with respect to all the track hits, a general cut is applied:

N_hits ≥ C_i^firstwall

where N_hits is the total number of track hits, C_i^firstwall = {7, 7, 5, 5} and i = 1, 2, 3, 4 is the wall number. Therefore, considering the above cut and with respect to a given brick, 3D tracks can be classified as (see Fig. 6.1 and Fig. 6.2):
• passing through tracks: if there are both upstream and downstream hits with
respect to the brick;
• created inside tracks: if there are no upstream hits with respect to the brick
(i.e. the brick belongs to the ”first-wall”);
• upstream of first-wall: if the brick does not belong to the ”first-wall” and
the track is extrapolated upstream up to the brick.
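The hit cut and the three-way classification above can be sketched as follows. The z-based geometry (beam direction taken as increasing z) and the boolean first-wall flag are simplifying assumptions of this sketch.

```python
C_FIRSTWALL = {1: 7, 2: 7, 3: 5, 4: 5}  # C_i^firstwall from the text

def passes_hit_cut(n_hits, first_wall):
    # general cut: N_hits >= C_i^firstwall, i = wall number of the first-wall
    return n_hits >= C_FIRSTWALL[first_wall]

def classify(hit_z, brick_zmin, brick_zmax, brick_in_first_wall):
    # classify a 3D track with respect to a given brick, using the z
    # coordinates of its hits (beam travels toward increasing z)
    upstream = any(z < brick_zmin for z in hit_z)
    downstream = any(z > brick_zmax for z in hit_z)
    if upstream and downstream:
        return "passing through"
    if not upstream and brick_in_first_wall:
        return "created inside"        # no hits upstream of the brick
    return "upstream of first-wall"    # track extrapolated upstream to the brick
```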
Figure 6.1: Event Display for tracks reconstructed by SFT planes. The Viewer shows
respectively a passing through (a) and a created inside (b) track. The light blue rectangle
shows the position of the brick BL045 inside the apparatus. Pink hits belong to the fitted
track, while blue hits do not belong to the reconstructed track.
Figure 6.2: Event Display for tracks reconstructed by SFT planes. Example of an upstream of first-wall track (c). The light blue rectangle shows the position of the brick
BL045 inside the apparatus. Pink hits belong to the fitted track, while blue hits do not
belong to the reconstructed track.
Brick number       BL056   BL045
Passing Through     1901    1215
Created              382     777
Upstream             219     262
Total               2502    2254
Table 6.2: Details of the SFT predictions and 3D track classification for the two analysed
bricks.
The 3D tracks that survive the cut are projected onto the downstream surface of the brick, allowing 1 cm beyond the nominal edge to account for possible misalignments between brick and SFT. Fig. 6.3 and 6.4 show positions and slopes for tracks crossing the brick BL045, while in table 6.2 the number of SFT predictions for both bricks is summarised.
In the following sections all the results presented refer to brick BL045.
Figure 6.3: Position distribution of 3D tracks hitting the brick BL045 on wall 2.
The tracks are projected on the downstream surface of the brick, i.e. on the first
sheet of the CS doublet. Note that the intercepted surface is 1 cm larger than
the dimension of the brick (red rectangle) to account for possible misalignments
between brick and SFT.
Figure 6.4: Slope distribution of 3D tracks hitting the brick BL045 on Wall 2.
Note that the peak is not centered at the origin due to the NuMI beam slope.
6.1.2 Doublet analysis
In order to select the sample of tracks in the CS doublet to be matched with the
SFT predictions, a general scan of about 80 cm2 on each CS sheet was performed.
The base-track search was done off-line with the FEDRA software, following the procedure illustrated in section 5.5.
Figure 6.5: Distribution of the χ variable versus the grain number of the base-tracks.
By applying the quality cut in equation 5.2 (χ < α × N + β, with α = 0.25 and β = −2.5) to select the base-track signal, one obtains the distribution shown in Fig. 6.5; in the allowed region two populations are clearly distinguishable: one with a "low" grain number (red line), the other with a "high" value of N (green line). This is the result of two different exposures: the base-tracks with a "low" number of grains belong to tracks hitting the brick during the neutrino exposure and the subsequent cosmic-ray exposure, while the base-tracks with a "high" number of grains belong to tracks recorded during the transportation from Japan to FermiLab. In fact, since the transportation was carried out by plane, the emulsions were exposed to highly ionizing cosmic rays. These base-tracks constitute an undesirable background that can be removed by exploiting the fact that the emulsions were piled, inside the brick, in inverse order with respect to the transportation order (see section 3.5.2 and Fig. 3.15).
Thus, by taking into account the relative position between emulsion sheets and by applying the alignment and tracking procedure illustrated in section 5.5.2, we reconstructed the tracks belonging to transportation cosmic rays. The base-tracks related to these tracks were flagged and, through a procedure called Virtual Erasing, are not considered during track reconstruction in assembly order.
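A minimal sketch of the Virtual Erasing step, assuming each base-track carries the id of the reconstructed transportation-order track it was attached to (the data layout is an assumption of this sketch):

```python
def virtual_erasing(basetracks, transport_track_ids):
    # flag base-tracks attached to tracks reconstructed in transportation
    # order, and return the sample used for tracking in assembly order
    for bt in basetracks:
        bt["erased"] = bt["track_id"] in transport_track_ids
    return [bt for bt in basetracks if not bt["erased"]]
```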
In order to ensure a better rejection of base-tracks due to transportation, an 80 cm2 general scan of the first 5 plates (CS doublet + 3 upstream plates) was performed. Fig. 6.6 shows the position and slope distributions of the reconstructed tracks: the track density is about 4.9 tracks/mm2 for tracks with

nseg ≥ 3
Figure 6.6: Position and slope distribution for recorded tracks during emulsion transportation from Japan to FermiLab.
where nseg is the number of base-tracks fitted to the track. The shape of the slope distribution in the Y projection is due to the fact that the emulsions were stored vertically during transportation.
Fig. 6.7 shows the position and slope distributions for tracks reconstructed in assembly order in the CS doublet after the Virtual Erasing: the track density is ∼0.28 tracks/mm2. The slope distributions show the position of the NuMI beam peak.
Fig. 6.8 shows the grain number distribution (N) and the tracking efficiency, both in assembly and transportation order, obtained with a general scan of less than 2 cm2 over the whole brick (57 plates). In the first case the average efficiency (∼ 77%) is well below the value of ∼ 90% obtained with reference emulsions exposed to the pion beam (see section 5.6): this effect can be explained by the fading due to the high temperature (∼ 20 °C) of the MINOS near hall, together with the low number of grains (∼ 24) of MIP tracks (both beam particles and cosmic rays at ground level). In the second case, the higher value of N, due to highly ionizing particles (mainly protons with energies of some hundreds of MeV), gives rise to a higher average tracking efficiency (∼ 93%).
Figure 6.7: Position and slope distribution for recorded tracks during neutrino and cosmic ray exposure (Assembly Order).
6.1.3 SFT-CS Matching
The matching between SFT predictions and tracks reconstructed in the CS doublet, consists simply in the alignment and tracking procedure between SFT planes
(that will be treated as an emulsion film placed in contact with the brick) and first
emulsion sheet of the doublet.
As explained in section 5.5.2 the aim of the alignment procedure is to determine the affine transformation between two plates; since the relative position
between SFT planes and brick could be affected by large displacements by the
nominal position, because of the manual insertion of the bricks in the Walls, a
roughly alignment was searched for before applying the FEDRA one. Fig. 6.9
(top plots) shows the result of this procedure: an offset of about -8300 µm and
-2300 µm, in X and Y projection respectively, was found.
By entering these offsets in the parameters b1 and b2 of the equation 5.3 and by
applying the FEDRA intercalibration procedure, a fine alignment was performed.
Once the roto-translation parameters of the affine transformation have been determined, the tracking procedure was done by looking for tracks within tolerances of
1500 µm in position and 0.021 for slopes.
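Eq. 5.3 with the coarse offsets entered in b1 and b2 can be sketched directly; before the fine FEDRA intercalibration the matrix A is taken as the identity, which is an assumption of this sketch.

```python
def apply_affine(x_stage, y_stage, a11, a12, a21, a22, b1, b2):
    # Eq. 5.3: (x_abs, y_abs) = A * (x_stage, y_stage) + b
    return (a11 * x_stage + a12 * y_stage + b1,
            a21 * x_stage + a22 * y_stage + b2)

# coarse alignment: identity matrix plus the offsets quoted in the text
COARSE = (1.0, 0.0, 0.0, 1.0, -8300.0, -2300.0)
```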
As shown in Fig. 6.9 (bottom plots) the slope residuals of matched tracks are
Figure 6.8: Grain number (N) and tracking efficiency for reconstructed tracks
in assembly (top) and transportation (bottom) order respectively. Top plots: the
mean value of grain number (N ∼ 24) is characteristic of MIP particles. The mean
tracking efficiency is ∼ 77%. Bottom plots: The high value of N is due to the
highly ionizing component of the cosmic radiation accumulated during the flight.
The average tracking efficiency is around 93%. Data refers to a general scan of
less than 2 cm2 on 57 plates.
not centered at zero: the slope offsets (∼ 0.004 and ∼ 0.002 in the X and Y projections respectively) are due to the tilted position of the brick with respect to the SFT plane. An offset of ∼4000 µm was also found in the Z direction.
Once these effects had been corrected, the tracking procedure was repeated: 537 matchings were found, with the residual distributions shown in Fig. 6.10; the matching resolutions are:

σ∆X = 523.5 µm, σ∆Y = 496.1 µm   (6.1)
σ∆SlopeX = 0.006, σ∆SlopeY = 0.006   (6.2)

The sample of tracks to be followed in the scan-back procedure was selected based on a 3σ cut in position and slope on the above residuals: 243 tracks survived the cut. The analysis and the results for brick BL045 will be presented in section 6.3.
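The 3σ selection can be sketched as follows, with the resolutions of Eq. 6.1 and 6.2 as defaults; the tuple layout of the residuals is an assumption of this sketch.

```python
def select_scanback(residuals, sigmas=(523.5, 496.1, 0.006, 0.006), n=3.0):
    # keep tracks whose (dx, dy, dslope_x, dslope_y) matching residuals
    # are all within n standard deviations (sigmas from Eq. 6.1 and 6.2)
    return [r for r in residuals
            if all(abs(v) < n * s for v, s in zip(r, sigmas))]
```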
Figure 6.9: Top plots: position offsets (dx ≈ −8300 µm, dy ≈ −2300 µm) between brick and SFT planes, obtained applying a coarse alignment (see text). Bottom plots: slope offsets due to the tilted position of the brick with respect to the SFT planes.
For brick BL056 a sample of 217 tracks was selected for the scan-back (see
table 6.3) after the matching with SFT predictions. The analysis will be presented
in section 6.2.
6.1.4 Scan Back and Total Scan
The coordinates of the tracks selected with the procedure illustrated in previous
sections, are inserted in the OPERA Data-Base: hereafter we refer to them as
scan-back ”predictions”.
As already mentioned, in the Sysal-framework (see section 5.4) the scan-back
procedure is completely automated and Data Base-driven. In order to take into
account the relative misalignment between films inside the brick, the first scan
back operation consists in the intercalibration process: some sample of the emulsion surface are scanned looking for a set of common tracks between consecutive
plates. A strong quality cut is applied in order to reject background and, once
selected the signal (i.e. tracks belonging to cosmic rays), only base-tracks within
given tolerances are selected to calculate the affine transformation parameters.
After the intercalibration, a ”prediction scan” is performed: the objective of
Figure 6.10: Position (top) and slope (bottom) residuals between matched tracks for BL045. The resolutions achieved are: σ∆X = 523.5 µm, σ∆Y = 496.1 µm, σ∆SlopeX = 0.006, σ∆SlopeY = 0.006.
Brick number                     BL056   BL045
ν exposure (days)                21.28   20.9
CR exposure (days)               0.5     0.08
Transportation Order (tr/mm2)    3.5     4.9
Assembly Order (tr/mm2)          0.97    0.28
SFT sample                       2502    1992
SFT matches                      217     243
σ∆SlopeX                         0.007   0.006
σ∆SlopeY                         0.007   0.006
σ∆X (µm)                         473     523
σ∆Y (µm)                         519     496
Table 6.3: Summary of the characteristics of the two analysed bricks.
the microscope automatically moves to the predicted position and scans the view (∼ 400 × 300 µm2) around the prediction. The base-track reconstruction is done on-line and, in order to reject background, the quality cut

s < N · 0.13 − 1.3   (6.3)

is applied, where s is defined as

s = √[(∆S_t⊥/S_⊥^Tol)² + (∆S_t∥/S_∥^Tol)²] + √[(∆S_b⊥/S_⊥^Tol)² + (∆S_b∥/S_∥^Tol)²]   (6.4)

and ∆S_⊥(∥) is the transverse (longitudinal) slope difference between top/bottom micro-track and base-track, S_⊥(∥)^Tol being the corresponding tolerance (see Eq. 6.8 and 6.10). The selected base-tracks are compared with the prediction and a base-track is considered a "candidate" if the differences between predicted and measured track coordinates are within given tolerances:

∆P⊥ < P_⊥^Tol, ∆P∥ < P_∥^Tol   (6.5)

and

∆S⊥ < S_⊥^Tol, ∆S∥ < S_∥^Tol   (6.6)

for positions and slopes respectively. The transverse tolerances

P_⊥^Tol = n σ_pos   (6.7)
S_⊥^Tol = n σ_slope   (6.8)

where σ_pos(slope) is the position (slope) resolution, are constant, while the corresponding longitudinal components are defined taking into account the resolution degradation with the measured angle:

P_∥^Tol = P_⊥^Tol + γ1 · tan θ   (6.9)
S_∥^Tol = S_⊥^Tol + γ2 · tan θ   (6.10)

where the parameters γ1, γ2, as well as the values of the tolerances, are fixed before starting the scan-back operation.
If more than one candidate base-track is found, the best one is chosen according to the following selection function:

f = (∆S⊥/σ_slope)² + (∆S∥/(σ_slope + γ2 · tan θ))²   (6.11)

by requiring fmin < f < fmax, where fmin, fmax are parameters.
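The candidate choice driven by Eq. 6.11 can be sketched as follows; the parameter values used in the test are illustrative, not the ones actually used in the scan-back.

```python
import math

def selection_f(ds_perp, ds_par, sigma_slope, gamma2, theta):
    # Eq. 6.11: score of a candidate base-track against the prediction
    return ((ds_perp / sigma_slope) ** 2 +
            (ds_par / (sigma_slope + gamma2 * math.tan(theta))) ** 2)

def best_candidate(candidates, sigma_slope, gamma2, theta, f_min, f_max):
    # candidates: list of (ds_perp, ds_par) slope residuals; pick the one
    # with the smallest f among those satisfying f_min < f < f_max
    scored = [(selection_f(dp, dl, sigma_slope, gamma2, theta), (dp, dl))
              for dp, dl in candidates]
    valid = [sc for sc in scored if f_min < sc[0] < f_max]
    return min(valid)[1] if valid else None
```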
Once selected, the candidate base-track is projected onto the upstream plate (by using the affine transformation calculated during the intercalibration process) and the procedure illustrated above is repeated. If the track is not found, the coordinates of the last found candidate are used to project the track onto the following upstream plate: in order to take into account the inefficiency of the system, a maximum number of missing plates is allowed while searching for the track. If the track is not found in a predefined number of consecutive plates, a stopping point is declared. The scan-back procedure stops and usually a visual inspection of the emulsion plate is performed in order to confirm the disappearance of the searched track.
Once the position of the candidate vertex (i.e. the point where the scan-back track fades) has been determined, the total scan procedure is applied in order to confirm the interaction and study its topology. A general scan of 5 × 5 mm2 is performed in a predefined number of plates: typically the stopping plate (the last plate where the scan-back track was measured) plus 4 downstream and 3 upstream plates. All the tracks contained in the scanned volume are reconstructed off-line.
The vertex finding is performed off-line using the so-called pair-based algorithm implemented in the FEDRA software [76]. The preliminary triggering operation for the vertex finding is the search for track-to-track couples using a minimal-distance criterion. Some topological cuts are used to reduce the combinatorics. Starting from couples, the n-track vertices are constructed using the Kalman Filtering technique. The final vertex selection criterion is based on the χ2-probability of the vertex defined by the Kalman Filter.
A different approach to vertex reconstruction (called "global vertexing algorithm") also exists (see [78]).
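The minimal-distance pairing trigger can be sketched as follows for straight-line tracks; the actual FEDRA algorithm adds topological cuts and the Kalman vertex fit on top of this.

```python
import math

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def track_distance(p1, d1, p2, d2):
    # minimal distance between two straight tracks, each given by a point p
    # and a direction d: the 'minimal distance criterion' of the pair search
    n = _cross(d1, d2)
    dp = tuple(b - a for a, b in zip(p1, p2))
    n2 = _dot(n, n)
    if n2 < 1e-24:  # parallel tracks: distance from p2 to the line (p1, d1)
        t = _dot(dp, d1) / _dot(d1, d1)
        foot = tuple(p + t * u for p, u in zip(p1, d1))
        return math.sqrt(sum((b - f) ** 2 for b, f in zip(p2, foot)))
    return abs(_dot(dp, n)) / math.sqrt(n2)

def couples(tracks, max_dist):
    # pair-based trigger: all track couples closer than max_dist, the seeds
    # from which n-track vertices are then built
    return [(i, j) for i in range(len(tracks))
            for j in range(i + 1, len(tracks))
            if track_distance(*tracks[i], *tracks[j]) < max_dist]
```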
In the following sections the results of the scan-back and total scan procedures for vertex reconstruction will be discussed: in particular, in section 6.2 the standard procedure will be presented, while in section 6.3 a different approach will be illustrated.
6.2 Analysis of brick BL056
For the brick BL056 the number of SFT predictions is 2502: the sample of 217 tracks followed in the scan-back was obtained based on a 3σ cut in position and slope on the matching residuals (see table 6.3). In the sample of scan-back tracks, only 12 are classified as created inside (i.e. candidate neutrino interactions).
The intercalibration was performed by sampling the emulsion surface with three zones, 1 cm2 each, positioned at three corners of the emulsion plate, as shown in Fig. 6.14 a). In order to reject background base-tracks, a quality cut (s < N · 0.13 − 1.7) stronger than the one used in the prediction scan (cf. eq. 6.3) was applied. In addition, in order to achieve a better alignment precision, only tracks with
Figure 6.11: Position (top) and slope (bottom) residuals of intercalibration tracks for
brick BL056.
angle θ > 50 mrad were used. Fig. 6.11 shows the results of the intercalibration procedure for the 57 emulsion plates of the brick; we obtain:

σ∆X = 6.4 µm, σ∆Y = 6.1 µm   (6.12)
σ∆SlopeX = 0.007, σ∆SlopeY = 0.006   (6.13)

After the plate-by-plate alignment, the sample of 217 tracks was followed from plate 1 to 57 in the upstream direction. Candidate base-tracks were accepted within the tolerances given in equations 6.7, 6.8 and 6.9, 6.10 with the following parameter values:

P_⊥^Tol = 80 µm   (6.14)
P_∥^Tol = P_⊥^Tol + 6 · tan θ µm   (6.15)
S_⊥^Tol = 0.02   (6.16)
S_∥^Tol = S_⊥^Tol + 0.05 · tan θ   (6.17)
Although the precision with which we expect to find scan-back tracks is of the order of the intercalibration residuals, the position and slope tolerances are one order of magnitude larger in order to take into account the Multiple Coulomb
Figure 6.12: Position (top) and slope (bottom) residuals of scan-back tracks for brick
BL056.
Scattering in lead plates; at the same time these values should be low enough to
discard background tracks.
Fig. 6.12 shows the resolutions obtained from the differences between predicted and measured tracks during scan-back:

σ∆X = 5.4 µm, σ∆Y = 7.7 µm   (6.18)
σ∆SlopeX = 0.002, σ∆SlopeY = 0.004   (6.19)
The efficiency achieved, evaluated on the basis of passing-through tracks, is around 76%, as shown in Fig. 6.13, and is compatible with the measurement discussed in section 6.1.2 (see Fig. 6.8).
A stopping point is declared if the track is not measured in 6 consecutive plates during scan-back: the resulting number of stopping points is 20 (i.e. 20 scan-back tracks stop in the analysed volume), uniformly distributed in the brick. In order to save scanning time, before starting the total scan procedure to confirm the neutrino interaction, a visual inspection of these tracks was performed; this procedure makes it possible to recover the inefficiencies of the system, or the cases in which, due to Coulomb scattering, the candidate track is not related to the scan-back one because of a large displacement (in position or angle) between the measured track and the prediction.
Figure 6.13: Estimate of the scan-back efficiency. The mean value is ∼ 76%.
The visual inspection was performed in the stopping plate and in the 3 subsequent upstream films: among the 20 stopping points found during scan-back, 17 were recovered and classified as scattering. For the remaining 3 tracks the total scan procedure was applied: the results will be presented in section 6.4.
6.3 Analysis of brick BL045
The scan-back procedure illustrated in the previous section presents two weak points. As explained in section 6.1.2, due to the characteristics of the exposure, the mean efficiency achieved with PEANUT bricks (for tracks reconstructed in assembly order) is well below the average value of ∼ 90% obtained with reference emulsions (see section 5.6). On the other hand, the percentage of "fake stopping points" (17/20 ∼ 85%) resulting from the visual inspection of the scan-back results is high: the purity of the sample of tracks to be analysed with the total scan procedure is important both to save scanning time and to ensure a good event reconstruction efficiency.
The two requirements, good efficiency and good purity, are inevitably related to each other. The probability to declare a fake stopping point due to the base-track reconstruction inefficiency can be estimated as

P_fakestop ∼ (1 − ε)^n   (6.20)

where ε is the efficiency and n is the number of allowed missing plates before declaring a stopping point; a large value of n (as in the scan-back of brick BL056) thus gives a small probability. Ranging from n = 3 to n = 6, the value of P_fakestop decreases from ∼ 1.4% to ∼ 0.02%. If we
Figure 6.14: Sketch of the intercalibration zone position (yellow rectangles) on the emulsion surface. Fig a) refers to the scan back of brick BL056, while Fig. b) refers to the
analysis of brick BL045.
set the number of allowed missing plates to n = 3, which is safer in case of track scattering, an increase of 20% in the scan-back efficiency reduces the probability from ∼ 1.4% to ∼ 0.006%.
A fake stop can also be due to Coulomb scattering (as in the case of brick BL056), so the choice of the tolerance parameters used in Eq. 6.5 and 6.6 is a critical point too. The probability to choose a background base-track instead of the true one can be roughly estimated as

P_bg ∼ (P_Tol / D_view)² (S_Tol / R_θ)² d_bg    (6.21)

where P_Tol and S_Tol are the tolerance values, D_view ∼ 300 µm and R_θ ∼ 0.5 are the position and slope dimensions of the measured view, and d_bg is the background density (d_bg ∼ 12 base-tracks/view); the value of P_bg increases with the tolerances. The probability ranges from ∼ 0.14%, with the tolerances listed in Eq. 6.14 and 6.16, to ∼ 0.03% using P_Tol = 40 µm and S_Tol = 0.018.
This effect is especially relevant when, due to inefficiency, the true base-track is not measured in one or more of the previous downstream plates: the prediction in this case is made using the last measured segment and, especially in case of scattering, becomes less accurate as the number of (allowed) missing plates increases.
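A numerical check of Eq. 6.21 with the view size, angular acceptance and background density quoted above (the function name is illustrative):

```python
def p_background(p_tol, s_tol, d_bg=12.0, d_view=300.0, r_theta=0.5):
    """Eq. 6.21: probability of picking up a background base-track inside
    the position/slope tolerance window of a single measured view.
    p_tol in um, s_tol dimensionless; d_view and r_theta are the view size
    and angular acceptance, d_bg the base-track background per view."""
    return (p_tol / d_view) ** 2 * (s_tol / r_theta) ** 2 * d_bg

# with P_Tol = 40 um and S_Tol = 0.018 the pickup probability is ~0.03%
print(round(p_background(40.0, 0.018) * 100, 2))
```

The quadratic dependence on both tolerances is what makes the hole-dependent tolerance scheme of section 6.3 worthwhile.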
Starting from these simple considerations, a new scan-back procedure has been conceived to improve the scanning efficiency and the purity of the stopping point sample. The simplest way to improve the efficiency is to accept as a candidate, when the base-track is not found, a single micro-track satisfying specific quality and geometrical cuts. In fact the mean value of the probability to reconstruct a micro-track, ⟨P_µtr⟩, can be estimated as the square
Figure 6.15: Position (top) and slope (bottom) residuals of intercalibration tracks for
brick BL045.
root of the mean value of the probability to reconstruct a base-track:

⟨P_µtr⟩ = √⟨P_bstr⟩    (6.22)

In our case, ⟨P_bstr⟩ ∼ 76% gives ⟨P_µtr⟩ ∼ 87%. The consequent increase of the scan-back efficiency can be evaluated with a binomial function and is around 34%². This method, together with other improvements, was applied to the analysis of neutrino interactions for brick BL045. In the following, a detailed description of the adopted procedure together with the results obtained will be presented.
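The square-root relation of Eq. 6.22 follows from a base-track requiring both of its micro-tracks; a quick check with the quoted ∼ 76% base-track probability:

```python
import math

def p_microtrack(p_basetrack):
    """Eq. 6.22: a base-track needs both of its micro-tracks, so if the two
    emulsion layers are independent and equally efficient, each micro-track
    is found with probability sqrt(P_basetrack)."""
    return math.sqrt(p_basetrack)

print(round(p_microtrack(0.76), 2))  # ~0.87, as quoted in the text
```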
For brick BL045, as already discussed, the sample of SFT predictions used to perform the matching with the doublet contains 2254 tracks. Among them, 243 tracks match within 3σ with the doublet, with the resolutions listed in table 6.3. To save scanning time we decided to follow in the scan-back only the subsample of 76 out of 243 tracks classified as "created inside the brick" by the SFT.
Before starting with the prediction scan, the intercalibration procedure was
performed. We sampled the emulsion surface with a single central zone of 1.7×1.7
² This value is overestimated because we did not take into account the efficiency of the quality cuts needed to select the micro-track signal.
Figure 6.16: Position (top) and slope (bottom) residuals of scan-back tracks found during
first and second scanning (base-track search only) for brick BL045.
cm², as shown in Fig. 6.14 b). This choice is less time consuming than the scanning of the three zones placed at the corners of the emulsion. The residuals
achieved with the intercalibration tracks are shown in Fig. 6.15; we obtain the
resolutions

σ∆X = 6.0 µm   σ∆Y = 6.7 µm    (6.23)
σ∆SlopeX = 0.005   σ∆SlopeY = 0.006    (6.24)
which are compatible with the values obtained in the analysis of brick BL056 using the three-zone procedure (see Eq. 6.12 and 6.13).
The intercalibration tracks are also used to evaluate the shrinkage factor discussed in section 4.2.2. The shrinkage cannot be corrected during the prediction scan due to the low track density of the view scanned around the prediction; in the intercalibration area the number of tracks is high enough not only to calculate the affine transformation for the plate-to-plate alignment, but also to evaluate the shrinkage factor. For this last task, since the reduction of the emulsion thickness mostly affects non-perpendicular tracks, only tracks with angle θ > 50 mrad were used.
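As an illustration of why only inclined tracks constrain the shrinkage, a sketch of the usual correction (the function and the thickness values are illustrative, not the actual reconstruction code):

```python
def corrected_slope(measured_slope, measured_thickness, nominal_thickness):
    """Development reduces the emulsion thickness, so a grain pattern spanning
    the layer appears steeper by nominal/measured; scaling z back restores the
    slope. A perpendicular track (slope ~ 0) is left unchanged, which is why
    only tracks with theta > 50 mrad are useful to measure the factor."""
    return measured_slope * measured_thickness / nominal_thickness

# illustrative numbers: a 50 um layer shrunk to 40 um during development
print(corrected_slope(0.5, 40.0, 50.0))  # 0.4
print(corrected_slope(0.0, 40.0, 50.0))  # perpendicular track: unaffected
```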
Once the parameters of the affine transformation were determined, the prediction scan starts automatically scanning the view around the predicted track
Figure 6.17: Position (top) and slope (bottom) residuals of scan-back tracks found during
first and second scanning after a missing plate (base-track search only) for brick BL045.
position. All the micro-tracks measured in the scanned area are reconstructed and corrected with the shrinkage factor; the base-track reconstruction is performed on-line requiring the best micro-track combination, and the quality cut

χ < 0.25 · N − 2.5    (6.25)

(see Eq. 5.2) is applied in order to reject background base-tracks.
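The grain-dependent cut of Eq. 6.25 can be expressed directly; for example, a base-track with N = 24 grains is accepted up to χ = 3.5:

```python
def passes_quality_cut(chi, n_grains):
    """Eq. 6.25: background rejection cut; the allowed chi grows with the
    number of grains N, so well-measured tracks tolerate a larger chi."""
    return chi < 0.25 * n_grains - 2.5

print(passes_quality_cut(1.26, 24))  # True: typical "true" base-track
print(passes_quality_cut(1.26, 14))  # False: too few grains for this chi
```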
The candidate base-tracks are selected by evaluating the difference between measured and predicted track coordinates (P_{x,y}, S_{x,y}) within given tolerances

∆P_{x,y} < P_Tol^i    (6.26)
∆S_{x,y} < S_Tol^i(θ = 0)    (6.27)

where

P_Tol^i = σ_{x,y} · b_i    (6.28)
S_Tol^i(θ = 0) = σ_{S_x,S_y} · c_i    (6.29)
S_Tol^i(θ) = S_Tol^i(θ = 0) · (1 + d_i tan θ)    (6.30)

and i = 0, 1, ..., n indicates the number of holes, i.e. the number of consecutive downstream plates in which the track was not measured. This procedure allows
to apply tighter tolerances if the track was found in the previous plate (i = 0), while in case of missing consecutive plates (i = 1, 2, 3) larger tolerances are used in order to take into account the possible Coulomb scattering of the particle in the lead plates. The position and slope resolutions used in Eq. 6.28 and 6.29 are:

σ_{x,y} = 10 µm    (6.31)
σ_{S_x,S_y} = 0.006    (6.32)

while the values of the factors are b_i = {4, 6, 7, 8}, c_i = {3, 3, 3.5, 3.5}, d_i = {4, 5, 5, 5}. Thus the corresponding tolerances are:

P_Tol^i = {40, 60, 70, 80} µm    (6.33)
S_Tol^i(θ = 0) = {0.018, 0.018, 0.021, 0.021}    (6.34)
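The hole-dependent tolerances of Eqs. 6.28–6.30 can be tabulated directly from the resolutions and scale factors quoted above; a minimal sketch (names are illustrative):

```python
import math

SIGMA_POS = 10.0      # um, Eq. 6.31
SIGMA_SLOPE = 0.006   # Eq. 6.32
B = [4, 6, 7, 8]      # position scale factors b_i
C = [3, 3, 3.5, 3.5]  # slope scale factors c_i
D = [4, 5, 5, 5]      # angular opening factors d_i

def tolerances(i, theta=0.0):
    """Tolerances for a track with i consecutive holes (Eqs. 6.28-6.30)."""
    p_tol = SIGMA_POS * B[i]
    s_tol = SIGMA_SLOPE * C[i] * (1.0 + D[i] * math.tan(theta))
    return p_tol, s_tol

print([tolerances(i)[0] for i in range(4)])            # 40, 60, 70, 80 um (Eq. 6.33)
print([round(tolerances(i)[1], 3) for i in range(4)])  # Eq. 6.34 values
```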
If more than one candidate base-track is found, the best one is chosen by minimizing the following χ²_Cov function:

χ²_Cov = (T − X_p)ᵀ C_p⁻¹ (T − X_p) + (T − X_f)ᵀ C_f⁻¹ (T − X_f)    (6.35)

where T, X_p, X_f are the linked, predicted and found track coordinate matrices respectively, while C_p and C_f are the covariance matrices of the predicted and found segments.
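Eq. 6.35 is a standard covariance-weighted distance; a minimal NumPy sketch (the two-component state vector is an illustrative layout, not the actual one used by the scanning software):

```python
import numpy as np

def chi2_cov(T, Xp, Cp, Xf, Cf):
    """Eq. 6.35: distance of the linked segment T from the predicted (Xp, Cp)
    and found (Xf, Cf) segments, each weighted by its covariance matrix."""
    dp = np.asarray(T, float) - np.asarray(Xp, float)
    df = np.asarray(T, float) - np.asarray(Xf, float)
    return float(dp @ np.linalg.solve(Cp, dp) + df @ np.linalg.solve(Cf, df))

# toy 2D state (x, slope_x) with unit covariances:
I = np.eye(2)
print(chi2_cov([0.0, 0.0], [1.0, 0.0], I, [0.0, 2.0], I))  # 1 + 4 = 5.0
```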
The scanning efficiency achieved with the method illustrated above is 78.3% (see Fig. 6.24 a) and, as expected, is compatible with the result obtained with the standard method (see section 6.2 and Fig. 6.13). If the searched base-track is not found, to account for possible errors in the scanning (e.g. the emulsion surface not being in focus), the view is scanned a second time. As shown in Fig. 6.24 b), the re-scanning procedure increases the scanning efficiency up to 84.7%. Fig. 6.16 shows the difference between predicted and measured base-tracks in the first or second scanning; the obtained resolutions are:
σ∆X = 7.0 µm   σ∆Y = 9.0 µm    (6.36)
σ∆SlopeX = 0.003   σ∆SlopeY = 0.005    (6.37)
The residuals obtained in the same conditions for tracks found after a missing plate are, as expected, larger (see Fig. 6.17).
If the track is found neither in the first nor in the second scanning, the micro-track search starts. The algorithm of this analysis can be summarised in three steps:
1. among all the reconstructed micro-tracks in the scanned view, only those satisfying an angular cut (∆S < 0.15) are retained. Then a linking between
Figure 6.18: Position and slope residuals for top (left column) and bottom (right column) single-micro-tracks. The position resolutions are of the same order as the base-track resolutions (see Fig. 6.16), while the slope residuals are, as expected, one order of magnitude larger.
each micro-track and the predicted base-track is performed and the quality cut χ²_Cov < 1.6 is applied, where the χ²_Cov function is defined in Eq. 6.35 and the resolutions used are more stringent for the prediction (σ_Pos^p = 1 µm, σ_Slope^p = 0.007) than for the micro-track (σ_Pos^µtr = 10 µm, σ_Slope^µtr = 0.015);
2. with all the surviving micro-tracks a new top-bottom linking is performed, considering all possible combinations: a base-track-like object, called a double-micro-track, is created. The candidate is the double-micro-track with the best χ²_Cov with respect to the prediction;
3. if no double-micro-track is in good agreement with the prediction, re-starting from the sample selected at point 1) we search for a single-micro-track, requiring a minimum number of grains (N > 9) and the best χ²_Cov value.
We observed that, on the sample of found tracks, in 84% of cases the candidate track is measured with the standard base-track search method, while in the remaining 16% a single or double-micro-track is found. In this last subsample, in 9% of cases it is possible to recover a base-track, i.e. a double-micro-track is found, while in the remaining 91% a candidate track is recovered by selecting a single-micro-track. This procedure yields a global scanning efficiency of 96.2%, as shown in Fig. 6.24 c).
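The three steps above can be sketched as a selection cascade; the record layout and the toy chi-square below are illustrative stand-ins for the scanning software's structures, not its actual API:

```python
def recover_candidate(microtracks, prediction, chi2_cov,
                      angular_cut=0.15, chi2_cut=1.6, min_grains=10):
    """Hedged sketch of the micro-track search (section 6.3, steps 1-3)."""
    # step 1: angular cut and chi2 link of each micro-track to the prediction
    sel = [m for m in microtracks
           if abs(m["slope"] - prediction["slope"]) < angular_cut
           and chi2_cov(m, prediction) < chi2_cut]
    # step 2: all top-bottom pairings -> double-micro-tracks, keep the best
    doubles = []
    for t in (m for m in sel if m["side"] == "top"):
        for b in (m for m in sel if m["side"] == "bottom"):
            pair = {"slope": 0.5 * (t["slope"] + b["slope"])}
            doubles.append((chi2_cov(pair, prediction), (t, b)))
    if doubles:
        return "double", min(doubles, key=lambda x: x[0])[1]
    # step 3: best single micro-track with enough grains (N > 9)
    singles = [(chi2_cov(m, prediction), m)
               for m in sel if m["grains"] >= min_grains]
    if singles:
        return "single", min(singles, key=lambda x: x[0])[1]
    return "not found", None
```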
Fig. 6.18 shows the position and slope residuals between the predictions and the measured single-micro-tracks, in both the X and Y projections; the left column refers to micro-tracks selected on the top surface of the emulsion, while the right column reports the residuals of bottom micro-tracks. The residuals obtained are:
σt∆X = 7.0 µm   σt∆Y = 9.5 µm    (6.38)
σt∆SlopeX = 0.015   σt∆SlopeY = 0.012    (6.39)

for top micro-tracks, while for bottom micro-tracks we have

σb∆X = 8.1 µm   σb∆Y = 9.0 µm    (6.40)
σb∆SlopeX = 0.016   σb∆SlopeY = 0.011    (6.41)
As expected (see section 5.5.1), the position resolution values are compatible with the base-track resolutions (see Eq. 6.36), while the micro-track slope resolutions are one order of magnitude larger than those of the base-tracks (see Eq. 6.37). For this reason, if the candidate track is found with the single-micro-track method, the prediction for the upstream plate is made by projecting the coordinates of the found micro-track with the slopes of the last measured base-track. It is also evident that the position residuals of bottom tracks are slightly worse than those of the
Figure 6.19: χ (left plot) and grain number (right plot) distributions for found single-micro-tracks. The red dotted line refers to top single-micro-tracks, the blue one to the bottom layer.
top micro-tracks. This effect can be explained by the quality plots shown in Fig. 6.19: the mean value of the χ distribution is 0.74 for top micro-tracks (red dotted line), while for bottom layers (blue line) it is 0.94. This is due to the different optical conditions of the two emulsion layers. On the other hand, the non-zero mean value of the slope distribution (Y projection) of bottom single-micro-tracks can be explained by the larger distortions that affect bottom emulsion layers with respect to the top ones (see Fig. 6.20). The average grain number of the measured single-micro-tracks is around 12, the same for top and bottom.
For double-micro-tracks the mean value of the χ distribution is 0.5 and the mean number of grains is less than 18 (see Fig. 6.21), while for "true" base-tracks (i.e. base-tracks measured in the first or second scanning) the mean χ value is 1.26 and the mean number of grains is ∼ 24.4 (see Fig. 6.22). As shown in Fig. 6.23, in the χ-N plane the double-micro-tracks (red open circles) lie in the signal region, but they are discarded during top-bottom linking because of the preliminary cuts applied to select the best micro-track combination. The micro-track search method illustrated above allows to recover the cases in which one or both micro-tracks belonging to the candidate track are affected by large distortions or fading effects.
If the scan-back track is found neither with the base-track search nor with
Figure 6.20: Distortion maps for top (black) and bottom (red) emulsion layers. The
reference arrow is 100 mrad. Distortions are larger for bottom layers with respect to the
top.
the (single or double) micro-track method, the last measured segment is used to project the track up to a maximum of 3 upstream plates: the visual inspection is performed during the scan-back process and, if the track is manually recovered, it is reinserted and followed back again. This procedure, together with the high efficiency achieved, allowed us to obtain, as will be presented in the next section, a very pure sample of 14 stopping points.
Figure 6.21: χ (left plot) and grain number (right plot) distributions for found double-micro-tracks.
Figure 6.22: χ (left plot) and grain number (right plot) distributions for found base-tracks. The red line refers to the first scanning, the blue one to the second. The mean grain number of base-tracks measured in the second scanning (∼ 22.02) is lower than that of the first scanning (∼ 24.59).
Figure 6.23: Distribution of the χ variable versus the grain number of the "true" base-tracks (i.e. base-tracks measured in the first or second scanning). Red open circles refer to found double-micro-tracks.
Figure 6.24: Scan-back efficiency for brick BL045: Fig. a) and b) show the efficiency of the first (ε = 78.3%) and second (ε = 84.7%) scanning respectively. Fig. c) shows the efficiency achieved with the use of the micro-track search (ε = 96.2%).
Figure 6.25: Display of all the reconstructed tracks in a total scan volume. Open circles
indicate track edges (red=start, black=end), while segments of different colors indicate
the measured base-track position (plate). Dotted black lines indicate the fitted track.
6.4 Vertex reconstruction
The expected number of neutrino interactions in a PEANUT brick can be roughly estimated from the number of beam spills (triggers) during the exposure: we expect 1 neutrino interaction every 10⁴ spills. Knowing the trigger number and taking into account the scanned area and the CS-SFT matching efficiency, we can evaluate the expected event number as

I_Th = triggers × (A / A_tot) × ε_matching    (6.42)

where A = 80 cm², A_tot = 120 cm² and

ε_matching = ε_doublet × ε_SFT = 0.76 × 0.76 × 0.5 ∼ 0.29    (6.43)

The expected number of neutrino interactions is ∼ 13.3 ± 3.6 and ∼ 12.7 ± 3.6 for bricks BL056 and BL045 respectively.
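Eqs. 6.42–6.43 combine straightforwardly; in this sketch the quoted rate of 1 interaction per 10⁴ spills is folded in explicitly, and the spill count is a hypothetical input for illustration, not the actual PEANUT trigger number:

```python
def expected_events(n_spills, area_scanned=80.0, area_total=120.0,
                    rate_per_spill=1e-4, eps_matching=0.76 * 0.76 * 0.5):
    """Eq. 6.42 with the ~1 interaction per 1e4 spills rate folded in;
    eps_matching = eps_doublet * eps_SFT ~ 0.29 (Eq. 6.43)."""
    return n_spills * rate_per_spill * (area_scanned / area_total) * eps_matching

# hypothetical spill count, for illustration only
print(round(expected_events(700_000), 1))
```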
As already explained in section 6.1.4, once the sample of stopping points was determined with the scan-back process, the total scan procedure is applied in order to confirm the vertex and study its topology. As shown in Fig. 6.25, all the tracks contained in the scanned volume of 5 × 5 mm² over 8 plates (the stopping plate
Figure 6.26: Display of all the reconstructed vertexes in a total scan volume. The black
star indicates the vertex position, while the green line refers to the track followed in the
scan-back.
Figure 6.27: Display of the reconstructed 3-prong vertex. The green line indicates the track followed in the scan-back.
plus 4 downstream and 3 upstream plates) are reconstructed off-line using the procedure illustrated in section 5.5. The vertexing algorithm implemented within FEDRA builds all the possible combinations of 2-track vertexes satisfying some topological cuts. The 2-prong vertexes are then merged, looking for n-prong vertexes within the following parameters³:
• dz < 4000 µm
• IP < 50 µm
where dz is the longitudinal distance between the track end and the fitted vertex position and IP is the impact parameter, defined as the minimal three-dimensional distance between a track belonging to the vertex and the vertex position. As shown in Fig. 6.26, the reconstructed vertexes can have different topologies: only neutral vertexes (i.e. vertexes with no upstream tracks attached) are accepted. Finally, the scan-back track (see the green line in Fig. 6.26 and 6.27) is searched for among the tracks fitting to the vertex.
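The IP cut can be made concrete as a point-to-line distance; a NumPy sketch with coordinates in µm (the function names are illustrative, not the FEDRA API):

```python
import numpy as np

def impact_parameter(vertex, point, direction):
    """Minimal 3D distance between the fitted vertex and the straight line
    through `point` along `direction` (a measured track segment)."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    r = np.asarray(vertex, float) - np.asarray(point, float)
    return float(np.linalg.norm(r - np.dot(r, d) * d))

def attachable(vertex, track_point, direction, dz, ip_tol=50.0, dz_tol=4000.0):
    # the two topological cuts quoted in the text (all distances in um)
    return abs(dz) < dz_tol and impact_parameter(vertex, track_point, direction) < ip_tol

# a track along z passing 10 um away from the vertex in x:
print(impact_parameter([0, 0, 0], [10, 0, 500], [0, 0, 1]))  # 10.0
```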
The analysis of the 14 stopping points arising from the scan-back of brick BL045 (section 6.3) yields 10 reconstructed vertexes with the following multiplicities:
• 1 5-prong vertex (see Fig. 6.32)
• 1 3-prong vertex (see Fig. 6.27)
• 3 2-prong vertexes
• 5 1-prong vertexes
while in the remaining 4 volumes a scattering in the lead layer upstream of the stopping plate was found. All the total scan results were confirmed by visual inspection. The number of measured neutrino interactions is compatible, within the errors, with the expected value.
As seen in section 6.2, the scan-back of brick BL056 yields 20 stopping points; 17 out of 20 have been classified by visual inspection as scattering tracks. For the remaining three, the total scan and the vertex reconstruction procedure were applied: only one 1-prong vertex was reconstructed in the analysed volumes, while 2 out of 3 stopping points were classified as scattering, i.e. passing-through tracks. Such a low number of neutrino interactions can be explained by studying the distribution of the slope residuals, in both the x and y projections, for matched tracks between the CS doublet and the subsample of SFT tracks classified as "created inside" (see Fig. 6.28): in this case the resolution values are one
³ For the study of these values we refer to [80].
Figure 6.28: Slope residuals of the matching between the CS doublet and the subsample of 3D SFT tracks classified as "created inside" for BL045. The resolutions achieved (σ∆SlopeX = 0.068, σ∆SlopeY = 0.060) are one order of magnitude larger than in the case of the matching performed with the whole sample ("passing through" + "created inside" + "upstream") of SFT 3D tracks. This effect was observed in several bricks belonging to Wall 3 of the PEANUT apparatus.
order of magnitude larger (σ∆SlopeX = 0.068, σ∆SlopeY = 0.060) than in the case of the matching performed with the whole sample ("passing through" + "created inside" + "upstream") of SFT 3D tracks. This effect was observed in several bricks belonging to Wall 3 and is probably due to the larger fit uncertainty (because of the reduced number of SFT hits used in the fit) of tracks stopping on the last walls of the apparatus, with respect to longer tracks. It is then clear that the track sample selected for the scan-back of brick BL056, composed of 205 "passing through" tracks and only 12 "created inside", was biased by this effect. In order to find the correct number of expected neutrino interactions, a new unbiased scanning is needed.
6.5 Data - Monte Carlo comparison
The bricks exposed in the PEANUT test were shared among the different laboratories of the collaboration, as in real OPERA operation. Each laboratory performed the analysis exploiting different methods, but following the same scheme presented in section 6.1. The number of reconstructed events, together with the multiplicity detail, is summarised in table 6.4: the total number of neutrino interactions reconstructed in the 10 scanned bricks is 132.

Brick   SFT tracks   SFT-CS matchings   "Created inside"   No. of vertexes   Multiplicity (1/2/3/4/5/6/7)
BL089   1148         167                46                 8                 7/1/0/0/0/0/0
BL090   3380         306                141                14                12/2/0/0/0/0/0
BL092   3507         198                62                 12                9/3/0/0/0/0/0
BL045   1992         243                76                 10                5/3/1/0/1/0/0
BL046   1668         273                75                 14                10/3/0/0/0/1/0
BL054   1944         543                92                 5                 4/1/0/0/0/0/0
BL081   3972         297                222                14                10/2/1/1/0/0/0
BL082   4203         383                285                15                11/3/1/0/0/0/0
BL089   1148         General Scan       —                  10                9/1/0/0/0/0/0
BL033   1179         General Scan       —                  30                23/7/0/0/0/0/0

Table 6.4: Summary of the neutrino event numbers (together with the measured multiplicities) reconstructed in the 10 analysed bricks. Bricks BL033 and BL089 were analysed with a general scan method instead of the scan-back procedure.
With the statistics collected, a first attempt to measure the scattering fractions (Deep Inelastic Scattering (DIS), nuclear RESonance production (RES) and Quasi-Elastic (QE) channels) of neutrino CC interactions was made by the PEANUT collaboration. A complete simulation [79], [80] of the analysis procedures is in progress. In the following, the first results obtained will be presented.
For each CC neutrino interaction channel (DIS, RES and QE), 5000 events
were simulated. The generated events were propagated with a GEANT3 [81]
based software, implemented in the OpRoot framework [82], the official package
of the OPERA experiment.
Fig. 6.29 shows the neutrino energy distribution for DIS, RES and QE events,
while in Fig. 6.30 the multiplicity distributions for each channel are shown: in the
left column MC-truth (i.e. the multiplicity of simulated events without taking into
account the reconstruction efficiency) is reported, while right column plots refer
to the multiplicity of reconstructed MC events.
In order to evaluate the different fractions of CC neutrino interaction channels, the data multiplicity distribution was fitted as a sum of DIS, QE and RES contributions with weights p_DIS, p_QE, p_RES. The following constraints were imposed:

p_DIS, p_RES, p_QE > 0
p_DIS + p_RES + p_QE = 1
p_RES / p_QE = 0.58
Figure 6.29: Neutrino energy distribution for DIS, RES and QE events.
Then, a χ²(p_DIS) function was defined as:

χ²(p_DIS) = Σ_{i=1}^{7} (n_i^exp − n_i^th)² / σ_i²    (6.44)
where i is the multiplicity distribution bin, σ_i is the relative statistical fluctuation (calculated as the quadratic sum of the data and MC statistical fluctuations), n^exp is the (normalised) number of neutrino interactions in the experimental data and n^th is the (normalised) number of MC reconstructed events; n^th is obtained by weighting the MC truth (n_DIS, n_RES and n_QE, see Fig. 6.30) with the probability fractions (p_DIS, p_QE, p_RES) and the reconstruction efficiencies evaluated with the MC simulation:

n^th = ε_DIS p_DIS n_DIS + ε_RES p_RES n_RES + ε_QE p_QE n_QE    (6.45)
Fig. 6.31 shows the χ²(p_DIS) distribution; the fit gives the following probability values:

p_DIS = (56 ± 13 (stat))%    (6.46)
p_RES = (16 ± 5 (stat))%    (6.47)
p_QE = (28 ± 8 (stat))%    (6.48)
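The constraints leave p_DIS as the single free parameter, and the fitted fractions of Eqs. 6.46–6.48 satisfy the closure, as a quick check shows (the helper function is illustrative):

```python
def fractions(p_dis, res_over_qe=0.58):
    """With p_DIS fixed, the constraints p_DIS + p_RES + p_QE = 1 and
    p_RES / p_QE = 0.58 determine the other two fractions."""
    p_qe = (1.0 - p_dis) / (1.0 + res_over_qe)
    p_res = res_over_qe * p_qe
    return p_dis, p_res, p_qe

p_dis, p_res, p_qe = fractions(0.56)
print(round(p_res, 2), round(p_qe, 2))  # 0.16 0.28, matching the fit result
```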
A more reliable simulation (including all the measurement methods adopted) is needed to improve this first attempt. The muon tagging from the MINOS detector is in progress and will help to discard NC events from the experimental data.
6.6 Conclusions
The PEANUT exposure test was conceived to perform a complete check of the
OPERA event reconstruction chain, from the search of the electronic detector
Figure 6.30: MC results: left column plots show the multiplicity of DIS, RES and QE
interaction (MC-truth). In the right column the multiplicity of reconstructed events, after
the measurement process simulation, is shown.
Figure 6.31: χ²(p_DIS) function (see Eq. 6.44) versus p_DIS.
triggers in the CS doublet, up to the vertex finding.
The analysis of two bricks exposed to the NuMI neutrino beam has been presented: for the first brick (BL056, see section 6.2) the standard scan-back method was applied, while for brick BL045 (see section 6.3) a new procedure, yielding a larger scanning efficiency (∼ 96%) and a purer stopping point sample, was tested. The main features of the new procedure were already implemented in the official software and used to analyse the first neutrino events that occurred in the OPERA run of October 2007.
The number of neutrino interactions reconstructed in brick BL045 is 10 (5 multi-prong and 5 single-prong events) and is compatible with the expectation (see section 6.4), while in brick BL056 a bias in the SFT track fitting leads to the reconstruction of only one 1-prong vertex. For this brick a new scan-back, with the right SFT track selection, is needed.
The data collected among the laboratories of the collaboration with the scanning of several PEANUT bricks will be exploited (once a reliable simulation, accounting for all the scanning procedures adopted, becomes available) to evaluate the different scattering fractions (DIS, RES and QE) of CC neutrino interactions in the low energy region (E_ν ∼ 3 GeV).
Figure 6.32: Side view (top) and front view (bottom) of the reconstructed 5-prong vertex. The green line indicates the track followed in the scan-back.
Conclusions
The aim of the OPERA experiment is to provide the final proof of the correctness of the neutrino oscillation theory through the detection of the appearance of ντ in an initially almost pure νµ beam. The neutrino beam is produced at the CERN SPS, 732 km away from the detector located at the Gran Sasso National Underground Laboratory.
The appearance signal will be unfolded through the detection of the daughter particles produced in the decay of the τ lepton coming from CC ντ interactions. A micro-metric spatial resolution is needed to measure and study the topology of the ντ-induced events. For this purpose nuclear emulsions, the highest-resolution tracking detector, will be the core of the OPERA apparatus.
The basic detector unit is the "brick", a sandwich-like structure made of nuclear emulsion sheets interleaved with lead layers. More than 150000 bricks will be arranged in dedicated structures called walls. The detector is composed of two supermodules, each divided into a target section and a magnetised iron spectrometer equipped with RPCs. Each target section is composed of 29 brick walls interleaved with electronic detector walls (Target Tracker, TT). The Target Trackers provide the trigger for the event localization in the brick, while the spectrometers perform muon identification and momentum and charge measurements. If the TT trigger is confirmed by the scanning of a special emulsion doublet (CS) positioned downstream of each brick, the selected brick is developed and analysed.
The analysis of the large amount of nuclear emulsion employed in the OPERA experiment required the development of a new generation of fast automatic microscopes, with a scanning speed one order of magnitude higher than that achieved in past experiments exploiting nuclear emulsions. The long R&D carried out by the collaboration gave rise to two new systems: the European Scanning System (ESS) and the Japanese S-UTS.
The LNGS Scanning Station is equipped with 6 ESS microscopes running at a scanning speed of ∼ 20 cm²/h. The ESS performance, in terms of efficiency and spatial resolution, was evaluated in the first part of this work by analysing nuclear emulsions exposed to a pion beam at the CERN PS in July 2006. The track reconstruction resolution achieved is of the order of ∼ 1 micron in position and ∼ 1 mrad in angle; the corresponding average tracking efficiency is around 90%.
Once the performance and characteristics of the automatic microscopes had been evaluated, a new test beam was carried out at Fermilab (Chicago) in order to test the OPERA event reconstruction chain. Some OPERA-like bricks were exposed in 2005 to the NuMI neutrino beam running in the low-energy configuration (⟨Eν⟩ ∼ 3 GeV). The so-called PEANUT exposure test was conceived to reproduce the OPERA detector configuration and data analysis scheme: electronic detectors (SFT) provided the trigger for the event location; the trigger confirmation was performed in the CS doublet, and the selected candidate events were followed upstream, looking for the neutrino vertex, with a procedure called scan-back.
In the second part of this work the analysis of two PEANUT bricks has been presented. For the first one (BL056) the standard scan-back method was applied, giving a low reconstruction efficiency (∼ 76%, mainly due to the fading effect caused by the high temperature of the MINOS near hall) and an impure sample of candidate neutrino interactions (stopping points).
A new scan-back procedure, exploiting the higher efficiency of the micro-track search method, was applied to the analysis of brick BL045, yielding a tracking efficiency of about 96% and a very pure sample of 14 stopping points.
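The gap between the two scan-back efficiencies matters because the per-film efficiency compounds over all the plates a track must be followed through. This compounding can be sketched with a small model, assuming an idealized scan-back that loses the track once too many consecutive films go undetected; the efficiencies, plate count and miss allowance used below are illustrative, not the actual analysis parameters:

```python
def follow_prob(eps: float, n_plates: int, max_misses: int = 0) -> float:
    """Probability of following a track through n_plates films, where each
    film is detected with efficiency eps and the track is lost once more
    than max_misses consecutive films go undetected."""
    # State: number of consecutive missed films so far -> probability.
    probs = {0: 1.0}
    for _ in range(n_plates):
        nxt = {}
        for misses, p in probs.items():
            # Film detected: the consecutive-miss counter resets.
            nxt[0] = nxt.get(0, 0.0) + p * eps
            # Film missed: survive only if the miss allowance is not exhausted.
            if misses < max_misses:
                nxt[misses + 1] = nxt.get(misses + 1, 0.0) + p * (1.0 - eps)
        probs = nxt
    return sum(probs.values())

# With no miss allowance the survival is simply eps**n_plates: over 10 films
# a 0.76 per-film efficiency keeps only ~6% of tracks, while 0.96 keeps ~66%.
```

This exponential behaviour is why even a modest gain in per-film efficiency, such as that brought by the micro-track search, pays off disproportionately in scan-back yield.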
The vertex reconstruction procedure (total scan) was applied to the selected stopping points, and 10 neutrino interaction vertices (5 multi-prong and 5 single-prong events) were reconstructed. The number of measured neutrino interactions is compatible with expectations. For brick BL056 a bias in the SFT track fitting led to the reconstruction of only one 1-prong vertex; for this brick a new scan-back, with the correct SFT track selection, is needed.
The main features of the new scan-back procedure have already been implemented in the official software and were used to analyse the first neutrino events recorded in the OPERA run of October 2007.
Several PEANUT bricks were scanned and analysed across the laboratories of the collaboration, and a good statistical sample of neutrino interactions was collected. A first attempt to evaluate the fractions of the different scattering channels contributing to the neutrino CC interactions was performed. A more reliable simulation and the muon tagging from the MINOS near detector are needed in order to discard NC events from the data. Finally, the PEANUT exposure test will be helpful not only to refine the vertex-finding strategy but also, since the average energy of the NuMI beam is lower than that of the CNGS, to characterize the OPERA performance in the low-energy region.
Acknowledgments
At the end of this thesis I wish to thank all the people who have contributed to this
work.
First of all I have to express my gratitude to the LNGS and L'Aquila University group for their scientific support: in particular to Prof. Piero Monacelli for teaching me physics from my first year at university up to now, and to Dr. Nicola D'Ambrosio for having introduced me to the world of nuclear emulsion scanning. A very special thanks to Dr. Luigi Salvatore Esposito, "Luillo", for his friendly and patient support, and especially for having taught me all my programming and data analysis skills. Finally, thanks to Fabio, who shared with me the work on PEANUT scanning.
I also have to express all my gratitude to the whole Naples group, in particular
to Dr. Giovanni De Lellis for his professionalism and fruitful discussions about
PEANUT topics, and then to Andrea, Francesco, Luca and Valeri for their support
and the large amount of work they have carried out.
I cannot forget my friend Adriano who shared with me these last five years of
work at LNGS.
Finally, a special thanks to my "large" family, and especially to Enrica and Luca, who have taught me, and daily remind me, the right order of things.