Bachelor's Thesis - Christian Hoffmann

Universität Osnabrück
Cognitive Science Bachelor Program
Bachelor’s Thesis
The influence of prepositions on
attention during the processing of
referentially ambiguous sentences
Christian Hoffmann
September 29, 2009
Supervisors:
Prof. Dr. Peter Bosch
Computational Linguistics Working Group,
Institute of Cognitive Science
University of Osnabrück
Germany
Prof. Peter König
Neurobiopsychology Working Group,
Institute of Cognitive Science
University of Osnabrück
Germany
Abstract
The present study uses eye-tracking to investigate the role of prepositions in resolving referential ambiguities. Playmobil sceneries and
prerecorded sentences were presented and fixation behaviour on possible referents of the discourse was recorded.
The sentences investigated contained a subject NP whose head noun matched
two objects in the scenery and which was modified by a PP that uniquely
identified the referent of the subject NP. The hypothesis was that when a
preposition can uniquely identify an object in a scenery, the fixation
probability of said object should rise already prior to the processing of
the following prepositional NP. If the preposition does not uniquely
identify an object, the fixation probability of the referential object
should rise only after processing of the prepositional NP. Overall, the
results imply that there are no major differences in fixation probabilities
connected to the prepositions, although bootstrapping analyses revealed some
significant differences, namely more fixations on the target in the
ambiguous block.
Contents

1 Introduction
2 Methods
   2.1 Participants
   2.2 Experimental stimuli
       2.2.1 Visual stimuli
       2.2.2 Auditory stimuli
       2.2.3 Filler
   2.3 Apparatus
   2.4 Procedure
   2.5 Data Analysis
       2.5.1 Regions of Interest
       2.5.2 Statistics
3 Results
   3.1 Subject Validity
   3.2 Stimulus Validity
   3.3 Time Course of Fixations
   3.4 Bootstrapping
4 Discussion
References
A Visual Stimuli
B Auditory Stimuli
C Fillers - Visual
D Fillers - Auditory
E Statistics
F Complementary Figures
G Consent Sheet
1 Introduction
“Linguistic theory [...] may inform a theory of language processing. And
observations about language processing may inform linguistic theory, i.e.
support or disconfirm its predictions.” (Bosch (2009))
In the last few decades, interest in neuroscientific methods for analyzing
linguistic processing has been on the rise. More and more research areas
are developing that incorporate paradigms and methods from both theoretical
linguistics and neuroscience.
Specifically, the method of analyzing people's gaze, known as eye-tracking,
has generated major interest in the linguistic community, due to the
seminal paper of Cooper (1974), who showed that people fixate elements of
a visual scene that have a connection to spoken language stimuli to which
they are listening at the same time. Many researchers have since focused on
eye-tracking as a method to investigate ongoing linguistic processing.
As Michael K. Tanenhaus puts it, “eye movements provide a continuous
measure of spoken-language processing in which the response is closely
time locked to the input without interrupting the speech stream. [...] The
presence of a visual world makes it possible to ask questions about
real-time interpretation.” (Tanenhaus et al. (2000)) Eye-tracking has been
used before to investigate topics like overt attention and its modulation,
reading behaviour (in particular the implications for online lexical access
and syntactic processing, e.g. garden-path sentences) and others. For an
overview, see Rayner (1998).
Most important for the understanding of the findings of Tanenhaus, Rayner,
Cooper, Chambers and others are the visual world paradigm and the linking
hypothesis. The visual world paradigm serves as a blueprint for
psycholinguistic experiments: subjects' fixations are measured as they
interact in some fashion with a visual scenery according to tasks set by
the experimenter, thereby integrating linguistic and non-linguistic
knowledge and action. The linking hypothesis proposes an intrinsic
connection between eye movements and lexical access, making it possible to
derive knowledge about linguistic processing from analyzing non-linguistic
actions.
In his overview (Tanenhaus et al. (2000)), Tanenhaus shows that visual
context and even real-world knowledge (see also Chambers et al. (1998)) can
help to resolve apparent (or temporary) syntactic ambiguity and is rapidly
integrated throughout the processing of linguistic utterances and also that
linguistic experience (such as relative frequencies of lexical competitors) can
influence fixation behaviour.
A major topic in this field is the question of how referential expressions
(e.g. “The cat on the tree”) are processed. As Chambers et al. (1998) shows,
even prepositions suffice in certain tasks to identify the referential object of
an expression by restricting the domain of interpretation. Studies conducted
at the University of Osnabrück show that determiner gender[1] and adjectival
constructions ensuring referential uniqueness already give rise to a higher
fixation probability on the referential object due to an anticipation
effect, even before the onset of the noun itself (Hartmann (2006)).
Kleemeyer (2007) and
Bärnreuther (2007) showed that top-down influences had a much higher impact on attention modulation than bottom-up processes when presented in
parallel. Karabanov (2006) showed what differences in fixation probabilities
arise when processing full noun phrases compared to pronouns.
The last three studies mentioned used a more natural visual world than
Tanenhaus and the others, by providing Playmobil® sceneries as visual
stimuli. Furthermore, subjects did not have to perform complex tasks while
viewing the sceneries, as was the case in Chambers et al. (1998) and
Hartmann (2006).
The aim of this study is to investigate a problem posed by Peter Bosch
in Bosch (2009). The basic question is in which way the uniqueness
constraint of the definite determiner contributes to the processing of
potentially ambiguous referential expressions. For a sentence like:

(1) Put the red block on the block on the disk.

which is syntactically ambiguous, one finds two constituent structures:
(2) put [the [red block]] [on [the [block [on [the disk]]]]]
(3) put [the [red [block [on [the block]]]]] [on [the disk]]
If this sentence is presented while figure 1 is shown, which contains more
than one red block (one of which is even on another block) and a third
block on a disk, the uniqueness constraints of the first two definite
determiners are not met when analyzing their corresponding constituents. But
[1] Determiners in German have gender markers.
somehow, most people intuitively choose sentence (3) as the correct meaning
of the sentence. Bosch proposes two alternatives: either the constraints of
single constituents are collected during incremental construction of the
semantic representation of the determiner phrase, so that the meaning
becomes clear after processing the second “block” phrase, where it becomes
clear that the DP describes a red block which is on another block. Or the
violated uniqueness constraint leads to a modulation of processing
resources: the dereferencing of said DP becomes the most important point on
the agenda of the parser, which immediately uses the information obtained
from the following preposition to decide which block is the referential
object of the DP.
Figure 1: Example block world, taken from Bosch (2009)
The hypothesis behind this experiment is that when, in such a sentence
(or any other expression containing a definite determiner and a
referentially ambiguous DP), a preposition can provide the information
needed to resolve such an ambiguity, this should be easily seen in an
earlier rise of the fixation probability on the referential object of that
DP. If the preposition cannot provide such information[2], then the
fixation probability on the referential object should rise only after the
onset of the prepositional NP-head.
[2] In said case, picture the second block on a hat; then the ambiguity
cannot be resolved solely by the preposition, as both blocks are “on”
something.
In order to test this hypothesis, several visual stimuli were constructed
bearing exactly those characteristics mentioned before and were shown to
subjects while they were listening to matching spoken stories.
2 Methods
This part contains all important information about the participants of this
study, the materials used for preparation, the experimental design, and the
procedures used during the experiment and for subsequent analysis.
2.1 Participants
Participants were recruited through personal contacts and the internal
mailing lists of the student bodies of the cognitive science and psychology
programmes at the University of Osnabrück, Germany. The actual subjects of
this study were almost equally distributed among those programmes. They
had to be native German speakers, have normal or corrected-to-normal vision and had to have no hearing deficits. For their participation, subjects
were rewarded with either course credit or 5 Euros. All subjects
participated voluntarily and were naïve with regard to the purpose of this
study. Fixations were recorded from 25 subjects. Of those data sets, four
had to be rejected. For two subjects, the data files were corrupt and
therefore not readable. The experiment for one subject ended prematurely,
rendering the data set unusable. One subject had a red-green color
blindness, but as this subject's fixation behaviour was the same as that of
the other remaining subjects (see subject validity of subject 21 in table
6), the data set was used nevertheless. One subject's performance was
significantly different from the rest (see subject validity of subject 5)
and that data set was disregarded. All in all, 21 data sets were used for
subsequent analysis. The
characteristics taken from the subject questionnaires are shown in table 1.
2.2 Experimental stimuli

Subjects received multimodal stimuli, composed of photographs of Playmobil®
sceneries and auditory stimuli which were semantically related to them. See
Karabanov (2006), Kleemeyer (2007) and Bärnreuther (2007) for similar
designs.
Ten stimuli were assembled from stimuli and filler material from prior
experiments collectively used in Alexejenko et al. (2009). The pictures were
edited with GIMP 2.6 in such a way as to conform to the constraints of
the experimental design. Information about the construction of the original
Category                              Range     Median   Mean ± SD
Age (yrs)                             18-28     22       22.5 ± 2.26
Height (cm)                           154-193   174      172.8 ± 8.63
Daily screen time (hours)             2-10      5        5 ± 2.53
Language knowledge (no.)              1-5       2        2.56 ± 0.96
Previous eye-tracking studies (no.)   0-6       1        1.24 ± 1.67

Category                              Number    Percent
Gender: Female                        14        56%
Gender: Male                          11        44%
Education: High school diploma        22        88%
Education: University degree          3         12%
Occupation: Student                   24        96%
Occupation: Unemployed                1         4%
Vision aids: None                     14        56%
Vision aids: Glasses                  6         24%
Vision aids: Contact lenses           5         20%
Ocular dominance: Left                10        40%
Ocular dominance: Right               11        44%
Ocular dominance: Unclear             4         16%
Handedness: Left                      1         4%
Handedness: Right                     23        92%
Handedness: Unclear                   1         4%
Color vision: Red-green colour blind  1         4%
Color vision: Perfect                 24        96%

Table 1: Statistics of study participants, collected from subject questionnaires
stimuli and filler can be found in Kleemeyer (2007). As some of the original
images reused here had a resolution of 1024x768 pixels, all final images were
downscaled to this resolution.
Auditory stimuli were constructed corresponding to the experimental
question raised in Bosch (2009). In order to find out what role prepositions
may play during the processing of referential ambiguities, sentences were
constructed whose subject phrase (sentence head) consisted of a noun phrase
modified by a prepositional phrase. The whole phrase uniquely identified
an object of the visual stimulus matching the auditory stimulus. The head
of the subject phrase matched two objects of the visual stimulus, as did
the NP of the prepositional phrase. In one condition, the preposition was
supposed to uniquely identify the referential object of the subject
phrase[3], whereas in the other condition, the ambiguity could only be
resolved when processing the prepositional NP.
2.2.1 Visual stimuli

Every stimulus/filler depicted a natural scenery constructed from
Playmobil® objects. Those sceneries consisted of multiple objects referred
to during the
course of the corresponding auditory stimulus and also contained a vast
amount of other objects serving as distraction, ensuring a higher probability
that a fixation on an object of interest is related to the auditory stimulus,
and not to general browsing of the scenery.
In particular, every scenery had two identical objects (identical in the
sense of being part of the same category, e.g. “owl”, “man”, “cat”) serving
as target and competitor. In addition, two objects served as their
“locationary” identifiers, i.e. objects identifying the location of the
target/competitor in the scenery[4]. It is important to mention that there
were matching distractors for the locationary identifiers as well. This was
required in order to keep all references to the locationary identifiers in
the auditory stimuli ambiguous.

[3] E.g. “The cat in front of...” uniquely identifies a cat if the other
cat in the picture is not in front of something.
[4] To give an example, in one picture two owls were amidst a woodland
scenery, one in a tree, the other on a hill. The target here was the owl in
the tree (the tree therefore being the locationary identifier of the
target), the competitor the owl on the hill (the hill therefore being the
locationary identifier of the competitor).
There was also a reference object in every picture to study the attention
shift of participants to an easily identifiable, salient target and to compare
those shifts to those elicited by the relevant part of the auditory stimulus.
Figure 2: Exemplary visual stimulus. The target is circled in red, the competitor in green. The locationary object of the target and its distractor are
circled in purple, the locationary object of the competitor and its distractor
in blue. The reference object is circled in yellow.
2.2.2 Auditory stimuli
As already stated, there were two conditions for every stimulus, i.e. two stories were designed that differed solely in one preposition in the last sentence.
The stimuli consisted of four to six sentences in four slots. The first sentence
was an informal overview of the scenery, without any direct reference to any
object in it.
1. In der Savanne. (In the savannah.)
The reason for the introduction of that sentence was to measure participants’
fixations on the stimuli while not guided by linguistic input. The next one
to two sentences introduced (referred to) the locationary objects.
2. In der felsigen Landschaft traben zwei Elefanten. (Two elephants
are trotting through the rocky countryside.)
The one to two sentences in the third slot contained references to the target/competitor, as well as distractors and the reference object.
3. Die beiden Männer beobachten die vielen durstigen Tiere am einzigen
Wasserloch[5]. (The two men are watching the many thirsty animals at the
only watering hole.)
The only difference between the two conditions could be found in the fourth
slot. As explained above, the sentence consisted of a subject NP composed
of an NP and a prepositional phrase. In one case, the preposition was a
more regular one, not capable of identifying the referential object by
itself, i.e. a preposition able to convey more possible relations than
others. The German prepositions auf, neben and bei were used in this
condition (meaning “on”, “next to” and “near”, respectively). In the other,
due to the relation between subject head and prepositional NP conveyed by
the preposition, the ambiguity posed by the subject head could already have
been resolved by the preposition. Here, the German prepositions in, vor,
hinter, unter and an were used, meaning “in”, “in front of”, “behind”,
“below/under” and “at”.
4. Der Mann vor dem grauen Felsen ist ein erfahrener Jäger. (The man
in front of the grey rock is an experienced hunter.)
See Figure 2 for an exemplary stimulus.
All sentences were recorded[6] using a Trust HS-2100 headset, Audacity
1.2.6[7] and Cool Edit Pro 2.0[8]. Noise and pitch reduction procedures
were carried out on all audio files. Furthermore, silent intervals were cut
to ensure equal length of all files (18.770 s - 19.962 s). The number of
syllables differed slightly among the sentences (58-62 syllables). Manual
alignment was performed to ensure that the onsets of the subject head NP,
the preposition and the prepositional NP only differed on a small scale.
See Table 2 for details. By adding a non-disambiguating adjective to the
PP, a time window of approximately 800 ms between preposition onset and the
onset of the prepositional NP could be ensured for further analysis.
[5] This is the reference object.
[6] Sentences were all spoken by the experimenter himself.
[7] http://audacity.sourceforge.net/
[8] http://www.adobe.com/products/audition/
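As a quick arithmetic check, the roughly 800 ms window mentioned above can be recomputed from the mean onsets reported in Table 2 (a sketch, assuming the onsets are given in seconds):

```python
# Mean onsets (in seconds), as reported in the bottom row of Table 2.
mean_onset_subj_head = 15.338
mean_onset_prep = 15.936
mean_onset_prep_np = 16.749

# Window between preposition onset and prepositional-NP onset, in ms.
window_ms = (mean_onset_prep_np - mean_onset_prep) * 1000
print(round(window_ms))  # 813, i.e. approximately 800 ms as stated
```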
Stimulus Nr.   Onset subj-head NP   Onset prep.   Onset prep. NP
1              15.348               15.946        16.786
2              15.362               15.911        16.766
3              15.330               15.940        16.704
4              15.319               15.909        16.842
5              15.357               15.919        16.757
6              15.338               15.980        16.736
7              15.358               15.960        16.777
8              15.336               15.930        16.778
9              15.353               15.970        16.792
10             15.328               15.964        16.721
11             15.343               15.877        16.757
12             15.333               15.946        16.780
13             15.325               15.928        16.719
14             15.330               15.944        16.765
15             15.334               15.925        16.732
16             15.348               15.887        16.771
17             15.345               15.948        16.744
18             15.328               15.970        16.739
19             15.329               15.901        16.623
20             15.319               15.968        16.698
mean           15.338               15.936        16.749

Table 2: Onsets (in s) of subject head NP, prepositions and prepositional
NPs. Even stimulus numbers correspond to the ambiguous case, odd to the
unambiguous case. The first two stimuli correspond to visual stimulus 1,
the next two to visual stimulus 2, and so on.
2.2.3 Filler
Filler images were all those images from the material of Alexejenko et al.
(2009), which had not been used to construct stimuli. For each of those
filler images an auditory filler was recorded, which was of equal length as
the auditory stimuli and consisted of 3-5 sentences.
2.3 Apparatus
A head-mounted binocular eye-tracker (“EyeLink II”, SR Research,
Mississauga, Ontario, Canada) was used to record subjects' eye movements.
Two infrared cameras tracked the movements of the participants' pupils;
one tracked the head position relative to the monitor. A Pentium 4 PC
(Dell Inc., Round Rock, TX, USA) was used to control the eye-tracker. See
figure 3 for an overview of the system[9]. A second PC (Power Mac G4, 800
MHz) controlled the stimulus presentation. Stimuli were presented on a 21”
cathode ray tube monitor (SyncMaster 1100DF 2004, Samsung Electronics
Co., Ltd, Korea), with the resolution set to 1024x768 and a refresh rate of
100 Hz. Pupil positions were tracked at a 500 Hz sample rate.
Figure 3: Eye Link II Head-Mounted Eye-Tracking System
[9] Image taken from Karabanov (2006)
2.4 Procedure
The experiment was conducted in a dimly lit room. Prior to the experiment
itself, subjects were welcomed and the experiment's procedure was explained
to them. Subjects were informed that they could interrupt the experiment
at any time. Subjects then had to fill out a consent sheet (see section G)
and a standardized questionnaire (see table 1). Tests for ocular dominance
and color deficiency were performed. If subjects had been able to follow
the instructions up to this point, it was assumed that their hearing was
also sufficient for the experiment.
Subjects were then seated 80 cm from the monitor and the eye-tracker
was fitted on their head. Afterwards, a 13-point calibration and validation procedure was started. Participants were asked to fixate a small dot
showing up in a random order at thirteen different locations on the screen.
During calibration, the raw eye-data was mapped to gaze-position. During
validation, the difference between computed fixation and target point was
computed, in order to obtain gaze accuracy. The procedure was repeated
until the mean error for one eye was below 0.3°, with a maximum error below
1°; this eye was subsequently tracked during the whole experiment. Subjects
then were provided with headphones (WTS Philips AY3816), through which
the auditory stimuli were presented. The headphones also served the purpose of blocking out background noise in order to ensure full concentration
on the task.
Subjects were told to carefully listen to the auditory stimuli and look at
the visual stimuli. Before each stimulus, a small fixation spot in the middle
of the screen was presented, so that drift correction could be performed
and subjects had the chance to have a small break in between trials. If
the difference between gaze and computed fixation position was too high,
calibration and validation were repeated. The stimuli were presented in a
random order, with the constraints that no more than two actual stimuli
were presented in a row and that every subject was presented with exactly
five stimuli of the ambiguous condition and five stimuli of the
unambiguous condition. Furthermore, for every subject there was another
subject that was presented with the same order of stimuli, but with exactly
the opposite conditions, so as to ensure that all stimuli and all
conditions were presented equally often without fully giving up
randomization. The ten stimuli and 15 fillers were presented as one block.
After the experiment,
participants were informed about the goal of this study.
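A sketch of how such a constrained presentation order could be generated (an illustrative reconstruction, not the original presentation script; all function names are my own):

```python
import random

def make_order(n_stimuli=10, n_fillers=15, seed=0):
    """Shuffle stimuli (S) and fillers (F) until no three stimuli occur in a row."""
    rng = random.Random(seed)
    items = ["S"] * n_stimuli + ["F"] * n_fillers
    while True:
        rng.shuffle(items)
        if all("".join(items[i:i + 3]) != "SSS" for i in range(len(items) - 2)):
            return list(items)

def assign_conditions(order, seed=0):
    """Assign five 'ambiguous' and five 'unambiguous' labels to the stimuli;
    the paired subject simply receives the flipped labels."""
    rng = random.Random(seed)
    labels = ["ambiguous"] * 5 + ["unambiguous"] * 5
    rng.shuffle(labels)
    it = iter(labels)
    subject_a = [next(it) if x == "S" else "filler" for x in order]
    flip = {"ambiguous": "unambiguous", "unambiguous": "ambiguous", "filler": "filler"}
    subject_b = [flip[c] for c in subject_a]
    return subject_a, subject_b

order = make_order()
a, b = assign_conditions(order)
print(order.count("S"), a.count("ambiguous"), b.count("ambiguous"))  # 10 5 5
```

The paired-subject flip guarantees that, across each pair, every stimulus appears in both conditions equally often, which is the counterbalancing property described above.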
2.5 Data Analysis
It has already been shown extensively that measuring eye movements is an
adequate tool for the investigation of attention and is especially useful
when trying to understand the mechanisms behind language processing
(Tanenhaus et al. (2000)). With the help of Playmobil® scenarios it has
also been shown that top-down influences seem to at least partially
override bottom-up influences on attention (Kleemeyer (2007), Bärnreuther
(2007)). Eye-tracking therefore seems to be an adequate instrument to study
the processing of prepositions and its influence on attention. A fixation
is defined as the inverse of a saccade, i.e. whenever the eye-tracker does
not measure a saccade, there is a steady fixation. The acceleration
threshold for a saccade was 8000°/s², the velocity threshold 30°/s and the
deflection threshold 0.1°.
Fixation locations and durations were calculated online by the eye-tracking
software and later converted into ASCII text. All further analysis was done
with MATLAB[10].

2.5.1 Regions of Interest
In order to find out whether a subject fixated a referent of the discourse,
regions of interest (ROIs) were manually chosen around each referent in
every scene using MATLAB's built-in function roipoly. The borders of the
ROIs were chosen as close as possible around the actual figurine in the
scene. As part of the fixations in question lay outside the manually chosen
regions of interest, the ROIs were scaled up by 12 pixels along the
horizontal axis (equivalent to 0.552° of visual angle) and 20 pixels along
the vertical axis (equivalent to 0.76° of visual angle). For an example,
see figure 4. An example of all the fixations outside of the regions of
interest can be seen in figure 5.
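A minimal sketch of the resulting ROI test, using rectangular ROIs instead of the roipoly polygons of the original analysis (the function and the data below are illustrative assumptions):

```python
def in_scaled_roi(fix_x, fix_y, roi, dx=12, dy=20):
    """Return True if a fixation falls inside the ROI enlarged by dx pixels
    horizontally and dy pixels vertically (the margins used to catch
    fixations just outside the hand-drawn borders)."""
    left, top, right, bottom = roi
    return (left - dx <= fix_x <= right + dx) and (top - dy <= fix_y <= bottom + dy)

# A fixation 10 px left of a (100, 100, 200, 180) ROI still counts:
print(in_scaled_roi(90, 150, (100, 100, 200, 180)))   # True
print(in_scaled_roi(80, 150, (100, 100, 200, 180)))   # False (outside the 12 px margin)
```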
2.5.2 Statistics
The time course of the probabilities of fixating a certain referent while
viewing the scenery is the central part of the analysis. In order to
interpret the rise and fall of fixation probabilities, 150 ms time windows
were chosen in which all relevant statistical analysis was implemented.
This particular length was chosen as the data was somewhat scarce.

Figure 4: Example image for regions of interest. Left: target (woman) and
target locationary object (car); right: competitor (sitting woman) and
competitor locationary object (tree); front: reference object (man)

Figure 5: Example of fixations not belonging to any region of interest

In order to test stimulus validity, the first 2.5 seconds (in which no
reference to any object in the scenery had yet been made in the auditory
stimulus) were analyzed by adding up all fixations on referents and
comparing them among images. As this revealed some minor issues (see
Results, section 3), the time window between 2500 ms and 15000 ms (the
window in which all referents were introduced) was analyzed in the same
way. Subject validity was analyzed by summing up all fixations over the
different images. For both validity analyses, MATLAB's lillietest function
was used to test for normal distributions. The influence of prepositions on
fixation probabilities (and therefore on attention) was then tested using
bootstrapping algorithms. Both intra-conditional and inter-conditional
testing was performed[11]. For all statistical tests, a significance level
of α = .05 was used.

[10] www.mathworks.com
[11] Intra-conditional meaning the comparison of fixation probabilities
between different ROIs of the same condition; inter-conditional being the
comparison of fixation probabilities for a specific ROI in the two
different conditions.
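To make the windowed analysis concrete, here is a minimal sketch of how fixation probabilities per 150 ms window could be computed; the (start_ms, end_ms, roi) fixation format and the function name are my own illustrative assumptions, and the original analysis was done in MATLAB:

```python
def fixation_probability(fixations, roi, t0, t1, win=150):
    """For each 150 ms window in [t0, t1), compute the fraction of
    fixations overlapping the window that lie on the given ROI."""
    probs = []
    for w_start in range(t0, t1, win):
        w_end = w_start + win
        overlapping = [f for f in fixations if f[0] < w_end and f[1] > w_start]
        on_roi = [f for f in overlapping if f[2] == roi]
        probs.append(len(on_roi) / len(overlapping) if overlapping else 0.0)
    return probs

# Hypothetical fixations as (start_ms, end_ms, roi) triples:
fix = [(0, 200, "target"), (100, 400, "competitor"), (320, 500, "target")]
print(fixation_probability(fix, "target", 0, 450))
```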
3 Results

3.1 Subject Validity
The first statistical test conducted was to find out whether the fixations
on the different ROIs over all subjects constituted normal distributions.
For that, all fixations over the whole time course of the stimulus
presentation were summed up and MATLAB's lillietest function was used as a
test for normality. The findings are visualized in figure 6; an overview of
the statistics can be found in table 3. As it could easily be discerned
that subject number 5 was a statistical outlier, all further statistical
tests were conducted without the data of that subject.

The lillietest revealed that all fixation distributions were normal, except
for the fixations on the locationary object of the competitor. This could
be due to the fact that this object was mostly inanimate and most stimuli
contained considerable amounts of animate distractors, so that fixations on
it could be unstable: as Karabanov (2006) already pointed out, subjects
prefer fixations on animate/human objects over inanimate ones. This did not
pose a problem, however, as the data clearly shows that all subjects
fixated the object during the presentation (mean = 4.3536%,
SD = 0.8902%), i.e. identified it either before or during the presentation
of the relevant part of the stimulus.
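A minimal sketch of such a subject-validity screening, assuming hypothetical summed fixation counts and a two-standard-deviation criterion of my own choosing (the thesis itself identified the outlier via the lillietest and visual inspection):

```python
from statistics import mean, stdev

def flag_outliers(totals, k=2.0):
    """Return indices of subjects whose summed fixation counts deviate
    from the group mean by more than k standard deviations."""
    m, s = mean(totals), stdev(totals)
    return [i for i, t in enumerate(totals) if abs(t - m) > k * s]

# Hypothetical summed fixation counts; index 4 (i.e. "subject 5") is far off:
totals = [200, 210, 195, 205, 400, 198, 202, 207, 199, 204,
          201, 196, 208, 203, 197, 206, 200, 209, 194, 205]
print(flag_outliers(totals))  # [4]
```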
3.2 Stimulus Validity
Following that, a series of normality tests was conducted to ensure stimulus
validity. Contrary to previous studies, it could not be shown that fixation
behaviour in the first part of the stimulus, where no objects were yet introduced, could be a reliable baseline for test statistics concerning fixation
behaviour mediated by auditory stimuli.
As can be seen in figure 7 and in table 4, there was quite a large variance
in fixation probabilities, especially on the target, the competitor and the
distractor of the target's locationary object. This is due to the fact that
those objects varied in size and that each stimulus contained a great
number of distractor objects. But this also ensured that fixations on
objects during
their introduction via the auditory stimulus could be considered to be
directly linked to the linguistic input, and not to attentional browsing of
the picture[12].

Figure 6: Subject validity, fixations over all images
That browsing occurred nevertheless can be seen from the large number of
fixations on regions beyond the ROIs. This was partly also due to the
limited accuracy of the eye-tracker, meaning that a percentage of fixations
that should have counted towards one of the ROIs was off by a few degrees
(see also figure 5). As can be seen in table 4, fixation probabilities on
target and competitor nevertheless followed a normal distribution. To
ensure that the stimuli were really valid and appropriate for further
statistical testing, the time interval between 2500 and 15000 ms was
tested, under the hypothesis that the auditory stimuli presented similar
objects for all stimuli, so that fixation probabilities should be similar
as well. The results are visualized in figure 8 and table 5. One can see
quite clearly that in every picture all the relevant objects were fixated
prior to the investigated stimulus part. Thus it was ensured that all
objects had been seen before and that subjects did not have to search for
objects first; overt attention due to linguistic input should therefore be
immediately visible.
[12] With many objects in a stimulus, a fixation on one of them precisely
at the point when it is presented in the concomitant auditory stimulus
becomes increasingly unlikely to have been a coincidence as the number of
objects grows.
Figure 7: Stimulus validity, fixations over all subjects, between 0 and 2500
ms
Figure 8: Stimulus validity, fixations over all subjects, between 2500 and
15000 ms
3.3 Time Course of Fixations
The time course of fixation probabilities over all images is shown in
figures 9 and 10. As expected, the fixation probabilities on both the
target and the competitor object rise twice during the presentation of the
stimulus. A small peak beginning around 9000 ms can be distinguished,
representing the time frame in which the target/competitor-compatible NP
is introduced. This clearly shows that subjects shift their attention on
visual sceneries in line with the linguistic processing of additional
linguistic stimuli.

The second rise of the fixation probabilities (i.e. the relative number of
fixations) occurs concurrently with the second naming of said NP. Around
the time of the onset of the prepositional head-NP, the fixation
probabilities diverge and a considerable number of fixations is directed
towards the target, implying that the subjects were focusing their
attention on it, having understood that the subject-NP refers to it.
Throughout the rest of the stimulus, most fixations stay on either the
target or the target locationary object, shifting back and forth between
them.
Figure 9: Time course of fixation probabilities, ambiguous condition.
Yellow stripe: first introduction of target/competitor NP. First line: mean
onset of subject-head NP; second line: mean onset of prepositional NP-head.

Figure 10: Time course of fixation probabilities, unambiguous condition.
Yellow stripe: first introduction of target/competitor NP. First line: mean
onset of subject-head NP; second line: mean onset of prepositional NP-head.

To better understand the stages of linguistic processing of ambiguous
sentences and to compare them to the processing of unambiguous sentences, a
closer visual inspection of the time frame in question was necessary. A
visualization of the fixation probabilities in said time frame for both the
unambiguous and the ambiguous condition can be found in figures 11 and 12.
A few observations can be made: first and foremost, the differences in the
time course of fixation probability are minimal at best. Second, there
seems to be an early peak of fixations on the target in the ambiguous case
around 16400 ms, which would have been expected in the unambiguous case,
where the integration of the preposition alone should be enough to resolve
the ambiguity of the subject-NP. Third, increased fixations on both the
target and the target locationary object seem to last longer in the
ambiguous case than in the unambiguous one (the last peak in the ambiguous
case is at 19350 ms). All of these observations have to be treated
carefully, as the dataset is small and therefore statistical significance
cannot be guaranteed.
Figure 11: Time course of fixation probabilities, unambiguous condition,
time span between subject head onset and end of stimulus
Figure 12: Time course of fixation probabilities, ambiguous condition, time
span between subject head onset and end of stimulus

3.4 Bootstrapping
Bootstrapping analyses were conducted to find out whether there are any significant differences between the fixation probabilities on target and competitor.
Both differences between conditions and within conditions were analyzed.
Bootstrapping algorithms were applied both over all images and over all subjects, to find out for how many images and subjects, respectively, significant
differences can be found. Bootstrapping was applied to time windows of
150 ms width, between 15200 ms (shortly before the onset of the subject-head-NP) and 22000 ms (the last recorded fixations). 1000 bootstrap samples were taken from the vector of fixations on either ROI1 (target) or ROI2
(competitor), for both the ambiguous and the unambiguous condition.
As a test statistic, the difference of means was calculated and compared
to the actual difference of means, both within and between conditions.¹³ A
difference was considered significant if it fell below the 2.5th or above the
97.5th percentile of the bootstrap distribution. Figures 13, 14, 15 and 16 depict
the results of the bootstrapping analyses over images. No graphs are given for
the results of bootstrapping over subjects, as it did not yield a single
significant difference.
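The test described above can be sketched in code. The following is an illustrative reimplementation under stated assumptions (both groups are resampled with replacement from the pooled data under the null hypothesis of equal means; the exact resampling scheme of the thesis is not spelled out here), not the original analysis code:

```python
# Sketch of a bootstrap difference-of-means test with a two-sided
# 2.5th/97.5th percentile criterion. Pooled-resampling null is an assumption.

import random
from statistics import mean

def bootstrap_mean_diff_test(a, b, n_boot=1000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    observed = mean(a) - mean(b)
    pooled = list(a) + list(b)
    diffs = []
    for _ in range(n_boot):
        # resample both groups from the pooled data (null: equal means)
        a_star = [rng.choice(pooled) for _ in range(len(a))]
        b_star = [rng.choice(pooled) for _ in range(len(b))]
        diffs.append(mean(a_star) - mean(b_star))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    significant = observed < lo or observed > hi
    return observed, (lo, hi), significant
```

For each 150 ms window, such a test would be run on the two fixation vectors being compared, and the window counted as significant if the observed difference falls outside the central 95% of the bootstrap distribution.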
From figure 13 it can be concluded that fixation behaviour on the target does indeed differ between conditions. Further analysis confirmed the
observation made earlier, namely that the target object receives significantly
more fixations in the ambiguous case. For all four images for which the time
window between 15950 ms and 16100 ms became significant, the difference
between the unambiguous and the ambiguous case was negative. Interestingly, this
is the time window right after the onset of the preposition. The other peaks
seem to support the claim that in the ambiguous case, fixations stayed
on the target more often and for a longer time; the differences here are also all negative. However, as in most windows only one or two images yield a significant
difference, this hypothesis cannot be confirmed.
There are also significant differences in the fixation probabilities on the
competitor object between conditions, although they are even less pronounced
than in the case of the target object. The relevant time frames can be observed
in figure 14.
¹³ I.e. mean(ROI1 unamb) - mean(ROI1 amb), mean(ROI1 unamb) - mean(ROI2 unamb), ...
Figure 13: Significant Differences after Bootstrapping - Fixations on Target, Unamb. vs. Amb. Condition
Figure 14: Significant Differences after Bootstrapping - Fixations on Competitor, Unamb. vs. Amb. Condition (peaks at 16100, 17750 and 19700 ms)
Figure 15: Significant Differences after Bootstrapping - Fixations on Target vs. Competitor, Ambiguous Condition
Interestingly, the differences seen in the time course of fixations within
each condition do not seem to be that significant. For the ambiguous condition, there
are 15 time windows in which the difference becomes significant for one
image; for the unambiguous condition, there are 17 such time windows, two of which
show two images with significant differences.
Figure 16: Significant Differences after Bootstrapping - Fixations on Target vs. Competitor, Unambiguous Condition
4 Discussion
This study of the linguistic processing of prepositions has some interesting implications, although due to the scarcity of the data¹⁴ most of them
are in need of future research. Contrary to e.g. Chambers
et al. (1998), prepositions do not seem to feed as much information into
the processing stages of natural language understanding as they do in experiments
in which the choices are limited and subjects rely heavily on them.
It seems that people process the prepositional NP-head fully when faced
with a referentially ambiguous phrase and only then shift their attention to
the referent. It could also be the case that the time window in which an
influence of the preposition was suspected was too short. Therefore, one
proposal for future research would be to widen the gap between preposition and PP-head-NP even further. As far as this study is concerned, there
are a few significant differences in fixation probabilities; oddly enough, there
seem to be more fixations on the target in the ambiguous case. This could
be an artifact of this study, i.e. there could be a bias towards fixating the
competitor (even though none of the earlier time windows shows such a discrepancy). Nevertheless, it should be a subject of future research. The results
of this study seem to support the theory that constraints from single
constituents are collected during an incremental construction of semantic
representations.

¹⁴ As can be seen from the fact that no bootstrapping analysis over subjects yielded
significant results: in most time windows, a single subject did not look at either target
or competitor; only the average over subjects shows an effect.
References

Alexejenko, S., Brukamp, K., Cieschinger, M., and Deng, X. (2009). Meaning, vision and situation. Study project.

Bärnreuther, B. (2007). Investigating the influence of visual and semantic saliency on overt attention. BSc thesis, University of Osnabrück, Cognitive Science.

Bosch, P. (2009). Processing Definite Determiners. Formal Semantics Meets Experimental Results. Lecture Notes in Computer Science.

Chambers, C. G., Tanenhaus, M. K., Eberhard, K. M., Carlson, G. N., and Filip, H. (1998). Words and worlds: The construction of context for definite references.

Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language. Cognitive Psychology, 6.

Hartmann, N. (2006). Processing grammatical gender in German: An eye-tracking study on spoken-word recognition. BSc thesis, University of Osnabrück, Cognitive Science.

Karabanov, A. N. (2006). Eye tracking as a tool for investigating the comprehension of referential expressions. BSc thesis, University of Osnabrück, Cognitive Science.

Kleemeyer, M. (2007). Contribution of visual and semantic information and their interaction on attention guidance: An eye-tracking study. BSc thesis, University of Osnabrück, Cognitive Science.

Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124(3).

Tanenhaus, M. K., Magnuson, J. S., Dahan, D., and Chambers, C. (2000). Eye movements and lexical access in spoken-language comprehension: Evaluating a linked hypothesis between fixations and linguistic processing.
A Visual Stimuli
Figure 17: Visual Stimuli 1-6
Figure 18: Visual Stimuli 7-10
B Auditory Stimuli
(a) is the unambiguous condition, (b) is the ambiguous one.
1. (a) Im Wald ist viel los. Ein paar Hügel säumen die kleine Lichtung.
Bäume spenden Schatten. Zwei Eulen schauen sich um, Rehe
spielen am Wasser und auch ein Fuchs traut sich dazu. Die Eule
in dem kleinen Baum hält nach Beute Ausschau.
(b) Im Wald ist viel los. Ein paar Hügel säumen die kleine Lichtung.
Bäume spenden Schatten. Zwei Eulen schauen sich um, Rehe
spielen am Wasser und auch ein Fuchs traut sich dazu. Die Eule
auf dem kleinen Baum hält nach Beute Ausschau.
2. (a) In der Savanne. In der felsigen Landschaft traben zwei Elefanten. Die beiden Männer beobachten die vielen durstigen Tiere
am einzigen Wasserloch. Der Mann vor dem grauen Felsen ist
ein erfahrener Jäger.
(b) In der Savanne. In der felsigen Landschaft traben zwei Elefanten. Die beiden Männer beobachten die vielen durstigen Tiere
am einzigen Wasserloch. Der Mann neben dem grauen Felsen ist
ein erfahrener Jäger.
3. (a) Im Wartezimmer. Die Kisten sind voller Spielzeug. Die Frauen
warten schon lange. Die beiden Kinder langweilen sich trotz der
vielen Spielsachen. Auf dem Tisch liegen Zeitschriften. Das Kind
vor der einen Kiste wird gerade aufgerufen.
(b) Im Wartezimmer. Die Kisten sind voller Spielzeug. Die Frauen
warten schon lange. Die beiden Kinder langweilen sich trotz der
vielen Spielsachen. Auf dem Tisch liegen Zeitschriften. Das Kind
bei der einen Kiste wird gerade aufgerufen.
4. (a) Der erste Frühlingstag. Die Kinder spielen vergnügt, nur mit den
Eimern spielt gerade keins. Zwei Kätzchen schleichen herum, und
Blumen blühen überall. Die Frau geniesst die Sonne. Die Katze
vor dem kleinen Kind geht jetzt auf Erkundungstour.
(b) Der erste Frühlingstag. Die Kinder spielen vergnügt, nur mit den
Eimern spielt gerade keins. Zwei Kätzchen schleichen herum, und
Blumen blühen überall. Die Frau geniesst die Sonne. Die Katze
bei dem kleinen Kind geht jetzt auf Erkundungstour.
5. (a) Nachmittags im Park. Bänke laden zum Ausruh’n ein. Zwei
Frauen sind mit ihren Enkeln da. Zwei Picknickkörbe steh’n
bereit, die Kinder spielen Fussball und ein Hund tollt freudig
umher. Der Korb hinter der einen Frau ist voller Leckereien.
(b) Nachmittags im Park. Bänke laden zum Ausruh’n ein. Zwei
Frauen sind mit ihren Enkeln da. Zwei Picknickkörbe steh’n
bereit, die Kinder spielen Fussball und ein Hund tollt freudig
umher. Der Korb bei der einen Frau ist voller Leckereien.
6. (a) Im vollen Wirtshaus. An den Tischen sitzen ein paar Männer
und trinken etwas. Zwei Hunde schnüffeln neugierig, die Männer
warten auf’s Essen und die Kellnerin serviert ein Bier. Der Hund
unter dem einen Tisch bettelt um einen Knochen.
(b) Im vollen Wirtshaus. An den Tischen sitzen ein paar Männer
und trinken etwas. Zwei Hunde schnüffeln neugierig, die Männer
warten auf’s Essen und die Kellnerin serviert ein Bier. Der Hund
bei dem einen Tisch bettelt um einen Knochen.
7. (a) Im Klassenzimmer. Es gibt ein paar Tische und Hocker für die
Schüler. Die beiden Kinder setzen sich gerade, die Spielsachen
sind weggeräumt und die Lehrerin beginnt die Stunde. Das Kind
vor dem einen Tisch hört ihr noch nicht richtig zu.
(b) Im Klassenzimmer. Es gibt ein paar Tische und Hocker für die
Schüler. Die beiden Kinder setzen sich gerade, die Spielsachen
sind weggeräumt und die Lehrerin beginnt die Stunde. Das Kind
bei dem einen Tisch hört ihr noch nicht richtig zu.
8. (a) Ein Grillfest im Sommer. Die Familie ist mit zwei Autos da. Bei
den Bäumen spielt ein Hund. Die zwei Frauen sind schon hungrig,
die Kinder sitzen am Feuer und der Vater passt aufs Essen auf.
Die Frau hinter dem grossen Auto holt noch mehr Kohle.
(b) Ein Grillfest im Sommer. Die Familie ist mit zwei Autos da. Bei
den Bäumen spielt ein Hund. Die zwei Frauen sind schon hungrig,
die Kinder sitzen am Feuer und der Vater passt aufs Essen auf.
Die Frau neben dem grossen Auto holt noch mehr Kohle.
9. (a) Auf dem Bauernhof. Die Kinder beobachten die Enten und Gänse
an den Teichen. Zwei Katzen streifen umher, und Hühner gackern
um die Wette. Die Bäuerin hat viel zu tun. Die Katze an dem
kleinen Teich hat grad einen Fisch entdeckt.
(b) Auf dem Bauernhof. Die Kinder beobachten die Enten und Gänse
an den Teichen. Zwei Katzen streifen umher, und Hühner gackern
um die Wette. Die Bäuerin hat viel zu tun. Die Katze bei dem
kleinen Teich hat grad einen Fisch entdeckt.
10. (a) Mitten in der Prärie. Kakteen wachsen auf den Felsen. Zwei Cowboys schlagen ein Lager auf. Zwei Geier suchen nach Nahrung
und Pferde laufen herum. Ein schwarzer Hund schaut sich um.
Der Geier vor dem einen Cowboy ist schon ganz abgemagert.
(b) Mitten in der Prärie. Kakteen wachsen auf den Felsen. Zwei Cowboys schlagen ein Lager auf. Zwei Geier suchen nach Nahrung
und Pferde laufen herum. Ein schwarzer Hund schaut sich um.
Der Geier bei dem einen Cowboy ist schon ganz abgemagert.
C Fillers - Visual
Figure 19: Filler Images 1-6
Figure 20: Filler Images 7-12
Figure 21: Filler Images 13-15
D Fillers - Auditory
1. Beim Zahnarzt. Die Arzthelferin holt die nötigen Instrumente aus
den Schränken. Der Zahnarzt steht noch hinter dem Trennschirm
am Tisch und trinkt noch seinen Kaffee aus. Der Patient auf dem
Behandlungsstuhl fühlt sich schon ein wenig unwohl.
2. Im grossen Burghof. Der grosse goldene Ritter bringt dem kleinen
gerade den Schwertkampf bei. Der Mann bei den Fässern betrinkt
sich und die Marktfrau bietet ihre Waren feil. Der Ritter mit der
Hellebarde bewacht das Stadttor.
3. Nachmittags im Zoo. Zwei Löwen stehen an der Tränke und ein Elefant
isst eine Portion Heu. Die Oma und ihr Enkel beobachten begeistert die
vielen Tiere. Der Tierpfleger will gleich das Elefantengehege sauber
machen.
4. Tief im Dschungel. Auf den Bäumen hocken Vögel und auf dem Boden
streiten sich zwei Affen um Bananen. Die Schildkröte versucht die
reifen Früchte zu erreichen. Der einzelne Affe versucht die anderen
vor der Schlange zu warnen.
5. In der Zirkusmanege. Die Affen und der Elefant rollen Fässer umher,
während ein Clown jongliert. Der Dompteur passt auf, dass die Tiere
alles richtig machen. Die Zuschauer auf den Rängen amüsieren sich
prächtig.
6. Auf einer Lichtung. Bei den Bäumen und an den Blumen tummeln
sich viele Tiere. Zwei Frischlinge halten sich nah bei ihrer Mutter auf,
die kleinen Füchse trauen sich weiter weg. Das Eichhörnchen klettert
lieber auf dem Baum umher.
7. Auf dem Wochenmarkt. In den Körben und auf dem Tisch liegt
frisches Gemüse. Der Mann ist mit dem Fahrrad gekommen um bei
der Bäuerin seine Einkäufe zu erledigen. Die Bäuerin begrüsst ihn und
seinen Hund gerade freundlich.
8. Beim Familienausflug. Die Mutter und ihr Kind wollen gleich mit
dem Kanu lospaddeln. Der Vogel beim Korb versucht etwas zu essen
zu ergattern und die Enten gehen schwimmen. Der Junge hat seinen
Fussball zum Spielen mitgenommen.
9. Auf einer Ranch. Der Bulle frisst Stroh, das die Rancher gerade
zusammengeharkt haben. Das Gras hat der Rancher gebündelt, um
es später den Pferden zu geben. Die Frau vor dem Wagen wird gleich
noch die Pferde striegeln.
10. Beim Kinderarzt. Beim Bett stehen allerlei medizinische Gerätschaften
und im Schrank liegt Spielzeug. Der Junge auf dem Stuhl hat sich beim
Sportunterricht verletzt. Die Ärztin sagt ihm, dass er wahrscheinlich
auf Krücken nach Hause gehen muss.
11. Ein Tag im Stadtpark. Ein paar Hasen und Rehe ruhen sich unter den
Bäumen aus. Die Frau macht einen Spaziergang mit ihrem Hund. Sie
unterhält sich gerade mit dem Mann. Die Ente am Teich schaut ihren
Jungen beim Schwimmen zu.
12. In einem kleinen Park. Die Blumen blühen und die vielen Bäume sind
voller Blätter. Die Oma und ihr Enkel sind mit dem Hund zum Spielen
in den Park gekommen. Das Fahrrad an dem einen Baum gehört dem
kleinen Jungen.
13. Morgens in der Schule. Die Kleiderschränke sind noch leer und die
Stühle noch nicht besetzt. Nur die Lehrerin und ein Schüler sind
schon da. Sie fragt ihn wo die anderen bleiben. Die Aktentaschen im
blauen Schrank gehören der Lehrerin.
14. Auf dem Reiterhof. Beim Zaun liegt in einer Schubkarre Stroh für die
Pferde. Auf dem Zaun hängen auch ein paar Sattel. Das kleine Kind
will gleich einen Ausritt machen. Das Pferd neben der Tränke hat
schon einen Sattel auf dem Rücken.
15. Im Indianerdorf. Ein grosses Tipi ist aufgebaut und die Pferde haben
Jagdbemalung. Der Häuptling redet mit dem Cowboy über die bevorstehende Jagd. Das braune Pferd, das gerade am Fluss trinkt, gehört
dem Häuptling.
E Statistics
ROI   H   mean      std-dev.
1     0   10.5542    2.1560
2     0    5.8977    1.1071
3     0    8.7508    1.5973
4     0    5.3703    0.8303
5     1    4.3536    0.8902
6     0    6.0272    0.9649
7     0    9.2993    2.2232
8     0   49.7469    5.0927

Table 3: Statistics of the Subject Validity - H: Outcome of the Lilliefors-test
with α = 0.05, mean value of ROI, standard deviation (both in percent).
Fixations on (from top to bottom): target object, competitor object, target locationary object, distractor for target locationary object, competitor
locationary object, distractor for competitor locationary object, reference
object, beyond ROI
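The H column gives the outcome of the Lilliefors test for normality at α = 0.05 (H = 1: normality rejected). A Monte Carlo variant of such a test can be sketched as follows; this is an assumed, simplified illustration, not the implementation used for the thesis:

```python
# Sketch of a Monte Carlo Lilliefors test (assumed procedure): the KS
# distance between the standardized sample and the standard normal CDF is
# compared against the same statistic on normal samples of equal size,
# because mean and variance are estimated from the data.

import math
import random
from statistics import mean, stdev

def _ks_normal(xs):
    m, s = mean(xs), stdev(xs)
    zs = sorted((x - m) / s for x in xs)
    n = len(zs)
    d = 0.0
    for i, z in enumerate(zs):
        cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
        d = max(d, abs(cdf - i / n), abs(cdf - (i + 1) / n))
    return d

def lilliefors(xs, alpha=0.05, n_mc=1000, seed=0):
    rng = random.Random(seed)
    d_obs = _ks_normal(xs)
    # null distribution: same statistic on simulated normal samples
    count = sum(
        1
        for _ in range(n_mc)
        if _ks_normal([rng.gauss(0.0, 1.0) for _ in range(len(xs))]) >= d_obs
    )
    p = count / n_mc
    return (1 if p < alpha else 0), d_obs, p
```

Standard implementations use tabulated critical values instead of Monte Carlo simulation, but the decision rule (H = 1 when the distance is improbably large under normality) is the same.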
ROI   H   mean      std-dev.
1     0    7.1418    5.5750
2     0    3.6015    2.5735
3     1    8.2362    7.5051
4     1    5.7861    7.8266
5     0    3.3063    4.3488
6     1    8.0087   13.1942
7     0   12.7139    8.3210
8     0   51.2055   10.8276

Table 4: Statistics of the Stimulus Validity, for the first 2500 ms - H: Outcome of the Lilliefors-test with α = 0.05, mean value of ROI, standard
deviation. ROIs as above.
ROI   H   mean      std-dev.
1     0    6.9826    2.5666
2     0    6.1760    2.8477
3     0    6.1081    1.7871
4     0    6.0384    3.5721
5     1    5.2184    1.6377
6     1    6.6296    7.2425
7     0    9.7145    3.5632
8     1   53.1324   10.8374

Table 5: Statistics of the Stimulus Validity, for the timespan between 2500
and 15000 ms - H: Outcome of the Lilliefors-test with α = 0.05, mean value
of ROI, standard deviation. ROIs as above.
ROI   H   mean      std-dev.
1     0   10.4496    3.4217
2     0    5.9346    2.2071
3     0    8.7613    2.6996
4     0    5.3531    3.5624
5     1    4.3345    1.8885
6     1    6.0183    7.2006
7     0    9.1997    3.6903
8     0   49.9490    9.5045

Table 6: Statistics of the Stimulus Validity, for the whole presentation of
the stimulus - H: Outcome of the Lilliefors-test with α = 0.05, mean value
of ROI, standard deviation. ROIs as above.
F Complementary Figures
Figure 22: Timecourse of total fixations, ambiguous condition
Figure 23: Timecourse of total fixations, unambiguous condition
Figure 24: Timecourse of total fixations, unambiguous condition, timespan
between subject head onset and end of stimulus
Figure 25: Timecourse of total fixations, ambiguous condition, timespan
between subject head onset and end of stimulus
List of Figures

1  Example block world, taken from Bosch (2009)
2  Exemplary visual stimulus
3  Eye Link II Head-Mounted Eye-Tracking System
4  Example image for regions of interest
5  Fixations not belonging to any region of interest
6  Subject validity, fixations over all images
7  Stimulus validity, fixations over all subjects, between 0 and 2500 ms
8  Stimulus validity, fixations over all subjects, between 2500 and 15000 ms
9  Time course of fixation probabilities, ambiguous condition. Yellow stripe: first introduction of target/competitor-NP. First line: mean onset subject-head-NP, second line: mean onset prepositional NP-head.
10 Time course of fixation probabilities, unambiguous condition. Yellow stripe: first introduction of target/competitor-NP. First line: mean onset subject-head-NP, second line: mean onset prepositional NP-head.
11 Time course of fixation probabilities, unambiguous condition, time span between subject head onset and end of stimulus
12 Time course of fixation probabilities, ambiguous condition, time span between subject head onset and end of stimulus
13 Significant Differences after Bootstrapping - Fixations on Target, Unamb. vs. Amb. Condition
14 Significant Differences after Bootstrapping - Fixations on Competitor, Unamb. vs. Amb. Condition
15 Significant Differences after Bootstrapping - Fixations on Target vs. Competitor, Ambiguous Condition
16 Significant Differences after Bootstrapping - Fixations on Target vs. Competitor, Unambiguous Condition
17 Visual Stimuli 1-6
18 Visual Stimuli 7-10
19 Filler Images 1-6
20 Filler Images 7-12
21 Filler Images 13-15
22 Timecourse of total fixations, ambiguous condition
23 Timecourse of total fixations, unambiguous condition
24 Timecourse of total fixations, unambiguous condition, timespan between subject head onset and end of stimulus
25 Timecourse of total fixations, ambiguous condition, timespan between subject head onset and end of stimulus
List of Tables

1  Statistics of study participants, collected from subject questionnaires
2  Onsets of subject head NP, prepositions and prepositional NP
3  Statistics Subject Validity
4  Statistics Stimulus Validity, 0-2500 ms
5  Statistics Stimulus Validity, 2500-15000 ms
6  Statistics Stimulus Validity, whole timecourse
G Consent Sheet
Christian Hoffmann
Arbeitsgruppe Computerlinguistik
Universität Osnabrück
Albrechtstrasse 28
49069 Osnabrück
email: [email protected]
Aufklärung/Einwilligung
Sehr geehrte Teilnehmerin, sehr geehrter Teilnehmer,
Sie haben sich freiwillig zur Teilnahme an dieser Studie gemeldet. Hier erhalten
Sie nun einige Informationen zu Ihren Rechten und zum Ablauf des folgenden Experiments. Bitte lesen Sie sich die folgenden Abschnitte sorgfältig
durch.
1) Zweck der Studie
Ziel dieser Studie ist es, neue Erkenntnisse über das Satzverständnis anhand
von Eye-Tracking-Daten zu erhalten.
2) Ablauf der Studie
In dieser Studie werden Ihnen 25 Bilder auf einem Computermonitor gezeigt.
Bitte sehen sie sich die Bilder sorgfältig an. Zugleich werden Sie einen kurzen
Text zu hören bekommen. Hören Sie aufmerksam zu.
Um Ihre Blickposition zu errechnen, wird Ihnen ein ”Eye-Tracker” auf den
Kopf geschnallt. Dieses Gerät erfasst die Position Ihres Auges mit Hilfe von
kleinen Kameras und Infrarotsensoren. Dieses Verfahren ist ein psychometrisches Standardverfahren, das in dieser Art bereits vielfach angewandt und
getestet wurde. Bei unseren bisherigen Erfahrungen und Experimenten mit
dem Gerät ist keine Versuchsperson zu Schaden gekommen.
Zu Beginn der Untersuchung muss der ”Eye-Tracker” eingestellt werden,
dieser Vorgang dauert etwa 10-15 Minuten. Das eigentliche Experiment
dauert dann etwa 15 Minuten. Der Versuchsleiter wird während des ganzen
Experiments mit Ihnen im Versuchsraum sein und steht Ihnen für Fragen
jederzeit zur Verfügung. Nach der Studie erhalten Sie weitere Informationen
zum Sinn und Zweck dieser Untersuchung. Bitte geben Sie diese Informationen an niemanden weiter um die Objektivität eventueller Versuchspersonen
zu wahren.
3) Risiken und Nebenwirkungen
Diese Studie ist nach derzeitigem Wissensstand des Versuchsleiters ungefährlich
und für die Teilnehmer schmerzfrei. Durch Ihre Teilnahme an dieser Studie
setzen Sie sich keinen besonderen Risiken aus und es sind keine Nebenwirkungen bekannt. Da diese Studie in ihrer Gesamtheit neu ist, kann
das Auftreten von noch unbekannten Nebenwirkungen allerdings nicht ausgeschlossen werden.
Wichtig: Bitte informieren Sie den Versuchsleiter umgehend, wenn Sie unter
Krankheiten leiden oder sich derzeit in medizinischer Behandlung befinden.
Teilen Sie dem Versuchsleiter bitte umgehend mit, falls Sie schon einmal
einen epileptischen Anfall hatten. Bei Fragen hierzu wenden Sie sich bitte
an den Versuchsleiter.
4) Abbruch des Experiments
Sie haben das Recht, diese Studie zu jedem Zeitpunkt und ohne Angabe
einer Begründung abzubrechen. Ihre Teilnahme ist vollkommen freiwillig
und ohne Verpflichtungen. Es entstehen Ihnen keine Nachteile durch einen
Abbruch der Untersuchung.
Falls Sie eine Pause wünschen oder auf die Toilette müssen, ist dies jederzeit
möglich. Sollten Sie zu irgendeinem Zeitpunkt während des Experiments
Kopfschmerzen oder Unwohlsein anderer Art verspüren, dann informieren
Sie bitte umgehend den Versuchsleiter.
5) Vertraulichkeit
Die Bestimmungen des Datenschutzes werden eingehalten. Personenbezogene Daten werden von uns nicht an Dritte weitergegeben. Die von Ihnen
erfassten Daten werden von uns anonymisiert und nur in dieser Form weiterverarbeitet oder veröffentlicht.
6) Einverständniserklärung
Bitte bestätigen Sie durch Ihre Unterschrift die folgende Aussage:
”Hiermit bestätige ich, dass ich durch den Versuchsleiter dieser Studie über
die oben genannten Punkte aufgeklärt und informiert worden bin. Ich habe
diese Erklärung gelesen und verstanden. Ich stimme jedem der Punkte zu.
Ich ermächtige hiermit die von mir in dieser Untersuchung erworbenen Daten
zu wissenschaftlichen Zwecken zu analysieren und in wissenschaftlichen Arbeiten anonymisiert zu veröffentlichen.
Ich wurde über meine Rechte als Versuchsperson informiert und erkläre mich
zu der freiwilligen Teilnahme an dieser Studie bereit.”
Ort, Datum
Unterschrift
Bei Minderjährigen, Unterschrift des Erziehungsberechtigten
Acknowledgments
I want to thank Prof. Peter Bosch and Prof. Peter König for their constant support during the development of this thesis and the opportunity to
conduct research of my own in such an exciting field. Furthermore, I want
to thank Torsten Betz and Frank Schumann from the NBP-group for their
open ear and advice when it was dearly needed. Lastly, I want to thank
Vera Mönter for her moral support and permanent motivation.
Confirmation
Hereby I confirm that I wrote this thesis independently and that I have not
made use of any other resources or means than those indicated.
Hiermit bestätige ich, dass ich die vorliegende Arbeit selbständig verfasst
und keine anderen als die angegebenen Quellen und Hilfsmittel verwendet
habe.
Christian Hoffmann, Nijmegen, September 29, 2009