DOCTORAL THESIS OF
THE UNIVERSITÉ PIERRE ET MARIE CURIE
Specialty
Cognitive Science
Doctoral school: Cerveau Cognition Comportement
Presented by
Guillaume Dezecache
To obtain the degree of
DOCTOR of the UNIVERSITÉ PIERRE ET MARIE CURIE
Studies on emotional propagation in humans:
the cases of fear and joy
Publicly defended on 17 December 2013 before a jury composed of:
Pr. Natalie SEBANZ, Reviewer
Dr. Daniel HAUN, Reviewer
Pr. Robin DUNBAR, Examiner
Dr. Mathias PESSIGLIONE, Examiner
Pr. Dan SPERBER, Examiner
Dr. Didier BAZALGETTE, Examiner
Dr. Pierre JACOB, Thesis supervisor
Dr. Julie GREZES, Thesis supervisor
Contents

Abstract 8
Résumé 9
Foreword 11
Chapter One: The crowd in 19th and 20th century early social psychology 15
The seven key-characteristics of crowd behavior 16
How are mental and emotional homogeneity achieved within crowds? 17
The concept of contagion in early crowd psychology, and its epistemological consequences for our current understanding of the process of emotional transmission 20
Chapter Two: How can emotions of fear and joy propagate in crowds? 23
How can emotions become collective? 23
What we know from the study of dyadic interactions 23
Can emotional transmission go transitive? 26
Evidence for unintentional emotional contagion beyond dyads (Dezecache et al., 2013) 26
What psychological mechanisms are at work? 34
“Social comparison” models 34
“Conditioning” models 34
The “primitive emotional contagion” model 35
Emotional contagion as emotional communication 36
How do we react to others’ emotional displays? 37
How do shared-representations and emotional processes cooperate in response to social threat signals? (Grèzes & Dezecache, in press) 37
Is emotional transmission equivalent to contagion? 48
Emotional transmission as a process of influencing others 49
Emotional transmission ≠ contagion 50
An evolutionary perspective on emotional communication (Dezecache, Mercier & Scott-Phillips, 2013) 50
Summary of this chapter 64
Chapter Three: Why do emotions of fear and joy propagate in crowds? 65
Why should we expect emotions to spread in crowds? 65
Information acquisition and sharing at the basis of emotional crowds 65
Humans spontaneously compensate others’ informational needs in threatening contexts (Dezecache et al., in preparation) 67
Summary of this chapter 81
Chapter Four: General discussion 83
Summary of the main findings 83
Emotions of fear and joy can be transmitted beyond dyads 83
Emotional transmission as a process of influencing others 84
Emotional transmission is sensitive to others’ informational needs 86
Beyond audience effects: how others’ mental states can influence transmission of emotional information behavior 87
Emotional transmission is not contagion 89
Epilogue: Emotional transmission beyond triads 91
Where traditional views might have gone wrong 92
Revising the key-characteristics of crowd behavior 92
Are crowd members suggestible? 95
The “myth” of crowd panics 97
Why don’t crowds panic? Tentative explanations 99
Emotional crowd behavior is regulated by emerging social norms 99
Modern crowd psychologists face serious methodological issues 101
Different levels of analysis at the source of the dilemma 103
Natural reactions to threat: affiliative tendencies vs. self-preservative responses 103
Summary 104
General references 106
Appendix 120
Acknowledgments
When I was little, I wanted to become a footballer, and I would like to thank the crowd of people who, consciously or unconsciously, prevented me from even attempting to pursue that great dream. They inexorably led me towards a PhD instead.
High on that long list would be my father and my mother: making me do my homework every evening, owning for a very long time nothing but a black-and-white television set on which the players of the opposing team could not be told apart, and taking me to mass during Téléfoot all sent my footballing ambitions to the great scrapyard of existence.
Nor can I forget my faithful square-footed companions from Calais, who helped me get over the tragedy of that thwarted dream: Maxime "Le Duc" Carpentier, François Timmerman, Jean-Baptiste Sodooren, Rémy Mazet, Amaury Decourcelle, Guillaume Tourbeaux, Aurélien Fournier (whose feet, it should be said in passing, are a little more skilful), Raphaël Foulon... There would be many others, but, God be praised, they will never be curious enough to open this thesis and regret not finding their names in it.
I would also like to thank Alexandre Billon: on one of those rainy days of which Villeneuve-d'Ascq has so many, he advised me to read Dan Sperber's La Contagion des Idées. That book took me away forever from the football pitch and from the complete works of Lévinas. From Villeneuve-d'Ascq I must also thank Claudio Majolino: while he was not especially keen for me to leave the ranks of phenomenology, his limited sympathy for the cognitive sciences allowed me to publish my first little scientific article.
Discovering Dan Sperber's work turned everything upside down for me. I immediately threw away my copy of L'Archéologie du Savoir (I did not understand much of it, but I was very attached to it). Thank you, Dan, for having been an attentive and benevolent supervisor throughout my master's and my PhD.
The Nash team (Olivier Morin, Nicolas Baumard, Coralie Chevallier, Hugo Mercier, Olivier Mascaro, Jean-Baptiste André, Anikó Sebestény, Hugo Viciana, Nicolas Claidière) contributed enormously to this thesis: since they are all at least five mental years ahead of me, they helped me avoid a number of wrong turns.
On the way to the rue d'Ulm, I had the great fortune of running into Laurence Conty. Her crusade against endless waffle and for experimental approaches proved contagious, and I am grateful to her a hundred times over. I must also thank her for introducing me to Julie Grèzes.
Julie is without doubt the person to whom this work owes the most: I am infinitely grateful to her for taking me into her team despite the mathematical abilities of a first-grader, for putting up with my far-fetched protocols, for listening to my extra-scientific whining, and for taking me along with her to be buried away at Cerisy with psychoanalysts. I learned an enormous amount at her side, and if it were all to be done again, I would whine less but would conduct everything else in exactly the same way.
A big thank you to Pierre Jacob. Although he could have imposed on me ideas far wiser than those I adopted, he left me complete intellectual freedom and listened to my positions, and my changes of position, with enthusiasm and indulgence.
I must also thank Etienne Kœchlin, for welcoming the Marxist-Rasta philosopher I then was into his laboratory, and Laura Chadufau for her patience and energy. The evolutionary orientation of this work owes a great deal to Robin Dunbar. If I came back from Albion with a study on "people laughing in pubs" that earned me a thousand (kindly) jibes, I also returned with the certainty of wanting to take part in the great investigation of what it means to be human. Thanks also to Bronwyn Tarr, Rob Stubbs, Taiji Ogawa, Réka Bakonyi, Juan Aragon, Aurélie Barré, Andy Smith and Raffaella Smith for those incredible moments in Oxford.
Thanks to Marina Davila-Ross, I was able to exile myself for a short month under the Zambian sun: I watched chimpanzees by day and took on the talented (if slightly cheating) footballers of the Chimfunshi Wildlife Centre in the evening. I left ten kilos there, but came away with muscular calves and primatologist dreams: when I grow up, I want to do just like Marina.
Finally, even if they did not give me Zlatan Ibrahimovic's salary, I would like to thank Didier Bazalgette and the DGA-MRIS, as well as the Conseil Régional d'Île-de-France, for their financial support. I promise, your money has not been wasted.
Having saluted the great, now for my fellow juniors:
To the Cogmaster crowd: Camillia Bouchon, who will not fail to be late for my thesis defense; Romain Ligneul, who can consider a career as a ski instructor if research does not work out; Victor Bénichoux, the mad tackler, to whom I owe the loss of ten toenails; Bahar Gholipour, Marie-Sarah Adenis, Jean-Rémy Martin, Clio Coste, Anne Kösem, Louise Goupil, Raphaëlle Abitbol, Zina Skandrani, François Le Corre, Léonor Philip, Margaux Perez, Aurélie Coubart, Valeria Mongelli, Klara Kovarski, Hadrien Orvoën... and doubtless others I am forgetting.
To the students of the LNC: the list is too long, and they will recognize themselves.
To the Social Team: Emma "Podouce" Vilarem, Marwa "Habibi" El-Zein, Michèle "Mich-Mich" Chadwick, Terry Eskenazi, Lise Hobeika, Lucile Gamond, Matias Baltazar: I am going to miss you.
Finally, I would like to dedicate this work to my grandfather: though he abhorred dreadlocks, I am certain he would have loved to see me bring a piece of work to completion and present it, in a suit and tie, in a prestigious place. I hope the excellence of all these people will have been contagious!
Abstract
Crowd psychologists of the 19th and 20th centuries have left us with the idea that emotions
are so contagious that they can cause large groups of individuals to rapidly and spontaneously converge on an emotional level. Good illustrations of this claim include situations
of crowd panic, where large movements of escape are thought to emerge through local interactions, and without any centralized coordination. Our studies sought to investigate the propagation of two allegedly contagious emotions, namely fear and joy. This thesis presents two
theoretical and two empirical studies that have investigated, at two different levels of analysis, the phenomenon of emotional propagation of fear and joy: firstly, at a proximal level
of analysis (the how-question), I discuss the potential mechanisms underlying the transmission of these emotions in crowds, and the extent to which emotional transmission can be
considered analogous to a contagion process. Secondly, at an evolutionary/ultimate level of
analysis (the why-question), I ask why crowd members seem to be so inclined to share their
emotional experience of fear and joy with others. I present a study showing that the transmission of fear might be facilitated by a tendency to modulate one’s involuntary fearful facial
reactions according to the informational demands of conspecifics, suggesting that the biological function of spontaneous fearful reactions might be communication of survival-value
information to others. Finally, I discuss the implications of these studies for the broader
understanding of emotional crowd behavior.
Emotional transmission; emotional contagion; emotional communication; fear; joy; crowd
psychology
Résumé
Crowd psychologists of the 19th and 20th centuries have left us with the idea that emotions are so contagious that they can lead a large number of individuals to rapidly and spontaneously adopt the same emotion. One thinks, for instance, of situations of crowd panic, where collective flight movements are liable to emerge in the absence of any central coordination. The work presented in this thesis sets out to study the propagation of two emotions regarded as particularly contagious, fear and joy. Their propagation is studied at two levels of analysis: first, at the proximal level (the "how" question), I discuss the potential mechanisms that allow emotion to propagate within a crowd, and I raise the question of whether it is well founded to regard emotional transmission as a contagion process. Second, at the evolutionary or ultimate level of analysis (the "why" question), I ask why crowd members appear so ready to share their emotional states of fear and joy with their neighbors. In this respect, I present a study showing that the transmission of fear may be facilitated by the propensity of the human cognitive system to modulate the intensity of fear-related facial reactions according to the informational state of conspecifics. These results suggest that spontaneous facial reactions of fear have, as their biological function, the communication to others of information crucial for survival. Finally, I discuss the implications of this work for our more general understanding of the links between emotions and crowd behavior.
Emotional transmission; emotional contagion; emotional communication; fear; joy; crowd psychology
"La majorité était venue là par pure curiosité,
mais la fièvre de quelques-uns a rapidement gagné le cœur de tous."
Tarde, 1903
Figure 1: Inside the Iroquois Theatre (Chicago, USA) while the fire raged (December 1903). Source:
Everett, 1904
Foreword
The overall aim of this thesis is to investigate the proximate mechanisms and the biological
function of the production and reception of emotional signals of fear and joy in humans, and
more precisely, to attempt a response to a twofold question: how and why do we involuntarily
transmit our emotions of fear and joy to others?
Although it would have been relevant to consider the transmission process for the whole
range of emotions humans can feel and express, I have focused on the emotions of fear and
joy. Expressions of fear, as they signal an imminent threat in the environment, are most
likely to spread in large groups and to structure collective behaviors. Numerous examples of
mass hysteria can indeed be found in historical records (Bernstein, 1990; Cook, 1974; Evans
& Bartholomew, 2009; Headley, 1873; Hecker & Babington, 1977; Kerckhoff & Back, 1968;
Stahl & Lebedun, 1974; Wessely, 1987). While collective euphoria might be less common, joy
can also lead to emotion-based collective behavior (Ehrenreich, 2007; Evans & Bartholomew,
2009), and is known to spread like an "infectious disease" in social networks (Fowler &
Christakis, 2008; Hill, Rand, Nowak, & Christakis, 2010).
In this respect, emotional crowd situations (as they are conceptualized in the early social
psychology and sociology literature [Tarde, 1903, 1890; Le Bon, 1896; Sighele, 1901; Pratt,
1920]) provide a fruitful framework for the study of emotional propagation. To use the
phrase coined by Gustave Le Bon (1896), emotions are highly contagious and spread like
germs in groups of individuals. Crowd situations, by the over-proximity they impose on crowd
members (Moscovici, 1993, 2005), are highly conducive to a wide diffusion of emotions and to
the rapid adoption, in large groups of individuals, of “similar affective states and patterns of
behavior through local interactions and without any prior centralized coordination” (Raafat,
Chater, & Frith, 2009).
It is important to note that, for the sharing of emotional information between individuals A and B to constitute a genuine case of emotional transmission, B's emotional state must be caused by the perception of A's emotional signals, rather than by B's perception of the source of A's emotion. This condition enables a distinction to be made between cases of mere collective reactions to a single event (e.g., a general panic provoked by the news of a sudden financial crisis, where people rush to bank branches to collect their savings; in this case, the collective emotion is only accidental as it does not result from a set of local transmissions of information between agents – see Figure 2A) and genuine cases of transmission-based emotional collective behavior (e.g., a stampede caused by a blaze; in this latter case, the collective emotion is the outcome of a set of local transmissions of fear and anxiety between individuals – see Figure 2B). It is fairly difficult to argue that such "pure" cases of transmission-based emotional collective behaviors actually exist, but it is crucial to distinguish between these two prototypical cases as each one is based on a distinct causal route and therefore calls for a very different cognitive explanation. Surprisingly, such a distinction is virtually absent from the investigations of crowd phenomena by early social psychologists.
Figure 2: … in the USA. Source: Wikimedia Commons; Figure 2B shows a stampede resulting from a blaze (the "Valley Parade fire disaster") which occurred in the stadium of Bradford (United Kingdom) in 1985. Source: The Sun. Schemas on the right show, for each representation, the likely causal route leading to the collective panic.
Last, there are two important points to take into consideration concerning the studies developed in this thesis:
Firstly, we focused on three types of signaling media or effectors: the face, the body, and
the vocal system (for the production of emotional vocalizations), each medium being likely
to employ a specific signature for each emotion (for facial signals, cf. Ekman, 2007; for
bodily or postural signals, cf. de Gelder, 2006; for vocal signals, cf. Sauter et al., 2010). This
claim, however, is disputed, as far as facial signals are concerned (Barrett, 2011; Fridlund,
1994). We are fully aware that emotions can also be expressed through numerous other
media including chemosensory signals (Mujica-Parodi et al., 2009), verbal (Rimé, Corsini,
& Herbette, 2002) and prosodic (Frick, 1985) signals. The massive use of internet-based
communication nowadays would also have called for the examination of the possibility of
emotional transmission through emoticons (Derks, Fischer, & Bos, 2008; Marcoccia, 2000).
We have decided, however, to concentrate on signals that could actually play a significant
role in real crowd contexts.
Secondly, while the transmission of affective information encompasses the transfer of various
kinds of contents, such as moods (which are long-lasting and diffuse phenomena with no obvious physiological signature) and emotions (which are briefer phenomena with a somewhat
specific physiological signature) (Ekman, 1994), we have focused exclusively on the transmission of emotions: their duration and physiological signature make them easier to study
in laboratory settings. Moreover, crowd phenomena are seldom long-lasting, thus involving
the transmission of emotions rather than that of moods.
To sum up, this work is dedicated to the examination of transmission-based emotional collective behavior, and attempts to tackle two main issues: through which cognitive mechanisms
do emotions propagate within crowds? Why do people in crowds tend to involuntarily transmit their emotions to their conspecifics?
Before examining these questions in more detail, we will discuss traditional views on crowd
behaviors, and the theoretical history of the concepts of contagion in the science of crowd
behavior. We will also examine the constant use (and the epistemological consequences)
of the metaphor of the disease by early social psychologists and sociologists when describing the spread of emotions in large groups. Finally, and after having presented the main
characteristics of the phenomenon of emotional transmission and the broad class of plausible psychological mechanisms that might serve it, we will discuss the possibility that such
mechanisms may actually apply to the transmission of emotions in crowd situations.
Chapter One: The crowd in 19th and
20th century early social psychology
Crowd behaviors and the spread of emotions in groups were central questions for the sociological tradition in the 19th and early 20th centuries. The numerous analyses by Gustave Le
Bon (1896), Gabriel Tarde (1903; 1890) and Scipio Sighele (1901) (among others), along with
the many crowd panic scenes in movies (e.g., the “Odessa Steps” scene in Sergei Eisenstein’s famous movie, The Battleship Potemkin [1925]) have continuously fed the collective
imagination and shaped a popular representation of the “crowd mind”. Of particular interest here, early crowd psychologists were probably among the first to systematically conceptualize emotions in groups as diseases or germs, and to introduce the term contagion to explain how emotions might be transmitted between individuals in large groups.
As will be shown below, this tradition of crowd psychologists and their writings has not
only had massive epistemological consequences on the way we understand emotion-based
collective behavior (i.e., how it has affected its popular representation), but has also had
an important impact on the way we conceptualize the process of transmission of emotional
information, which is often understood as passive, ineluctable and somewhat dangerous. In what
follows, we will briefly analyze the epistemological impact of this tradition on our popular representations of crowd behavior, which has indeed been extensively and convincingly
discussed elsewhere (e.g., Couch, 1968; Drury, 2002; Reicher & Potter, 1985; Reicher, 1996,
2001; Schweingruber & Wohlstein, 2005). We will also briefly explore the cognitive aspects
of the influence of a large number of people on individual cognitive functioning, and the
impact these factors have on the spread of emotions within crowds. Finally, we will discuss
the introduction of the concept of contagion and its impact on our current understanding of
the emotional transmission process.
The seven key-characteristics of crowd behavior
Crowd behavior, as a research topic, has immediate appeal to most audiences. It is true that its relative simplicity, as well as the quasi-absence of deep conceptualization, makes the
subject easily communicable. However, the reason for this “widespread interest” in the investigation of crowd behavior is to be found elsewhere. Despite the fact that very few of us have
ever been stuck in a panicking crowd, it would seem that we all have some idea about how
crowds and their members behave in emergency situations. Popular understanding of emotional crowd behavior, which could be quickly summarized by using striking formulae such as
Serge Moscovici’s “the crowd is a social animal that has broken its leash” (Moscovici, 1980)
or Gabriel Tarde’s “the crowd is an anonymous and monstrous beast” (Tarde, 1890), seems to
operate systematically around seven key-characteristics (Schweingruber & Wohlstein, 2005),
which are often redundant:
(i). Irrationality: because they are participating in collective action, crowd members become incapable of rational thought, even though each individual is perfectly rational
in isolation.
(ii). Emotionality: because crowd members are not capable of rational thought, their behavior is driven solely by their affects. This is because of a genuine incapacity to inhibit
their impulses.
(iii). Suggestibility: crowd members are the “slaves of [their] impulses” (Le Bon, 1896): they
are highly susceptible to the ideas, acts and emotions of other group members.
(iv). Destructiveness: crowd members are especially prone to anti-social behaviors, or as
termed by Scipio Sighele (1901), “a crowd is a substratum in which the germ of evil
spreads very easily, while the germ of good nearly always dies for lack of the necessary
conditions for survival”.
(v). Spontaneity: this point follows on from (i), (ii) and (iii). Acts perpetrated by crowds
are mostly spontaneous, i.e., not planned in advance.
(vi). Anonymity: People in crowds feel anonymous because they are surrounded by many
other individuals. Anonymity tends to increase antisocial behaviors, as the presence
of a great number of individuals decreases the risk of being held responsible for the
acts perpetrated. Scipio Sighele deals with this problem of collective responsibility in
his book La Foule Criminelle (1901).
(vii). Unanimity: everyone in the group behaves and feels in exactly the same way.
These characteristics paint a rather extravagant picture of the crowd which has greatly
contributed to its popular success. However, although it is hard to acknowledge a genuine
scientific stance in Le Bon’s and others’ writings, those writings were in fact intended to
be a scientific contribution to understanding how individuals in a group situation can, so
spontaneously and rapidly, become extremely homogeneous in their mental and emotional
states.
How are mental and emotional homogeneity achieved within
crowds?
According to Le Bon’s perspective, mental and emotional homogeneity are fundamental
characteristics of crowds. In his work The Crowd: a Study of the Popular Mind (1895),
Gustave Le Bon set out to explain how such homogeneity might be achieved in groups. To
this end, he identified three main causes that might contribute to the emergence of crowd
behavior ("crowd" being equated with "mental and emotional unity"). It should be noted
that it is not clear whether these causes were thought of as different stages of the crowd
formation process, or whether they were conceptualized as independent forces which could
be combined.
Firstly, the mere presence of a great number of people would have the immediate effect
of degrading the conscious self and the personality of individuals, as well as their sense
of responsibility. At the same time, each one would gain a feeling of "invincible power"
(as stated by Le Bon himself). This first cause is often termed submergence (Reicher, 2001)
and has subsequently been explored, through the concept of deindividuation, by many social
psychologists (Cannavale, Scarr, & Pepitone, 1970; Diener, Fraser, Beaman, & Kelem, 1976).
It has been shown, in particular, that anonymity in a group tends to increase antisocial
behavior (due to the disappearance of any sense of responsibility) and to favor the use of
poor rationalizations when participants are asked to account for their previous antisocial acts
(i.e., relying on collective excuses, such as “but everyone did the same”, instead of basing them
upon personal reasons). It therefore appears that the presence of many individuals can reduce
each one’s sense of responsibility and control over his own actions (compared to how that
individual would behave in isolation). As far as the mechanisms which support this process
of deindividuation are concerned, these could be a decline in self-evaluation processes (one
simply becomes incapable of judging one’s own behavior in terms of personal standards),
and/or less concern about social evaluation (one no longer judges one’s own behavior in
terms of social norms or standards). When personal and social values no longer operate,
any sense of guilt disappears. This would definitively pave the way for the production of
antisocial behaviors (Zimbardo, 1969). For others (e.g., Diener, 1977), the mechanisms at
the basis of deindividuation might be purely cognitive: the presence of a great number of
individuals would saturate one’s capacity to process information, thus making it impossible
to monitor one’s own actions or guide behavior according to personal standards. Inevitably,
each individual becomes unable to protect himself from the stimulation of other crowd
members, and is therefore strongly inclined to mimic their actions.
The second general cause leading to the emergence of crowd behavior is known as mental
contagion. This can be seen as a consequence of the process of submergence: since crowd
members have lost their capacity of self-evaluation, they become incapable of resisting passing ideas and emotions. Gustave Le Bon (1895) considers this process to be comparable to
that of hypnosis:
“The most careful observations seem to prove that an individual
immerged for some length of time in a crowd in action soon finds
himself – either as a consequence of the magnetic influence given out
by the crowd, or from some other cause of which we are ignorant
– in a special state, which much resembles the state of fascination
in which the hypnotized individual finds himself in the hands of the
hypnotizer. The activity of the brain being paralyzed in the case of the
hypnotized subject, the latter becomes the slave of all the unconscious
activities of his spinal cord, which the hypnotizer directs at will. The
conscious personality has entirely vanished; will and discernment are
lost. All feelings and thoughts are bent in the direction determined
by the hypnotizer.”
The third cause, that of suggestion, appears to be closely linked to that of mental contagion,
and serves to define the sort of ideas and emotions that emerge from the crowd. As
noted in the key-characteristics (see above), these behaviors are necessarily antisocial and
destructive. While the loss of personality does not necessarily imply that crowd members will
commit antisocial acts, the fact that all members share an uncivilized, brutal and primitive
common-ground (the so-called racial unconscious) dramatically restricts the range of possible
behaviors that can emerge:
“It is more especially in Latin crowds that authoritativeness and
intolerance are found developed in the highest measure. In fact, their
development is such in crowds of Latin origin that they have entirely
destroyed that sentiment of the independence of the individual so
powerful in the Anglo-Saxon. Latin crowds are only concerned with
the collective independence of the sect to which they belong, and
the characteristic feature of their conception of independence is the
need they experience of bringing those who are in disagreement with
themselves into immediate and violent subjection to their beliefs.”
Taken together, these three causes would strongly favor emotional crowd behavior: a newcomer, as soon as he enters the crowd, will feel outnumbered and will lose any self-control
as a consequence (submergence). Being incapable of self-evaluating his own behavior, he will
become contaminated by passing ideas and emotions (mental contagion). Those ideas will
necessarily be anti-social as mental contagion operates on a shared and primitive common ground, which is composed of ideas that are brutal in nature (suggestion). As a consequence,
emotions that are raw and primal (such as anger and fear) would be highly favored by this
process.
For social neuroscientists, the description of emotions as highly contagious has an immediate
appeal: they themselves make extensive use of the concept of "contagion", a word that carries
many preconceptions and which strictly limits our conception of emotional transmission.
The concept of contagion in early crowd psychology, and its
epistemological consequences for our current understanding
of the process of emotional transmission
Although the concept of contagion can be traced back to medieval philosophers (Robert,
2012), its use to describe the way ideas, behaviors and emotions can be transferred from
individual to individual in large groups, has become systematic with the early discourses
on crowd behavior (Rubio, 2010). In this respect, the word "contagion" is employed by Le
Bon (Le Bon, 1896) to describe the mechanism which accounts for the rapid mental and
emotional homogeneity that can be attained in any sort of group:
“Ideas, sentiments, emotions, and beliefs possess in crowds a contagious power as intense as that of microbes. This phenomenon is very
natural, since it is observed even in animals when they are together
in number. Should a horse in a stable take to biting his manger the
other horses in the stable will imitate him. A panic that has seized
on a few sheep will soon extend to the whole flock.” (Le Bon, 1896).
In this passage, emotions are compared to microbes, with respect to their power of propagation. Interestingly enough, Gustave Le Bon writes that the phenomenon of emotional
contagion might not only be found in humans, but would also be shared with other social
mammals: groups of humans are not to be analyzed any differently from herds of horses or flocks of sheep. In all these species, emotions spread like diseases.
Such conceptualization has had three main consequences for our understanding of crowd
behavior:
Firstly, it has made the discourse look scientific in the eyes of the audience (Rubio, 2010).
The term "contagion" was indeed conventionally employed by scholars of medical studies.
This need for scientific legitimacy becomes evident with Gustave Le Bon’s argument that
the phenomenon of contagion must be classified with other phenomena such as hypnosis
or madness, thus claiming further scientific credentials by associating his research with the
work of Jean Martin Charcot and Hippolyte Bernheim, among other sources.
The second important consequence of the introduction of the concept of contagion is that
it efficiently serves Le Bon’s ideological purpose: the mere use of the lexical field of disease
("contagion", "microbe", "madness", "disorder", "agoraphobia", etc.) immediately makes
group behavior look pathological. What happens in crowds suddenly becomes abnormal and
deserves no rational explanation. Moreover, it suggests that, like parasites, emotions could
be dangerous and that one should keep clear of gatherings.
Thirdly, and most importantly in the context of this thesis, the systematic use of the concept of contagion has also dramatically shaped the way in which we understand the process
of emotional transmission, whether these emotions are transmitted in groups or in strictly
dyadic contexts. Emotional transmission is necessarily thought to be primitive, fast, passive, unintentional, irrepressible, and somewhat dangerous as it cannot be inhibited. As a
consequence, the spread of emotions can be scientifically analyzed, as is the spread of disease. This is clear when considering the works of Nicholas A. Christakis and colleagues (e.g.,
Fowler & Christakis, 2008; Hill et al., 2010) who treat the spread of moods by explicitly
using a disease epidemiology model. Within this type of model, moods are transmitted in
large networks, from node to node, through social contact, over a long period of time, and
in a spontaneous and automatic fashion. Not surprisingly, most of these characteristics can
also be found in Elaine Hatfield and colleagues’ account of the phenomenon of emotional
transmission, which also assumes, purely and simply, that emotional transmission can be
conceived as a contagion process (Hatfield et al., 1994).
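As a rough illustration of what such a disease-epidemiology treatment of affect amounts to (a sketch under my own simplifying assumptions, not the actual statistical model used by Fowler and Christakis), one can run a susceptible-infected process over a toy social network, in which a "happy" node passes its state to each of its contacts with a fixed probability per time step:

```python
import random

def si_spread(contacts, seeds, p_transmit=0.05, steps=50):
    """Susceptible-Infected spread of a 'mood' over a social network.

    contacts: dict mapping each node to the list of people it interacts with
    seeds: nodes that start out in the 'happy' state
    """
    happy = set(seeds)
    for _ in range(steps):
        newly_happy = set()
        for node in happy:
            for other in contacts[node]:
                if other not in happy and random.random() < p_transmit:
                    newly_happy.add(other)
        happy |= newly_happy
    return happy

if __name__ == "__main__":
    # toy directed network of 200 people, each naming 6 acquaintances at random
    people = list(range(200))
    contacts = {p: random.sample([q for q in people if q != p], 6) for p in people}
    final = si_spread(contacts, seeds=[0])
    print(f"{len(final)} of {len(people)} people ended up 'happy'")
```

The point of the analogy is visible in the code itself: nothing in the update rule depends on what is being transmitted or on what the receiver does with it, which is exactly the assumption about emotional transmission that the remainder of this chapter questions.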
I should make it clear that the above analysis only aims at an objective description of the
presence and epistemological consequences of introducing the concept of "contagion" to describe the process of affects being spread in groups. The use of this concept is, in itself,
interesting to analyze, as authors could have used a more neutral term, such as "transmission". This would not have presupposed that agents are passive in the process of exchanging
their emotions, nor that this process is irrepressible. What we need to examine is whether
the process of spontaneous emotional transmission can appropriately be termed "contagion"; in other words, whether emotional transmission is an automatic (or unconditional), irrepressible and dangerous process. The mechanisms at the heart of emotional propagation
in crowds are the subject of chapter 2.
Chapter Two: How can emotions of
fear and joy propagate in crowds?
How can emotions become collective?
As shown above, emotions have been conceptualized as very contagious elements since Le
Bon’s work on crowd behavior. The numerous studies reporting spontaneous transmission
of emotions between two individuals seem to confirm that emotions are highly “contagious”
and to legitimize the use of such a metaphor.
What we know from the study of dyadic interactions
Human beings’ propensity to communicate their emotions via facial, bodily and vocal signals,
and their converse propensity to catch others’ emotions are recognized as striking (Hatfield,
Cacioppo, & Rapson, 1994; Schoenewolf, 1990). Newborn babies are alleged to be highly
sensitive to the distress of their fellows, crying in response to others’ cries (Simner, 1971;
Dondi, Simion, & Caltran, 1999); they can pass this on, in turn, to their parents, who are
known to be highly responsive to the distress of their offspring (Frodi et al., 1978; Wiesenfeld,
Malatesta, & Deloach, 1981). On a more positive note, joy can be similarly contagious: in the
words of composers Larry Shay, Mark Fisher and Joe Goodwin, and the song made popular
by Louis Armstrong (1929), “when you are smiling, keep on smiling / The whole world smiles
with you.”
In fact, emotional contagion is so ubiquitous in human affairs that, according to social psychologist and psychotherapist Elaine Hatfield and her colleagues (Hatfield, Cacioppo, &
Rapson, 1993; Hsee, Hatfield, & Chemtob, 1992), one might even gain the most valuable
information about a target’s affective states by focusing on one’s own feelings during an
ongoing social interaction, rather than by pursuing explicit and conscious reasoning about
the target’s own account of his emotional state. In this respect, a personal anecdote related
by Elaine Hatfield and Richard L. Rapson, at the beginning of their popular book Emotional contagion (Hatfield et al., 1994), is particularly illuminating. They reveal that, during
psychotherapeutic sessions, therapists might fail to grasp their clients’ affective states, while
at the same time being contaminated by them:
“For over a decade, Richard L. Rapson and I (Elaine Hatfield) have
worked together as therapists [. . . ]. One day [. . . ], Dick [Richard L.
Rapson] complained irritably at the end of a session: ‘I really felt out
on a limb today. I kept hoping you’d come in and say something, but
you just left me hanging there. What was going on?’ I was startled.
He had been brilliant during the hour, and I had not been able to
think of a thing to add; in fact, I had felt out of my depth and ill
at ease the whole time. As we replayed the session, we realized that
both of us had felt on the spot, anxious, and incompetent. The cause
of our anxiety soon became clear. We had been so focused on our
own responsibilities and feelings that we had missed how anxious our
client had been. [. . . ] Later, she admitted that she had been afraid
the whole hour that we would ask her about her drug use and discover
that she had returned to her abusive, drug-dealing husband.”
Readers who are familiar with the psychotherapeutic tradition may even be surprised by the somewhat “naive” reaction of Elaine Hatfield and Richard L. Rapson when they realize the contagious power of others’ affective states: early theorists such as Sigmund Freud and Carl
Jung had long warned their fellows to keep their distance from their patients, affectively
speaking. While the former advised therapists to put themselves in the shoes of a surgeon
when dealing with a patient (Freud, 1912), the latter reminded them that therapists
who think they can protect themselves from their client’s emotions are seriously in error
(Jung, 1968):
“It is a great mistake if the doctor thinks he can lift himself out
of [the emotional contents of the patient]. He cannot do more than
become conscious of the fact that he is affected.”
Supporting the view that emotional contagion constitutes an ineluctable process, an experiment carried out in Elaine Hatfield’s research team (Uchino, Hsee, Hatfield, Carlson, &
Chemtob, 1991) showed that prior expectations about a target’s emotional states do not alter the subjects’ subsequent susceptibility to those emotions, as revealed by their emotional
self-reports, as well as by their facial emotional expressions during exposure to the targets’
displays.
These studies would suggest that considering emotions as contagious elements could be
appropriate: at no stage in the contagious process do agents intend either to emit or to react
to emotional signals. In this respect, emotions operate just like diseases.
One thing that has remained unclear, however, is whether emotions could spread beyond
dyads. Work by Alison L. Hill and colleagues (Hill et al., 2010) (cited above) seems to
suggest that moods can indeed spread widely in social networks. The issue is very different,
though, when dealing with emotions in the context of crowd behavior: unlike the diffusion of
moods in social networks, the spread of emotions in crowds is not a diffuse and long-lasting
phenomenon but is instantaneous and rapid. Moreover, while the diffusion of moods relies
on repeated and reciprocal interactions through numerous transmission media (including
text messages and phone calls), the spread of emotions in crowds can be based solely on the
non-reciprocal transmission of information through facial, vocal, bodily, and possibly verbal
signals (all of which imply the physical presence of participants). In sum, for emotional crowd
behavior to emerge, information must circulate between each individual crowd member, until
emotional homogeneity is reached.
Can emotional transmission go transitive?
This necessary condition for emotions to spread in crowds raises an important technical issue:
while it has repeatedly been shown that one individual (“Individual A”) can transmit her emotions
to another (“B”), there is no evidence that Individual B can, in turn, transmit emotional information to a third individual (“C”) who has no perceptual access to A’s emotional displays.
In other words, for emotional information to spread in crowds, emotions need to pass the
minimal condition of being transitively contagious, i.e., that they can be transmitted from A
to C, through B. This question has been the subject of empirical investigation, the results of
which are reported in our paper published in the international peer-reviewed journal PLoS
One, in June 2013. My contribution to this work was as follows: conception and design of
the experiment, collection and analysis of the data, and writing of the paper.
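Before turning to the paper itself, the difficulty can be illustrated with a deliberately crude numerical sketch (my own illustration; the attenuation factor and the two thresholds are invented for the example, not estimates from our data): if each relay spontaneously reproduces only a fraction of the expressive intensity it perceives, the signal available to the next perceiver shrinks at every hop, which is why transitivity has to be established empirically rather than assumed.

```python
def chain_intensities(initial=1.0, attenuation=0.4, links=5):
    """Expressive intensity left after each hop of an A -> B -> C -> ... chain,
    assuming each relay reproduces only a fixed fraction (`attenuation`)
    of the intensity it perceives."""
    intensities = [initial]
    for _ in range(links):
        intensities.append(intensities[-1] * attenuation)
    return intensities

EXPLICIT_RECOGNITION_THRESHOLD = 0.5  # invented threshold for explicit labeling
PHYSIOLOGICAL_THRESHOLD = 0.1        # invented threshold for implicit, bodily reactions

for hop, value in enumerate(chain_intensities()):
    explicit = value >= EXPLICIT_RECOGNITION_THRESHOLD
    implicit = value >= PHYSIOLOGICAL_THRESHOLD
    print(f"hop {hop}: intensity {value:.3f} | explicit: {explicit} | implicit: {implicit}")
```

On these made-up numbers, explicit recognition fails after a single relay whereas implicit reactions persist for another hop or two; the study reported below asks the corresponding empirical question, namely whether C's facial and physiological responses still track A's emotion even when B's expression cannot be explicitly categorized.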
Evidence for Unintentional Emotional Contagion Beyond
Dyads
Guillaume Dezecache1,2*, Laurence Conty3, Michele Chadwick1, Leonor Philip1, Robert Soussignan4,
Dan Sperber2,5., Julie Grèzes1*.
1 Laboratoire de Neurosciences Cognitives (LNC), INSERM U960, and Institut d’Etudes de la Cognition (IEC), Ecole Normale Supérieure (ENS), Paris, France, 2 Institut Jean
Nicod (IJN), UMR 8129 CNRS and Institut d’Etudes de la Cognition (IEC), Ecole Normale Supérieure, and Ecole des Hautes Etudes en Sciences Sociales (ENS-EHESS), Paris,
France, 3 Laboratoire de Psychopathologie et Neuropsychologie (LPN, EA2027), Université Paris 8, Saint-Denis, France, 4 Centre des Sciences du Goût et de l’Alimentation
(CSGA), UMR 6265 CNRS, 1324 INRA, Université de Bourgogne, Dijon, France, 5 Department of Cognitive Science, Central European University (CEU), Budapest, Hungary
Abstract
Little is known about the spread of emotions beyond dyads. Yet, it is of importance for explaining the emergence of crowd
behaviors. Here, we experimentally addressed whether emotional homogeneity within a crowd might result from a cascade
of local emotional transmissions where the perception of another’s emotional expression produces, in the observer’s face
and body, sufficient information to allow for the transmission of the emotion to a third party. We reproduced a minimal
element of a crowd situation and recorded the facial electromyographic activity and the skin conductance response of an
individual C observing the face of an individual B watching an individual A displaying either joy or fear full body expressions.
Critically, individual B did not know that she was being watched. We show that emotions of joy and fear displayed by A
were spontaneously transmitted to C through B, even when the emotional information available in B’s faces could not be
explicitly recognized. These findings demonstrate that one is tuned to react to others’ emotional signals and to
unintentionally produce subtle but sufficient emotional cues to induce emotional states in others. This phenomenon could
be the mark of a spontaneous cooperative behavior whose function is to communicate survival-value information to
conspecifics.
Citation: Dezecache G, Conty L, Chadwick M, Philip L, Soussignan R, et al. (2013) Evidence for Unintentional Emotional Contagion Beyond Dyads. PLoS ONE 8(6):
e67371. doi:10.1371/journal.pone.0067371
Editor: Manos Tsakiris, Royal Holloway, University of London, United Kingdom
Received January 4, 2013; Accepted May 17, 2013; Published June 28, 2013
Copyright: © 2013 Dezecache et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits
unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: The research was supported by a DGA-MRIS scholarship and the Agence Nationale de la Recherche (ANR) "Emotion(s), Cognition, Comportement" 2011
program (ANR 11 EMCO 00902), as well as an ANR-11-0001-02 PSL*, and an ANR-10-LABX-0087. The funders had no role in study design, data collection and
analysis, decision to publish, or preparation of the manuscript.
Competing Interests: The authors have declared that no competing interests exist.
* E-mail: [email protected] (GD); [email protected] (JG)
These authors contributed equally to this work.
Introduction
Emotional crowds - where groups of individuals come to adopt similar affective states and patterns of behavior through local interactions and without any prior centralized coordination [1] - were a major topic in the nascent field of social psychology in the Nineteenth and early Twentieth centuries. Social scientists such as Gabriel Tarde [2], Scipio Sighele [3] or Gustave Le Bon [4] theorized about the emergence of such collective behaviors and the psychological impact crowds have over their members. Crowds were characterized as milieux where affects spread very rapidly and in an uncontrollable manner (e.g., [4]). As a consequence, a group of individuals who are not acquainted with one another may spontaneously come to adopt the same behavior (e.g., a collective flight in crowd panic), giving the impression of 'mental unity' within the group [4]. A necessary condition for the emergence of such collective behavior is the propagation of emotions across individuals. How can emotional information circulate from one individual to another in a way that rapidly achieves emotional unity of the crowd?
Despite a few notable exceptions [5,6], emotional transmission processes occurring in groups have been neglected by later social psychologists, probably due in part to the difficulty of producing group-like phenomena in laboratory settings [7]. Emotional contagion (here, the "tendency to automatically mimic and synchronize facial expressions, vocalizations, postures, and movements with those of another person and consequently to converge emotionally" [8]) is commonly studied in dyadic interactions (see [8] for an extensive review). If however emotional homogeneity within a crowd is to be achieved through transmission from individual to individual, it is not sufficient that humans should be tuned to catch others' emotions in dyadic interactions. It is also critical that humans should be tuned to reproduce the emotional cues they observe to a degree sufficient to spontaneously spread emotional information to other crowd members. This is needed for emotions to be transitively contagious: the perception of individual A's emotional expressions by individual B should ultimately affect the emotional experience of an individual C who is observing B but not A. Such a minimal situation of transitive emotional transmission may be, we surmise, at the basis of emotional contagion on a much larger scale. What is also critical here is that emotional contagion should take place automatically rather than as a result of people's decision to influence others or to accept such influence.
In the present study, we investigated the transmission of
emotional information in transitive triadic chains where the
behavior of an individual A was observed by a participant B who
was herself observed by a participant C. Joy and fear were chosen
as target emotions because of their relevance to coordinated
behavior and, arguably, their survival and fitness value [9,10]
makes it particularly likely that they should easily spread in groups.
As both motor and affective processes are implicated in emotional
contagion [11,12], we recorded in a first experiment, the
electromyographic (EMG) activity of zygomaticus major (ZM) and
corrugator supercilii (CS), two muscles that are respectively involved
in the production of smiling and frowning [13] and may be
differentially induced by unseen facial and bodily gestures of joy
and fear [14]. The activity of ZM and CS, as well as the skin
conductance response (SCR) (a measure of physiological arousal
[15]), were recorded in participants C while they were watching a
participant B’s face. B herself was either watching a video of full
bodily expressions including vocalizations of joy or fear displayed
by another individual A, or, in a control condition, a video without
social or emotional content. While our protocol was designed to
investigate transitive emotional transmission in terms of facial
patterns and physiological arousal in C, it was not designed to
investigate the mechanisms behind facial reactions: whether these
facial reactions qualify as rapid facial reactions [16] and whether
they are mediated by motor-mimetic [17,18] or affective/
emotional appraisal [16,19,20] processes cannot be addressed
here. As shown in figure 1, participants B and C were sitting in
adjacent booths during the experiment. While numerous studies
did report an impact of the presence of an audience on the
intensity of facial expressions of emotions [21,22], importantly
here, participants B were not informed that the participant in the
adjacent booth was watching them.
Finally, to help determine the degree to which the emotional
expressions of B were explicitly perceptible, and hence the nature
of emotional transmission from B to C, we presented in a follow-up experiment the video recordings of individuals B’s faces to
naive judges who were asked to label the emotional expressions of
B.
We predicted that an emotion displayed by individual A would
be transmitted to individual C via individual B (figure 1) even
though B was not aware that she was being watched, and even
when her emotional reactions could not be explicitly identified by
individual C. Testing the transitivity of emotional contagion
processes in this way may not only provide insight concerning the
spread of affects in groups and crowds; it may also shed light on
what may be the nature and function of emotional signaling
mechanisms from an evolutionary point of view.
Materials and Methods
(a) Ethics Statement
We obtained ethics approval from the local research ethics
committees (CPP Ile de France III and Institut Mutualiste
Montsouris) for the two experiments. All provided written
informed consent according to institutional guidelines of the local
research ethics committee.
(b) Experiment 1
(i) Participants. Thirty male participants (mean age 24.6 y ± 0.73 SE, range 18–36 y) were recruited to represent C in the emotional transmission chain. We chose female participants to represent individuals B in the transmission chain because numerous studies suggest that women are facially more expressive than men (e.g., [23]). Sixty female participants (mean age 24 y ± 0.48 SE, range 18–36 y) were thus recruited to represent B. All
of the participants had normal or corrected-to-normal vision, were
naive to the aim of the experiment and presented no neurological
or psychiatric history. All provided written informed consent
according to institutional guidelines of the local research ethics
committee and were paid for their participation. All the
participants were debriefed and thanked after their participation.
(ii) Stimuli. The stimuli presented to B (and standing for A) consisted of 45 videos (mean duration 6060 ± 20 ms, range 6000–6400 ms) of size 620 × 576 pixels projected on a 19-inch black LCD screen. The videos of emotional conditions depicted 15 actors (8 females, 7 males) playing joy (n = 15) and fear (n = 15), using facial, bodily as well as vocal cues. These videos were extracted from sessions with professional actors from the Ecole Jacques-Lecoq, in Paris, France. The stimuli of the non-social condition (n = 15) displayed fixed shots of landscapes that were shot in the French countryside.
All stimuli were validated in a forced-choice task where 15
participants (6 females, 8 males, mean age 22.5 y 61.46 SE) were
Figure 1. The experimental apparatus. Participant B (on the right of the picture) is isolated from participant C (in the middle) by means of a large
black folding screen. On the left of the picture is the recording device, concealed to C. Stimuli were presented to B using a screen located in front of
her; a webcam was placed on top of the screen so as to display B’s face on C’s screen.
doi:10.1371/journal.pone.0067371.g001
PLOS ONE | www.plosone.org
2
June 2013 | Volume 8 | Issue 6 | e67371
Emotional Contagion Beyond Dyads
during the session. B was then asked to select the emotion
displayed on the video in a forced-choice task, choosing the
appropriate emotion from between three options (joy, fear, none)
and to rate the intensity of the emotion on a 9-point scale. After
having responded to these two questions, B waited for a period of
time (between 15 and 20 sec) before a new video sequence began.
Note that B was wearing headphones, and that this was done to
improve the auditory input provided to B as well as to prevent any
auditory cues about the content of the videos to be transmitted to
C. Also, before joining participant C in the experimental room, Bs
were told that there would already be somebody in the
experimental room participating in an experiment led by another
research team, and that it was important to enter the room as
quietly as possible. At the same time, B was also told that the words
‘‘Start’’ and ‘‘End’’ would not disturb this other participant who
was wearing headphones. Critically, during the experimental
session, participant B never saw participant C who was hidden by
a folding screen.
(v) Specific procedure for participant C. While participant
B was instructed, participant C was installed in the experimental
room (placing of EMG and SCR electrodes, see (vi) Data acquisition)
and was told that he will watch two other participants (one after
the other) watching different movies with non-social or emotional
content. His task was to report on a sheet of paper, after each trial,
what he thought the other participant had just seen between the
words ‘‘Start’’ and ‘‘End’’. C was also told to remain silent
throughout the experiment.
(vi) Data acquisition. Using the acquisition system ADInstruments (ML870/Powerlab 8/30), we continuously recorded the
EMG activity of C using Sensormedics 4 mm shielded Ag/AgCl
miniature electrodes (Biopac Systems, Inc) (sample rate: 2 kHz;
range: 20 mV; spatial resolution: 16 bits). Before attaching the
electrodes, the target sites on the left of C’s face were cleaned with
alcohol and gently rubbed to reduce inter-electrode impedance.
Two pairs of electrodes filled with electrolyte gel were placed on
the target sites: left ZM and left CS muscles [24]. The ground
electrode was placed on the upper right forehead. Last, the signal
instructed to determine the emotional content of the video,
selecting from among 7 possible choices (anger, disgust, joy, fear,
surprise, sadness or none). The stimuli were correctly categorized:
joy stimuli were labeled as depicting joy (93% of the responses
selected the ‘joy’ label, contra 4% for the ‘sadness’ label, and less
than 1% for the five other labels); fear stimuli were labeled as
depicting fear (97% of the responses selected the ‘fear’ label,
contra less than 1% of the responses for the six other labels);
finally, non-social stimuli were labeled as not depicting any
emotion (94% of the responses selected the ‘none’ label, contra 4%
for the ‘joy’ label, and less than 1% for the five other labels).
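For illustration, these categorization percentages can be thought of as the rows of a confusion matrix of intended versus chosen labels. A minimal pandas sketch of that computation is given below; the responses listed are invented for the example and do not reproduce the validation data.

```python
import pandas as pd

# Invented validation responses: each row is one judgment of one stimulus.
responses = pd.DataFrame({
    "intended": ["joy", "joy", "fear", "fear", "none", "none"],
    "chosen":   ["joy", "sadness", "fear", "fear", "none", "joy"],
})

# Percentage of responses per chosen label, within each intended category.
confusion = pd.crosstab(responses["intended"], responses["chosen"],
                        normalize="index") * 100
print(confusion.round(1))
```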
(iii) Overall procedure. After their arrival, the first two
participants (one female participant, representing B; and one male
participant representing C) were told that they would take part in two distinct experiments and were escorted to two separate rooms.
The second female participant B was called in one hour later so as
to replace the former female participant.
(iv) Specific procedure for participant B. While participant C was escorted to and set up in the experimental room,
participant B underwent training in the experimental procedure in
a waiting room, so as to lead B to believe that she was going to
participate in a completely different experiment. The procedure (see figure 2) consisted of presenting the videos in random order on a 19-inch black LCD screen. Each video
was preceded by a 250 ms beep followed by the presentation of the
word ‘‘Start’’ for 1000 ms. At the end of each video, the word
‘‘End’’ appeared on the screen for 1000 ms. B was instructed to
pronounce these words sufficiently loudly to permit her speech to
be recorded by the webcam’s microphone and was told that this
would help the experimenter distinguish between the different
trials in a further analysis. This was actually done to inform C that
a video was beginning or ending. Furthermore, B was told that she
would be filmed via a webcam placed on top of the screen and
that this was solely done to check whether she actually paid
attention to the movies. In fact, her face was retransmitted
onto C’s screen (figure 1) but none of our B participants reported
being aware that she was being watched by another participant
Figure 2. The experimental protocol timeline for participants B and C. Specific instructions are inserted between asterisks. The subject of
the photograph has given written informed consent, as outlined in the PLOS consent form, to publication of her photograph.
doi:10.1371/journal.pone.0067371.g002
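As a purely illustrative complement to the protocol timeline shown in Figure 2, the following minimal Python sketch lays out the sequence of events of one trial for participant B (250 ms beep, "Start", ~6 s video, "End", forced-choice judgment, 9-point intensity rating, 15–20 s inter-trial wait). All function names (play_beep, show_text, show_video, get_forced_choice, get_rating) and file names are hypothetical placeholders, not the stimulation software actually used.

```python
import random
import time

# Hypothetical trial list: 15 joy, 15 fear and 15 non-social videos (45 trials).
trials = ([{"emotion": "joy",  "video": f"joy_{i:02d}.avi"}       for i in range(15)]
          + [{"emotion": "fear", "video": f"fear_{i:02d}.avi"}     for i in range(15)]
          + [{"emotion": "none", "video": f"landscape_{i:02d}.avi"} for i in range(15)])
random.shuffle(trials)  # videos are presented in random order


def run_trial(trial, play_beep, show_text, show_video, get_forced_choice, get_rating):
    """One trial as described in the Methods: beep, 'Start', video, 'End',
    forced-choice emotion judgment, 9-point intensity rating, then a
    15-20 s pause before the next video sequence."""
    play_beep(duration=0.250)             # 250 ms beep
    show_text("Start", duration=1.0)      # B reads 'Start' aloud (cues C that a video begins)
    show_video(trial["video"])            # ~6 s emotional or non-social video
    show_text("End", duration=1.0)        # B reads 'End' aloud (cues C that the video ended)
    choice = get_forced_choice(options=("joy", "fear", "none"))
    rating = get_rating(scale=range(1, 10))          # 9-point intensity scale
    time.sleep(random.uniform(15, 20))    # inter-trial interval
    return {"video": trial["video"], "choice": choice, "rating": rating}
```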
the screen) were rejected. The resulting stimuli (n = 609) consisted
of 6-sec videos of size 720 × 421 pixels projected on a 19-inch black LCD screen and represented 22.5% of the videos recorded during Experiment 1.
(iii) Procedure. The judges were shown all the videos, presented in a random order. They were told that they
were going to watch videos of women perceiving emotional or
non-social movies. Before each trial, a grey screen with the
indication ‘‘Get ready…’’ was presented for 400 ms, followed by
the video. Participants were asked to press the appropriate key on
a keyboard when they recognized joy, fear, or non-social signs in
the women's facial expressions. They were then required to wait
500 ms for the next video to appear on the screen.
(iv) Data analysis. A Cohen-Kappa coefficient test was used
to measure the inter-rater agreement. To explore the performance
of the judges against chance level, we performed a series of three-choice binomial tests.
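As an illustration of this analysis step, the sketch below shows how pairwise inter-rater agreement (Cohen's kappa) and above-chance classification could be computed with standard Python tools (scikit-learn's cohen_kappa_score and SciPy's binomtest). The judge labels and the correct/total counts are invented for the example and do not reproduce the actual data.

```python
from itertools import combinations

import numpy as np
from scipy.stats import binomtest
from sklearn.metrics import cohen_kappa_score

# Invented example: labels given by the three judges to the same videos
# (categories: "joy", "fear", "none").
judge_labels = {
    "judge1": ["joy", "fear", "none", "fear", "joy", "none"],
    "judge2": ["joy", "fear", "none", "fear", "none", "none"],
    "judge3": ["joy", "joy",  "none", "fear", "joy", "none"],
}

# Pairwise Cohen's kappa, averaged over judge pairs.
kappas = [cohen_kappa_score(judge_labels[a], judge_labels[b])
          for a, b in combinations(judge_labels, 2)]
print("mean pairwise kappa:", np.mean(kappas))

# Three-choice binomial test against chance (p = 1/3), for one judge and one
# category, given the number of correct classifications (invented counts).
n_correct, n_trials = 30, 60
result = binomtest(n_correct, n_trials, p=1/3, alternative="greater")
print("p-value vs. chance:", result.pvalue)
```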
was amplified, band-pass filtered online between 10–500 Hz, and
then integrated. Integral values were then offline subsampled at
10 Hz resulting in the extraction of 100 ms time bins.
Concerning the recording of SCR, 2 bipolar finger electrodes
(MLT116F) were attached with Velcro™ straps to the first phalanx of the index and middle fingers of the non-dominant hand. The SCR was recorded at a sampling frequency
of 2 kHz with a high-pass filter at 0.5 Hz, and then offline
subsampled at 2 Hz resulting in the extraction of 500 ms time
bins.
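To make this signal-processing chain explicit, here is a minimal sketch (assuming NumPy and SciPy) of the kind of offline processing described above: band-pass filtering and rectified integration of the EMG followed by down-sampling into 100 ms bins, and down-sampling of the SCR into 500 ms bins. Filter order and smoothing-window length are not specified in the text and are arbitrary assumptions here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000  # acquisition sampling rate (2 kHz)


def preprocess_emg(raw, fs=FS, low=10.0, high=500.0, bin_ms=100):
    """Band-pass filter (10-500 Hz), rectify and integrate (moving average),
    then subsample into 100 ms bins, as described in the Methods."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")  # order 4 is an assumption
    filtered = filtfilt(b, a, raw)
    rectified = np.abs(filtered)
    win = int(0.05 * fs)  # 50 ms smoothing window (assumption)
    integrated = np.convolve(rectified, np.ones(win) / win, mode="same")
    samples_per_bin = int(fs * bin_ms / 1000)
    n_bins = len(integrated) // samples_per_bin
    return integrated[:n_bins * samples_per_bin].reshape(n_bins, -1).mean(axis=1)


def preprocess_scr(raw, fs=FS, bin_ms=500):
    """Subsample the (already high-pass-filtered) SCR into 500 ms bins."""
    samples_per_bin = int(fs * bin_ms / 1000)
    n_bins = len(raw) // samples_per_bin
    return raw[:n_bins * samples_per_bin].reshape(n_bins, -1).mean(axis=1)
```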
(vii) Data analysis. Due to the nature of our protocol
(stimuli of long duration, expectation of signals of low amplitude),
we deliberately chose not to restrain participant C’s facial movements, although participants were instructed not to move their arms or head during the presentation of the stimulus.
Consequently, we had to exclude those participants whose data
were too noisy: data from ZM (n = 4 participants), CS (n = 8
participants) and SCR (n = 4 participants) were thus rejected prior
to the analysis. Moreover, EMG trials containing artifacts were
manually rejected, following a visual inspection. Participants with
a high rate of trial rejection were excluded from the statistical
analysis for the relevant signal (n = 3 for ZM, n = 5 for CS),
leaving a total of n = 23 for ZM, n = 17 for CS for the statistical
analysis. For SCR recordings, responses beginning before the first
second following the video presentation were rejected and
participants with a high rate of trial rejection or with absence of
SCRs were excluded from the statistical analysis (n = 7), leaving a total
of 19 participants for the statistical analysis.
Then, for EMG data, the pre-stimulus baseline was computed
over 500 ms before the video onset. EMG activity per trial was
obtained by extracting the maximal change from the baseline level
occurring between 500 and 6000 ms after the video onset. As we
could not predict when, in relation to B’s processing of the stimuli,
C’s facial activity would occur, we considered the maximal activity
in this large temporal time window.
For SCR data, the pre-stimulus baseline was computed over
1500 ms before the video onset. SCR activity per trial was
obtained by extracting the maximal change from baseline level
occurring between 1000 and 6500 ms after the video onset. Data for each trial were then natural-log transformed for both EMG and
SCR activity.
Finally, data were submitted, separately for each physiological
measure, to a repeated measures ANOVA using Emotion (joy vs. non-social vs. fear) as a within-subject factor. To account for violations of the sphericity assumption, we adjusted the degrees of freedom using the Greenhouse–Geisser correction where appropriate (ε value). In addition, Bonferroni corrections were employed to account for multiple testing. Post-hoc comparisons were also performed for the analysis of simple main effects.
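For concreteness, the sketch below (assuming NumPy, pandas and the pingouin package; the data frame, column names and values are invented) illustrates the per-trial scoring and the group analysis just described: baseline-corrected maximal change, natural-log transform, and an Emotion (joy vs. non-social vs. fear) within-subject ANOVA with Greenhouse–Geisser correction followed by Bonferroni-adjusted pairwise tests.

```python
import numpy as np
import pandas as pd
import pingouin as pg


def trial_amplitude(binned, bin_ms, n_baseline_bins, win_start_ms, win_end_ms):
    """Score one trial: maximal change from the pre-stimulus baseline within the
    response window (e.g., 500-6000 ms for EMG, 1000-6500 ms for SCR), then
    natural-log transformed. `binned` starts n_baseline_bins before video onset."""
    baseline = binned[:n_baseline_bins].mean()
    start = n_baseline_bins + int(win_start_ms / bin_ms)
    stop = n_baseline_bins + int(win_end_ms / bin_ms)
    max_change = np.max(binned[start:stop] - baseline)
    return np.log(max_change)  # assumes a positive maximal change, as is typical for rectified EMG


# Hypothetical long-format table: one scored amplitude per subject x emotion
# (after averaging trials within condition); values are random placeholders.
df = pd.DataFrame({
    "subject": np.repeat(np.arange(1, 24), 3),
    "emotion": np.tile(["joy", "non-social", "fear"], 23),
    "zm_amplitude": np.random.default_rng(0).normal(size=69),
})

# Repeated-measures ANOVA (Greenhouse-Geisser correction applied when
# sphericity is violated) and Bonferroni-adjusted pairwise post-hoc tests.
aov = pg.rm_anova(data=df, dv="zm_amplitude", within="emotion",
                  subject="subject", correction=True, effsize="np2")
posthoc = pg.pairwise_tests(data=df, dv="zm_amplitude", within="emotion",
                            subject="subject", padjust="bonf")
print(aov)
print(posthoc)
```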
Results
First, we tested whether facial cues of joy were transitively
transmitted from A to C, via B. Figure 3A displays the mean ZM
response in participants C depending on the emotional content
displayed in A and presented to participants B. Typically involved
in the production of smiles and preferentially activated during the
perception of joy expressions [25], ZM activity was expected to
increase in C when B was watching videos depicting joy. Our
analysis of ZM activity showed a significant main effect of Emotion
(F(2, 22) = 7.96, p = 0.001, ε = 0.70, corrected p = 0.004, β = 0.715, η² = 0.266). ZM activity was significantly enhanced in C when B was watching joy expressions compared to non-social stimuli (t22 = 2.90, p < 0.01, d = 0.45) and when compared to fearful expressions (t22 = 3.05, p < 0.01, d = 0.62). Moreover, ZM activity was not different between the fear and non-social conditions (t22 = 1.39, p > 0.1). These results show that muscular activity in C was specific to the emotional content observed by B, revealing a
transitive motor transmission of joy expressions.
Second, we tested whether facial cues of fear were transitively
transmitted. We therefore compared the activity of the CS across
the conditions. The CS pulls the brows together and is often used
as a measure of negative emotional reactions to negative stimuli
[26], notably fear-related stimuli (e.g., snakes in [27] or facial and
bodily expressions of fear [14]). Our analysis showed a significant
main effect of Emotion (F(2, 16) = 5.46, p < 0.01, β = 0.752, η² = 0.334). CS activity was significantly enhanced in C when B was watching fearful expressions compared to non-social stimuli (t16 = 2.91, p = 0.01, d = 0.68) and to joy expressions (t16 = 2.83, p < 0.05, d = 0.53). Last, CS activity was not different between the joy and non-social conditions (t16 = 0.56, p > 0.1). Figure 3B shows the
mean CS activity across the conditions. Again, we found that
muscular activity in C matched the emotional expressions watched
by B.
Third, we tested whether the transmission process also involved
an arousal component or whether it was only limited to facial
reactivity by comparing the SCR activity across the conditions.
The statistical analysis revealed a significant main effect of
Emotion (F(2, 18) = 7.32, p < 0.01, β = 0.924, η² = 0.289). A significant increase of SCR was found in C when B was watching joy expressions, compared to when she was watching non-social stimuli (t18 = 3.75, p = 0.001, d = 0.24). A similar pattern was observed for fear vs. non-social (t18 = −3.21, p = 0.005, d = 0.27). Lastly, no difference was found between joy and fear (t18 = −0.19, p > 0.1). The results suggest an increase of physiological arousal in
C when B was watching emotional content, irrespective of the
(c) Experiment 2
(i) Participants. Three judges (2 female, 1 male, mean age
23.3 y ± 2.66 SE, range 18–26) were recruited. All of the
participants had normal or corrected-to-normal vision, were naive
to the aim of the experiment and presented no neurological or
psychiatric history. All provided written informed consent
according to institutional guidelines of the local research ethics
committee and were paid for their participation. All the
participants were debriefed and thanked after their participation.
(ii) Stimuli. The recordings of the first 16 B participants were
each cut into 45 videos corresponding to the 45 trials performed
during the experiment. Videos containing artifacts (e.g., B
participants moving out of the webcam's field of view, concealing their face with their hand/fingers, or looking away from
Figure 3. Electromyographic (zygomaticus major [ZM] and corrugator supercilii [CS]) and skin conductance (SCR) responses in
participant C relative to the emotional content perceived by participant B. (A) EMG response of ZM in C relative to the emotional content
perceived by B. (B) EMG response of CS in C relative to the emotional content perceived by B. (C) SCR responses in C relative to the emotional content
perceived by B. Black lines indicate significant effects at *P < 0.05; **P < 0.01; ***P < 0.001. Error bars indicate SEM.
doi:10.1371/journal.pone.0067371.g003
exact nature of the perceived emotion. Figure 3C displays the
mean SCR across the conditions.
Finally, to investigate the nature and reliability of information
which was transmitted from B to C, three judges who were blind to
our hypotheses were requested to explicitly recognize signs of joy,
fear or neutrality (when watching non-social cues) on B’s faces,
using a forced-choice task, in a follow-up experiment. We
performed a Cohen-Kappa coefficient test that provides a measure
of inter-rater agreement for qualitative items [28]. This test
revealed a strong agreement between the judges (mean κ value = 0.78; κ value for joy items = 0.77; κ value for non-social items = 0.89; κ value for fear items = 0.67). The judges were at chance level in recognizing joy signs in B’s faces (three-choice binomial, p > .1) and above chance level in recognizing fear signs and neutrality displayed by B (three-choice binomial, p = 0.01 and p < 0.001, respectively). These results suggest that the
Though it is tempting to generalize this finding to all types of
emotions, the fact that cues related to the experience of fear could
be recognized in B’s face may also indicate that the extent to
which emotional expressions are spontaneously expressed by their
observers might well depend on their reference or content. In this
respect, expressions related to immediate and urgent threats (such
as expressions of fear) might more easily induce explicit cues in the
face of their observers. Be that as it may, it must be such unintentional cues that explain the well-documented fact that
emotional contagion can occur without conscious access [8]. An
impressive study by Tamietto et al. [14], in particular, reported
emotional transmission in cortically-blind patients. Note that one
limitation of this study is the absence of physiological measures in
B. Further experiments could test each step in the spread of
emotions in transitive situations and provide information about a
potential decrease in physiological responses from A to C or, conversely, a gradual increase in emotional information, as would be expected in crowd contexts [4].
Finally, our findings point to an important theoretical issue:
the distinction between cues and signals. Cues can be defined as
stimuli that elicit a specific cognitive or behavioral response that
goes beyond the mere perception of the cue itself. Signals can be
defined as cues that have the function of eliciting such a response
[36,37]. Are the subtle emotional cues produced by B and picked
up by C a mere side effect of B’s emotional arousal caused by the
recognition of A’s emotion, or do these cues have the function of
eliciting a similar emotional response in the third party? In other
terms, are they not merely cues but signals?
In our study, participant B did not know that she was being
observed and did not therefore intend to communicate anything
by means of her facial expression (of which she may well have been
unaware). The fact that, at least in the case of joy, these
expressions were not recognized by judges strongly suggests that
participant C’s use of these cues was not intentional either. The
cues we are talking about are neither intentionally emitted nor
intentionally attended to.
The fact that B nevertheless produced unintentional cues strong
enough for them to influence participant C can be interpreted as
evidence that these emotional cues are biological adaptations, the
function of which is to transmit an emotion in a non-intentional
way. If so, how is this function adaptive? A possibility worth
exploring is that facial activity in B is an evolved cooperative
behavior that consists in the unconscious and spontaneous
signaling of survival-value information that may induce appropriate emotional and preparatory behavior in our conspecifics. Such
a mechanism would be adaptive, on the one hand, in threatening
situations where flight and mobbing behaviors are optimal
strategies; and, on the other hand, in favorable situations where
signaling to conspecifics the presence of non-scarce rewarding
features of the environment may foster social bonds. More work
would be required to ascertain whether unintended and not
consciously attended cues of specific emotion are in fact evolved
signals that contribute to the fitness of those who produce them
and to that of those who are influenced by them.
Our study, we hope, offers some new insights and raises new
questions about the spread of emotions across individuals in group
settings. This should help revive a once prolific intellectual
tradition – the social psychology of crowds – which has
contributed so much to the study of human collective behavior
in the past.
transmission of an emotion from B to C may be independent of an
explicit recognition of the emotional signs displayed on B’s faces,
at least in the situation where B herself perceived joy expressions.
Yet, there was a difference between the two experiments: while
judges were exposed to several participants B, C was only exposed
to two participants. As a consequence, we cannot exclude the
possibility that there were differences in susceptibility to emotional
cues in B between participants C and judges.
Discussion
Our findings indicate that emotional expressions of joy and fear
can be spontaneously transmitted beyond dyads. Overt expressions of an emotion in an individual A caused in an observer B the
involuntary production of subtle cues that induced an emotional
reaction in a third individual C (who had perceptual access to B
but not to A).
The facial reactions triggered in C were characteristic of the
type of emotions displayed by A, as revealed by the EMG
responses of our participants. Activity of the ZM muscle region
was heightened in C when B perceived the display of joy in A (in
the form of facial, bodily and vocal signals) compared to when B
was perceiving displays of fear in A or non-social stimuli. Activity
of the CS muscle region, on the other hand, was heightened in C
when B was observing expressions of fear compared to when B was
watching expressions of joy in A or non-social stimuli. Although
the use of CS as an index of fearful facial reactions is a limitation to
demonstrate a transitive motor transmission of fearful expressions,
according to the FACS nomenclature [13], facial expressions of
fear usually involve the widening of the eyes (AU5), a raising of the
eyebrows (AU1+2: activity of the frontalis) co-occurring with
frowning (AU4: activity of CS), as well as the stretching of the
mouth sideways (AU20). Thus, while frontalis activity is indeed
used in the EMG literature to measure facial reactions associated
with the experience of fear (e.g., [16,19,29]), CS is also relevant
(e.g., [14,27]) as it is known to reflect a more general negative
facial response and is recruited in fearful facial expressions.
Numerous studies report the production of subtle and specific
facial reactions in response to facial, bodily, as well as vocal
expressions of emotions [16,19,20,25,26,30–34]. Here we extend
these results to a minimal element of a crowd situation by showing,
for the first time, that the perception of the facial reactions of an
individual (B), herself perceiving an emotional display (A), triggers
the release of a specific facial pattern in a third party (C).
Importantly, C’s reactions were not limited to a set of facial motor
responses but involved emotional arousal, as evidenced by the
increase in SCR during the emotional conditions (joy and fear)
compared to the non-social condition. Importantly, the lower
SCRs we observed during the non-social condition provide
evidence against interpreting SCR increases for fear and joy
expressions as the physiological consequences of attentional processes only [35]. This increase of the SCR response to the emotional conditions as compared to the non-social condition does not merely reflect an overall increase of arousal for viewing a body versus a non-body stimulus, as SCR was recorded in C, who only ever saw a social agent, B. Moreover, given that SCR activity was found to be coupled with specific muscular activity during the emotional conditions, it is unlikely that the observed SCR does not reflect the processing of emotional content.
Of interest here, judges in the follow-up experiment were at
chance level when asked to recognize joy cues in B’s faces. This
indicates that transitive emotional transmission could occur, even
in the absence of explicit recognition of emotional information in
the pivot individual’s face, on the basis of mere unintentional cues.
Acknowledgments
The authors would like to thank Juliane Farthouat, Emma Vilarem, Marie-Sarah Adenis and Auréliane Pajani for technical assistance and help in data collection and Karl Boyer for help in designing the figures. We are grateful
to Dr. Sylvie Berthoz (INSERM U669 & IMM) for administrative support.
Author Contributions
Conceived and designed the experiments: GD LC DS JG. Performed the
experiments: GD MC LP. Analyzed the data: GD LC. Contributed
reagents/materials/analysis tools: RS. Wrote the paper: GD JG DS.
References
20. Grèzes J, Philip L, Chadwick M, Dezecache G, Soussignan R, et al. (2013) Self-Relevance Appraisal Influences Facial Reactions to Emotional Body Expressions. PLoS ONE 8: e55885. doi:10.1371/journal.pone.0055885.
21. Fridlund AJ (1991) Sociality of solitary smiling: Potentiation by an implicit
audience. Journal of Personality and Social Psychology 60: 229.
22. Chovil N (1991) Social determinants of facial displays. Journal of Nonverbal
Behavior 15: 141–154. doi:10.1007/BF01672216.
23. Hall JA (1990) Nonverbal Sex Differences: Accuracy of Communication and
Expressive Style. Reprint. Johns Hopkins University Pr.
24. Fridlund AJ, Cacioppo JT (1986) Guidelines for human electromyographic
research. Psychophysiology 23: 567–589.
25. Dimberg U (1982) Facial reactions to facial expressions. Psychophysiology 19:
643–647.
26. Soussignan R, Ehrlé N, Henry A, Schaal B, Bakchine S (2005) Dissociation of
emotional processes in response to visual and olfactory stimuli following
frontotemporal damage. Neurocase 11: 114–128.
27. Dimberg U, Hansson GÖ, Thunberg M (1998) Fear of snakes and facial
reactions: A case of rapid emotional responding. Scandinavian Journal of
Psychology 39: 75–80.
28. Cohen J (1960) A coefficient of agreement for nominal scales.
Educational and Psychological Measurement 20: 37–46.
29. Lundqvist LO (1995) Facial EMG reactions to facial expressions: A case of facial
emotional contagion? Scandinavian Journal of Psychology 36: 130–141.
doi:10.1111/j.1467-9450.1995.tb00974.x.
30. Dimberg U, Thunberg M (1998) Rapid facial reactions to emotional facial
expressions. Scandinavian Journal of Psychology 39: 39–45. doi:10.1111/1467-9450.00054.
31. Dimberg U, Thunberg M, Elmehed K (2000) Unconscious Facial Reactions to
Emotional Facial Expressions. Psychological Science 11: 86–89. doi:10.1111/
1467-9280.00221.
32. Hess U, Blairy S (2001) Facial mimicry and emotional contagion to dynamic
emotional facial expressions and their influence on decoding accuracy.
International Journal of Psychophysiology 40: 129–141. doi:10.1016/S0167-8760(00)00161-6.
33. Hietanen JK, Surakka V, Linnankoski I (1998) Facial electromyographic
responses to vocal affect expressions. Psychophysiology 35: 530–536.
doi:10.1017/S0048577298970445.
34. Magnée MJCM, Stekelenburg JJ, Kemner C, de Gelder B (2007) Similar facial
electromyographic responses to faces, voices, and body expressions. Neuroreport
18: 369–372.
35. Frith CD, Allen HA (1983) The skin conductance orienting response as an index
of attention. Biological Psychology 17: 27–39. doi:10.1016/0301-0511(83)90064-9.
36. Smith JM, Harper D (2003) Animal signals. Oxford University Press, USA.
37. Scott-Phillips TC (2008) Defining biological communication. Journal of
Evolutionary Biology 21: 387–395.
1. Raafat RM, Chater N, Frith C (2009) Herding in humans. Trends in Cognitive
Sciences 13: 420–428.
2. Tarde G (1890) Les lois de l’imitation: étude sociologique. F. Alcan.
3. Sighele S (1901) La foule criminelle: Essai de psychologie collective. F. Alcan.
4. Le Bon G (1896) Psychologie des foules. Macmillan.
5. Barsade SG (2002) The Ripple Effect: Emotional Contagion and Its Influence on
Group Behavior. Administrative Science Quarterly 47: 644–675. doi:10.2307/
3094912.
6. Konvalinka I, Xygalatas D, Bulbulia J, Schjødt U, Jegindø EM, et al. (2011)
Synchronized arousal between performers and related spectators in a firewalking ritual. Proceedings of the National Academy of Sciences 108: 8514.
7. Niedenthal PM, Brauer M (2012) Social Functionality of Human Emotion.
Annual Review of Psychology 63: 259–285. doi:10.1146/annurev.psych.121208.131605.
8. Hatfield E, Cacioppo JT, Rapson RL (1994) Emotional contagion. Cambridge
Univ Pr.
9. Buss DM (2000) The evolution of happiness. American Psychologist 55: 15.
10. Öhman A, Mineka S (2001) Fears, phobias, and preparedness: Toward an
evolved module of fear and fear learning. Psychological Review 108: 483–522.
doi:10.1037/0033-295X.108.3.483.
11. Hess U, Philippot P, Blairy S (1998) Facial reactions to emotional facial
expressions: affect or cognition? Cognition & Emotion 12: 509–531.
12. Moody EJ, McIntosh DN (2006) Bases and Consequences of Rapid, Automatic
Matching Behavior. Imitation and the social mind: Autism and typical
development: 71.
13. Ekman P, Friesen WV (1978) Facial action coding system: A technique for the
measurement of facial movement. Consulting Psychologists Press, Palo Alto, CA.
14. Tamietto M, Castelli L, Vighetti S, Perozzo P, Geminiani G, et al. (2009)
Unseen facial and bodily expressions trigger fast emotional reactions.
Proceedings of the National Academy of Sciences 106: 17661–17666.
15. Sequeira H, Hot P, Silvert L, Delplanque S (2009) Electrical autonomic
correlates of emotion. International Journal of Psychophysiology 71: 50–56.
doi:10.1016/j.ijpsycho.2008.07.009.
16. Moody EJ, McIntosh DN, Mann LJ, Weisser KR (2007) More than mere
mimicry? The influence of emotion on rapid facial reactions to faces. Emotion 7:
447–457. doi:10.1037/1528-3542.7.2.447.
17. Chartrand TL, Bargh JA (1999) The chameleon effect: The perception–
behavior link and social interaction. Journal of Personality and Social
Psychology 76: 893.
18. Bavelas JB, Black A, Lemery CR, Mullett J (1986) ‘‘I show how you feel’’: Motor
mimicry as a communicative act. Journal of Personality and Social Psychology
50: 322–329. doi:10.1037/0022-3514.50.2.322.
19. Soussignan R, Chadwick M, Philip L, Conty L, Dezecache G, et al. (2013) Self-relevance appraisal of gaze direction and dynamic facial expressions: Effects on
facial electromyographic and autonomic reactions. Emotion 13: 330–337.
doi:10.1037/a0029892.
What psychological mechanisms are at work?
“Social comparison” models
A first broad class of models can be said to rely on a process of “social comparison” whereby
observers adopt targets’ affective states by conscious reasoning and by imagining themselves
in the same situation. Such a model can be traced back to Adam Smith’s observations (1759):
“Though our brother is upon the rack, as long as we ourselves are
at ease, our senses will never inform us of what he suffers. They
never did and never can carry us beyond our own persons, and it is
by the imagination only that we form any conception of what are
his sensations... His agonies, when they are thus brought home to
ourselves, when we have thus adopted and made them our own, begin
at last to affect us, and we then tremble and shudder at the thought
of what he feels. [. . . ] By the imagination we place ourselves in his
situation, we conceive ourselves enduring all the same torments, we
enter as it were into his body, and become in some measure the same
person with him, and thence form some idea of his sensations, and
even feel something which, though weaker in degree is not altogether
unlike them."
More modern accounts of this model do exist (e.g., Bandura, 1969), but they share the
same problematic assumption that some sort of deliberation is at the core of the process of
emotional transmission. Such a costly cognitive process can hardly account for what happens
in crowd panics, where emotions seem to be transmitted very rapidly.
“Conditioning” models
Other models, which rely on associative processes, have been put forward to account for the
primitive character of emotional contagion. Their proponents argue that emotional responses
can be conditioned or unconditioned and that this accounts for most cases of emotional
contagion (Aronfreed, 1970). For instance, one can generalize from situations where smiles
are expressed in response to pro-social behavior and a sense of well-being, or learn that
fearful expressions are associated with stressful situations (conditioning). Also, one could
well have unconditioned responses of fear and joy in reaction, respectively, to fearful and
joyful expressions.
These models are appealing by virtue of their simplicity, but that is precisely where the shoe
pinches: they are so simple that they are completely imprecise, mechanistically speaking.
This probably explains why the “primitive emotional contagion” model, which is causally
explicit, has been so popular in recent years.
The “primitive emotional contagion” model
Through their work on the processing of social cues, John Bargh and his colleagues have
consistently shown that we have a natural tendency to mimic, spontaneously and unconsciously, the postures of individuals we are interacting with (Chartrand & Bargh, 1999).
This “mimicry” is not restricted to postures, as the perception of facial expressions of emotion is also known to induce, in observers, slight activity in the same facial muscle as the
target’s within the first second after exposure to the stimulus (Dimberg, 1982; Dimberg,
Thunberg, & Elmehed, 2000; Dimberg & Thunberg, 1998). The effects of such behavioral
mimicry are, for Hatfield and colleagues (Hatfield et al., 1994), at the basis of “primitive
emotional contagion” which is a three-step process: (i) first, observers tend to mimic and
synchronize their overall behavior (facial expressions, bodily postures, vocal behavior) with
the target with whom they interact; (ii) doing so, observers adopt a muscular configuration
which, through muscular feedback, alters the emotional experience in a way congruent with
their adopted muscular configuration; (iii) consequently, observers and the target individual
converge emotionally.
Each of these three steps has been widely documented in the literature: (i) people indeed tend
to precisely and rapidly mimic and to synchronize their facial expressions (Dimberg, 1982;
Dimberg et al., 2000; Moody, McIntosh, Mann, & Weisser, 2007; Soussignan et al., 2013),
bodily postures (Bernieri & Rosenthal, 1991; Bernieri, 1988) and vocal behavior (Cappella
& Planalp, 1981; Cappella, 1981, 1997). (ii) There is also some evidence that facial (Bush,
Barr, McHugo, & Lanzetta, 1989; Laird, 1984; Lanzetta & Orr, 1980), postural (Stepper &
Strack, 1993) and vocal (Hatfield & Hsee, 1995) feedback can alter, in an emotion-specific
way, the subjective emotional experience. In sum, these three steps, when causally combined,
could indeed allow for the transmission of emotion between two interacting agents.
It must, however, be pointed out that this model suffers from three main limitations: firstly,
there is no evidence that the first two stages are in fact causally linked in the process of
emotional transmission. Secondly, it relies entirely upon the mechanism of motor mimicry
(whereas several works indicate that rapid facial reactions to emotional faces rely on affective
processing: Dimberg, Hansson, & Thunberg, 1998; Grèzes et al., 2013; Moody et al., 2007;
Soussignan et al., 2013). Thirdly, it presupposes that an observer’s emotional reaction should
completely match that of the individual being observed, whereas, in fact, certain social contexts
may not favor the sharing of emotional experiences between two agents (should I really pick
up my competitor’s joy, or my enemy’s fear?).
This third problem, although acknowledged by Hatfield and colleagues, is fundamentally at odds with the view that motor mimicry is at the basis of emotional transmission.
This model, generally speaking, is based on equating the transmission of emotional information with contagion, i.e., a process where observers are passive and where they mandatorily
“catch” the various emotions expressed by the target. In this respect, cognitive models that
explain how humans take into account the various characteristics of the target’s emotional expression in order to respond flexibly to others’ emotional signals (sometimes in a congruent way, by adopting an emotional experience similar to that of the target) can be a good remedy.
Emotional contagion as emotional communication
One of the assumptions shared by all three types of models mentioned above is the common
belief that “emotional contagion” is a special process which relies on dedicated psychological
mechanisms. This is particularly obvious when considering the model of “primitive emotional
contagion”: by assuming that individuals A and B share their emotions, which are therefore
taken to be replicas of each other, advocates of this model typically restrict their investigation
to psychological processes that allow for a mirroring between A and B. But, even if some
sort of “interpersonal similarity” (as described by De Vignemont & Jacob, 2012) between
A’s and B’s emotional states is necessary for a given social interaction to be classified as
“emotional contagion”, this does not mean that B’s reactions to A’s emotional displays rely
on mechanisms that automatically ensure the congruency between the emotional states of A
and B. In fact, emotional contagion could be a subset of emotional communication processes:
confronted with emotional signals, observers may accidentally react in a congruent manner
(thus leading to emotional contagion). But they may equally react in a complementary or
even incongruent fashion, in which case emotional contagion would merely be a case of reaction
to others’ emotional signals, where the emotions of A and B happen to be similar. Explaining
the proximal mechanisms of emotional contagion thus brings us back to the wider issue of
which psychological mechanisms allow humans to react to others’ emotional displays.
How do we react to others’ emotional displays?
The question of the proximal mechanisms that support our reactions to others’ emotional
signals deserves an entire thesis. My own research agenda has been very different and I will
only suggest a few tentative answers to this question.
Together with Dr. Julie Grèzes, I have proposed a cognitive and neural framework to
explain how the human brain interprets and reacts to others’ expressions of threat. We also
briefly mention the possibility that this model could be extended to apply to other emotional
displays (such as joy, which is of main interest here). The model has been described in a paper
recently published in the journal Neuropsychologia (October 2013). As stated above, this
model treats emotional contagion as a specific case of emotional communication. In
other words, emotional contagion is considered to be a subset of a larger set which comprises
the whole range of emotional communication.
How do shared-representations and emotional processes cooperate
in response to social threat signals?
Julie Grèzes (a, b), Guillaume Dezecache (a, c)
(a) Cognitive Neurosciences Lab., INSERM U960 & IEC—Ecole Normale Supérieure, Paris 75005, France
(b) Centre de NeuroImagerie de Recherche (CENIR), Paris, France
(c) Institut Jean Nicod, UMR 8129 & IEC—Ecole Normale Supérieure, Paris, France
Keywords:
Emotional communication
Threat signals
Opportunities for action
Shared motor representation
Amygdala
Premotor cortex
Abstract
Research in social cognition has mainly focused on the detection and comprehension of others’ mental
and emotional states. In doing so, past studies have adopted a “contemplative” view of the role of the observer engaged in a social interaction. However, the adaptive problem posed by the social environment is first and foremost that of coordination, which demands more of social cognition than the mere detection and comprehension of others’ hidden states. Offering a theoretical framework that takes into account the dynamical aspect of social interaction – notably by accounting for the constant interplay between emotional appraisal and motor processes in the socially engaged human brain – thus constitutes an
important challenge for the field of social cognition. Here, we propose that our social environment can be
seen as presenting opportunities for actions regarding others. Within such a framework, non-verbal
social signals such as emotional displays are considered to have evolved to influence the observer in
consistent ways. Consequently, social signals can modulate motor responses in observers. In line with
this theoretical framework we provide evidence that emotional and motor processes are actually tightly
linked during the perception of threat signals. This is ultimately reflected in the human brain by constant
interplay between limbic and motor areas.
1. Introduction
“Actions are critical steps in the interaction between the self
and external milieu” (Jeannerod, 2006).
We are continuously confronted with a great number of opportunities for actions in our environment, and we are constantly collecting
information in order to select the most relevant set of motor
commands from among numerous potential action plans so as to
respond to environmental challenges (Cisek, 2007; Cisek & Kalaska,
2010). This ability to form multiple motor plans in parallel and to
flexibly switch between them brings a survival advantage by dramatically reducing the time one takes to respond to environmental
challenges (Cisek & Kalaska, 2010; Cui & Andersen, 2011). These action
possibilities emerge from the relationship between species and their
milieu, as well as from the interaction between individuals and their
more immediate environment. They thus depend both on long-term
attunement (at the evolutionary time-scale, through cognitive adaptations and natural selection), and on short-term attunement (at the
proximal level, through developmental patterns as well as through
local accommodation) (Kaufmann & Clément, 2007). Note that contextual assumptions (through observer/actor's preferences and skills,
as well as objects’ characteristics) also play an important role in the
interactions between individuals and their milieu and ultimately shape
action opportunities.
Although the concept of action opportunities has mostly been
used to account for interactions between animals and non-social
physical objects in the world, it may equally apply to our interactions
with the social world. We would therefore perceive our physical and
social environments as maps of relevant action opportunities in a
space which can also include potential actions of another present in
one's own space (Sebanz, Knoblich, & Prinz, 2003; Sartori, Becchio,
Bulgheroni, & Castiello, 2009; Bach, Bayliss, & Tipper, 2011; Ferri,
Campione, Dalla Volta, Gianelli, & Gentilucci, 2011). Again, contextual
assumptions do play a role within such a framework: observers’ skills
and the characteristics of the social objects (the individual[s] with
whom one interacts) shape opportunities for action regarding others;
they are a function of one's own needs (Rietveld, De Haans, & Denys,
2012) and attitudes towards others (Van Bavel & Cunningham, 2012).
Opportunities for action may also well emerge from emotional
signals (Grèzes, 2011; Dezecache, Mercier, & Scott-Phillips, 2013).
A fearful display, for instance, invites observers to act upon it,
whereby observers select among numerous potential actions
(fleeing from the threatening element, fighting against it or
rescuing potentially endangered congeners, among other numerous
potential actions) according to their preferences and appraisal of
the situation. The concept of opportunities for action thus constitutes a fruitful framework within which we can better understand social and emotional perception and its modulation in
different individuals.
One critical consequence of the view that our social world
features opportunities for action in response to others’ emotions is
the necessity to propose an adequate cognitive and neural model of
social interaction: behaviour and brain activity should reflect the
processing of multiple representations of potential actions in
response to others’ behaviour, and their selection through the use
of external, as well as internal sensory information. It also requires
that socio-emotional understanding be tightly linked with social
interactive skills (McGann & De Jaegher, 2009). The building of such a
view, we shall argue, supposes the integration of two separate
systems in the brain, i.e., the motor and the emotional systems that
have been mainly studied independently in the literature. We believe
the synthesis of these two lines of research will help generate a novel
framework to better understand the cognitive and neural mechanisms which ensure the initiation of adaptive responses during social
and emotional interactions. Since much of our work has been
dedicated to the perception of threat signals, we will exclusively
focus on the perception of fear and anger in others’ face and body.
The possibility that limbic and motor processes similarly interact
during the perception of other emotional signals (such as joy and
disgust) will be briefly discussed.
2. Shared motor representations: A key mechanism for the
understanding of others’ actions and emotions.
2.1. Perception of actions
The neural basis underpinning our ability to represent and
understand the actions of others has been the object of considerable
research in both monkeys and humans. It is now acknowledged that
perceived actions are mapped onto the motor system of the observer,
activating corresponding motor representations (henceforth “shared
motor representations”). The motor system of the observer simulates
the observed action by issuing corresponding motor commands that
account for predictions of immediate outcomes of the perceived
action (e.g. Jeannerod, 2001; Wilson & Knoblich, 2005). Shared motor
representations were shown to be more selectively tuned to process
actions that (1) conform to biomechanical and joint constraints of
normal human movement (Reid, Belsky, & Johnson, 2013; Saygin,
2007; Dayan, Casile, Levit-Binnun, Giese, & Flash, 2007; Elsner,
Falck-Ytter, & Gredebäck, 2012), and (2) are simple and familiar
within the observer's motor repertoire (Calvo-Merino, Grèzes, Glaser,
Passingham, & Haggard, 2006; Kanakogi & Itakura, 2011) or for
which the observer has acquired visual experience (Cross, Kraemer,
Hamilton, Kelley, & Grafton, 2009; Jola, Abedian-Amiri, Kuppuswamy,
Pollick, & Grosbras, 2012). Shared motor representations sustained
by premotor, motor, somatosensory and parietal cortices (Grèzes
& Decety, 2001; Morin & Grèzes, 2008; Caspers, Zilles, Laird, &
Eickhoff, 2010; Van Overwalle, 2008; Shaw, Grosbras, Leonard, Pike,
& Paus, 2012; Molenberghs, Hayward, Mattingley, & Cunnington,
2012) allow us to identify “what” the action is and “how” it is or will
be performed (Thioux, Gazzola, & Keysers, 2008; Hesse, Sparing, &
Fink, 2008).
2.2. Limits
If shared motor representations play a key role in deciphering
and predicting others' actions, they are, per se, not sufficient to
allow for interpersonal coordination. What is involved in the
perception of opportunities for actions during social interaction
is, cognitively speaking, very different from what shared motor
representations are known to do, that is, to allow for the simulation of an observed motor pattern (Rizzolatti, Fogassi, & Gallese,
2001). We assume those action opportunities to be emergent
properties of the observer-environment interactions, such that
interaction with the social world triggers a wide range of opportunities for actions in the engaged observer. It was shown that in
an interactive context, the perception of another individual's
gestures can override pre-planned actions towards physical
objects: the opening of an empty hand or the mouth induces, in
observers, changes in the trajectory of their grasping gesture
toward an object (Sartori et al., 2009; Ferri et al., 2011). Importantly, these gestures here were perceived as a request to be given
the object or to be fed, and not as an invitation to reproduce the perceived action.
These experiments strongly support the hypothesis that our brain
processes both physical and social information as currently available potential actions, and that the context strongly impacts the
selection between these action opportunities.
This perspective about social interaction calls for re-examination of previous findings on the neural bases suggested to
sustain the shared representations. Activity in parietal cortex and
connected premotor and motor regions (dorsal visuomotor
stream) may also reflect the implementation of multiple representations of potential actions that one can perform (Cisek, 2007;
Cisek & Kalaska, 2010). Within the dorsal visuomotor stream of the
macaque brain, 20% of motor neurons showed object-related
visual properties (canonical neurons) related to specification of
the potential action triggered by the perceived object (Rizzolatti &
Fadiga, 1998; Murata, Gallese, Luppino, Kaseda, & Sakata, 2000;
Raos, Umiltá, Murata, Fogassi, & Gallese, 2006). In parallel, 17% of
motor neurons of dorsal visuomotor stream showed action-related
visual properties (mirror neurons—(Gallese, Fadiga, Fogassi, &
Rizzolatti, 1996)) associated with the understanding of other
individuals’ behaviour (Rizzolatti & Sinigaglia, 2010). Among these
17%, only 5.5% code for a strictly congruent action in the motor and
the visual domain, whereas 8.6% code for two or more actions in
the visual domain, and 1.3% for non-congruent actions. Similar
proportions were revealed in the human supplementary motor
cortex (Mukamel, Ekstrom, Kaplan, Iacoboni, & Fried, 2010): 14% of
the recorded neurons in this area responded to congruent observed
actions, but 10% responded to non-congruent observed actions.
Before viewing all action-related visual activities in the dorsal visuomotor stream as shared motor representation processes, one may first suggest that motor neurons that responded to non-congruent observed actions should not be considered mirror
neurons, but should be categorized as “social” canonical neurons,
that is, neurons that are active when foreseeing a possible social
interaction (vs. interaction with an object as for canonical neurons)
and preparing oneself accordingly (Dezecache, Conty, & Grèzes,
2012). Also, one may suggest that, in parallel to shared motor
representations activity, there is neural activity that is involved in
the processing of the observer's potential opportunities for action
in response to other individuals' behaviour.
2.3. Perception of emotions
The concept of shared motor representation is also influential
in the emotional domain. The perception of others’ emotional
expressions is taken to trigger an automatic and non-affective
motor matching (termed ‘mimicry’) of the perceived expressions
(Hatfield, Cacioppo, & Rapson, 1993; Hatfield, Cacioppo, & Rapson,
1994; Chartrand & Bargh, 1999; Williams, Whiten, Suddendorf, &
Perrett, 2001; Carr, Iacoboni, Dubeau, Mazziotta, & Lenzi, 2003;
Niedenthal, Barsalou, Winkielman, Krauth-Gruber, & Ric, 2005;
Lee, Josephs, Dolan, & Critchley, 2006; Dapretto et al., 2006),
which could precede and even cause emotion through facial and
bodily feedback, ultimately generating emotional contagion in the
observer (Hatfield et al., 1993; McIntosh, 2006; Niedenthal et al.,
2005). Some researchers therefore consider this non-emotional
motor convergence to be independent of internal emotional
reactions and/or to sustain a communicative role by allowing the
observer to inform the emitter that they have understood the
expressed state (Bavelas, Black, Lemery, & Mullett, 1986; Hess &
Blairy, 2001). The motor (and eventual emotional) convergence
between the emitter and the observer could also serve to enhance
social and empathic bonds bolstering social communication,
prosocial behaviour and affiliation (e.g., Chartrand & Bargh, 1999;
Yabar & Hess, 2006). In parallel, others have suggested that a direct
and implicit form of understanding others is achieved through
embodied simulation (e.g. Gallese, 2001; Niedenthal, 2007). It is
proposed that a vicarious activation of somatosensory representations (“as if loop” which bypasses the facial musculature) when
observing the emotional expressions of others is what facilitates
their recognition (Adolphs, Tranel, & Denburg, 2000; Pourtois
et al., 2004; Pitcher, Garrido, Walsh, & Duchaine, 2008; Banissy
et al., 2011; Maister, Tsiakkas, & Tsakiris, 2013).
2.4. Limits
One important limit can be raised regarding the role of shared
representations in the emotion domain. As Jeannerod stated:
“emotional contagion can only provide the observer with the
information that the person he sees is producing a certain type
of behaviour or is experiencing a certain type of emotion; but,
because it does not tell the observer what the emotion is about, it
cannot be a useful means for reacting to the emotion of that
person, and would not yield an appropriate response in a social
interaction. Imagine facing somebody who expresses anger and
threat: the adaptive response in that case seems to be avoidance
rather than imitation, i.e. not to experience anger oneself, but to
experience fear and eventually run away” (Jeannerod, 2006, p. 147). In other words, the shared representations system is, in
itself, not sufficient to allow for appropriate reactions in the
observer. Note that this critique of the shared motor representation system specifically calls into question its lack of detail about
how sharing others’ motor states can significantly prepare oneself
to react in a relevant fashion; this is complementary to other
criticisms that target the very contribution of motor mirroring
processes to mindreading (Borg, 2007; Jacob, 2008) but consistent
with critiques focusing on the role of shared motor representations in the production of empathetic responses (de Vignemont &
Singer, 2006; de Vignemont & Jacob, 2012) in that it suggests that
perceiving cues of emotion in others is, in itself, not sufficient to
produce an appropriate response. The production of appropriate
responses should ultimately rely on appraisal processes that lie
beyond the scope of the shared motor representations system.
We suggest that the processing of opportunities for action
during social interaction involves a specific brain network, a set of
specific neuron populations and specific mechanisms that differ
(for the most part) from the shared motor representations network. There are brain regions that do not display motor and
mirroring properties – such as the amygdala, the brain's key
emotional centre – that are fundamental in the evaluation of
social signals and in exerting significant influence over the selection of one's own adaptive reaction and that should thus be taken
into account. Characterizing the neural specificities of the network
serving the processing of opportunities for action during social
interaction, beyond the shared motor representation system,
constitutes a challenging step in our understanding of the processing of social emotional signals. This will be the topic of the next
section. As stated above, we will focus our discussion on threat
signals, i.e., displays of fear and anger.
3. Emotions motivate actions in the observers (emotion-to-actions processes)
Emotional displays, which constitute a medium for biological
communication, can be seen as tools which influence the behaviour of the agents with whom we interact (Grèzes, 2011;
Dezecache et al., 2013). They promote fast processing and elaboration of adapted social decisions and responses in the observer
(Frijda, 1986; Frijda & Gerrod Parrott, 2011). Surprisingly, although
emotions are believed to promote adaptive social decisions and
responses in others, most research on emotions in humans has
focused on the sensory (Adolphs, 2002) or sensorimotor (Gallese,
2001; Niedenthal, 2007) processing of emotional signals and
associated attentional capture (Vuilleumier & Pourtois, 2007). More
generally, the literature on emotion has taken for granted that the
basic task of social cognition is the detection and comprehension of
others’ mental and affective states². Yet, efficient coordination
during emotional social interactions, which constitutes a step
beyond the mere detection of others’ emotional states, has been
mostly overlooked. As a result, cognitive and anatomical links
between structures that detect emotions in others and structures
that prepare motor responses to cope with these emotional signals
have been largely neglected or studied apart. In consequence, little
is known about the anatomical substrates which allow the limbic
system to influence purposive actions, i.e., to prepare a coordinated
set of motor commands necessary to face social demands, through
interaction with the cortical motor system. The following subsection summarizes the anatomical and functional evidence in support of functional interactions between the limbic and the motor
systems during the perception of threat signals.
3.1. Anatomical evidence
In non-human primates, anatomical tracing and electrophysiology
studies in monkeys provide compelling evidence that the amygdala
(AMG) plays a role in two key functions: processing the emotional
significance of features of the environment, and interfacing with
motor systems for the expression of adaptive behavioural responses
(see Damasio et al., 2000; LeDoux, 2000; Barbas, 2000). The closely
linked network composed of visual areas (fusiform gyrus (FG) and
superior temporal sulcus (STS)), the AMG and the lateral inferior
frontal gyrus in humans (BA 45/47) forms the anatomical substrate of
the first function, i.e. the evaluation of the emotional significance of
sensory events (Ghashghaei & Barbas, 2002). As for the second
function, there is abundant animal brain literature suggesting
that a hierarchically-organized subcortical circuit constituted by the
central nucleus of the AMG, the hypothalamus, the bed nucleus of
the stria terminalis and the periaqueductal gray matter mediates
species-specific basic survival behaviours (Holstege, 1991; the Royal
Road, Panksepp, 1998). The basolateral complex of the AMG, in
concert with the ventromedial prefrontal cortex and the ventral
striatum (nucleus accumbens), underlies the modulation and regulation of these visceral functions and behavioural choices (Price, 2003;
Mogenson, Jones, & Yim, 1980; Groenewegen & Trimble, 2007).
² There has been a general tendency in the field of social neuroscience to
concentrate on the “contemplative” part of the social interaction (i.e., putting
participants in position of observers who are trying to decipher others’ mental
states) and to ignore the preparation of an adaptive but flexible response which is
an essential part of social interaction. Whether the stress on shared-representation
constitutes a symptom or the cause of this tendency is an interesting question, but
it lies out of the scope of the paper.
In addition, there is consistent evidence from monkey, cat and rat
studies, that the magnocellular division of the basal nucleus of the
AMG complex sends direct projections to cortical motor-related areas
(cingulate motor areas, SMA and pre-SMA, lateral premotor cortex
(PM), primary motor cortex and somatosensory cortex) (Avendano,
Price, & Amaral, 1983; Amaral & Price, 1984; Llamas, Avendano, &
Reinoso-Suarez, 1977; Llamas, Avendano, & Reinoso-Suarez, 1985;
Macchi, Bentivoglio, Rossini, & Tempesta, 1978; Sripanidkulchai,
Sripanidkulchai, & Wyss, 1984; Morecraft et al., 2007; Jürgens,
1984; Ghashghaei, Hilgetag, & Barbas, 2007). These latter findings
provide a potential mechanism through which the AMG can influence more complex and subtle behaviours elicited during social interactions, beyond the well-known automatic stereotypical emotional behaviours (Llamas et al., 1977; Chareyron, Banta Lavenex, Amaral, & Lavenex, 2011).
3.2. Neuroimaging evidence
In humans, it is currently unknown whether there are direct
anatomical connections between the AMG and the cortical motor
system. Yet, using functional magnetic resonance imaging (fMRI),
we and others have revealed, during the perception of emotional
displays, co-activation of the AMG and motor-related areas,
notably the PM (Isenberg et al., 1999; Whalen et al., 2001;
Decety & Chaminade, 2003; Carr et al., 2003; de Gelder, Snyder,
Greve, Gerard, & Hadjikhani, 2004; Sato, Kochiyama, Yoshikawa,
Naito, & Matsumura, 2004; Grosbras & Paus, 2006; Warren et al.,
2006; Grèzes, Pichon, & de Gelder, 2007; Pichon, de Gelder, &
Grèzes, 2008; Hadjikhani, Hoge, Snyder, & de Gelder, 2008; Pichon,
de Gelder, & Grèzes, 2009; Pouga, Berthoz, de Gelder, & Grèzes,
2010; Van den Stock et al., 2011; Pichon, de Gelder, & Grèzes, 2012;
Grèzes, Adenis, Pouga, & Armony, 2012).
Moreover, functional connectivity was revealed between the
amygdala and motor-related areas (Qin, Young, Supekar, Uddin, &
Menon, 2012; Ahs et al., 2009; Roy et al., 2009; Grèzes, Wicker,
Berthoz, & de Gelder, 2009; Voon et al., 2010), and direct evidence
that emotional stimuli prime the motor system and facilitate
action readiness was provided by transcranial magnetic stimulation (TMS) studies (Oliveri et al., 2003; Baumgartner, Matthias, &
Lutz, 2007; Hajcak et al., 2007; Schutter, Hofman, & van Honk,
2008; Schutter & Honk, 2009; Coombes et al., 2009; Coelho, Lipp,
Marinovic, Wallis, & Riek, 2010; van Loon, van den Wildenberg, &
van Stegeren, 2010).
Of particular interest, the mean coordinates reported in PM in the above-mentioned fMRI studies using facial and body expressions of fear and anger fall within the ventral/dorsal PM border (Tomassini et al., 2007) (see Fig. 1). Knowing that lateral PM is implicated in motor preparation and environmentally-driven actions (Hoshi & Tanji, 2004; Passingham, 1993) and that, in monkeys, electrical stimulation of the PMv/PMd border triggers characteristic defensive movements (Cooke & Graziano, 2004; Graziano & Cooke, 2006), we suggest that emotional displays, once evaluated in the amygdala, prompt or modulate dispositions to interact, as revealed by activity in the cortical premotor cortex (see Fig. 1).
3.3. Behavioural markers
The fact that emotional expressions trigger actions in the
observer is also consistent with recent behavioural studies.
Scholars agree that, when exposed to emotional expressions,
individuals display rapid facial reactions (RFRs) detectable by
electromyography (EMG) (Bush, McHugo, & Lanzetta, 1986;
Dimberg & Thunberg, 1998; Dimberg, Thunberg, & Elmehed,
2000; Hess & Blairy, 2001; McIntosh, 2006). In a recent study
(Grèzes et al., 2013), we raised the question of whether RFRs,
instead of reflecting the function of the shared-representations
system (see Section 2.3.), would reveal, in the observer, the preparation of appropriate actions in response to social signals. To this end,
we manipulated two critical perceptual features that contribute to
determining the significance of others’ emotional expressions: the
direction of attention (toward or away from the observer) and the
intensity of the emotional display (Grèzes et al., 2013). Electromyographic activity over the corrugator muscle was recorded
while participants observed videos of neutral to angry body
expressions. From a shared motor representation perspective
(see Section 2.3.), one should expect either (1) no early RFRs in the absence of facial expressions, as the body alone does not provide the cues necessary for facial motor matching (strict perspective); (2) congruent RFRs to others' angry faces, irrespective of the direction of attention of the emitter (Chartrand & Bargh, 1999); or (3) less mimicry when anger is directed at the observer, as anger conveys non-ambiguous signals of non-affiliative intentions (Bourgeois & Hess, 2008; Hess, Adams, & Kleck, 2007). Yet, self-directed bodies induced greater RFR activity than other-directed bodies; additionally, RFR activity was influenced by the intensity of anger only when expressed by self-directed bodies (see Fig. 2). Our data clearly
indicate that facial reactions to body expressions of anger are not
automatic and cannot be interpreted as pure non-affective motor
mimicry. A strict motor mimicry process is indeed not sufficient to
explain why RFRs are displayed to non-facial and non-social
emotional pictures (Dimberg & Thunberg, 1998), emotional body
expressions (Magnee, Stekelenburg, Kemner, & de Gelder, 2007;
Tamietto et al., 2009) and vocal stimuli (Bradley & Lang, 2000;
Hietanen, Surakka, & Linnankoski, 1998; de Gelder, Vroomen,
Pourtois, & Weiskrantz, 1999), nor why they are occasionally
incongruent with the attended signals (Moody, McIntosh, Mann,
& Weisser, 2007). By revealing that RFRs were influenced by the
self-relevance of the emotional display which varies as a function
of the emitter's direction of attention and the intensity of his/her
emotional expression, our data rather suggests that RFRs are
behavioural markers of an emotion-to-action process allowing
for the preparation of adaptive but flexible behavioural responses
to emotional signals.
3.4. Interindividual variability in healthy populations and psychiatry
The idea that we perceive our physical and social environment
as consisting of numerous opportunities for action entails that
social understanding is intertwined with social interactive skills
(McGann & De Jaegher, 2009). Indeed, as mentioned above, the set
of opportunities for actions, as it emerges from the specific
relationship between a single agent and features of its environment, critically depends on this agent's abilities and preferences.
In this respect, disorders that impair an individual's ability to
accurately detect opportunities for action (Loveland, 2001), such as
autism spectrum disorders (ASD), should reveal abnormal interplay between limbic and motor systems.
ASD are neurodevelopmental disorders characterized by a unique
profile of impaired social interaction and communication (e.g. Lord
et al., 1989) with a major impact on social life (American Psychiatric
Association, 1994). Of importance here, individuals with autism display
“a pervasive lack of responsiveness to others” and “marked impairments in the use of multiple nonverbal behaviours to regulate social
interactions” (American Psychiatric Association, 1994). The facts that a remarkable maturation process of the brain's affective and social systems spans from childhood to adulthood, and that social cognitive skills need extensive tuning during development, may explain why
ASD and other developmental disorders are often associated with
pervasive social skill impairments (Kennedy & Adolphs, 2012). Moreover, social cognitive abilities are subject to important inter-individual
variability: there are large individual differences even in healthy
individuals (Kennedy & Adolphs, 2012) and the severity of ASD
Fig. 1. (A) Group average activations elicited by fear vs. neutral dynamic expressions (top: Grèzes et al., 2007; bottom: Pouga et al., 2010), anger vs. neutral expressions
(middle: Pichon et al., 2008). (B) Bar charts representing the number of activations found during the observation of neutral actions (blue) (see meta-analysis by Morin &
Grèzes, 2008) and during the observation of facial and body expressions of fear and anger (green) (meta-analysis performed for this review on all papers that to our
knowledge found PM activity). The mean coordinate along the z axis for emotion is 50. (C) Border between the ventral and the dorsal premotor cortex in the human brain
(Tomassini et al., 2007). (D) On top, the lateral view of the monkey brain with the parcellation of the motor and parietal cortex (Rizzolatti, Fogassi, & Gallese, 2001), on the
bottom, drawings represent the final defensive postures evoked by the electrical stimulation of the border between PMv and PMd.
characteristics are posited to lie on a continuum extending into the
typical population (as measured by the Autism-Spectrum Quotient, AQ) (Baron-Cohen, Wheelwright, Skinner, Martin, & Clubley, 2001). For example, typical adults' performance on behavioural tasks on which ASD individuals are impaired correlates with the extent to which those adults display autistic traits (Nummenmaa, Engell, von dem Hagen,
Henson, & Calder, 2012), strongly suggesting that the boundary
between typical and pathological populations is not clear cut and
might be better viewed as a continuum.
Given the importance of inter-individual variability in social cognitive abilities, as well as the blurred boundary between typical and atypical populations, it is important to track the development of pathological characteristics while consistently collecting anatomical data. To our knowledge, only one study has looked at age-related changes in AMG connectivity; it showed drastic changes in the intrinsic functional connectivity of the basolateral nucleus of the AMG with sensorimotor cortex, with weaker integration and segregation of amygdala connectivity in 7- to 9-year-old children as compared to 19- to 22-year-old young adults (Qin
et al., 2012). Also, Greimel et al. (2012) recently demonstrated
that age-related changes in grey matter volume in AMG and PM
differed in ASD as compared to typically developing (TD)
participants.
We revealed, in adults with ASD, atypical processing of emotional expressions subtended by a weaker functional connectivity between the AMG and PM (Grèzes et al., 2009). Similarly, Gotts et al. (2012), using a whole-brain functional connectivity approach in fMRI, showed a decoupling in ASD between brain regions involved in the evaluation of socially relevant signals and motor-related circuits. These results emphasize the importance of studying the integrity of between-region (and even between-circuit) connectivity, rather than looking for mere localized abnormalities. They also suggest the possibility that weak limbic-motor pathways might contribute to difficulties in perceiving social signals as opportunities for action. Ultimately, such abnormal connectivity should impact the preparation of adaptive but flexible behavioural responses in social contexts.
4. Reuniting shared motor representations and emotion-to-actions processes in a single cognitive and neural framework
Overall, the literature suggests that the AMG works in tandem with cortical motor-related areas and critically raises the question of the functional interplay between shared motor representations and emotion-to-actions processes. To clarify the relationship
Fig. 2. (A) Time course of the mean EMG corrugator supercilii activity as a function of the Target of Attention (S for Self (green), O for Other (blue)) and the Levels of Emotion
(1–4). Activity reflects average activation during each 100-ms time interval. (B) Mean activity over the corrugator supercilii region between 300 and 700 ms. The mean (SEM) activity is represented as a function of (Left) the Target of Attention (Self (green), Other (blue)) and the Levels of Emotion (1–4) and (Right), only for Self-oriented conditions, for the 4 Levels of Anger. *p < 0.05.
between these two processes, we studied the perception of
dynamic emotional body expressions. In our studies, we concentrated on two emotions, fear and anger (both of which signal threat), as the ability to quickly detect environmental danger and to subsequently prepare a set of adequate motor commands is highly adaptive. These two emotions are ideal for addressing the link between the motor and the limbic systems.
In our initial fMRI experiments (Grèzes, Pichon, & de Gelder,
2007; Pichon et al., 2008; Grèzes et al., 2009; Pouga et al., 2010),
we presented neutral and emotional body expressions displayed in
either still or dynamic formats. Such a factorial design allowed us to
disentangle the involvement of shared motor representations from
emotion-to-actions processes during the perception of emotional
displays. The comparison between dynamic and static actions
triggered differential neural activity in brain areas associated with
shared motor representations (STS, parietal, somatosensory and
inferior frontal gyrus IFG44). On the other hand, the comparison
between emotional and neutral expressions prompted activity in
brain areas we suggested to be related to emotion-to-actions
processes (fusiform gyrus and STS, AMG and PM). Of interest here,
activity in the posterior part of the inferior frontal gyrus (IFG44) was
clearly related to the perception of dynamic actions, whether
neutral or emotional, whereas activity in the PM was only related
to the perception of angry dynamic expressions (see Fig. 2). Also,
the distribution of the activations reported during the perception
of threatening faces or bodies in several studies, including ours, is
different from the one found during the observation of neutral
actions (see Fig. 1B). Altogether, these results suggest that the two
systems could run in parallel.
To further explore the relation between these two processes,
we performed psycho-physiological interactions (PPI—functional
connectivity) analyses on data collected by Pouga et al. (2010)
to identify (i) changes in the connectivity pattern of shared
motor representations when an action becomes emotional; and
(ii) changes in the connectivity pattern of emotion-to-actions
processes when a fearful stimulus becomes dynamic (Grèzes and Pouga, unpublished data—see Fig. 3). Two areas in the right hemisphere were selected for shared motor representations, IFG44 and the posterior part of the STS, and two for emotion-to-actions processes, the STS and
AMG. The STS is a brain region common to both processes (Pichon
et al., 2008).
The results revealed that when an action becomes emotional,
the STS informs a subcortical circuit that represents a major output
channel for the limbic system involved in visceral and basic
survival behaviours (Holstege, 1991), which is also under the control of the orbitofrontal cortex (Price, 2003). In parallel, one of the main nodes of the shared motor representations network (IFG44) increases its connectivity with somatosensory cortices, which could reflect the representation of the sensory and somatic states (i.e. “what it feels like”) of the perceived body expression of emotion (Gallese & Goldman, 1998; Adolphs, 2002) (see Fig. 3A—Supplementary Table 1). Furthermore, our results revealed that, when fearful postures become dynamic, both the STS and the AMG increased their connectivity with visual areas but, more interestingly here, with the pre-SMA and the border between ventral and dorsal premotor cortex (PMd/PMv) (see Fig. 3B—Supplementary Table 2). Together, these results suggest that, when facing the emotional displays of others, two processes are at work in parallel: shared motor representations, which comprise components of the perceived action and the associated predicted somatosensory consequences that anticipate the unfolding of the other's impending behaviour and feelings, and emotion-to-actions processes, which influence the preparation of adaptive responses to the emotional signal in the observer.
The picture provided by fMRI alone was, however, limited by the
poor temporal resolution of the BOLD response. Therefore, in a
follow-up study, we combined electroencephalography (EEG) with
fMRI to determine whether shared motor representations and
emotion-to-action processes interact, and if they do, when and
where this happens in the human brain (Conty, Dezecache,
Hugueville, & Grèzes, 2012). Participants viewed dynamic stimuli
depicting actors producing complex social signals involving gaze,
a pointing gesture, and the expression of anger. We demonstrated
that the emotional content of the stimuli was first processed in the
Fig. 3. (A) Left: statistical parametric maps of brain activation in response to the observation of dynamic versus static emotional expressions (red, top: anger (Pichon et al., 2008), bottom: fear (Pouga et al., 2010)) and dynamic versus static neutral expressions (green). Right: parameter estimates (arbitrary units, mean centered) of the PMv/PMd border (top: xyz = 50 2 48) and of the posterior part of the inferior frontal gyrus IFG44 (bottom: xyz = 52 16 32). AS: Anger Static; AD: Anger Dynamic; NS: Neutral Static; ND: Neutral Dynamic. (B) Using Psychophysiological Interaction (PPI), Grèzes and Pouga (unpublished data) addressed changes in the connectivity pattern of two brain areas sustaining shared motor representations (black circles of the middle picture—the pSTS (xyz = 54 46 8) and inferior frontal cortex IFG44 (xyz = 46 12 26)) when an action becomes emotional. Middle picture: statistical maps showing brain activations in the right hemisphere in response to the perception of dynamic body expressions vs. static ones, irrespective of the emotional content, rendered on a partially inflated lateral view of the Human PALS-B12 atlas. Right picture: statistical maps showing increased functional connection with the STS (purple) or IFG44 (green) (see Supplementary Table 1). (C) Changes in the connectivity pattern of emotion-to-actions brain areas (STS (xyz = 50 40 2) and Amygdala (xyz = 18 6 24)) when a static fearful stimulus becomes dynamic. Middle picture: statistical maps showing brain activations in the right hemisphere in response to the perception of fearful expressions vs. neutral ones, irrespective of their nature (static or dynamic), rendered on a partially inflated lateral view of the Human PALS-B12 atlas. Right picture: statistical maps showing increased functional connection with the STS (purple) or the Amygdala (red) (see Supplementary Table 2).
Fig. 4. (Top) Stimuli examples. Here, the actor displays direct gaze (but could also in the experiment display averted gaze), angry or neutral facial expression, and a pointing
gesture or not. From the initial position, one (gaze direction only), two (gaze direction and emotional expression or gaze direction and gesture), or three (gaze direction,
emotional expression, and gesture) visual cues could change. (Bottom) Joint ERP-fMRI results (from Conty et al., 2012).
AMG (170 ms) before being integrated with other visual cues (gaze
and gesture) in the PM (200 ms). Of interest, the highest level of
activity in the PM was revealed for the condition which conveyed
the highest degree of potential interaction; i.e., viewing an angry
person with gaze and pointed finger aimed at oneself (see Fig. 4).
Only a combination of two complementary mechanisms can explain why PM activity peaked for the highest degree of potential social interaction: (1) the estimation of prior expectations about the perceived agent's immediate intent, which most probably relies on shared motor representations (Kilner, Friston, & Frith, 2007); and (2) the evaluation of the emotional content and the selection of the appropriate action for the observer to deal with the immediate situation. This study thus provides evidence that shared motor representations and emotion-to-actions processes can interact as early as 200 ms after the appearance of a signal of threat.
Whether these two complementary mechanisms are also activated during the perception of emotional signals other than fear and
anger is an interesting question. While it is reasonable to think that
they are also crucial in the processing of other signals related to the
presence of threat in the environment (such as disgust), their
contribution to the processing of other social signals (e.g., joy) is an
empirical question. Although electrophysiological recordings in rodents show consistent limbic-motor interactions during sexually arousing contexts (Korzeniewska, Kasicki, & Zagrodzka, 1997), it is still unknown whether these interactions extend to other positive or socially rewarding contexts, and whether they are preserved in humans.
5. Summary
In this paper, we argue that social signals that include emotional displays (here: signals related to threat) can be considered
as prompting a wide range of opportunities for actions in the
observer, even if these opportunities do not materialise into overt
actions. When facing the threat displays of others, one needs to decipher the emitted emotional signal and predict its immediate future while preparing to respond to it in an adaptive way. We
argue that the two processes are sustained by shared motor
representations and emotion-to-actions mechanisms, respectively.
These two mechanisms can be prompted independently by the
same stimuli and can either run in parallel (Bavelas et al., 1986) or
together (Conty et al., 2012).
Acknowledgements
The presented work was supported by a Human Frontier
Science Program Grant (RGP 0054/2004), an EU Sixth Framework Programme grant (N° NEST-2005-Path-IMP-043403), an ACI Neurosciences intégratives et computationnelles 2004 program, an Agence Nationale de la Recherche (ANR) “Emotion(s), Cognition, Comportement” 2011 program (Selfreademo), by the Fondation Roger de
Spoelberch and by INSERM. The department is supported by ANR11-0001-02 PSLn and ANR-10-LABX-0087. We wish to warmly
thank all our collaborators, and notably Terry Eskenazi and
Michèle Chadwick for their useful comments on the manuscript.
Appendix A. Supporting information
Supplementary data associated with this article can be found in
the online version at http://dx.doi.org/10.1016/j.neuropsychologia.
2013.09.019.
References
Adolphs, R. (2002a). Neural systems for recognizing emotion. Current Opinion in
Neurobiology, 12, 169–177.
Adolphs, R., Tranel, D., & Denburg, N. (2000). Impaired emotional declarative
memory following unilateral amygdala damage. Learning and Memory, 7,
180–186.
Ahs, F., Pissiota, A., Michelgard, A., Frans, O., Furmark, T., Appel, L., et al. (2009).
Disentangling the web of fear: Amygdala reactivity and functional connectivity
in spider and snake phobia. Psychiatry Research: Neuroimaging, 172, 103–108.
Amaral, D. G., & Price, J. L. (1984). Amygdalo-cortical projections in the monkey
(Macaca fascicularis). Journal of Comparative Neurology, 230, 496.
American Psychiatric Association (1994). Diagnostic and statistical manual of
mental disorders DSM-IV-TR 4th ed. Washington DC.
Avendano, C., Price, J. L., & Amaral, D. G. (1983). Evidence for an amygdaloid
projection to premotor cortex but not to motor cortex in the monkey. Brain
Research, 264, 111–117.
Bach, P., Bayliss, A., & Tipper, S. (2011). The predictive mirror: Interactions of mirror
and affordance processes during action observation. Psychonomic Bulletin &
Review, 18, 171–176.
Banissy, M. J., Garrido, L., Kusnir, F., Duchaine, B., Walsh, V., & Ward, J. (2011).
Superior facial expression, but not identity recognition, in mirror-touch
synesthesia. The Journal of Neuroscience, 31, 1820–1824.
Barbas, H. (2000). Connections underlying the synthesis of cognition, memory, and
emotion in primate prefrontal cortices. Brain Research Bulletin, 52, 319–330.
Baron-Cohen, S., Wheelwright, S., Skinner, R., Martin, J., & Clubley, E. (2001). The
autism-spectrum quotient (AQ): Evidence from Asperger syndrome/high-functioning autism, males and females, scientists and mathematicians. Journal of
Autism and Developmental Disorders, 31, 5–17.
Baumgartner, T., Matthias, W., & Lutz, J. (2007). Modulation of corticospinal activity
by strong emotions evoked by pictures and classical music: A transcranial
magnetic stimulation study. Neuroreport, 18, 261–265.
Bavelas, J. B., Black, A., Lemery, C. R., & Mullett, J. (1986). I show how you feel. Motor
mimicry as a communicative act. Journal of Personality and Social Psychology, 50,
322–329.
Borg, E. (2007). If mirror neurons are the answer, what was the question? Journal of
Consciousness Studies, 14, 5–19.
Bourgeois, P., & Hess, U. (2008). The impact of social context on mimicry. Biological
Psychology, 77, 343–352.
Bradley, M. M., & Lang, P. J. (2000). Affective reactions to acoustic stimuli.
Psychophysiology, 37, 204–215.
Bush, L. K., McHugo, G. J., & Lanzetta, J. T. (1986). The effects of sex and prior
attitude on emotional reactions to expressive displays of political leaders.
Psychophysiology, 23, 427.
Calvo-Merino, B., Grèzes, J., Glaser, D. E., Passingham, R. E., & Haggard, P. (2006).
Seeing or doing? Influence of visual and motor familiarity in action observation.
Current Biology, 16, 1910.
Carr, L., Iacoboni, M., Dubeau, M. C., Mazziotta, J. C., & Lenzi, G. L. (2003). Neural
mechanisms of empathy in humans: A relay from neural systems for imitation
to limbic areas. Proceedings of the National Academy of Sciences of the United
States of America, 100, 5497–5502.
Caspers, S., Zilles, K., Laird, A. R., & Eickhoff, S. B. (2010). ALE meta-analysis of action
observation and imitation in the human brain. NeuroImage, 50, 1148–1167.
Chareyron, L. J., Banta Lavenex, P., Amaral, D. G., & Lavenex, P. (2011). Stereological
analysis of the rat and monkey amygdala. The Journal of Comparative Neurology,
519, 3218–3239.
Chartrand, T. L., & Bargh, J. A. (1999). The chameleon effect: The perception–
behavior link and social interaction. Journal of Personality and Social Psychology,
76, 893–910.
Cisek, P. (2007). A parallel framework for interactive behavior. Progress in Brain
Research, 165, 478–492.
Cisek, P., & Kalaska, J. F. (2010). Neural mechanisms for interacting with a world full
of action choices. Annual Review of Neuroscience, 33, 269–298.
Coelho, C. M., Lipp, O. V., Marinovic, W., Wallis, G., & Riek, S. (2010). Increased
corticospinal excitability induced by unpleasant visual stimuli. Neuroscience
Letters, 481, 135–138.
Conty, L., Dezecache, G., Hugueville, L., & Grèzes, J. (2012). Early binding of gaze,
gesture, and emotion: Neural time course and correlates. The Journal of
Neuroscience, 32, 4531–4539.
Cooke, D. F., & Graziano, M. S. (2004). Sensorimotor integration in the precentral
gyrus: Polysensory neurons and defensive movements. Journal of Neurophysiology, 91, 1648–1660.
Coombes, S. A., Tandonnet, C., Fujiyama, H., Janelle, C. M., Cauraugh, J. H., &
Summers, J. J. (2009). Emotion and motor preparation: A transcranial magnetic
stimulation study of corticospinal motor tract excitability. Cognitive, Affective
and Behavioral Neuroscience, 9, 380–388.
Cross, E. S., Kraemer, D. J. M., Hamilton, A. F., Kelley, W. M., & Grafton, S. T. (2009).
Sensitivity of the action observation network to physical and observational
learning. Cerebral Cortex, 19, 315–326.
Cui, H., & Andersen, R. A. (2011). Different representations of potential and selected
motor plans by distinct parietal areas. The Journal of Neuroscience, 31,
18130–18136.
Damasio, A. R., Grabowski, T. J., Bechara, A., Damasio, H., Ponto, L. L., Parvizi, J., et al.
(2000). Subcortical and cortical brain activity during the feeling of selfgenerated emotions. Nature Neuroscience, 3, 1049–1056.
Dapretto, M., Davies, M. S., Pfeifer, J. H., Scott, A. A., Sigman, M., Bookheimer, S. Y.,
et al. (2006). Understanding emotions in others: Mirror neuron dysfunction in
children with autism spectrum disorders. Nature Neuroscience, 9, 28–30.
Dayan, E., Casile, A., Levit-Binnun, N., Giese, M. A., Hendler, T., & Flash, T. (2007).
Neural representations of kinematic laws of motion: Evidence for actionperception coupling. Proceedings of the National Academy of Sciences, 104,
20582–20587.
de Gelder, B., Snyder, J., Greve, D., Gerard, G., & Hadjikhani, N. (2004). Fear fosters
flight: A mechanism for fear contagion when perceiving emotion expressed by
a whole body. Proceedings of the National Academy of Sciences of the United
States of America, 101, 16701–16706.
de Gelder, B., Vroomen, J., Pourtois, G., & Weiskrantz, L. (1999). Non-conscious
recognition of affect in the absence of striate cortex. Neuroreport, 10,
3759–3763.
de Vignemont, F., & Jacob, P. (2012). What is it like to feel another's pain? Philosophy
of Science, 79, 295–316.
de Vignemont, F., & Singer, T. (2006). The empathic brain: How, when and why?
Trends in Cognitive Sciences, 10, 435–441.
Decety, J., & Chaminade, T. (2003). When the self represents the other: A new
cognitive neuroscience view on psychological identification. Consciousness and
Cognition, 12, 577–596.
Dezecache, G., Conty, L., & Grèzes, J. (2013). Social affordances: Is the mirror neuron
system involved? Behavioral and Brain Sciences, 36, 417–418.
Dezecache, G., Mercier, H., & Scott-Phillips, T. C. (2013). An evolutionary perspective
to emotional communication. Journal of Pragmatics.
Dimberg, U., & Thunberg, M. (1998). Rapid facial reactions to emotional facial
expressions. Scandinavian Journal of Psychology, 39, 39–45.
Dimberg, U., Thunberg, M., & Elmehed, K. (2000). Unconscious facial reactions to
emotional facial expressions. Psychological Science, 11, 86–89.
Elsner, C., Falck-Ytter, T., & Gredebäck, G. (2012). Humans anticipate the goal of
other people's point-light actions. Frontiers in Psychology, 3.
Ferri, F., Campione, G., Dalla Volta, R., Gianelli, C., & Gentilucci, M. (2011). Social
requests and social affordances: How they affect the kinematics of motor
sequences during interactions between conspecifics. PLoS One, 6, e15855.
Frijda, N., & Gerrod Parrott, W. (2011). Basic emotions or ur-emotions? Emotion
Review, 3, 406–415.
Frijda, N. H. (1986). The emotions. Cambridge: Cambridge University Press.
Gallese, V. (2001). The ‘shared manifold’ hypothesis. From mirror neurons to
empathy. Journal of Consciousness Studies, 8, 33–50.
Gallese, V., Fadiga, L., Fogassi, L., & Rizzolatti, G. (1996). Action recognition in the
premotor cortex. Brain, 119, 593–609 (Pt 2).
Gallese, V., & Goldman, A. (1998). Mirror neurons and the simulation theory of
mind-reading. Trends in Cognitive Sciences, 2, 493–501.
Ghashghaei, H. T., & Barbas, H. (2002). Pathways for emotion: Interactions of
prefrontal and anterior temporal pathways in the amygdala of the rhesus
monkey. Neuroscience, 115, 1261–1279.
Ghashghaei, H. T., Hilgetag, C. C., & Barbas, H. (2007). Sequence of information
processing for emotions based on the anatomic dialogue between prefrontal
cortex and amygdala. NeuroImage, 34, 905–923.
Gotts, S. J., Simmons, W. K., Milbury, L. A., Wallace, G. L., Cox, R. W., & Martin, A.
(2012). Fractionation of social brain circuits in autism spectrum disorders. Brain
Graziano, M. S., & Cooke, D. F. (2006). Parieto-frontal interactions, personal space,
and defensive behavior. Neuropsychologia, 44, 845–859.
Greimel, E., Nehrkorn, B., Schulte-Ruther, M., Fink, G. R., Nickl-Jockschat, T.,
Herpertz-Dahlmann, B., et al. (2012). Changes in grey matter development in
autism spectrum disorder. Brain Structure and Function, 1–14.
Grèzes, J. (2011). Emotions motivate action and communication. Medical Science, 27,
683–684.
Grèzes, J., Adenis, M. S., Pouga, L., & Armony, J. L. (2012). Self-relevance modulates
brain responses to angry body expressions. Cortex, [Epub ahead of print].
Grèzes, J., & Decety, J. (2001). Functional anatomy of execution, mental simulation,
observation, and verb generation of actions: A meta-analysis. Human Brain
Mapping, 12, 1–19.
Grèzes, J., Philip, L., Chadwick, M., Dezecache, G., Soussignan, R., & Conty, L. (2013).
Self-relevance appraisal influences facial reactions to emotional body expressions. PLoS One, 8, e55885.
Grèzes, J., Pichon, S., & de Gelder, B. (2007). Perceiving fear in dynamic body
expressions. NeuroImage, 35, 959–967.
Grèzes, J., Wicker, B., Berthoz, S., & de Gelder, B. (2009). A failure to grasp the
affective meaning of actions in autism spectrum disorder subjects. Neuropsychologia, 47, 1816–1825.
Groenewegen, H., & Trimble, M. (2007). The ventral striatum as an interface
between the limbic and motor systems. CNS Spectrums, 12, 887–892.
Grosbras, M. H., & Paus, T. (2006). Brain networks involved in viewing angry hands
or faces. Cerebral Cortex, 16, 1087–1096.
Hadjikhani, N., Hoge, R., Snyder, J., & de Gelder, B. (2008). Pointing with the eyes:
The role of gaze in communicating danger. Brain and Cognition, 68, 1–8.
Hajcak, G., Molnar, C., George, M., Bolger, K., Koola, J., & Nahas, Z. (2007). Emotion
facilitates action: A transcranial magnetic stimulation study of motor cortex
excitability during picture viewing. Psychophysiology, 44, 91–97.
Hatfield, E., Cacioppo, J., & Rapson, R. (1993). Emotional contagion. Current
Directions in Psychological Science, 2, 96–99.
Hatfield, E., Cacioppo, J., & Rapson, R. L. (1994). Emotional contagion. Cambridge
England: Cambridge University Press.
Hess, U., Adams, R., & Kleck, R. (2007). Looking at you or looking elsewhere: The
influence of head orientation on the signal value of emotional facial expressions. Motivation & Emotion, 31, 137–144.
Hess, U., & Blairy, S. (2001). Facial mimicry and emotional contagion to dynamic
emotional facial expressions and their influence on decoding accuracy. International Journal of Psychophysiology, 40, 129–141.
Hesse, M. D., Sparing, R., & Fink, G. R. (2008). End or means: The what and how of
observed intentional actions. Journal of Cognitive Neuroscience, 21, 776–790.
Hietanen, J. K., Surakka, V., & Linnankoski, I. (1998). Facial electromyographic
responses to vocal affect expressions. Psychophysiology, 35, 530–536.
Holstege, G. (1991). Descending motor pathways and the spinal motor system:
Limbic and non-limbic components. Progress in Brain Research, 87, 307–421.
Hoshi, E., & Tanji, J. (2004). Functional specialization in dorsal and ventral premotor
areas. Progress in Brain Research, 143, 507–511.
Isenberg, N., Silbersweig, D., Engelien, A., Emmerich, S., Malavade, K., Beattie, B.,
et al. (1999). Linguistic threat activates the human amygdala. Proceedings of the
National Academy of Sciences, 96, 10456–10459.
Jacob, P. (2008). What do mirror neurons contribute to human social cognition?
Mind & Language, 23, 190–223.
Jeannerod, M. (2001). Neural simulation of action: A unifying mechanism for motor
cognition. NeuroImage, 14, S103–S109.
Jeannerod, M. (2006). Motor cognition: What actions tell the self. Oxford: Oxford
University Press.
Jola, C., Abedian-Amiri, A., Kuppuswamy, A., Pollick, F. E., & Grosbras, M. H.
(2012). Motor simulation without motor expertise: Enhanced corticospinal
excitability in visually experienced dance spectators. PLoS One, 7, e33343.
Jürgens, U. (1984). The efferent and afferent connections of the supplementary
motor area. Brain Research, 300, 63–81.
Kanakogi, Y., & Itakura, S. (2011). Developmental correspondence between action
prediction and motor ability in early infancy. Nature Communications, 2.
Kaufmann, L., & Clément, F. (2007). How culture comes to mind: From social
affordances to cultural analogies. Intellectica, 2–3, 221–250.
Kennedy, D. P., & Adolphs, R. (2012). The social brain in psychiatric and neurological
disorders. Trends in Cognitive Sciences, 16, 559–572.
Kilner, J., Friston, K., & Frith, C. (2007). Predictive coding: An account of the mirror
neuron system. Cognitive Processing, 8, 159–166.
Korzeniewska, A., Kasicki, S., & Zagrodzka, J. (1997). Electrophysiological correlates
of the limbic-motor interactions in various behavioral states in rats. Behavioural
Brain Research, 87, 69–83.
LeDoux, J. E. (2000). Emotion circuits in the brain. Annual Review of Neuroscience, 23, 155–184.
Lee, T. W., Josephs, O., Dolan, R. J., & Critchley, H. D. (2006). Imitating expressions:
Emotion-specific neural substrates in facial mimicry. Social Cognitive and
Affective Neuroscience, 1, 122.
Llamas, A., Avendano, C., & Reinoso-Suarez, F. (1977). Amygdala projections to
prefrontal and motor cortex. Science, 195, 794–796.
Llamas, A., Avendano, C., & Reinoso-Suarez, F. (1985). Amygdaloid projections to the
motor, premotor and prefrontal areas of the cat's cerebral cortex: A topographical study using retrograde transport of horseradish peroxidase. Neuroscience, 15, 651–657.
Lord, C., Rutter, M., Goode, S., Heemsbergen, J., Jordan, H., Mawhood, L., et al. (1989).
Autism diagnostic observation schedule: A standardized observation of communicative and social behavior. Journal of Autism and Developmental Disorders,
19, 185–212.
Loveland, K. (2001). Toward an ecological theory of autism. In: C. K. Burack,
T. Charman, N. Yirmiya, & P. R. Zelazo (Eds.), The development of
autism: perspectives from theory and research (pp. 17–37). New Jersey: Erlbaum
Press.
Macchi, G., Bentivoglio, M., Rossini, P., & Tempesta, E. (1978). The basolateral
amygdaloid projections to the neocortex in the cat. Neuroscience Letters, 9,
347–351.
Magnee, M. J. C. M., Stekelenburg, J. J., Kemner, C., & de Gelder, B. (2007). Similar
facial electromyographic responses to faces, voices, and body expressions.
Neuroreport, 18, 369–372.
Maister, L., Tsiakkas, E., & Tsakiris, M. (2013). I feel your fear: Shared touch between
faces facilitates recognition of fearful facial expressions. Emotion, 13, 7–13.
McGann, M., & De Jaegher, H. (2009). Self-other contingencies: Enacting social
perception. Cognitive Science and Phenomenal, 8, 417–437.
McIntosh, G. J. (2006). Spontaneous facial mimicry, liking, and emotional contagion.
Polish Psychological Bulletin, 37, 31–42.
Mogenson, G. J., Jones, D. L., & Yim, C. Y. (1980). From motivation to action:
Functional interface between the limbic system and the motor system. Progress
in Neurobiology, 14, 69–97.
Molenberghs, P., Hayward, L., Mattingley, J. B., & Cunnington, R. (2012). Activation
patterns during action observation are modulated by context in mirror system
areas. NeuroImage, 59, 608–615.
Moody, E. J., McIntosh, D. N., Mann, L. J., & Weisser, K. R. (2007). More than mere
mimicry? The influence of emotion on rapid facial reactions to faces. Emotion, 7,
447–457.
Morecraft, R. J., McNeal, D. W., Stilwell-Morecraft, K. S., Gedney, M., Ge, J.,
Schroeder, C. M., et al. (2007). Amygdala interconnections with the cingulate
motor cortex in the rhesus monkey. The Journal of Comparative Neurology, 500,
134–165.
Morin, O., & Grèzes, J. (2008). What is “mirror” in the premotor cortex? A review.
Neurophysiologie Clinique/Clinical Neurophysiology, 38, 189–195.
Mukamel, R., Ekstrom, A. D., Kaplan, J., Iacoboni, M., & Fried, I. (2010). Single-neuron
responses in humans during execution and observation of actions. Current
Biology, 20, 750–756.
Murata, A., Gallese, V., Luppino, G., Kaseda, M., & Sakata, H. (2000). Selectivity for
the shape, size, and orientation of objects for grasping in neurons of monkey
parietal area AIP. Journal of Neurophysiology, 83, 2580–2601.
Niedenthal, P. M. (2007). Embodying emotion. Science (New York, N.Y.), 316,
1002–1005.
Niedenthal, P. M., Barsalou, L. W., Winkielman, P., Krauth-Gruber, S., & Ric, F. (2005).
Embodiment in attitudes, social perception, and emotion. Personality and Social
Psychology Review, 9, 184–211.
Nummenmaa, L., Engell, A. D., von dem Hagen, E., Henson, R. N. A., & Calder, A. J.
(2012). Autism spectrum traits predict the neural response to eye gaze in
typical individuals. NeuroImage, 59, 3356–3363.
Oliveri, M., Babiloni, C., Filippi, M. M., Caltagirone, C., Babiloni, F., Cicinelli, P., et al.
(2003). Influence of the supplementary motor area on primary motor cortex
excitability during movements triggered by neutral or emotionally unpleasant
visual cues. Experimental Brain Research, 149, 214–221.
Panksepp, J. (1998). The periconscious substrates of consciousness: Affective states
and the evolutionary origins of the self. Journal of Consciousness Studies, 5,
566–582.
Passingham, R. E. (1993). The frontal lobes and voluntary action. Oxford: Oxford
University Press.
Pichon, S., de Gelder, B., & Grèzes, J. (2008). Emotional modulation of visual and
motor areas by dynamic body expressions of anger. Social Neuroscience, 3,
199–212.
Pichon, S., de Gelder, B., & Grèzes, J. (2009). Two different faces of threat.
Comparing the neural systems for recognizing fear and anger in dynamic body
expressions. NeuroImage, 47, 1873–1883.
Pichon, S., de Gelder, B., & Grèzes, J. (2012). Threat prompts defensive brain
responses independently of attentional control. Cerebral Cortex, 22, 274–285.
Pitcher, D., Garrido, L., Walsh, V., & Duchaine, B. C. (2008). Transcranial magnetic
stimulation disrupts the perception and embodiment of facial expressions. The
Journal of Neuroscience, 28, 8929–8933.
Pouga, L., Berthoz, S., de Gelder, B., & Grèzes, J. (2010). Individual differences in
socioaffective skills influence the neural bases of fear processing: The case of
alexithymia. Human Brain Mapping, 31, 1469–1481.
Pourtois, G., Sander, D., Andres, M., Grandjean, D., Reveret, L., Olivier, E., et al.
(2004). Dissociable roles of the human somatosensory and superior temporal
cortices for processing social face signals. European Journal of Neuroscience, 20,
3507–3515.
Price, J. (2003). Comparative aspects of amygdala connectivity. Annals of the New
York Academy of Sciences, 985, 50–58.
Qin, S., Young, C. B., Supekar, K., Uddin, L. Q., & Menon, V. (2012). Immature
integration and segregation of emotion-related brain circuitry in young
children. Proceedings of the National Academy of Sciences, 109, 7941–7946.
Raos, V., Umiltá, M. A., Murata, A., Fogassi, L., & Gallese, V. (2006). Functional
properties of grasping-related neurons in the ventral premotor area F5 of the
Macaque Monkey. Journal of Neurophysiology, 95, 709–729.
Reid, V., Belsky, J., & Johnson, M. (2013). Infant perception of human action: Toward
a developmental cognitive neuroscience of individual differences. Cognition,
Brain, Behavior, 9, 210.
Rietveld, E., De Haan, S., & Denys, D. (2012). Social affordances in context: What is it that we are bodily responsive to? Behavioral and Brain Sciences (commentary).
Rizzolatti, G., & Fadiga, L. (1998). Grasping objects and grasping action meanings:
The dual role of monkey rostroventral premotor cortex (area F5). Novartis
Foundation Symposium, 218, 95.
Rizzolatti, G., Fogassi, L., & Gallese, V. (2001). Neurophysiological mechanisms
underlying the understanding and imitation of action. Nature Reviews Neuroscience, 2, 661–670.
Rizzolatti, G., & Sinigaglia, C. (2010). The functional role of the parieto-frontal
mirror circuit: Interpretations and misinterpretations. Nature Reviews Neuroscience, 11, 264–274.
Roy, A. K., Shehzad, Z., Margulies, D. S., Kelly, A. M. C., Uddin, L. Q., Gotimer, K., et al.
(2009). Functional connectivity of the human amygdala using resting state
fMRI. NeuroImage, 45, 614–626.
Sartori, L., Becchio, C., Bulgheroni, M., & Castiello, U. (2009). Modulation of the
action control system by social intention: Unexpected social requests override
preplanned action. Journal of Experimental Psychology: Human Perception and
Performance, 35, 1490–1500.
Sato, W., Kochiyama, T., Yoshikawa, S., Naito, E., & Matsumura, M. (2004). Enhanced
neural activity in response to dynamic facial expressions of emotion: An fMRI
study. Brain Research. Cognitive Brain Research, 20, 81–91.
Saygin, A. P. (2007). Superior temporal and premotor brain areas necessary for
biological motion perception. Brain, 130, 2452–2461.
Schutter, D., & Honk, J. (2009). The cerebellum in emotion regulation: A repetitive
transcranial magnetic stimulation study. Cerebellum, 8, 28–34.
Schutter, D. J. L. G., Hofman, D., & van Honk, J. (2008). Fearful faces selectively
increase corticospinal motor tract excitability: A transcranial magnetic stimulation study. Psychophysiology, 45, 345–348.
Sebanz, N., Knoblich, G., & Prinz, W. (2003). Representing others’ actions: Just like
one's own? Cognition, 88, B11–B21.
Shaw, D. J., Grosbras, M. H., Leonard, G., Pike, G. B., & Paus, T. (2012). Development
of the action observation network during early adolescence: A longitudinal
study. Social Cognitive and Affective Neuroscience, 7, 64–80.
Sripanidkulchai, K., Sripanidkulchai, B., & Wyss, J. M. (1984). The cortical projection
of the basolateral amygdaloid nucleus in the rat: A retrograde fluorescent dye
study. Journal of Comparative Neurology, 229, 419–431.
Tamietto, M., Castelli, L., Vighetti, S., Perozzo, P., Geminiani, G., Weiskrantz, L., et al.
(2009). Unseen facial and bodily expressions trigger fast emotional reactions.
Proceedings of the National Academy of Sciences, 106, 17661–17666.
Thioux, M., Gazzola, V., & Keysers, C. (2008). Action understanding: How, what and
why. Current Biology, 18, R431–R434.
Tomassini, V., Jbabdi, S., Klein, J. C., Behrens, T. E. J., Pozzilli, C., Matthews, P. M., et al.
(2007). Diffusion-weighted imaging tractography-based parcellation of the
human lateral premotor cortex identifies dorsal and ventral subregions with
anatomical and functional specializations. Journal of Neuroscience, 27,
10259–10269.
Van Bavel, J. J., & Cunningham, W. A. (2012). A social identity approach to person
memory: Group membership, collective identification, and social role shape
attention and memory. Personality and Social Psychology Bulletin, 38,
1566–1578.
Van den Stock, J., Tamietto, M., Sorger, B., Pichon, S., Grèzes, J., & de Gelder, B.
(2011). Cortico-subcortical visual, somatosensory, and motor activations for
perceiving dynamic whole-body emotional expressions with and without
striate cortex (V1). Proceedings of the National Academy of Sciences of the United
States of America, 108, 16188–16193.
van Loon, A. M., van den Wildenberg, W. P. M., & van Stegeren, A. H. (2010).
Emotional stimuli modulate readiness for action: A transcranial magnetic
stimulation study. Cognitive, Affective and Behavioral Neuroscience, 10(174–181).
Van Overwalle, F. (2008). Social cognition and the brain: A meta-analysis. Human
Brain Mapping, 30, 829–858.
Voon, V., Brezing, C., Gallea, C., Ameli, R., Roelofs, K., LaFrance, W. C., et al. (2010).
Emotional stimuli and motor conversion disorder. Brain, 133, 1526–1536.
Vuilleumier, P., & Pourtois, G. (2007). Distributed and interactive brain mechanisms
during emotion face perception: Evidence from functional neuroimaging.
Neuropsychologia, 45, 174–194.
Warren, J. E., Sauter, D. A., Eisner, F., Wiland, J., Dresner, M. A., Wise, R. J. S., et al.
(2006). Positive emotions preferentially engage an auditory-motor mirror
system. The Journal of Neuroscience, 26, 13067–13075.
Whalen, P. J., Shin, L. M., McInerney, S. C., Fischer, H., Wright, C. I., & Rauch, S. L.
(2001). A functional MRI study of human amygdala responses to facial
expressions of fear versus anger. Emotion, 1, 70–83.
Williams, J. H. G., Whiten, A., Suddendorf, T., & Perrett, D. I. (2001). Imitation, mirror
neurons and autism. Neuroscience and Biobehavioral Reviews, 25, 287–295.
Wilson, M., & Knoblich, G. (2005). The case for motor involvement in perceiving
conspecifics. Psychological Bulletin, 131, 460–473.
Yabar, Y. C., & Hess, U. (2006). Display of empathy and perception of out-group
members. New Zealand Journal of Psychology, 36, 42–50.
Is emotional transmission equivalent to contagion?
Following the Le Bonian tradition, recent psychological literature presents the phenomenon
of emotional transmission as analogous to a process of contagion: it is thought to be largely
automatic and irrepressible, both from the standpoint of the contaminator (who does not
have the intention of contaminating others), and from that of the receivers (who end up being involuntarily and unintentionally contaminated). In this respect, emotional transmission
is often characterized as automatic behavior (Hatfield et al., 1994), i.e., behavior triggered
by external events or features of the environment, in the absence of any control by the
subject (Bargh & Williams, 2006). In what follows, I tend to avoid the concept of "automaticity" and prefer to use that of "irrepressibility". Although they may appear semantically
interchangeable, these concepts encompass two distinct ideas: a process can be automatic (it
is triggered whenever a certain stimulus is present) but not irrepressible (it can be inhibited
even at an early stage).
In fact, both Le Bon and the current tradition are largely inaccurate when they describe the
process of emotional transmission as being a contagious process. Many factors are known
to influence the reception of emotions in observers. This is consistent with considering the
process of emotional transmission as a process of communication of information. In order to be stable, any medium of communication should allow both emitters and receivers to benefit from exchanging information (Krebs & Dawkins, 1984): if transmitting information is too costly (e.g., if emotional displays constantly reveal internal states that had better be concealed), emitters will stop emitting; conversely, if receiving information is detrimental to receivers (e.g., being contaminated by the joy of one's enemy), receivers will stop attending to the signals of emitters. To consider emotional transmission as a contagious and irrepressible process entails that receivers will constantly be contaminated by the emotions of emitters. This
is implausible as it threatens the very stability of emotional transmission, which ultimately
is an instance of emotional communication. To keep emotional transmission stable, proximal
mechanisms must exist to make responses flexible over the course of emotional interactions.
Before addressing this question further, it is important to specify to what extent emotional
transmission can be construed as a process of transmission of information.
Emotional transmission as a process of influencing others
It is intuitively tempting to base our understanding of the transmission of emotional information on a human language-based metaphor, which consists in thinking that, over the course
of an emotional interaction, emitters provide meaning that is coded into a given structure (the signal), and that receivers have to decode the signal to retrieve the meaning (Green & Marler, 1979). Such an application of linguistics-inspired features to emotional communication suffers from a major inconsistency, namely the absence of clear representational parity between emitters and receivers. Indeed, the function of producing, say, anger expressions is not to share a certain piece of information about the emotional state of the emitter but rather to exert threat over the recipient.
Similar problems have been posed for non-human communication by the primatologist Drew
Rendall and his colleagues (Rendall, Owren, & Ryan, 2009). They promote the view that
animal signals are best understood as "tools for influencing the affect and behavior of others" (Rendall & Owren, 2010), a definition which is consistent with evolutionary biologists’
definition of communication. They define communication as a process of influencing or affecting other organisms rather than the transmission of a concrete entity ("the mental and
emotional states of the sender") which would have to be decoded from the signal itself
(Maynard-Smith & Harper, 2004; Scott-Phillips, 2008). The use of such a definition has further led Drew Rendall and colleagues to the provocative claim that vervet monkeys' leopard-related calls (famously described in Seyfarth, Cheney, & Marler, 1980) would mean "run into a tree!", if they ever happened to carry information at all (Scott-Phillips, 2010). Thus, non-human signals can be said to carry a form of imperative content (Sterelny, 2011), rather than a descriptive one. In doing so, Rendall and colleagues stress the idea that non-human signals do not serve to convey meaning but are rather means of influencing others.
Such signals would be effective through the exploitation, by their producers, of sensory biases
in the receivers. Producers would produce signals that have evolved to match preexisting
sensory biases in order to generate desirable effects (behavioral responses) in the receiver.
Good examples of these signals are the squeaks, shrieks and screams typically produced by
many primate species. Because of their highly aversive character, these screams can effectively have the desired effect on the receiver, such as forcing a mother to nurse, or preventing aggressors from carrying on with their attacks (Rendall et al., 2009; Rendall & Owren, 2010).
Such a definition of communication may also be appropriate when dealing with the transmission of emotional information. Yet, where the question of the meaning of emotional signals
is directly addressed in the literature, a linguistically-inspired information frame is generally
employed, and entails representational parity between emitters and receivers. The classical
view indeed assumes that, during the course of any emotional communication event (e.g., somebody expressing fear), observers are committed to the job of fully interpreting others' emotional life through the use of a set of decoding processes whose function is to extract a meaning out of a structure that has previously been encoded by the producer (Scherer, 2009). As a consequence, emotion theorists would be better off considering emotional communication as a process of influence (where emitters try to exert pressure on recipients) rather than as a process of exchange of information (which ultimately supposes
representational parity between emitters and recipients).
Emotional transmission ≠ contagion
As mentioned above, if emotional transmission is considered as a form of communication, it follows that the process cannot be completely equated with contagion. In any communication system, the cognitive systems of emitters and receivers are tuned to selectively emit and respond to others' signals. This has dramatic consequences for our understanding of so-called
emotional “contagion”: the reception of emotional information cannot be irrepressible.
This, however, does not mean that emitters and receivers have control over their acts during
the course of emotional communication: features of the cognitive system may indeed have
been selected to selectively produce and react to emotional displays without the subjects
experiencing voluntary control (such feelings being situated at the proximal level of analysis).
This issue is the subject of an article I have co-authored with Dr. Hugo Mercier and Dr.
Thom Scott-Phillips, which is to be found in the international peer-reviewed journal Journal
of Pragmatics (July 2013).
Journal of Pragmatics xxx (2013) xxx–xxx
An evolutionary approach to emotional communication
Guillaume Dezecache a,b,*, Hugo Mercier c,**, Thomas C. Scott-Phillips d
a Laboratory of Cognitive Neuroscience (LNC), Inserm U960 & IEC, Ecole Normale Superieure (ENS), 75005 Paris, France
b Institut Jean Nicod (IJN), UMR 8129 CNRS & IEC, Ecole Normale Superieure & Ecole des Hautes Etudes en Sciences Sociales (ENS-EHESS), 75005 Paris, France
c CNRS, L2C2, UMR5304, Institut des Sciences Cognitives (ISC), 69675 Bron Cedex, France
d School of Psychology, Philosophy and Language Sciences, University of Edinburgh, Edinburgh EH8 9AD, United Kingdom
Received 8 May 2012; received in revised form 18 June 2013; accepted 19 June 2013
Abstract
The study of pragmatics is typically concerned with ostensive communication (especially through language), in which we not only
provide evidence for our intended speaker meaning, but also make manifest our intention to do so. This is not, however, the only way in
which humans communicate. We also communicate in many non-ostensive ways, and these expressions often interplay with and
complement ostensive communication. For example, fear, embarrassment, surprise and other emotions are often expressed with
linguistic expressions, which they complement through changes in prosodic cues, facial and bodily muscular configuration, pupil
dilatation and skin colouration, among others. However, some basic but important questions about non-ostensive communication, in
particular those concerned with evolutionary stability, are unaddressed. Our objective is to address, albeit tentatively, this issue, focusing
our discussion on one particular class of non-ostensive communication: emotional expressions. We argue that existing solutions to the
problem of stability of emotional communication are problematic and we suggest introducing a new class of mechanisms---mechanisms of
emotional vigilance---that, we think, more adequately accounts for the stability of emotional communication.
© 2013 Elsevier B.V. All rights reserved.
Keywords: Evolution; Ostensive communication; Non-ostensive communication; Emotional signals; Vigilance
1. Introduction
Communication is ostensive if, when we communicate, we not only provide evidence for our intended speaker
meaning, but we also make manifest our intention to do so (Grice, 1989; Sperber and Wilson, 1995). Much human
communication is ostensive---but we also communicate in many non-ostensive ways, such as body language and various
expressions of emotion. Often, ostensive and non-ostensive behaviours complement one another. For example, fear,
embarrassment, surprise, and other emotions are often expressed ostensively---with, say, linguistic expressions. At the
same time, they are also expressed in non-ostensive ways with, among others, changes in prosodic cues (Frick, 1985),
facial (Ekman, 1993) and bodily muscular configuration (James, 1932), pupil dilatation (Bradley et al., 2008) and skin
colouration (Shearn et al., 1990). These ostensive and non-ostensive expressions are typically expected to be consistent
with one another: we do not verbally express fear but at the same time produce the facial expressions associated with
happiness, or vice versa. If such contradictions occur, observers have to reject one or the other observation as an
unreliable guide to the focal individual’s state of mind.
Although pragmatics is typically concerned with ostensive communication, non-ostensive behaviour also plays an
important role in communication, and it often interacts with ostensive communication in non-trivial ways (e.g., in
conversational contexts, see Proust, 2008). However this is not sufficiently reflected in the literature. Many fundamental
questions about the role of non-ostensive behaviour in communication are unaddressed. In this paper, we specifically
consider the evolution of non-ostensive communication. From an evolutionary perspective, a central question for any
communication system is ‘‘what prevents dishonesty?’’. Although there is a substantial literature devoted to the evolution
of the abilities that allow humans to produce and understand language (e.g., Pinker and Bloom, 1990; Bickerton, 1992;
Jackendoff, 2003), the problem of honesty has not figured greatly in those discussions (but see Dessalles, 2007; Scott-Phillips, 2008a; Sperber et al., 2010). The ‘honesty’ of non-ostensive communication is even less studied, at least from an
evolutionary perspective.
In this paper, our agenda is principally diagnostic: we wish to highlight the evolution of non-ostensive communication
as a topic worthy of future research and suggest tentative answers intended to spur future research. Moreover, our
discussion of non-ostensive communication will be focused on one particular type of non-ostensive communication:
emotional signals. Note that we are specifically concerned with the involuntary use of emotional signals. Emotional
expressions can also be used voluntarily, and this opens up an interesting current area of pragmatic research (see e.g.,
Wharton, 2009).
There are three reasons for our focus on emotional expressions.1 First, as discussed above, they are frequently
displayed alongside ostensive signals, which they complement and interact with (Proust, 2008). Second, the functioning
of emotional expressions has been the focus of much previous research (e.g., Fridlund, 1994; Ekman, 2003a), and so our
discussion can be informed by a wealth of previous work. Third, there have been tentative answers to the issue of the
honesty of emotional communication (e.g., Mortillaro et al., 2012; Mehu et al., 2012; Owren and Bachorowski, 2001;
Hauser, 1997; Ekman, 2003b) but none are wholly satisfactory, for reasons we shall document.
The paper is structured as follows: in the next section (section 2) we introduce a broad evolutionary framework,
including definitions of key terms, providing the scaffolding for subsequent discussion; in section 3 we ask whether
emotional expressions can be seen as communicative at all; in sections 4 and 5 we discuss the risks of deception in
emotional communication, and how they can be avoided; in these sections, we also discuss and reject conventional
hypotheses that have been proposed to account for the stability of emotional communication, and introduce a new class of
mechanisms which, we think, may allow for the stability of emotional communication---mechanisms of emotional vigilance;
in section 6, we discuss the question of why non-ostensive communication persists in humans; finally, in section 7, we
discuss the question of interplay between the so-called emotional vigilance mechanisms, and those of epistemic vigilance
that have previously been introduced by Sperber et al. (2010).
2. Communication and its evolution
We define communication in the following way: communication occurs when an action (a signal) produced by an
individual organism causes a change (a reaction) in another organism, where both the signal and the reaction have been
designed for these purposes (Scott-Phillips, 2008b; Table 1). If the action has been designed for these purposes, but the
reaction has not, then the interaction is coercive; and if the reaction has been designed for these purposes but the action
has not, then the interaction is a cue. The overall situation is summarised in Table 1.
Fig. 1 gives an everyday example of all three types of interaction. The example of signalling/communication in the
figure is ostensive, but the definition applies equally well to non-ostensive communication. Indeed, this framework is a
generalised version of one developed in evolutionary biology (Maynard-Smith and Harper, 2003; Scott-Phillips, 2008b),
which is concerned with the communication systems of a wide variety of different species, almost all of which do not
involve ostension. There are other approaches to defining communication (e.g., Hauser, 1997; Reboul, 2007; see Scott-Phillips, 2008b for a review). We adopt the definition that we do for two reasons. First, it is the only approach that works
across a range of prima facie cases, in the sense that they correspond to our intuitions about what is and is not a signal/
cue/coercive behaviour (Scott-Phillips, 2008b). Second, the clear functional distinction that it makes between cues and
signals is particularly important for questions concerning the evolution and stability of communication systems.
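To make the two-question test behind this definition concrete, it can be written out as a minimal illustrative sketch; the Python names below are ours and are chosen purely for illustration, not part of the original framework.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """An action by one organism and a reaction by another."""
    action_designed_to_cause_reaction: bool   # is causing the reaction a function of the action?
    reaction_designed_to_be_caused: bool      # is being caused by the action a function of the reaction?

def classify(i: Interaction) -> str:
    """Apply the two-question test summarised in Table 1."""
    if i.action_designed_to_cause_reaction and i.reaction_designed_to_be_caused:
        return "signal/response (communication)"
    if i.action_designed_to_cause_reaction:
        return "coercion"  # the action is designed to cause the reaction, the reaction is not designed
    if i.reaction_designed_to_be_caused:
        return "cue"       # the reaction is designed, the action only informs incidentally
    return "no communicative relationship"

# The three interactions of Fig. 1:
print(classify(Interaction(True, True)))    # pushing as a joke for the laughing colleague -> signal
print(classify(Interaction(True, False)))   # pushing the colleague off her chair -> coercion
print(classify(Interaction(False, True)))   # the boss observing the pushing -> cue
```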
From an evolutionary perspective, the classic question in the study of communication is stability (Maynard-Smith and Harper, 2003; Searcy and Nowicki, 2007). Signallers should presumably evolve to send signals leading to responses that are in their best interests. Yet these interests may conflict with the receivers' best interests. If such is the case, receivers should in turn evolve not to attend to the signal, and this would then lead the system to collapse.
1. Expressions and signals are used as synonyms.
Table 1
Definitions of signal/response, cue and coercion (Y = yes; N = no). See main text for discussion.

                   Function of action to     Function of reaction to
                   cause reaction?           be caused by action?
Signal/response    Y                         Y
Cue                N                         Y
Coercion           Y                         N

Adapted from Scott-Phillips (2008a).
Fig. 1. Everyday examples of the distinction between signal/response, cue and coercion (from Scott-Phillips and Kirby, 2013): the image depicts a young man (in the centre) pushing his colleague from her chair. This act involves three kinds of interaction with the audience. The first interaction, between the young man and the colleague he is pushing, is an example of coercion. The second interaction, involving the young man and the colleague he is laughing with, is a case of communication: the act of pushing is a signal whose purpose is to affect the female colleague who is witnessing it, and her smile is the response. Finally, in the third interaction, between the young man and his boss (at the left side of the image), the pushing is a cue: it informs the boss about the behaviour of his employee, even though this was not its function. Note that the example of signalling behaviour in this figure is ostensive, but the definition also applies to non-ostensive communicative behaviours, as we will argue throughout the paper.
The same logic can also apply over individual lifetimes: if an individual’s communication is regularly unreliable (for example, because she is dishonest), then others will learn not to pay too much attention, if any, to what the focal individual has to say. This is exactly the outcome described in Aesop’s fable of the Boy Who Cried Wolf.
The evolution and stability of communication thus presents a strategic problem: what prevents widespread deception,
and the consequent collapse of the system? (Note that we are using ‘deception’ in functional terms. There is deception
when one organism exploits another organism’s sensitivity to certain signals to its own benefit. As such, and as long as it
has to do with non-ostensive communication,2 deception does not entail volition or consciousness from the sender.)
Whatever the answers to these questions are in any particular case, the consequence of these strategic concerns is that
stable communication systems should be beneficial for both parties (Scott-Phillips, 2010b). If they were not then one party
would stop emitting or attending to the signal.
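This collapse dynamic can be illustrated with a deliberately crude sketch, assuming a receiver who simply tracks the proportion of past signals that turned out to be reliable; the probabilities and threshold below are arbitrary, and nothing here is claimed to model the actual evolutionary process.

```python
import random

def rounds_attended(honest_prob: float, rounds: int = 50, trust_threshold: float = 0.5) -> int:
    """Toy sketch: a receiver attends to a sender's alarm calls only while the
    observed reliability of past calls stays above a threshold (checked once a
    few calls have been observed). Returns how many rounds the receiver attends."""
    reliable_calls = 0
    for t in range(1, rounds + 1):
        reliable_calls += random.random() < honest_prob  # was this call truthful?
        reliability = reliable_calls / t
        if t >= 5 and reliability < trust_threshold:
            return t  # the receiver stops attending: the system collapses
    return rounds

random.seed(0)
print(rounds_attended(honest_prob=0.9))  # a mostly honest sender tends to keep the audience's attention
print(rounds_attended(honest_prob=0.2))  # a mostly dishonest sender tends to lose it quickly
```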
It is important to recognise that this is a problem for all evolved communication systems, and not just those where signal
production is voluntary, or intentional. This is because it is a problem at the ultimate, rather than proximate, level of
analysis. Ultimate explanations are concerned with why a behaviour exists; proximate explanations with how it works (see
Scott-Phillips et al., 2011 for extensive discussion). The dynamic of natural selection leads to organisms whose behaviour
is designed to maximise their inclusive fitness (Grafen, 2006), and ultimate explanations of behaviour explain how a
particular behaviour contributes to that. For example, if lying is explained in terms of how it will lead to beneficial outcomes
for the speaker, then that is an ultimate explanation if the beneficial outcomes eventually lead to positive fitness
consequences, on average. If, on the other hand, lying is explained in terms of psychological motivations, then this is a
proximate explanation: this sort of explanation is concerned with how the benefits are achieved, i.e., how behaviour
operates. Both ultimate and proximate explanations are complementary and required for a proper understanding of
behaviour (Mayr, 1963; Tinbergen, 1963; Scott-Phillips et al., 2011).
2. While deception in the context of non-ostensive communication does not presuppose any conscious intention to manipulate the receiver, this is not true of ostensive communication, where senders often consciously intend to fool receivers (Maillat and Oswald, 2009).

In the case of emotional communication, solutions to the problem of honesty and stability of emotional communication have often confused the proximal and ultimate levels of explanation by suggesting that emotional expressions are honest
because they are involuntary, being mandatorily associated with a corresponding emotional experience (e.g., the
Duchenne smile3 in Owren and Bachorowski, 2001). The issue is that a proximal mechanism (the involuntariness of
emotional displays) is offered to answer an ultimate question (about the honesty of emotional communication). We detail
below the more specific issues with this answer. Moreover, neglecting the ultimate/proximal distinction can also lead to
other misunderstandings, such as equating signals defined at the ultimate level, as we have done, with signals that result
from intentional, voluntary, and strategic decision-making (which are all proximate level phenomena).
With this in mind, we now turn to the question of what keeps human communication stable. For ostensive
communication, this question receives relatively commonsensical answers. The ultimate reason it is stable is that the
benefits of dishonesty are outweighed by the social costs of ostracism that will follow if one is perceived as a liar or an
otherwise unreliable communicator (Lachmann et al., 2001; Scott-Phillips, 2010a) (this is indeed what happens to the boy
in Aesop’s fable). From a proximate perspective, a suite of cognitive mechanisms allows humans to be vigilant towards
communicated information: we filter the information we receive via communication so that we are not unduly misled.
These mechanisms have recently been termed epistemic vigilance (Sperber et al., 2010). However, with regard to non-ostensive communication, and expressions of emotion in particular, the situation is less immediately clear. We will address this issue in section 4. Before we are able to do that, however, we must address an important preliminary question, about exactly what we mean by emotional displays, and whether they qualify as signals at all.
3. Are emotional expressions genuine signals?
Our goal is to apply the logic of the evolution of communication to emotional expressions. A first and necessary step
must be to establish that emotional expressions are indeed signals, following the definition offered above. Although it
intuitively seems as if the function4 of at least some emotional expressions is communicative---to let others know we need
them when distress is expressed, for example---this assumption will not always be justified. Behaviours often inform others
only incidentally (i.e., they are cues). Indeed, Darwin (1872) suggested that this is exactly what emotions are. For
example, the widening of the eyes created by fear could allow the individual to enlarge her visual field and be better
prepared to react to potential threats (Susskind et al., 2008). The question is whether these pre-existing behaviours
underwent later selection because of any informative function they might have.
How can we show this? A signal entails specific adaptations for the signal in both senders and receivers. The sender
must do more than merely coerce the receivers and the receivers must do more than merely respond to a cue.
Unfortunately, it is extremely difficult to provide conclusive arguments for either of these claims. Instead, two weaker types
of evidence are usually provided in support of the claim that emotional expressions are genuine signals.
The first argument is simply that some traits of emotional expressions are difficult to account for without recourse to
their role in communication. This is quite commonsensical in the case of sadness and joy for instance. Other cases are
more ambiguous. As suggested above, the expression of fear could have adaptive effects as action preparation (Susskind
et al., 2008; Vermeulen and Mermillod, 2010). Similarly, the facial features of disgust can have direct adaptive
consequences, such as narrowing the eyes to prevent exposure to potentially toxic substances (Susskind et al., 2008).
Yet it has been argued that these potential benefits are slight and unable to account for the whole expression. For
instance, Susskind et al. (2008) have shown that the functional importance of sensory acquisition in fearful expressions is
limited to the upper visual field. For the individual producing the display, the benefits in terms of sensory acquisition
enhancement might then be relatively small compared to the costs involved in making the display highly discriminable. If
this is the case, it would be more likely that fearful expressions would have ultimately been selected for a signalling
purpose by virtue of their high discriminability. More generally, the difficulty in accounting for the configuration of emotional
displays in terms of efficient action preparation suggests that action preparation might not be their function (Fridlund,
1994).
It does not follow from this first argument that emotional expressions are adaptive as signals; they could still be mere
accidents. If emotional expressions are signals, they should be designed as such. In particular, they should be sensitive to
the social context. It makes little sense to emit a signal if there is no one to receive it. More complex social modulations
could also be expected: hiding distress from an enemy, concealing envy from a friend, etc. At a very broad level, Dobson
has shown that, among non-human primates, facial mobility (namely, the set of facial movements a species can produce)
is predicted by group size (Dobson, 2009). This result suggests that the evolution of facial mobility on the whole serves
social functions. Other evidence from non-human animals indicates that the expression of fear is socially modulated
(Sherman, 1977; Alcock, 1984; Chapman et al., 1990).

3. Duchenne smiles are considered as genuine smiles (Ekman et al., 1990), smiles that are associated with the experience of joy, and which differ from faked or ‘polite’ smiles that do not involve the activation of the palpebral part of the orbicularis oculi.
4. Our use of the concept of function reflects that of proper function, following Millikan (1989).

In humans, it has been shown that the social context modulates
the expression of pain and distress (Badali, 2008) and sadness (Zeman and Garber, 1996). Similarly, smiling has been
shown to be very heavily socially modulated (Fridlund, 1994). For instance, Kraut and Johnston (1979) observed the
smiling behaviour of people in various social settings, and concluded that smiling was so strongly associated with social
motivation that its link with an internal experience of positive feelings was tenuous.
The evidence that emotional expressions are socially modulated is very suggestive that they are indeed signals.
However, prima facie, it is also compatible with the hypothesis that they are but a form of coercion. If emotional
expressions have evolved to influence the behaviour of other individuals, they should also be expected to be socially
modulated. For instance, observers may have started to respond to some purely accidental features of distress as cues
that help was required. It would then have been beneficial to exaggerate distress cues in order to influence observers
more easily, and the behaviour would have become merely coercive. If it was in the observers’ best interest to specifically
attend to these new, exaggerated behaviours, they could evolve specific mechanisms to do so. Thus, to complete the demonstration that emotional expressions are signals, it would be necessary to show that observers are not
merely responding to cues. We know of no strong empirical evidence supporting this claim. However, in many cases the
usefulness of the original cue, whatever that may have been, is limited. It would be extremely surprising if the mechanisms
designed to detect and react to the emotional expression were still only targeting that cue. To take an example, the raising
of the lips triggered by anger cannot be a reliable cue that we are about to be bitten. Clearly, the detection of the cue has
evolved beyond its original function.
The arguments presented here may fall short of a strong demonstration that emotional expressions are genuine signals.
Yet we consider them to be sufficiently suggestive to at least shift the burden of proof to those who would claim that
emotional expressions have no signalling function. It is now possible to turn to the challenges raised by emotional
expressions due to their communicative character.
4. The risks of emotional signals and how to ward them off
Like all communication, emotional signals can be dangerous. In particular, receivers run the risk of being deceived by
senders. An individual who would always submit to anger displays or help in response to signals of sadness would be
easily abused. In section 2, we distinguished between proximate and ultimate answers to this question, and emphasised
that both are needed for a proper understanding of a trait. As we will shortly discuss, previous psychological research on
emotional displays has suggested and described a range of proximate mechanisms that may be involved. However the
ultimate question of how these mechanisms maintain the stability of emotional communication has received less attention.
More precisely, as detailed below, previous explanations are problematic as they are based on the dubious assumption
that there is an unfakeable relationship between emotional display and emotional experience.
Evolutionary theory suggests three broad classes of ways in which communication systems can be kept evolutionarily
stable at the ultimate level (Davies et al., 2011; Maynard-Smith and Harper, 2003): (i) individuals may share a common
interest, such that there is no incentive to lie; (ii) there may be a causal, unfakeable relationship between signal form and
signal meaning (an index); or (iii) there may be costs associated with the signal. These costs may in turn be either
handicaps, where the costs are associated with the production of the signal itself, and are paid by honest signallers as a
guarantee of honesty (Zahavi, 1975; Grafen, 1990; Godfray, 1991); or they may be deterrents, where the costs are
associated with the consequences of the signal, and are hence paid by dishonest signallers (Lachmann et al., 2001). The
question is: which of these most likely describes emotional communication?
The most common answer has been to rely on explanation (ii), stressing the fact that producing dishonest emotional
signals can be very difficult (see Owren and Bachorowski, 2001, 2003; Hauser, 1997 for crying). At a proximal level of
explanation, Ekman and his colleagues have tried to demonstrate that some emotional signals---such as the famous
Duchenne smile---are practically impossible to voluntarily fake (Ekman et al., 1980). The logic behind this argument is that
the honesty of emotional signals is guaranteed by the lack of voluntary control. Someone who would want to fake genuine
joy, for instance, would simply be unable to do so. As a result, at the proximal level, receivers would be certain at least that
when an emotion is expressed, it is genuine. By contrast, Ekman allows for the possibility that the suppression of (some)
emotional signals can be learned in the form of ‘display rules’ (Ekman et al., 1969). Ekman may be right that the main
danger faced by receivers is not the voluntary inhibition but the voluntary production of emotional signals. Still, even if we
assume that he is right and emotional expressions are, most of the time, involuntarily produced, his answer is unsatisfying
for at least three reasons.
The first reason is that this explanation lies at the proximal level of analysis and says little or nothing about the ultimate
one. The problem of honesty occurs regardless of what the proximate mechanism is; that is precisely why it is the defining
problem of animal signalling theory, where the vast majority of signals, if not all of them, are ‘involuntary’ (see Maynard-Smith and Harper, 2003).
The second reason that makes accounts relying on lack of voluntary control unconvincing is that they are not
evolutionarily plausible. As pointed out by Frank (1988) and Fridlund (1994), if the voluntary control of emotional signals
had brought fitness benefits, it would have evolved. Indeed, there appears to be no essential physiological constraint that
would have prevented natural selection from selecting the ability to voluntarily produce emotional displays. The voluntary
use of emotional displays is evolutionarily plausible as it involves structures that already exist (neural and motor pathways for voluntary control over most facial muscles (Rinn, 1984)). In fact, the ‘‘involuntariness’’ argument rests upon a single example, one that only concerns facial expression and neglects other muscular events that play a crucial role in
emotional attribution (such as bodily movements, see: De Gelder, 2006; Grèzes et al., 2007): the mention of the orbicularis
oculi, a muscle that is involved in blinking and in the production of so-called Duchenne smiles, and whose palpebral part is
considered to be impossible to activate voluntarily. Yet, this is one of the few examples of muscles that cannot be activated
voluntarily and that play a substantial role in emotional communication. Moreover, as Ekman (2003b) acknowledges, the
characteristics that may allow observers to distinguish between faked and spontaneous emotional expressions are subtle
(i.e., morphology, symmetry, duration, speed of onset, apex overlap, ballistic trajectory and overall cohesion of the display
given the context) and one has to carefully pay attention to them in order to detect liars. In fact, it is still to be shown that the
non-credibility of a display, investigated in laboratory settings (e.g., Mehu et al., 2012) where participants are urged to pay
a lot of attention to the emotional displays, can reliably and rapidly be detected in more ecological contexts. Together,
these elements cast serious doubt on the idea that the difficulty of controlling emotional displays can be part of the
explanation for the honesty of emotional communication.
The third (and probably the most serious) problem is that voluntarily faked emotional signals are not always the main threat to receivers. Take anger as an example. Let’s assume that the function of anger expressions is to signal a readiness to inflict costs on another individual if that individual fails to submit in some way. If the response to anger displays were
automatic submission, a receiver would clearly be at risk of senders voluntarily expressing anger to make them submit for
no good reason. But the receiver would also be at risk if senders were actually angry, but not in a position to follow up on
their threat. Just as evolution could have led to the development of voluntary control of emotions, evolution could have led
to the development of ‘fake’ emotions. For instance, senders could become genuinely angry even when they are not really
willing to engage in a potentially costly confrontation. The emotion, including its cognitive, physiological and expressive
correlates would be exactly similar to anger, except that if the receiver failed to submit, the sender would not assault her.
We may note another implausibility in Ekman’s account, one that relates to the costs potentially incurred by senders
instead of receivers. As noted by Fridlund ‘‘any reasonable account of signalling must recognise that signals do not evolve
to provide information detrimental to the signaller.’’ (1994, p. 132). For instance, if expressing distress in some
circumstances could regularly hurt a sender’s interests---by making enemies aware of one’s weaknesses, say---then the
expression of distress could not be automatic; it would have to be modulated, whether by voluntary control or not. We
mentioned above that Ekman allows for the learning of display rules to inhibit the expression of emotions. Yet this is more
likely to be a cultural innovation than the built-in mechanism we should expect.
Besides the explanation based on the involuntariness of emotional displays, another type of explanation can be found
in the literature. It suggests that certain emotional signals are handicaps, and are, as a consequence, honest. Such an
explanation has, for instance, been offered for tears (Hasson, 2009; Hauser, 1997), which are indeed difficult to produce at will. Because they considerably handicap perception and are not easily fakeable, tears honestly signal one’s distress. Such explanations face the same problem as above: evolution could have favoured the deliberate use of tears whenever it is in the interest of the signaller.
The alternative explanation we suggest is that receivers are endowed with a suite of mechanisms designed to
modulate their responses to emotional signals. These mechanisms might be termed emotional vigilance, in order to
emphasise that the functional role they play is equivalent to the role played by mechanisms for epistemic vigilance (see
section 2) in ostensive communication. However, it should be emphasised that we do not mean to suggest that the
mechanisms involved in the two processes are similar, nor even that defence against misleading emotional signals
necessarily requires high level cognitive abilities. Our only objective with this term is to draw attention to the functional
equivalence of the two sets of mechanisms.
Mechanisms of emotional vigilance are confronted with a complex task. Figuring out when it is beneficial to respond to
any given emotional signal requires integrating numerous variables such as the type of signal, its intensity, its source, as
well as many features of the specific context in which it is emitted (e.g., Barrett et al., 2007; Barrett and Kensinger, 2010). A
child’s extreme anger display when she is told that she cannot have a second serving of ice cream should not elicit
submission; a raised eyebrow by a mafia Don may.
A complete analysis of the mechanisms of emotional vigilance would therefore require a lengthy emotion by emotion
analysis, which is not within the scope of this article. Indeed, one of the strengths of an explanation based on mechanisms
of emotional vigilance is that it does not rely on one or a few very specific examples, such as the Duchenne smile or tears.
Instead, it can readily extend its logic to all emotional signals, even if we should expect different heuristics to be at play for
different emotions.
Two general dimensions of vigilance, likely to be observed for all emotions to varying degrees, are delineated. In the
case of epistemic vigilance, it has proven useful to distinguish between issues of competence and issues of benevolence
(e.g., Mascaro and Sperber, 2009; see also Sperber et al., 2010). An ostensive message can be misleading either
because of the deceitful intent of the sender or because she is merely mistaken. In both cases, caution should be exercised regarding the message. The same distinction can be applied to emotional vigilance, as a first step towards more specific
characterizations.
In the case of epistemic vigilance competence can be seen as a relatively objective measure: did the sender form false
beliefs by mistake? But it is far from clear what it means---if it even means anything---for an emotional state to be true. As a
result, issues of competence have to be treated differently for epistemic and emotional vigilance. The closest equivalent of
the notion of an incompetent sender would be an individual who expresses emotional signals that bear no adaptive
relationship whatsoever to the context. The emotional expressions of an individual whose emotional systems would be highly
dysfunctional should not be trusted. Such cases, however, should be relatively rare: severely emotionally impaired
individuals face a steep evolutionary challenge. Moreover, if the disorder is consistent, it should be relatively easy to flag
these individuals as being unreliable and either not pay attention to their emotional signals or at least not react to them in the
typical way.
Other competence concerns that would have likely been more frequent stem from asymmetries between the incentives
of the sender and the receiver. For instance, someone with a strong allergic reaction to bee stings should not be deemed
incompetent for expressing a strong fear in the presence of a bee. This fear signal should not be discounted as it provides
important information regarding the behaviour of that individual and the appropriate course of action to be taken. Yet it
should not produce in observers what is usually thought of as being the automatic reaction to fear, which is fear. The
asymmetry in competence does not need to be permanent or long lasting, as in the case of the allergy. For instance,
someone can be confronted with a dominant individual whose anger she knows to be based on a mistaken belief. Even if
the receiver would otherwise be inclined to submit, it may be worth in this case trying to correct the dominant’s beliefs first.
The second broad issue that mechanisms of emotional vigilance have to deal with is the benevolence of senders. On rare occasions the interests of senders and receivers are perfectly aligned, but in the vast majority of cases there will be
some discrepancy. Some very general metrics can be useful to judge the level of interest alignment. Someone’s interests
are more likely to align with her in-group than her out-group, with a friend than a stranger, with a brother than a third cousin,
etc. Yet even the interests of very close individuals can diverge. When a child expresses pain, there is usually little conflict
of interest with her parents. When she expresses anger for not receiving the latest toys, the interests are much more poorly
aligned. Similarly, couples often have an incentive to misrepresent their emotions to each other. The converse is also true:
the interests of strangers can converge. If we find ourselves stranded on a boat that requires two people for rowing, our
interests can become very much aligned with those of a perfect stranger from a group we may otherwise not deem
trustworthy. Even if general metrics---in-group vs. out-group and the like---can be useful, they must be supplemented by an
assessment of each situation’s specificities.
A crucial difference between competence and benevolence issues is in their evolutionary dynamic. In the case of
competence, there is no selection pressure to deceive receivers. The individual who is allergic to bee stings is not better
off if others also experience fear. By contrast, a stranger who gets angry with us would benefit if we were automatically
submissive. As a result, the latter individual has an incentive to deceive us, for instance with anger displays that
exaggerate the actual threat. There can therefore be an arms race between senders and receivers, with senders trying to
get past the receivers’ vigilance and receivers evolving more complex mechanisms of emotional vigilance. Such an
arms race would not arise in issues of competence. Moreover, the costs incurred by the wrong response to an emotional
signal are likely to be higher when the issue is one of benevolence rather than competence. A deceitful sender might
purposefully try to inflict the maximum cost upon a receiver---by making her experience an emotion at the worst possible
moment---which is not the case for incompetent signals. It is thus reasonable to assume that issues of benevolence rather
than competence were the main driver behind the evolution of emotional vigilance.
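As a purely illustrative sketch, and without any claim about the actual mechanisms involved, a vigilance-modulated response to an anger display might weigh these two dimensions roughly as follows; the scores, weights and thresholds are invented for the example.

```python
def respond_to_anger(intensity: float, competence: float, conflict_of_interest: float) -> str:
    """Toy sketch of a response to an anger display modulated by vigilance rather
    than automatic. All inputs are invented scores in [0, 1]:
    - competence: how able the sender is to follow through on the implied threat;
    - conflict_of_interest: how much the sender would gain from our submission."""
    # Weight the sender's ability to follow through far more than display intensity.
    credibility = 0.8 * competence + 0.2 * intensity
    if conflict_of_interest > 0.7:
        return "be suspicious: the display may exaggerate the actual threat"
    if credibility > 0.6:
        return "take the display seriously (appease or submit)"
    return "discount the display"

# The child denied a second serving of ice cream, and the mafia Don's raised eyebrow:
print(respond_to_anger(intensity=1.0, competence=0.1, conflict_of_interest=0.8))   # no submission
print(respond_to_anger(intensity=0.2, competence=0.95, conflict_of_interest=0.5))  # taken seriously
```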
5. Evidence of mechanisms of emotional vigilance
While it is not possible here to make precise predictions regarding the working of mechanisms of emotional vigilance, we
can make some more general suggestions. At the most general level, reactions to emotional signals are very unlikely to be
automatic, or reflex-like. Instead, they should be heavily modulated by the social context. The competence and benevolence
of the source in each particular context should play a role in the response to emotional signals. If the competence or
benevolence of the source is dubious, the reaction should be either dimmed or adapted to the specific circumstances.
Unfortunately, there is a dearth of relevant evidence. Importantly, the relative paucity of empirical evidence should not
be taken as evidence that there is no or little contextual modulation. Simply, the issue has not received the attention it
deserves. Instead, research has focused on showing that some reactions to emotional signals are automatic (as in cases
of primitive emotional contagion, see Hatfield et al., 1994). Such research would seem to be in direct contradiction with the
present predictions. Yet this contradiction is more apparent than real. Most studies on automaticity in this area bear on
very quick and subtle reactions such as slight facial movements (e.g., Dimberg et al., 2002) or variations in skin
conductance (e.g., Esteves et al., 1994). Our predictions do not bear mainly on such reactions, but on potentially more
costly behaviour. The danger stemming from an automatic reaction to a fear signal, for instance, is unlikely to come from a
micro-contraction of some facial muscles. For these reactions to be evolutionarily relevant, they would have to have a
substantial impact on behaviour. Another problem with most studies of automaticity is that the relevant contextual
modulations are not introduced. Participants have no reason to question the competence or benevolence of the people
depicted in the stimuli, so that claims of automaticity cannot be thoroughly tested.
It may be worth saying a quick word about the supposed cases of emotional contagion that involve seemingly costly
behaviour such as the ‘‘laughter epidemics’’ (e.g., Ebrahim, 1968). If a whole school can start laughing uncontrollably
because emotions spread from student to student, it seems as if the automaticity of emotional responses trumps
emotional vigilance. Such a conclusion would be hasty, for three reasons. First, emotion epidemics are exceedingly rare--that is what makes them so startling. Evidently, a laughter epidemic is not started every time someone laughs. Second,
this type of epidemic only spreads within a closely knit group. In terms of benevolence, these are among the people one
should trust the most. Third, it is possible that expressing these emotions may in fact serve the individuals’ interests at that
particular time. Laughter epidemics can get students out of school for a few days. Other emotion epidemics have given
factory workers a break (see Evans and Bartholomew, 2009). In some contexts, and for some people, emotional vigilance
may therefore have no reason to break the spread of these epidemics. Far from undermining the idea of emotional
vigilance, the characteristics of emotion epidemics are in fact better explained by postulating mechanisms of emotional
vigilance than an automatic response to emotional signals (Mercier, 2013).
Some studies have directly tackled the question of the contextual modulation of reactions to emotional signals. Most of
them bear on issues of benevolence while only a few results shed light on the treatment of competence. For instance,
Zeifman and Brown (2011) have shown that tears are more efficient at conveying sadness when they are shed by adults
than by children or infants. A possible interpretation is that infants and children are much more likely than adults to cry in
situations that would not qualify as sadness, such as anger (for children) or hunger (for infants). In a way, they are treated
as less competent. To the extent that this result would carry over to parents’ reactions to their children vs. adult strangers, it would offer a nice contrasting case with benevolence. In the vast majority of cases, a parent’s interests are more in line with those of her child than with those of a stranger. Yet, because children have emotional reactions that differ from those of
adults, reactions to their emotional signals can be more heavily modulated. More generally, it is important to see
caregivers as active in their interaction with crying infants (see Owings and Zeifman, 2004).
Another interesting piece of evidence comes from Hepach et al. (2012) who have shown that children, as early as
3 years old, modulate empathetic response towards others according to the appropriateness of their distress (where the
target could have been genuinely harmed, could be over-reacting or could signal distress for no reason at all). This result
is especially relevant as it shows that, from very early on, reactions to emotional expressions are not automatic but rather
heavily modulated by contextual cues.
As argued above, competence is not the main issue that receivers have to deal with. Senders whose interests do not
align with receivers generally pose a more critical threat. Determining whose interests align with hers is, for the receiver,
an arduous task. Many types of cues are likely to be taken into account in order to yield an appropriate assessment. Some
of these cues can cover large, fixed categories. Out-group members are less likely to have interests aligned with those of a
receiver than her in-groups. It is thus not surprising that people show different responses to emotions expressed by
members of these two categories. For instance, ‘‘positive responses to fear expressions and negative responses to joy
expressions were observed in outgroup perceivers, relative to ingroup perceivers.’’ (Weisbuch and Ambady, 2008:1; see
also Xu et al., 2009; Gutsell and Inzlicht, 2010; Mondillon et al., 2007; Nugier et al., 2009). Other general markers can be
used to modulate one’s emotional responses. Attitude towards the sender modulates the receiver’s response, such that
when that attitude is negative, the receiver’s facial mimicry can weaken or even become incongruent with the emotion
expressed by the sender (Likowski et al., 2008). Similarity between the sender and the receiver is another important
moderator of the response to emotional signals (Heider, 1982; Feshbach and Roe, 1968; Sullins, 1991; Epstude and
Mussweiler, 2009).
Beyond these traits of the receivers, momentary features of the situation are also taken into account. When an
individual who would otherwise be trusted---the experimenter---behaved extremely rudely towards the participants, they
seemed to rejoice in his misery rather than empathize with it (Bramel et al., 1968). Similarly, Lanzetta and Englis (1989)
told participants that they would be either cooperating or competing in a game. While those set to cooperate showed
empathetic responses to displays of pleasure and distress, those set to compete showed either no reaction or displayed
‘‘counterempathy’’ (p. 534).
6. Why do we still have emotional signals?
So far we have considered ‘‘pure’’ emotional signals, as they are expressed for instance in facial expressions. Yet most
emotional signals are in fact mixed with other types of communication---ostensive communication, mostly. The emotional
tone helps disambiguate ‘‘I’m scared’’ (that I won’t find a job) from ‘‘I’m scared!’’ (someone is breaking into my house).
Moreover, some of these emotional tones are likely to be universal (Scherer et al., 2001; Bryant and Barrett, 2008; Sauter
et al., 2010). The linguistic context also plays an important role in disambiguating emotional signals (Barrett et al., 2007).
These elements point to the co-evolution of emotional signals and ostensive communication in humans; for instance, if
emotional signals are adapted to transmit information with the tone of voice used in spoken communication, they probably
co-evolved with language.5
This essential interplay between ostensive and non-ostensive signals becomes very clear when one considers
conversational situations. As suggested by Proust (2008), non-ostensive signals may play an important role in
transmitting information about one’s own uncertainty in conversational contexts: senders can communicate
hesitation about the information they are conveying (e.g., by rolling one’s eyes); recipients can signal their level of
understanding (e.g., frowning when something that has been said is unclear). Yet it might not always be advantageous
to provide a recipient with sensitive data such as one’s doubts, or as Proust put it, ‘‘evaluations of [one’s own]
incompetence’’; conversely, it might be advantageous for the recipient to communicate false information about one’s
own uncertainty. Both strategies may threaten the stability of communication and the use of meta-cognitive gestures.
Proust’s solution to this puzzle is to argue that the use of meta-cognitive gestures is highly flexible (by selectively
restricting others’ access to one’s own meta-cognitive states), and that meta-cognitive gestures are used in situations
where cooperation is a priori guaranteed. We think that such a solution itself presupposes the combined use of epistemic and emotional vigilance mechanisms that continuously track, over the course of the conversation, possible divergence of interests between signallers and receivers, and regulate the use of meta-cognitive gestures accordingly.
We therefore see Proust’s proposal as also suggesting that such mechanisms are needed to explain the stability of
emotional communication.
Given the continued importance of emotional signals in human communication, we feel entitled to ask a question that
may seem whimsical: why do we still have emotional signals? Ostensive communication clearly has a far greater
expressive potential. In conversational contexts, senders may well signal their uncertainty using appropriate words;
conversely, recipients may signal their lack of understanding using adequate expressions. Yet, non-ostensive signals
continue to play an important role in maintaining conversation. Why is this the case? The answer, we surmise, rests at least
in part on the argument we have developed in the previous sections: the mechanisms of vigilance that help stabilise
emotional signals also explain their continued relevance.
Intuitively, it may seem as if emotional signals still exist simply because they express some things better than, or at
least differently from, ostensive communication. For instance, when someone tells you, in the course of a face-to-face
discussion ‘‘I’m scared’’ with a relatively neutral tone, you don’t infer that she is currently experiencing a high level of fear; maybe she’s worried about her job prospects. It would be hard to convey the level of fear experienced in, say, a home
invasion without using at least the fear tone, and probably the facial expression too: it seems that only emotional signals
can adequately communicate some emotional states. This, however, may be an artefact of our habits. Imagine that you
are chatting with a friend on an instant messaging service. You know she is alone in her house. Interrupting the
conversation, she writes to you that she’s hearing someone breaking in, and then ‘‘I’m scared.’’ We suspect that you would
have no trouble inferring her emotional state, just as if she had said it with the right tone and facial expression in a face-to-face discussion. The reason that a toneless ‘‘I’m scared’’ in face-to-face discussion is not effective at communicating fear
is that the fear tone is expected; in its absence, we interpret the utterance differently.
Still, one could argue that a lot of context is necessary to disambiguate ‘‘I’m scared’’ in the absence of emotional
signals. Emotional signals could therefore be necessary when the context is unclear and there is no time to make it
explicit. Again, we suspect this is not a hard limitation of ostensive communication. It is difficult to imagine why a word or an
expression with the primary meaning ‘‘I am experiencing a high level of fear’’ (and effective in conveying high levels of
arousal) could not have appeared, had it been necessary. One last edge that emotional signals seem to have over
ostensive communication is their speed: a simple facial expression can be sufficient to communicate fear. Maybe even a
monosyllabic expression would have to take a few more milliseconds to be processed, giving indeed a small edge to
emotional signals. Yet this would hardly be critical for most emotional signals such as joy or even anger. And even when
speed is of the essence---in the case of fear maybe---the increased expressivity of ostensive communication would
probably compensate for the slowdown.6
5. Note that this scenario is not incompatible with scenarios linking emotional communication to paralinguistic systems of communication (e.g., Deacon, 1997) that presuppose independent evolution between those systems of communication and those of linguistic communication. Yet, given their constant interplay in human communication (e.g., in conversational contexts; see Proust, 2008), it is most likely that linguistic and paralinguistic systems of communication have co-evolved more recently.
6. Note that the arguments against emotional signals being preserved thanks to their communicative properties carry over to Fridlund’s hypothesis that they serve to convey social motives rather than reveal internal emotional states (Fridlund, 1994).
While it is still possible that the answer to the question of the continued relevance of emotional signals could be found in their communicative power, it is worthwhile to consider other potential explanations. One such explanation is that the mechanisms that help ensure the stability of emotional signals are very specific and could not easily be replaced by those used to ensure the stability of ostensive communication. In particular, the evaluation of ostensive communication relies to
some extent on what can be called ‘‘coherence checking’’ (see Mercier, 2012). Coherence checking consists in pitting
communicated information against background beliefs. If inconsistencies are detected, they make it less likely that the
communicated information is accepted. The efficacy of coherence checking depends on the relative ease of access to
knowledge in the relevant domain. If John tells Sarah something about someone she does not know at all, her coherence
checking mechanisms will not have much to work with. Emotional states are likely to create such asymmetries in access to
information: observers have less access to an individual’s emotional states than the individual herself. This is also true of
the social motives---willingness to aggress, etc.---that emotional signals may communicate (Fridlund, 1994). As a result,
coherence checking is not a very practical way to evaluate emotional signals.
Given that we cannot easily rely on coherence checking to evaluate them, how do emotional signals manage to remain
stable? A possibility is that emotional signals are indices. Their stability would be guaranteed by the unfakeability of the
signals: emotional signals would simply be too costly to fake, making them intrinsically honest. While it is difficult to
discount this hypothesis, as we pointed out earlier, the specific mechanisms that make emotional signals so hard to fake
have remained elusive. The other possibility is that emotional signals remain stable because senders are deterred from
sending too many dishonest signals. For deterrence to be possible, receivers must have some way of reacting
appropriately to emotional signals. If their reactions were fully automatic, not modulated by source or context, all senders
would be equally successful, precluding the possibility of deterrence. Following the arguments and evidence reviewed in
sections 4 and 5, we argue that humans are endowed with mechanisms that allow them to appropriately react to emotional
signals. These mechanisms make deterrence possible and contribute to the stability of emotional signals.
Whether emotional signals remain stable because they are indices or because dishonest signals are deterred,
specialised mechanisms tailored to emotional signals are required. As a result, we surmise that they are in part
responsible for the continued relevance of emotional signals.
7. The interplay between epistemic vigilance and emotional vigilance mechanisms
Sperber et al. (2010) have hypothesised that mechanisms of epistemic vigilance have evolved to protect people from
deleterious communicated information. One possibility is that the term applies to all the mechanisms allowing humans to
perform this function, whether the signals are ostensive or not, emotional or not, etc. In this case, emotional vigilance
would be a special case of epistemic vigilance. Another possibility is that epistemic vigilance mostly refers to ostensive
communication, in which case emotional vigilance could be seen as a companion set of mechanisms. In any case, this is a
purely semantic point.
The more substantial issue is the following: can there be specialised heuristics to ward off deception in non-ostensive
(and, more specifically, emotional) vs. ostensive communication? If yes, then using the term ‘emotional vigilance’ to refer
to the mechanisms that instantiate these heuristics seems warranted. While the issue is ultimately an empirical one, a
strong a priori argument can be offered that such specialised heuristics exist, as so many parameters differ between nonostensive, emotional communication and ostensive communication, from their phylogenetic history, to the format in which
they are encoded, or the cues on which they are based. Moreover, it is even likely that there are heuristics that only apply
to the signals associated with one emotion. While we do not deny that there might be heuristics that are valid for emotional
and ostensive communication, they are likely to interact with more specific ones. Generally, the massively modularist point
of view tacitly adopted by Sperber et al. (2010), supports the existence of distinct mechanisms of emotional vigilance.
8. Conclusion
Our goal in this paper has been to apply the logic of the evolution of communication to emotional expressions. If
emotional expressions are genuine communicative signals, we need to explain what keeps them stable. In other words:
what prevents senders from manipulating receivers, and how do receivers stay safe?
While there might exist a consensual explanation for what keeps ostensive communication stable (namely, that
dishonesty does not pay off given the social costs of ostracism), it is less clear what guarantees the stability of
non-ostensive communication. This state of affairs, we think, is due to the popularity of
Ekman’s view among emotion psychologists: being involuntary, emotional expressions are honest and therefore safe for
receivers to accept. As we have pointed out, this is a proximal account and does not explain why we would not have
evolved the capacity to fake emotional signals. What we need is therefore an explanation at the ultimate level.
At an ultimate level, communication can either be kept stable (i) if senders and receivers share a common interest, (ii) if
signals cannot be faked, or if (iii) signals induce costs. In the case of emotional communication, (i) and (ii) are unlikely. In
this paper, we have shown that the most likely option lies in the third kind of explanation: being equipped with
mechanisms of emotional vigilance, receivers would be able to evaluate the signals they receive and to punish dishonest
signallers who provide them with false information. These mechanisms would act as deterrents: dishonest signallers
would run a risk, at least that of losing their ability to influence receivers in the future. This, we believe, would have led to
the stability of emotional communication.
This position can also have important implications at the proximal level: a broad range of phenomena in social
psychology (e.g., emotional contagion, empathy) are often described as being automatic responses to some stimuli. What
our position predicts, by contrast, is that receivers’ reactions are heavily modulated by their source: being emotionally
vigilant, receivers will selectively react to emotional signals. More empirical work will be needed to better understand
mechanisms of emotional vigilance.
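To make the deterrence argument concrete, here is a minimal toy simulation (not part of the original article; the sender labels, trust-update rule and parameters are purely illustrative assumptions) in which a vigilant receiver tracks each sender's reliability and stops reacting to senders whose alarm signals repeatedly turn out to be false:

import random

def simulate_vigilance(n_rounds=500, p_false_alarm=0.6, threshold=0.5, seed=1):
    """Toy repeated signalling game: the receiver keeps a running estimate of each
    sender's reliability and only reacts to senders above a trust threshold. The
    'honest' sender always signals real dangers; the 'cheater' sends false alarms
    with probability p_false_alarm (illustrative parameters)."""
    random.seed(seed)
    trust = {"honest": 1.0, "cheater": 1.0}      # receiver's reliability estimates
    influence = {"honest": 0, "cheater": 0}      # how often each sender was believed
    for _ in range(n_rounds):
        for sender in trust:
            signal_is_true = (sender == "honest") or (random.random() > p_false_alarm)
            if trust[sender] > threshold:        # emotional vigilance: react only to trusted senders
                influence[sender] += 1
            # the receiver later finds out whether the signalled danger was real
            trust[sender] = 0.9 * trust[sender] + 0.1 * (1.0 if signal_is_true else 0.0)
    return influence

print(simulate_vigilance())   # the cheater rapidly loses its ability to influence the receiver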
References
Alcock, John, 1984. Animal Behaviour: An Evolutionary Approach, 3rd ed. Sinauer, Sunderland, MA.
Badali, Melanie, 2008. Experimenter Audience Effects on Young Adults’ Facial Expressions During Pain. University of British Columbia,
Vancouver.
Barrett, Lisa F., Kensinger, Elizabeth A., 2010. Context is routinely encoded during emotion perception. Psychological Science 21 (4), 595.
Barrett, Lisa F., Lindquist, Kristen A., Gendron, Maria, 2007. Language as context for the perception of emotion. Trends in Cognitive Sciences
11 (8), 327--332.
Bickerton, Derek, 1990. Language & Species. University of Chicago Press, Chicago.
Bradley, Margaret M., Miccoli, Laura, Escrig, Miguel A., Lang, Peter J., 2008. The pupil as a measure of emotional arousal and autonomic
activation. Psychophysiology 45 (4), 602--607.
Bramel, Dana, Taub, Barry, Blum, Barbara, 1968. An observer’s reaction to the suffering of his enemy. Journal of Personality and Social
Psychology 8 (4, Pt. 1), 384--392.
Bryant, Gregory A., Barrett, H. Clark, 2008. Vocal emotion recognition across disparate cultures. Journal of Cognition and Culture 8 (1--2),
135--148.
Chapman, Colin A., Chapman, Lauren J., Lefebvre, Louis, 1990. Spider monkey alarm calls: honest advertisement or warning kin? Animal
Behaviour 39, 197--198.
Darwin, Charles, 1872. The Expression of Emotion in Man and Animals. Murray, London.
Davies, Nicholas B., Krebs, John R., West, Stuart A., 2011. An Introduction to Behavioral Ecology. Wiley-Blackwell.
De Gelder, Beatrice, 2006. Towards the neurobiology of emotional body language. Nature Reviews Neuroscience 7 (3), 242--249.
Deacon, Terrence William, 1997. The Symbolic Species: The Co-evolution of Language and The Brain. WW Norton & Company, New York.
Dessalles, Jean-Louis, 2007. Why We Talk: The Evolutionary Origins of Language. Oxford University Press, New York.
Dimberg, Ulf, Thunberg, Monika, Grunedal, Sara, 2002. Facial reactions to emotional stimuli: automatically controlled emotional responses.
Cognition & Emotion 16 (4), 449--471.
Dobson, Seth D., 2009. Socioecological correlates of facial mobility in nonhuman anthropoids. American Journal of Physical Anthropology 139
(3), 413--420.
Ebrahim, G.J., 1968. Mass hysteria in school children. Notes on three outbreaks in East Africa. Clinical Pediatrics 7 (7), 437.
Ekman, Paul, 1993. Facial expression and emotion. American Psychologist 48 (4), 384--392.
Ekman, Paul, 2003a. Emotions Revealed. Times Books, New York.
Ekman, Paul, 2003b. Darwin, deception, and facial expression. Annals of the New York Academy of Sciences 1000 (1), 205--221.
Ekman, Paul, Sorenson, E. Richard, Friesen, Wallace V., 1969. Pan-cultural elements in facial displays of emotion. Science 164 (3875), 86.
Ekman, Paul, Roper, Gowen, Hager, Joseph C., 1980. Deliberate facial movement. Child Development 51 (3), 886--891.
Ekman, Paul, Davidson, Richard J., Friesen, Wallace V., 1990. The Duchenne smile: emotional expression and brain physiology: II. Journal of
Personality and Social Psychology 58 (2), 342.
Epstude, Kai, Mussweiler, Thomas, 2009. What you feel is how you compare: how comparisons influence the social induction of affect. Emotion 9
(1), 1--14.
Esteves, Francisco, Dimberg, Ulf, Öhman, Arne, 1994. Automatically elicited fear: conditioned skin conductance responses to masked facial
expressions. Cognition & Emotion 8 (5), 393--413.
Evans, Hilary, Bartholomew, Robert E., 2009. Outbreak!: The Encyclopedia of Extraordinary Social Behaviour. Anomalist Books, San Antonio.
Feshbach, Norma D., Roe, Kiki, 1968. Empathy in six-and seven-year-olds. Child Development 39 (1), 133--145.
Frank, Robert H., 1988. Passions Within Reason: The Strategic Role of The Emotions. WW Norton & Co.
Frick, Robert W., 1985. Communicating emotion: the role of prosodic features. Psychological Bulletin 97 (3), 412.
Fridlund, Alan J., 1994. Human Facial Expression: An Evolutionary View. Academic Press, San Diego.
Godfray, H. Charles J., 1991. Signalling of need by offspring to their parents. Nature 352, 328--330.
Grafen, Alan, 1990. Biological signals as handicaps. Journal of Theoretical Biology 144 (4), 517--546.
Grafen, Alan, 2006. Optimization of inclusive fitness. Journal of Theoretical Biology 238 (3), 541--563.
Grèzes, J., Pichon, S., de Gelder, B., 2007. Perceiving fear in dynamic body expressions. Neuroimage 35 (2), 959--967.
Grice, Paul, 1989. Studies in the Way of Words. Harvard University Press.
Gutsell, Jennifer N., Inzlicht, Michael, 2010. Empathy constrained: prejudice predicts reduced mental simulation of actions during observation of
outgroups. Journal of Experimental Social Psychology 46 (5), 841--845.
Hasson, Oren, 2009. Emotional tears as biological signals. Evolutionary Psychology 7 (3), 363--370.
Hatfield, Elaine, Cacioppo, John T., Rapson, Richard L., 1994. Emotional Contagion. Cambridge University Press, New York.
Hauser, Marc D., 1997. The Evolution of Communication. MIT Press, Cambridge, MA.
Heider, Fritz, 1982. The Psychology of Interpersonal Relations. Lawrence Erlbaum, New York.
Hepach, Robert, Vaish, Amrisha, Tomasello, Michael, 2012. Young children sympathize less in response to unjustified emotional distress.
Developmental Psychology 49 (6), 1132--1138.
Jackendoff, Ray, 2003. Foundations of Language: Brain, Meaning, Grammar, Evolution. Oxford University Press, New York.
James, William T., 1932. A study of the expression of bodily posture. Journal of General Psychology 7, 405--437.
Kraut, Robert E., Johnston, Robert E., 1979. Social and emotional messages of smiling: an ethological approach. Journal of Personality and
Social Psychology 37 (9), 1539.
Lachmann, Michael, Szamado, Szabolcs, Bergstrom, Carl T., 2001. Cost and conflict in animal signals and human language. Proceedings of the
National Academy of Sciences 98 (23), 13189--13194.
Lanzetta, John T., Englis, Basil G., 1989. Expectations of cooperation and competition and their effects on observers’ vicarious emotional
responses. Journal of Personality and Social Psychology 56 (4), 543.
Likowski, Katja U., Mühlberger, Andreas, Seibt, Beate, Pauli, Paul, Weyers, Peter, 2008. Modulation of facial mimicry by attitudes. Journal of
Experimental Social Psychology 44 (4), 1065--1072.
Maillat, Didier, Oswald, Steve, 2009. Defining manipulative discourse: the pragmatics of cognitive illusions. International Review of Pragmatics 1
(2), 348--370.
Mascaro, Olivier, Sperber, Dan, 2009. The moral, epistemic, and mind reading components of children’s vigilance towards deception. Cognition
112 (3), 367--380.
Maynard-Smith, John, Harper, David, 2003. Animal Signals. Oxford University Press, New York.
Mayr, Ernst, 1963. Animal Species and Their Evolution. Harvard University Press, Cambridge, MA.
Mehu, Marc, Mortillaro, Marcello, Bänziger, Tanja, Scherer, Klaus R., 2012. Reliable facial muscle activation enhances recognizability and
credibility of emotional expression. Emotion 12 (4), 701--715.
Mercier, Hugo, 2012. The social functions of explicit coherence evaluation. Mind & Society 11 (1), 81--92.
Mercier, Hugo, 2013. Our pigheaded core: how we became smarter to be influenced by other people. In: Sterelny, Kim, Joyce, Richard, Calcott,
Brett, Fraser, Ben (Eds.), Cooperation and Its Evolution. MIT Press, Cambridge.
Millikan, Ruth Garrett, 1989. In defense of proper functions. Philosophy of Science 56 (2), 288--302.
Mondillon, Laurie, Niedenthal, Paula M., Gil, Sandrine, Droit-Volet, Sylvie, 2007. Imitation of in-group versus out-group members’ facial
expressions of anger: a test with a time perception task. Social Neuroscience 2 (3--4), 223--237.
Mortillaro, Marcello, Mehu, Marc, Scherer, Klaus, 2012. The evolutionary origin of multimodal synchronisation and emotional expression. In:
Altenmüller, Eckart, Schmidt, Sabine, Zimmermann, Elke (Eds.), The Evolution of Emotional Communication: From Sounds in Nonhuman
Mammals to Speech and Music in Man. Oxford University Press, Oxford.
Nugier, Armelle, Niedenthal, Paula M., Brauer, Markus, 2009. Influence de l’appartenance groupale sur les réactions émotionnelles au contrôle
social informel. Année Psychologique 109 (1), 61.
Owings, Donald H., Zeifman, Debra M., 2004. Human infant crying as an animal communication system: insights from an assessment/
management approach. In: Kimbrough Oller, D., Griebel, Ulrike (Eds.), Evolution of Communication Systems: A Comparative Approach. MIT
Press, Cambridge, MA, pp. 151--170.
Owren, Michael, Bachorowski, Jo-Anne, 2001. The evolution of emotional expression: a ‘‘selfish gene’’ account of smiling and laughter in early
hominids and humans. In: Mayne, T., Bonanno, G. (Eds.), Emotions. Guilford Press, New York.
Owren, Michael J., Bachorowski, Jo-Anne, 2003. Reconsidering the evolution of nonlinguistic communication: the case of laughter. Journal of
Nonverbal Behavior 27 (3), 183--200.
Pinker, Steven, Bloom, Paul, 1990. Natural language and natural selection. Behavioural and Brain Sciences 13 (4), 707--784.
Proust, Joëlle, 2008. Conversational metacognition. In: Wachmuth, I., Knoblich, G. (Eds.), Embodied Communication. Oxford University Press,
Oxford, pp. 329--356.
Reboul, Anne, 2007. Does the Gricean distinction between natural and non-natural meaning exhaustively account for all instances of
communication? Pragmatics & Cognition 15 (2), 253--276.
Rinn, William E., 1984. The neuropsychology of facial expression: a review of the neurological and psychological mechanisms for producing facial
expressions. Psychological Bulletin 95 (1), 52--77.
Sauter, Disa, Eisner, Frank, Ekman, Paul, Scott, Sophie K., 2010. Cross-cultural recognition of basic emotions through nonverbal emotional
vocalizations. Proceedings of the National Academy of Sciences 107 (6), 2408--2412.
Scherer, Klaus R., Banse, Rainer, Wallbott, Harald G., 2001. Emotion inferences from vocal expression correlate across languages and cultures.
Journal of Cross-Cultural Psychology 32 (1), 76.
Scott-Phillips, Thomas C., 2008a. On the correct application of animal signalling theory to human communication. In: The Evolution of Language:
Proceedings of the 7th International Conference on the Evolution of Language. pp. 275--282.
Scott-Phillips, Thomas C., 2008b. Defining biological communication. Journal of Evolutionary Biology 21 (2), 387--395.
Scott-Phillips, Thomas C., 2010a. Evolutionarily stable communication and pragmatics. In: Benz, A., Ebert, C., Jaeger, G., van Rooij, R. (Eds.),
Language, Games and Evolution. Springer, Berlin, pp. 117--133.
Scott-Phillips, Thomas C., 2010b. The evolution of relevance. Cognitive Science 34 (4), 583--601.
Scott-Phillips, Thomas C., Kirby, Simon, 2013. Information, influence and inference in language evolution. In: Stegmann, U. (Ed.), Animal
Communication Theory: Information and Influence. CUP, Cambridge, pp. 421--442.
Scott-Phillips, Thomas C., Dickins, Tom E., West, Stuart A., 2011. Evolutionary theory and the ultimate--proximate distinction in the human
behavioural sciences. Perspectives on Psychological Science 6 (1), 38.
Searcy, William A., Nowicki, Stephen, 2007. The Evolution of Animal Communication. Princeton University Press, Princeton.
Shearn, Don, Bergman, Erik, Hill, Katherine, Abel, Andy, Hinds, Lael, 1990. Facial coloration and temperature responses in blushing.
Psychophysiology 27 (6), 687--693.
Sherman, Paul W., 1977. Nepotism and the evolution of alarm calls. Science 197 (4310), 1246--1253.
Sperber, Dan, Wilson, Deirdre, 1995. Relevance: Communication and Cognition. B. Blackwell, Oxford.
Sperber, Dan, Clément, Fabrice, Heintz, Christophe, Mascaro, Olivier, Mercier, Hugo, Origgi, Gloria, Wilson, Deirdre, 2010. Epistemic vigilance.
Mind & Language 25 (4), 359--393.
Sullins, Ellen S., 1991. Emotional contagion revisited: effects of social comparison and expressive style on mood convergence. Personality and
Social Psychology Bulletin 17 (2), 166.
Susskind, Joshua M., Lee, Daniel H., Cusi, Andrée, Feiman, Roman, Grabski, Wojtek, Anderson, Adam K., 2008. Expressing fear enhances
sensory acquisition. Nature Neuroscience 11 (7), 843--850.
Tinbergen, Niko, 1963. On aims and methods of ethology. Zeitschrift für Tierpsychologie 20 (4), 410--433.
Vermeulen, Nicolas, Mermillod, Martial, 2010. Fast emotional embodiment can modulate sensory exposure in perceivers. Communicative &
Integrative Biology 3 (2), 184.
Weisbuch, Max, Ambady, Nalini, 2008. Affective divergence: automatic responses to others’ emotions depend on group membership. Journal of
Personality and Social Psychology 95 (5), 1063.
Wharton, Tim, 2009. Pragmatics and Non-verbal Communication. Cambridge University Press, Cambridge.
Xu, Xiaojing, Zuo, Xiangyu, Wang, Xiaojing, Han, Shihui, 2009. Do you feel my pain? Racial group membership modulates empathic neural
responses. Journal of Neuroscience 29 (26), 8525.
Zahavi, Amotz, 1975. Mate selection---a selection for a handicap. Journal of Theoretical Biology 53 (1), 205--214.
Zeifman, Debra M., Brown, Sarah A., 2011. Age-related changes in the signal value of tears. Evolutionary Psychology 9 (3), 313--324.
Zeman, Janice, Garber, Judy, 1996. Display rules for anger, sadness, and pain: it depends on who is watching. Child Development 67 (3), 957--973.
Guillaume Dezecache is a PhD student in cognitive science (research supported by a DGA-MRIS scholarship). His doctoral work is dedicated to
emotional transmission in crowd situations, combining experimental work with a theoretical approach.
Hugo Mercier is a cognitive scientist interested in the way people reason. With Dan Sperber, he has developed an argumentative theory of
reasoning according to which our reasoning abilities have been designed by evolution to allow us to exchange arguments rather than to reason on
our own. He is now a CNRS researcher at the L2C2 in Lyon.
Thomas C. Scott-Phillips is a research fellow at Durham University. His principal research area is the evolution and cognition of human
communication and language. In 2010 he received the British Psychological Society’s award for Outstanding Doctoral Research, and in 2011 the
New Investigator Award from the European Human Behaviour and Evolution Association.
Summary of this chapter
Three main lessons can be drawn from this second chapter dedicated to the question of how
emotions propagate in crowds:
– Firstly, although emotional transmission has been investigated mainly in dyadic settings, emotions of fear and joy can be transmitted beyond dyads, and may therefore
give rise to emotion-based collective behavior. They can, notably, be transmitted in an
unintentional and subtle fashion.
– Secondly, emotional transmission can be considered as a special case of emotional communication, where agents happen to experience emotional states which share similar features.
This does not mean, however, that the cognitive mechanisms of emotional transmission
are special. Emotional transmission between A and B may well be conceptualized in terms
of B’s reaction to A’s displays. In this respect, emotional transmission and other processes
of emotional communication may share a similar cognitive and neural framework.
– Thirdly, emotional transmission cannot be equated with a contagious process. In fact,
emotional transmission is known to be modulated by many factors and is not irrepressible.
Chapter Three: Why do emotions of
fear and joy propagate in crowds?
Why should we expect emotions to spread in crowds?
As has been shown in the previous chapter, emotions of fear and joy may spread in an
unintentional and subtle manner beyond dyads, and could therefore give rise to larger
emotion-based collective behavior. I have already proposed tentative alternative answers
to the question of the proximate mechanisms at the basis of this process, but the question of
“why” emotions of fear and joy are so spontaneously transmitted now needs to be examined.
Information acquisition and sharing at the basis of emotional crowds
Imagine the following situation: you are walking in a crowded but quiet street. Suddenly, you
see dozens of people turning round and starting to scream while running in your direction. As
one of them (we will call him Peter) comes nearer to you, you perceive his face, which displays
terror: his eyebrows are raised; his eyes and mouth are wide open. Given such information,
it is most likely that you will immediately run for your life, long before even realizing that
these people are panicking, and long before you have fully assessed the situation. As you
try to escape the main street, and without any intention of doing so, you yourself display
signs of anxiety through the configuration of your face and body and, possibly, through your
production of vocalizations. These signals inevitably inform your neighbor (“Mary”) of the
presence of a threat in her immediate environment. Mary, in turn, shows anxiety, ultimately
spreading the information that something has to be avoided.
This chain of acts of information transmission may lead to the emergence of collective behavior (such as collective flight), based on the very information (the presence of a threatening
element in the environment nearby) that you yourself have contributed to spread. In sum,
the resulting collective behavior is the outcome of many local processes of informational
transmission. It should be noted that, as developed in the previous chapter, one does not
need to understand the process of emotional information transmission as anything other
than a process of influence of others’ behavior (Dezecache, Mercier, & Scott-Phillips, 2013).
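As a purely illustrative aside (not part of the thesis experiments), the logic of such a chain of local transmissions can be sketched in a few lines of code; the agent setup and the transmission probability below are arbitrary assumptions:

import random

def alarm_cascade(n_agents=50, p_transmit=0.9, seed=0):
    """Toy chain model of transitive transmission: agent 0 perceives the threat
    directly; every other agent only perceives its neighbour's emotional display."""
    random.seed(seed)
    alarmed = [False] * n_agents
    alarmed[0] = True                      # only agent 0 has perceptual access to the threat
    for i in range(1, n_agents):
        if alarmed[i - 1] and random.random() < p_transmit:
            alarmed[i] = True              # agent i reacts to agent i-1's display
        else:
            break                          # the chain is interrupted; the information stops spreading
    return sum(alarmed)

print(alarm_cascade())                     # number of agents ultimately reached by the alarm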
Over and above the question of the proximate mechanisms (the how-question) that make
these transmissions possible (the subject of the previous chapter), it is also relevant to question their biological function (the why-question): (i) firstly, why are people so inclined to
rely on others' informational resources, especially in crowd contexts? Why do receivers feel
emotions upon perceiving another's emotional display? (ii) Secondly, why do people seem to spontaneously share information in crowd contexts? Why do emitters
feel “compelled” to display their own emotional expressions?
Question (i) is, in fact, fairly easy to answer. Human sensory processing is limited: when
navigating in an uncertain and informationally-rich environment, we are particularly attentive to others’ behavior. Since only a small number of us may have perceptual access to
adaptive features of this uncertain and highly fluctuating environment, it is advantageous
to keep constant track of others’ overall behavior, as this might provide us with information
about that environment, thus reducing its uncertainty (Couzin, 2009; Couzin, 2007). Such
monitoring of others’ behavior is particularly obvious when considering the scenario proposed above, which is based on the transmission of fear: Peter’s fearful facial displays (along
with the many other cues provided by his overall behavior) are important to track, as they
allow for the production of rich inferences about the danger level in the environment. In a
similar vein, facial displays related to the experience of joy, although they are not linked to
survival issues, are also worth tracking as they could be associated with others’ cooperative
intents (Mehu, Little, & Dunbar, 2007; Reed, Zeglen, & Schmidt, 2012). More generally, it
seems highly beneficial to share others’ experience of joy as it may benefit the organism
(Fredrickson, 1998; 2004). Our experiment, reported in chapter 2 and in Dezecache et al. (2013),
suggests that we are indeed endowed with mechanisms that are tuned to react to others’
emotional signals of fear and joy. Interestingly, we are also endowed with mechanisms tuned
to produce emotional signals of fear and joy that can cause emotional reactions in others.
But why should we be so inclined to share these emotions of fear and joy with others? This
issue relates to the question of the biological function of the mechanism producing emotional
reactions when confronted by emotional signals. As seen in chapter 2 and in Dezecache et
al., 2013, for emotional homogeneity to be achieved in crowds, it is not sufficient for you to
receive the information from Peter; it is also important that the information reaches Mary.
Even if Mary can easily infer the presence of danger from the numerous cues provided by your
own action of running away, some of your behaviors (such as facial or vocal displays) may
have evolved specifically to inform her. This would imply that the mechanisms producing these
displays are selectively tuned to her informational needs. Spontaneous emotional reactions
to emotional events related to the experience of fear and joy might be a function of the
relevance of the information to the audience, as well as of the composition of that audience.
We experimentally addressed this question for a subset of emotional displays, i.e., facial
expressions of fear and joy, by manipulating the relevance of the information
for others. The results are presented in the following manuscript, which has not yet been
submitted. The experiment was conceived and designed by Dr. Julie Grèzes, Dr. Laurence
Conty, Lise Hobeika, and myself; I collected the data with Lise Hobeika and analyzed them
myself. I then wrote the paper together with Dr. Julie Grèzes and Dr.
Laurence Conty.
Humans spontaneously compensate for others' informational needs in threatening contexts
Dezecache G.,1,2,* Hobeika L.,1 Conty L.,3 Jacob P.2 & Grèzes J.1,*
1 Laboratory of Cognitive Neuroscience (LNC) - INSERM U960 & IEC - Ecole Normale Supérieure (ENS), 75005 Paris, France; 2 Institut Jean Nicod (IJN) – UMR 8129 CNRS & IEC – Ecole Normale Supérieure & Ecole des Hautes Etudes en Sciences Sociales (ENS-EHESS), 75005 Paris, France; 3 Laboratory of Psychopathology and Neuropsychology (LPN, EA2027), Université Paris 8, Saint-Denis 93526 cedex, France
* Authors for correspondence ([email protected] & [email protected])
Abstract
It is often said that, at a certain stage of their evolutionary history, the mechanisms producing facial reactions were selected for the communication of adaptive-value information to conspecifics. Yet, strong empirical evidence for this claim is lacking. Here, we tested whether the apparatus producing emotional facial displays is spontaneously sensitive to others' informational needs. Participants were confronted with content that varied in adaptive value (fearful, joyful, and neutral content) while sitting next to conspecifics who had more or less informational access to the same content. Larger electromyographic activity over a fear-specific facial muscle during the perception of fear, and when conspecifics' informational access was at its lowest, indicated that participants involuntarily and spontaneously compensated for others' informational needs, at no benefit to their own performance in the task. Beyond confirming the communicative function of spontaneous emotional facial displays of fear, these results also suggest an interesting parallel with language, both systems being tuned to selectively produce signals that are intended to reduce others' uncertainty.
Introduction
When confronted by emotional events, humans typically produce, along with bodily and vocal signals, sets of distinct involuntary facial movements [1] whose configuration is known to be emotion-specific [2,3]. Although the proximate cognitive mechanisms mediating those facial reactions have been investigated considerably (see [4] for a review), the question of their biological function (i.e., why they exist at all) has been little studied.
According to two-stage models of the evolution of facial displays [5,6], facial expressions of emotion first originated for intrapersonal sensory-regulatory functions before being selected, later in evolutionary history, for their communicative function. While footprints of the first selective pressure (the selection of mechanisms designed to optimize sensory acquisition through specific facial muscular configurations) can indeed be found (for instance, expressing fear enhances sensory acquisition [6,7]), evidence for the subsequent selection of mechanisms designed to optimize the communication of adaptive-value information to conspecifics is often restricted to “audience effects”, i.e., more frequent or larger displays when conspecifics are present compared to when there are no conspecifics to pick up the information [8–11]. Much stronger evidence for this second selective pressure could, however, be found by investigating the extent to which the production of facial displays is sensitive to others' perspective during emotional co-perception. Particularly relevant is the question of whether others' informational needs influence the production of such displays.
The aim of the present study was to uncover whether one individual's (A) involuntary facial reactions in response to an emotional event vary as a function of his/her knowledge about another's (B) informational access to that emotional scene. To this end, we manipulated A's belief about B's informational access while recording A's facial muscular activity. We predicted that the amplitude of facial expressions over emotion-specific muscles in a participant A would increase as her co-participant B's knowledge about the emotional scenes declines.
Showing that the mechanism underlying the production of facial emotional expressions spontaneously takes into account others' informational needs would constitute strong evidence for the operation of a past pressure for the selection of mechanisms optimizing the communication of adaptive-value information to others. It would also shed light on the evolution of other communicative mechanisms (such as language) that are known to actively track others' knowledge states so as to produce signals that are intended to reduce others' uncertainty [12].
Results
Thirty participants were assigned the A-role; each of them was paired with an unfamiliar same-sex participant who was assigned the B-role.
A's electromyographic activity per trial was obtained by extracting, for each trial, the mean change from the baseline level occurring in a specific 500-ms time window, after z-score transformation for each trial. This 500-ms time window was defined independently for ZM and CS. The start of this time window was obtained by selecting, using t-tests, the 100-ms time bin where EMG activity began to be statistically larger for the emotion the muscle was specific to (fear for CS; joy for ZM) compared to the two other conditions (Fear > Joy and Fear > Neutral for CS; Joy > Fear and Joy > Neutral for ZM). Figure 1 shows the activity of both muscles over time; the grey square shows, for each muscle, the selected 500-ms time window (700-1200 ms for CS; 1600-2100 ms for ZM). Data were then submitted, separately for each physiological measure, to repeated-measures ANOVAs using Emotion (fear vs. neutral vs. joy) and Information (20% vs. 60% vs. 100%) as within-subject factors. In a second ANOVA, we compared activity for Social vs. Solitary blocks using Emotion (fear vs. neutral vs. joy) and Sociality (Social blocks vs. Solitary blocks) as within-subject factors. Bonferroni corrections were employed to account for multiple testing. Post-hoc comparisons were also performed for the analysis of simple main effects. Results are summarized in Figure 2.
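For illustration only, the logic of the time-window selection and of the statistical model described above can be sketched as follows (this is not the original analysis script; the array layouts, variable names and the use of scipy and statsmodels are assumptions made for the sketch):

from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

def find_window_start(emg, target, others, alpha=0.05):
    """emg[condition]: array of shape (n_subjects, n_bins) holding z-scored,
    baseline-corrected activity in consecutive 100-ms bins (illustrative layout).
    Returns the first bin where the target emotion exceeds both other conditions."""
    n_bins = emg[target].shape[1]
    for b in range(n_bins):
        higher = all(emg[target][:, b].mean() > emg[o][:, b].mean() for o in others)
        significant = all(ttest_rel(emg[target][:, b], emg[o][:, b]).pvalue < alpha
                          for o in others)
        if higher and significant:
            return b
    return None

def window_mean(trial_bins, start_bin, n_bins=5):
    """Mean activity over the 500-ms window (five 100-ms bins) starting at start_bin,
    computed per trial (trial_bins: shape (n_trials, n_bins_total))."""
    return trial_bins[:, start_bin:start_bin + n_bins].mean(axis=1)

def emotion_by_information_anova(long_df):
    """Repeated-measures ANOVA with Emotion and Information as within-subject factors,
    run on a long-format pandas DataFrame with one mean value per subject and
    condition cell (columns: subject, emotion, information, emg)."""
    return AnovaRM(long_df, depvar="emg", subject="subject",
                   within=["emotion", "information"]).fit()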
[FIGURES 1 AND 2 ABOUT HERE]
CS activity in A during the perception of fear is modulated according to B's informational access
As revealed by a main effect of the factor Information (F(2,48) = 3.607, p = .035, η² = 0.131), CS activity was larger for blocks where A represented B's informational access to the videos as being restricted to 20%, compared to when A thought B would have full access to them (100%) (t(24) = 3.045, p < .01), whatever the content of the video was. As the goal of the study was to investigate the modulation of emotion-specific muscular activity (fear for CS; joy for ZM), we systematically compared the impact of A's representation of B's informational access on CS responses for each emotional content independently, even though we did not find an interaction between the factors Emotion and Information. Consistent with our hypothesis, results revealed differences between 20% and 100% (t(24) = 2.930, p < .01) and between 60% and 100% (t(24) = 2.578, p = .016), a pattern which was specific to the fear condition (all other ps > .1). Such modulation of CS activity during fear perception according to B's informational access was independent of A's affective appraisal of the stimuli: the mean level of EMG responses over the CS was indeed not correlated with A's judgments about the perceived intensity of the fear videos (r = -.327, p > .1). These results indicate that CS activity in A during the perception of fear is affected by B's informational access: A's representation of B's low informational access during the perception of fear favors larger activity in the CS.
ZM activity in A during the perception of joy is independent of B's informational access
As for ZM activity, statistical analysis revealed neither an effect of the factor Information nor an interaction between the factors Emotion and Information (both ps > .1). These results suggest that ZM activity in A during the perception of joy is independent of B's informational access. Nor could ZM responses be explained by A's affective appraisal of the stimuli, as muscular activity over this muscle was not correlated with A's judgments about the perceived intensity of the joy videos (r = .010, p > .1).
CS and ZM activity in A in the presence or absence of B (Social vs. Solitary conditions)
Finally, ANOVAs using Emotion and Sociality as factors revealed neither an effect of Sociality nor an interaction between the factors Emotion and Sociality for CS (Sociality: F(1,24) = 0.110, p > .1; Emotion*Sociality: F(1,24) = 1.421, p > .1).
Concerning ZM activity, we found no effect of the factor Sociality (F(1,24) = 0.183, p > .1) but an interaction between the factors Emotion and Sociality (F(1,24) = 3.382, p < .05). Post-hoc tests revealed that ZM responses in A were larger when facing neutral stimuli in B's presence than when alone (t(24) = -2.111, p < .05); there was also a tendency for EMG responses in the ZM to be higher when facing fear stimuli during Solitary blocks than during Social blocks (t(24) = 1.819, p < .1).
While the absence of increased activity in Social conditions may seem to contradict conventional findings that facial activity is more ample in the presence than in the absence of others [13], it should be noted that B's activity during Solitary blocks was left undetermined for A. As a consequence, A could have formed the belief that B was in fact watching similar content in a different room. This is consistent with results by Fridlund and colleagues [10] showing that an implicit audience (an audience that is absent but nonetheless, and elsewhere, engaged in a task related to that of the participant) can potentiate facial activity in participants.
Discussion
Overall, our findings indicate that, when confronted by threat-related stimuli, humans spontaneously and unintentionally take conspecifics' informational needs into account. They modulate their emotion-specific muscle movements accordingly, at no obvious benefit to themselves. Indeed, spontaneous activity of the CS, a muscle implicated in the expression of fear (among other negative emotions), was found to be larger when B had low access to the informational content of the videos (20 or 60% of their content) compared to when B was fully aware of it (100% block), even though B's informational access was entirely irrelevant to A's task during the experiment. This pattern of results points to an intrinsic motivation to selectively inform others [14]. Indeed, CS responses during the fear condition were not correlated with perceived risk (as captured by the Intensity scores).
Of interest here, B's perspective had no impact on A's ZM behavior: ZM activity was not larger during the joy condition when B suffered from low informational access. This might be explained by the function of emotional displays involving activity of the ZM, such as smiles, which are largely thought to originate from greeting signals of non-human primates [15] and therefore from dyadic contexts. Consequently, A's apparent lack of consideration for B's informational access during the perception of joy stimuli might correspond to a communication between A and the actors displayed in the videos [16].
The fact that CS activity during the perception of fearful expressions, unlike ZM activity during the perception of joy, was modulated by A's representation of B's perceptual access is particularly interesting: unlike fear-related content, which bears immediate survival value, joy would not have constituted a relevant piece of information to be shared with B. In this respect, the relevance of the information to B might not only be a function of A's representation of her conspecific's perceptual access, but might also be linked to the value of the information for her immediate survival. This is consistent with previous results showing that, unlike the perception of joy, the perception of fear may lead, in observers, to explicit facial signals [14].
These results are important for the question of the evolution of facial musculature as they strongly suggest that facial movements, at least when produced as responses to emotional events in the environment, may not only have been selected for self sensory-regulation but may also ultimately have been selected to communicate survival-value information to conspecifics. Indeed, the mechanisms producing fearful displays spontaneously track others' informational needs in threatening contexts. This is consistent with the two-stage models of the evolution of emotional expressions, which hold that facial expressions of emotion would have first originated for sensory regulation before being more recently co-opted for communicative purposes [5–7].
Finally, our results are particularly relevant to the question of the evolution of communicative abilities. According to several authors [12,17], a critical feature of human language is the selective production of signals that are intended to reduce others' uncertainty. Recently, it has been shown that wild chimpanzees take into account others' knowledge when producing alert hoos [18]. Our results are along the same lines, as they support the view that the production of facial emotional displays, a primitive and evolutionarily old means of communication (as it is shared with other social mammals [1,19]), is sensitive to others' informational needs.
Methods
Ethics. We obtained ethics approval from the local research ethics committee (CPP Ile de France III).
Participants. Thirty participants (16 females; mean age 23.3 y ±0.51 SEM, range 20–30 y) were assigned the A-role; thirty others (mean age 23.6 y ±0.46 SEM, range 20–30 y) the B-role. All participants had normal or corrected-to-normal vision, were naive to the aim of the experiment and presented no neurological or psychiatric history. All provided written informed consent according to the institutional guidelines of the local research ethics committee and were paid for their participation. All participants were debriefed and thanked after their participation.
Overall procedure. Participants A and B were seated next to each other with a folding screen making each invisible to the other. Both had a computer in front of them and were wearing headphones. Each experimental session was composed of four blocks repeated two times, for a total of 8 blocks: in 6 of the 8 blocks, A and B were together in the room (Social blocks); in the two other blocks, A was alone in the room (Solitary blocks).
Specific procedure for participants A. Participants A were told that they were going to watch videos either in the presence of another same-sex participant B (6 Social blocks) or alone (2 Solitary blocks). No reason was provided for the presence of B. Participants A were merely told that, during the Social blocks, B would watch the same videos as they would, although B would have access to more or less of their content (20%, 60%, 100%). A sample of what B could see was provided to A before the experiment (the image of the video was more or less blurred and the sound distorted, resulting in impaired recognition of the content of the video for 20% and 60% blocks). Interactions between A and B were minimal: they greeted each other before the start of the experiment; also, B was in charge of starting up each block (with the exception of Solitary blocks, which were launched by the experimenter), resulting in A and B making sure that they were both ready before B pressed the Start button.
Specific procedure for participants B. B's task consisted in answering questions related to the content of the videos. In fact, they were never exposed to the noisier versions of A's videos; they were nonetheless told not to communicate with A about their content. They were instructed to start the Social blocks after having made sure that A was ready to start. Before the Solitary blocks started, B were escorted outside the room by the experimenter.
Videos. There were 45 videos (mean duration 6060±20 ms, range 6000–6400 ms) of size 620×576 pixels projected on a 19-inch black LCD screen. Those of the emotional conditions depicted 15 actors (8 females, 7 males) playing fear (n = 15) and joy (n = 15), using facial, bodily as well as vocal cues. They were extracted from sessions with professional actors from the Ecole Jacques-Lecoq, in Paris (France). The videos of the neutral condition (n = 15) displayed fixed shots of landscapes. All videos were validated in a forced-choice task (see [14]).
Data acquisition. Using an ADInstruments acquisition system (ML870/Powerlab 8/30), we continuously recorded the EMG activity of A using Sensormedics 4 mm shielded Ag/AgCl miniature electrodes (Biopac Systems, Inc) (sample rate: 2 kHz; range: 20 mV; spatial resolution: 16 bits). Before attaching the electrodes, the target sites on the left of A's face were cleaned with alcohol and gently rubbed to reduce inter-electrode impedance. Two pairs of electrodes filled with electrolyte gel were placed on the target sites: the left ZM and left CS muscles [11]. The ground electrode was placed on the upper right forehead. Last, the signal was amplified, band-pass filtered online between 10–500 Hz, and then integrated. Integral values were then subsampled offline at 10 Hz, resulting in the extraction of 100-ms time bins.
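One way to implement the offline subsampling step described above could look like the following sketch (assuming the integrated signal is available as a one-dimensional array sampled at 2 kHz; this is not the original acquisition code, and averaging within bins is only one plausible reading of "subsampled at 10 Hz"):

import numpy as np

def to_100ms_bins(integrated_emg, fs=2000, bin_ms=100):
    """Reduce the integrated EMG signal to one value per 100-ms bin (i.e., 10 Hz)
    by averaging the samples falling within each consecutive bin."""
    samples_per_bin = int(fs * bin_ms / 1000)                       # 200 samples per bin at 2 kHz
    n_bins = len(integrated_emg) // samples_per_bin
    trimmed = np.asarray(integrated_emg[:n_bins * samples_per_bin], dtype=float)
    return trimmed.reshape(n_bins, samples_per_bin).mean(axis=1)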
223
Data analysis. EMG trials containing artifacts were manually rejected, following a visual
224
inspection. Participants with a high rate of trial rejection (> 25%) were excluded from the
225
statistical analysis for the relevant signal, (n = 2 for CS, n = 2 for ZM). Also, due to technical
226
problems, 3 participants were excluded (n = 3 for CS, n = 3 for ZM) prior to the analysis,
227
leaving a total of n = 25 for CS and n = 25 for ZM.
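The participant-level exclusion rule can be expressed in a couple of lines (illustrative names only, not the original code):

def usable_participants(rejected_trials, n_trials, max_rate=0.25):
    """rejected_trials: {participant_id: number of artifact trials} for one signal
    (CS or ZM). Keeps only participants with at most 25% of trials rejected."""
    return [pid for pid, n_rej in rejected_trials.items()
            if n_rej / n_trials <= max_rate]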
References
1. Darwin C, Ekman P, Prodger P (2002) The expression of the emotions in man and animals. Oxford University Press, USA.
2. Ekman P (2007) Emotions revealed: Recognizing faces and feelings to improve communication and emotional life. Macmillan.
3. Ekman P, Friesen WV (1978) Facial action coding system: A technique for the measurement of facial movement. Consulting Psychologists Press, Palo Alto, CA.
4. Blair RJR (2003) Facial expressions, their communicatory functions and neuro-cognitive substrates. Philos Trans R Soc Lond B Biol Sci 358: 561–572. doi:10.1098/rstb.2002.1220.
5. Shariff AF, Tracy JL (2011) What Are Emotion Expressions For? Curr Dir Psychol Sci 20: 395–399. doi:10.1177/0963721411424739.
6. Susskind JM, Lee DH, Cusi A, Feiman R, Grabski W, et al. (2008) Expressing fear enhances sensory acquisition. Nat Neurosci 11: 843–850.
7. Lee DH, Susskind JM, Anderson AK (2013) Social Transmission of the Sensory Benefits of Eye Widening in Fear Expressions. Psychol Sci.
8. Kraut RE, Johnston RE (1979) Social and emotional messages of smiling: An ethological approach. J Pers Soc Psychol 37: 1539.
9. Chovil N (1991) Social determinants of facial displays. J Nonverbal Behav 15: 141–154. doi:10.1007/BF01672216.
10. Fridlund AJ (1991) Sociality of solitary smiling: Potentiation by an implicit audience. J Pers Soc Psychol 60: 229.
11. Fridlund AJ, Cacioppo JT (1986) Guidelines for human electromyographic research. Psychophysiology 23: 567–589.
12. Pinker S (2010) The language instinct: How the mind creates language. HarperCollins.
13. Fridlund AJ (1994) Human facial expression: An evolutionary view. Academic Press, San Diego, CA. Available: http://www.getcited.org/pub/103169100. Accessed 27 March 2013.
14. Dezecache G, Conty L, Chadwick M, Philip L, Soussignan R, et al. (2013) Evidence for Unintentional Emotional Contagion Beyond Dyads. PLoS ONE 8: e67371. doi:10.1371/journal.pone.0067371.
15. Burrows AM (2008) The facial expression musculature in primates and its evolutionary significance. BioEssays 30: 212–225. doi:10.1002/bies.20719.
16. Bavelas JB, Black A, Lemery CR, Mullett J (1986) “I show how you feel”: Motor mimicry as a communicative act. J Pers Soc Psychol 50: 322–329. doi:10.1037/0022-3514.50.2.322.
17. Sperber D, Wilson D (1986) Relevance: Communication and cognition. Harvard University Press, Cambridge, MA. Available: http://aune.lpl.univaix.fr/~hirst/articles/1988_hirst.pdf. Accessed 15 December 2012.
18. Crockford C, Wittig RM, Mundry R, Zuberbühler K (2012) Wild Chimpanzees Inform Ignorant Group Members of Danger. Curr Biol 22: 142–146. doi:10.1016/j.cub.2011.11.053.
19. Andrew RJ (1963) Evolution of Facial Expression. Science 142: 1034–1041. doi:10.1126/science.142.3595.1034.
Acknowledgments
The research was supported by a DGA-MRIS scholarship and by the Agence Nationale de la Recherche (ANR) "Emotion(s), Cognition, Comportement" 2011 program (ANR 11 EMCO 00902), as well as by ANR-11-0001-02 PSL* and ANR-10-LABX-0087. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Figure Legends
Fig. 1 Muscular activity for CS (top) and ZM (bottom) for each emotion and over time. The grey square shows the 500-ms time window where activity begins to be statistically larger for the emotion the muscle is known to be specific to (higher for fear than for joy and neutral for CS; higher for joy than for fear and neutral for ZM).
Fig. 2 Muscular activity for CS (left) and ZM (right) for each emotion and for each level of A's representation of B's informational access (20%, 60% and 100%). Black lines indicate significant effects at *P<0.05; **P<0.01. Error bars indicate SEM.
Summary of this chapter
The results presented in the previous section of this chapter suggest that, when confronted
by emotional signals of fear, humans spontaneously and unintentionally take into account
the informational needs of their neighbors, even when there is no benefit to be gained for
their own performance in the task. We think that this could be a mark that emotional
transmission of fear-related information via facial activity is a cooperative behavior, the
purpose of which is to selectively inform others. Since we failed to show
any impact of others' informational needs when participants were confronted with emotional
signals of joy, it also appears that the relevance of the information to others is a critical parameter that can
modulate the intensity of emotional facial activity. Information related to threat would be
more relevant for others, as it is linked to immediate adaptive challenges in the environment.
The impact of this research for the wider understanding of emotional crowd behavior is
obvious: when information is linked to adaptive challenges, the spread of emotions could be
facilitated by mechanisms whose function is to intensify the activity of emotional signaling
media (here, emotional facial behavior) to increase informational access to other crowd
members, and help them prepare adaptive responses by triggering motor reactions in them
(De Gelder, Snyder, Greve, Gerard, & Hadjikhani, 2004; Grèzes et al., 2007).
Although this study did not explore other parameters, the relevance of information to the
audience should not be the only significant one to be taken into account. The composition
of the audience (kin/non-kin; familiar/unfamiliar individuals; in-group/out-group members)
may also influence the sharing of adaptive information with others in threatening contexts.
Even if it has largely been shown that the identity of the emitter is important for the
transmission of emotion to the receiver (see Hatfield et al., 1994 for a review), the impact
of the receivers’ identity on the propensity to share emotional information with others has
not been investigated. Research dealing with the collective response to threat suggests that
seeking proximity and maintaining contact with familiar individuals is a primary drive in
individuals confronted by threatening elements (see Mawson, 2005 for a review). “Familiar
people” can include all those towards whom we are supportive (such as kin [Madsen et al.,
2007] and other members of personal social networks [Dunbar & Spoors, 1995]).
Both the relevance of the piece of information to be shared and the composition of the audience (including its degree of cooperativeness) are expected to modulate an individual's propensity
to share the emotional information he or she is confronted with.
Finally, although we have concentrated on facial expressions, the intensity of vocal and bodily
displays in reaction to threatening signals might well be sensitive to others’ informational
demands. In fact, all types of signaling media (including verbal signals: Luminet, Bouts,
Delie, Manstead, & Rimé, 2000) can be expected to be sensitive to others’ informational
demands.
Chapter Four: General discussion
Summary of the main findings
Throughout this thesis, we have shown that (i) humans are endowed with cognitive mechanisms that are tuned to react spontaneously and involuntarily (though selectively; Dezecache, Mercier, & Scott-Phillips, 2013) to others' emotional signals of fear and joy, in a way
that can be congruent with others' emotional experience (Grèzes & Dezecache, in press).
(ii) In response to these signals, we in turn spontaneously and involuntarily produce signals
(they can be subtle) which can induce emotional experience of fear and joy in a third-party
who has no access to the emotional source (Dezecache et al., 2013). (iii) Emitters are also
endowed with mechanisms that can selectively affect the intensity of facial response as a
function of the relevance of the information to the audience (Dezecache et al., in prep). A
combination of these elements contributes to an explanation of how and why emotions of
fear and joy can spread on a large scale.
Emotions of fear and joy can be transmitted beyond dyads
Studies investigating the process of emotional transmission (again, the book of Hatfield and
colleagues [Hatfield et al., 1994] is a major reference here), have only considered emotional
transmission in dyadic contexts, where observers (B) “catch” the emotion of emitters (A).
For emotions of fear and joy to become collective, third-parties (C) who have perceptual
access to B but not to A must also be "contaminated" by A's emotions of fear and joy, through B.
In Dezecache et al., 2013 (Chapter 2), we reproduced a minimal crowd situation where an
agent C was observing an agent B, herself observing an agent A. Crucially, C did not have
perceptual access to A, and B was not aware of being monitored by C. We were able to
show that emotions of fear and joy could be transmitted from A to C, via B, even if B
did not produce signals that could be explicitly recognized. In this respect, we found that
expressions of joy in B were recognized below chance level by independent judges, while
signals of fear could be detected above chance level. This suggests that the spread of fear, as
opposed to that of joy, is facilitated and that there is a tendency, in B, to exaggerate facial
expressions when they are of immediate relevance for others (fear signals a direct threat in
the environment). This is compatible with our findings in chapter 3 and in Dezecache et al.
(in preparation).
Emotional transmission as a process of influencing others
These findings raise an important theoretical question. In our paper Dezecache, Mercier
& Scott-Phillips (2013) (chapter 2), we drew a distinction between cues in general and
signals, which are a special type of cue. Cues can be defined as stimuli that elicit a specific
cognitive or behavioral response going beyond the mere perception of the cue itself. Signals
can be defined as cues that have the function of eliciting such a response (Scott-Phillips,
2008). Are the subtle emotional cues produced by B a mere side effect of B’s emotional
arousal caused by the perception of A’s emotion, or do these cues have the function of
eliciting a similar emotional response in others? In other words, are they not just cues but
signals?
A trait can have a function in two ways: by being a biological adaptation that contributes
to the fitness of the organisms endowed with it; or by being intentionally used by an agent
in order to fulfill this function. In our study however, B did not know that she was being
observed and thus did not intend to signal anything by means of her facial expressions
(of which she may well have been unaware anyway). The fact that, at least in the case of
joy, these expressions were not recognized by judges strongly suggests that participant C's
use of these cues was not intentional either. The cues we are talking about are neither
intentionally emitted nor intentionally attended to; they don’t have an intended function.
Are these emotional cues therefore biological adaptations, whose function is to transmit an
emotion in a non-intentional way? And if so, how is this function adaptive?
One possibility we explored was that facial activity in B is an evolved, cooperative type
of behavior that consists in the involuntary and spontaneous signaling of information of
adaptive value, which induces appropriate emotional and preparatory behavior in our conspecifics. Such a mechanism would be adaptive, on the one hand, in threatening situations
where flight and mobbing behaviors are optimal strategies; and, on the other hand, in favorable situations where signaling to conspecifics the presence of non-scarce rewarding features
of the environment may foster social bonds.
It is an open question whether unintended and non-consciously attended cues of specific
emotion are in fact evolved signals that contribute to the fitness of the people who produce
them and to that of those who are influenced by them. If unintentional cues of emotions are
mere side effects of the emotional state, then their amplitude should vary only in relation to
the intensity of the arousal of which they are a side effect. If, on the other hand, these cues
are signals, then their amplitude should vary in relation to the adaptive value of their being
picked up by other individuals. Such audience-directed variations could be triggered by:
(i). The relevance of the information to the audience (the type of emotion and its relevance
to the situation at stake): emotions that are advantageous to share (such as fear)
should produce (everything else, and the degree of arousal in particular, being equal)
stronger unintentional cues than emotions the sharing of which may be less important
(such as joy) or harmful (such as boredom or envy). In addition, the relevance of the
information should matter: unintentional cues should be stronger when the audience
stands to gain useful information from them than when it does not. A recent study
of vocal signals in wild chimpanzees (Crockford, Wittig, Mundry, & Zuberbühler, 2012)
offers a suggestive point of comparison in this respect. Their results revealed that the
best predictor of call rates in response to the presence of a snake was the state of
knowledge of the conspecifics, thereby demonstrating that threat-related vocal signals
are selectively produced according to their informational value for others. It could be
that implicit emotional cues among humans are selectively produced according to their
informational value for others.
(ii). The presence of an audience and its composition: unintentional cues should be stronger
when there are others to pick them up. Also, unintentional cues of emotions should be
stronger when the individual emitting them has already had cooperative interactions
with the audience.
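To make the contrast between the “side-effect” and “signal” accounts concrete, here is a minimal, purely illustrative sketch of how the two predictions could be pitted against each other. The data, variable names, sample size and effect sizes below are invented for the purpose of the example; this is not an analysis of our own data, and a nested linear-model comparison is only one of many possible tests (prediction (ii) could be added to the same model in exactly the same way).

# Illustrative sketch (hypothetical data): contrasting the "side-effect" and
# "signal" accounts of unintentional emotional cues. Variable names, sample
# size and effect sizes are assumptions made for the example only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
data = pd.DataFrame({
    # sender's physiological arousal (e.g., a z-scored skin conductance index)
    "arousal": rng.normal(size=n),
    # relevance of the information to the audience (0 = low, 1 = high),
    # e.g., whether an observer lacks perceptual access to the source
    "audience_relevance": rng.integers(0, 2, size=n),
})
# simulate cue amplitude as if the "signal" account were true, for illustration
data["cue_amplitude"] = (0.5 * data["arousal"]
                         + 0.4 * data["audience_relevance"]
                         + rng.normal(scale=0.5, size=n))

# Side-effect account: cue amplitude should track arousal only.
side_effect_model = smf.ols("cue_amplitude ~ arousal", data=data).fit()
# Signal account: cue amplitude should additionally track the audience's needs.
signal_model = smf.ols("cue_amplitude ~ arousal + audience_relevance", data=data).fit()

# Nested-model comparison: if adding the audience term significantly improves
# fit, the pattern is more consistent with the signalling interpretation.
f_value, p_value, df_diff = signal_model.compare_f_test(side_effect_model)
print(f"F = {f_value:.2f}, p = {p_value:.4f}")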
In the experimental design developed in our research article Dezecache et al. (2013) (chapter
2), participant B did not know that she was being watched, so none of the conditions were
satisfied. The fact that she nevertheless produced unintentional cues strong enough
to influence participant C can be interpreted either as evidence that these cues are mere
side effects, or as evidence that, even in the absence of reinforcing factors, these effects are
strong enough to serve a communication function.
Emotional transmission is sensitive to others’ informational
needs
In our research paper Dezecache et al. (in preparation) presented in Chapter 4, we specifically
addressed the question of whether spontaneous and involuntary subtle facial cues that are
produced when confronting emotional stimuli of fear and joy (these have been documented
in numerous studies, where they are termed “mimicry”, see Dimberg et al., 1998; 2000;
Moody et al., 2007; Soussignan et al., 2013) are mere cues (that can have the function of
optimizing the observer’s preparatory behavior [Susskind et al., 2008; Vermeulen, Godefroid,
& Mermillod, 2009]), or whether they have evolved to specifically inform others.
To test this, we chose to manipulate the relevance of that piece of information to the audience: information could be important to share (fear), less important (joy), or of no relevance
to the audience (neutral content). Neighbors could also have greater or lesser perceptual
access to the source, thus making the information more or less relevant to share. Crucially,
the task performed by our participants was not linked to the sharing of information. They
were merely informed that they would be accompanied by another participant who would
watch the same videos, with more or less informational access.
Our results revealed that spontaneous and involuntary facial reactions to a fear content in
participants were modulated by the perceptual access of their neighbors, even though this
brought no benefit to their performance on the task. The fact that such sensitivity to others’ informational
demands was not found when observers were confronted with joy can be explained either by
the weaker relevance of a joy content to neighbors, or by the fact that participants’ smiling
behavior (a facial expression that is typical of the experience of joy and often associated
with appeasement and affiliative intentions (Fridlund, 1994; Goldenthal, Johnston, & Kraut,
1981; Kraut & Johnston, 1979; Mehu & Dunbar, 2008)) could have been directed towards
the characters expressing joy in the source stimuli.
It could be said that observers, when confronted with signals of threat, show a certain sense
of responsibility (though unintentional) towards their neighbors as they spontaneously and
unconsciously compensate for the latter’s lack of informational access. However, it must be
pointed out that, for the sharing of emotional information to remain beneficial for senders
(an issue which is further developed in chapter 2 and in my article Dezecache, Mercier &
Scott-Phillips, 2013), sharing should be restricted to potential cooperators. This point is a
subject worthy of empirical investigation.
Beyond audience effects: how others’ mental states can influence the transmission of emotional information
These results are also of interest for the fierce debate that took place between the “emotional
readout” and the “behavioral ecology” views of facial behavior, some twenty years ago.
For emotional readout theorists (Ekman, 2007; Izard, 1971, 1977), core emotions (which
include joy and fear) consist of affect programs that, when activated by the presence of
emotional stimuli, trigger characteristic muscular and physiological patterns, as well as a
distinct phenomenological experience. Although social conventions can modulate their intensity through the operation of display rules (Malatesta & Haviland, 1982), facial emotional
displays are held to express inner emotional states.
Against this perspective, behavioral ecologists (Bavelas, Black, Lemery, & Mullett, 1986;
Chovil, 1991, 1997; Fridlund, 1994) have argued that facial emotional displays are “social
tools” (Smith, 1980) which influence other people’s behavior, and signal the senders’ intentions towards recipients (Fridlund, 1994). Expressions that are typically associated with the
experience of fear by emotional readout theorists signal a readiness to submit, to flee, or an
invitation to others to run for their lives. Expressions that are typically related to joy signal
a readiness to appease, to play, or to affiliate. In short, facial displays can well be seen as
expressions of social motives (Chovil, 1997).
One strong argument in support of the behavioral ecologists was that the intensity and
occurrence of facial displays are heavily modulated by so-called “audience effects”. Bowlers
do not smile because of the solitary emotion experienced when winning a game, but because,
and when, they are interacting with other bowlers. Similarly, hockey fans’ smiling behavior,
although related to the outcome of the game, was found to be more strongly dependent on
whether they were with friends or facing opposing fans (Kraut & Johnston, 1979). Similar
results were obtained in laboratory settings: electromyographic activity over the zygomaticus
major was recorded in four conditions varying in their degree of sociality. A monotonic increase
was found, ranging from (a) a condition where participants were alone, (b) a condition
where participants were alone but believed that there was a friend nearby, (c) a condition
where participants were alone but thought that their friend was viewing the same video in a
different room, to (d) a condition where a friend was present. This increase was independent of
participants’ ratings of the amusing videotapes viewed (Fridlund, 1991). As well as showing
that audiences’ emotional reactions can be implicitly elicited, such results strongly suggest
that smiling behavior is principally related to audience effects, and is poorly associated with
vicarious experiences of joy.
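As a side note, the monotonic pattern reported by Fridlund (1991) is the kind of result that can be summarized very simply. The sketch below uses invented data and is in no way a reproduction of Fridlund’s analysis; it merely illustrates how one might quantify a monotonic increase of zygomaticus activity across four sociality conditions while checking that it is not carried by amusement ratings.

# Illustrative sketch (invented data): quantifying a monotonic increase of
# zygomaticus EMG across four sociality conditions, independently of amusement.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_condition = 20
sociality = np.repeat([0, 1, 2, 3], n_per_condition)  # (a) alone ... (d) friend present
emg = 1.0 + 0.3 * sociality + rng.normal(scale=0.5, size=sociality.size)  # simulated EMG (a.u.)
amusement = rng.normal(size=sociality.size)                               # simulated ratings

# Rank correlation between degree of sociality and EMG: a simple test of monotonicity.
print(stats.spearmanr(sociality, emg))

# Residualize EMG on amusement ratings to check that the audience effect
# is not explained by the vicarious experience of joy.
slope, intercept = np.polyfit(amusement, emg, 1)
emg_residual = emg - (slope * amusement + intercept)
print(stats.spearmanr(sociality, emg_residual))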
In these studies by behavioral ecologists, sociality is defined as “the extent to which individuals can fully interact with each other through the auditory and visual channels of language”
(Chovil, 1991). Our proposal is more ambitious, as it takes sociality to include the state
of knowledge of other individuals, and demonstrates that the “relevance of the information
to others” (defined as a function of the uncertainty states of others) is also an important
component in the expression of facial displays.
Emotional transmission is not contagion
In this thesis, we have also discussed the theoretical legacy of early crowd psychologists
for today’s understanding of the transmission of emotional information in crowds. They
conceptualized emotions as germs or disease, and such metaphors continue to haunt our
representation of the process of emotional transmission. Emotional transmission is indeed
often conceptualized as a passive, mandatory and irrepressible process (e.g., Hatfield et al.,
1994).
However, the conception of emotional transmission as emotional communication rules out
the possibility of the process being mandatory and irrepressible. For any communication
to evolve and be stable, both emitters and receivers must find benefits in communicating.
If receivers had always been receptive to emotional signals in a way which was beneficial
to emitters but detrimental to themselves, emotional communication would have collapsed.
Conversely, if emitters had not found any benefit in communicating, emotional communication would not have been stabilized.
After ruling out conventional hypotheses to account for the stability of emotional communication, we argued (Dezecache, Mercier & Scott-Phillips, 2013 – chapter 2) that the only
mechanisms capable of explaining the stability of emotional communication would suppose
that receivers react flexibly to emitters’ emotional signals, by actively (although not necessarily consciously) evaluating the emitters’ level of benevolence and competence. This
contradicts the very idea that, as in the transmission of disease, emotional transmission
is an inflexible and compulsory process. In fact, many factors can impact and reduce the
phenomenon.
To link this with the wider issue of the transmission of emotion in collective contexts,
the considerations made in Dezecache, Mercier & Scott-Phillips (2013) might also
contribute to explaining the phenomenon of hysterical laughter, as in the case of the “Tanganyika laughter epidemic”, which occurred in Tanzania in 1962 (Rankin & Philip, 1963).
Several schools were then closed following attacks of laughter among pupils. In total, the
spread of laughter lasted eighteen months, reached fourteen schools,
and affected approximately one thousand children. Such cases do not contradict the fact
that emotional transmission is a flexible process but rather confirm it: being the victim of a
laughter attack has little cost and high benefits. Instead of blocking the spread of emotions,
mechanisms of emotional vigilance could have facilitated it.
As contemporary crowd psychologists (Drury, 2002; Reicher et al., 2004) have repeatedly
suggested, an attempt should be made to avoid conceptualizing the transmission of emotional
information in terms of “contagion”. Since Le Bon’s work, disease metaphors have been
contaminating (no pun intended) our understanding of the process of emotional transmission. For scholars investigating the spread of emotions, use of this metaphor
has prejudicial consequences, as it supposes that the process is irrepressible and mandatory.
An effective strategy for provoking some sort of “paradigm shift” among psychologists of emotional
transmission would most probably combine several elements: one would be to avoid the concept of “contagion”; another, to explore empirically the many factors that can restrain the process.
If I may suggest a third, it would be to stress the idea that, since emotional transmission (of fear, joy, and probably other emotions) is, after all, a process of communication of
information, it is flexible in nature: everything depends on the identity of the emitter, on that of the
receiver, and, crucially, on the relevance of the information to be shared by the emitter and
to be adopted by the receiver.
Epilogue: Emotional transmission
beyond triads: implications and
limitations of our findings for the
understanding of emotional crowd
behavior
The work developed in this thesis is based on the notion that early crowd psychologists were
accurate in their accounts of how an emotional crowd behaves, i.e., that emotions could
spread in an irrepressible fashion and that people fled irrationally and with no consideration
for their crowd neighbors. Throughout this dissertation, however, crucial aspects of these
accounts have been undermined. Even if emotions can indeed be spontaneously, involuntarily and subtly transmitted beyond dyads, I have argued that emotional transmission is
not an irrepressible and mandatory process. This view is consistent with modern accounts
of crowd psychology. Their proponents have been trying to debunk the myths of popular
representations of emotional crowd behavior for more than a decade.
Where traditional views might have gone wrong
Revising the key-characteristics of crowd behavior
Let us first of all return to the key characteristics presented in the first chapter, considered
to be accurate descriptions of the dynamics of crowd behavior. Crowds were thought to be
irrational, emotional, suggestible, destructive, spontaneous, anonymous and unanimous.
These characteristics painted an extravagant picture of the crowd and have contributed to
the popular success of “crowd behavior” as a topic of investigation. Crucially, they have also
influenced public order policing (Hoggett & Stott, 2010) and have had tremendous consequences for the scientific investigation of crowd behavior, by virtually imposing ideological
views (Quarantelli, 2001):
“Crowds are the elephant man of the social sciences. They are viewed
as something strange, something pathological, something monstrous.
At the same time they are viewed with awe and with fascination.
However, above all, they are considered to be something apart. We
may choose to go and view them occasionally as a distraction from
the business of everyday life, but they are separate from that business
and tell us little or nothing about normal social and psychological
realities.” Reicher (2001)
Stephen Reicher’s view is consistent with David Schweingruber and Ronald T. Wohlstein’s
(2005) review of numerous sections in sociology textbooks dedicated to crowd behavior. Their
samples show that scientific discourse on crowds is – just like the popular representations
of them – largely contaminated by the majority of the seven key-characteristics mentioned
above. More precisely, and according to Stephen Reicher and Jonathan Potter (1985), lay understanding of crowd behavior and traditional discourse share typical features such as (i) a
constant de-contextualization (the interpretation of crowd movements is detached from their
ideological motives, e.g., rioters appear brutal when one fails to mention the things they are
fighting against; people running for their lives seem to be behaving irrationally when their
reason for doing so is not taken into consideration, etc.), (ii) a serious lack of interest in the
dynamics and internal processes of crowd formation, and (iii) an inappropriate emphasis on
the negative consequences of crowd events. These typical errors must have been committed
intentionally as they form part of a genuine ideological agenda:
“On an ideological level, Le Bon’s ideas serve several functions.
Firstly, it acts as a denial of voice. If crowds articulate grievances
and alternative visions of society - if, in Martin Luther King’s resonant phrase, crowds are the voice of the oppressed - then Le Bonian
psychology silences that voice by suggesting that there is nothing to
hear. Crowd action by definition is pathological, it carries no meaning and has no sense. Secondly, this psychology serves as a denial
of responsibility. One does not need to ask about the role of social
injustices in leading crowds to gather or the role of state forces in
creating conflict. Being outside the picture they are not even available for questioning. Violence, after all, lies in the very nature of
the crowd. Thirdly, Le Bon’s model legitimates repression. Crowds,
having no reason, cannot be reasoned with. The mob only responds
to harsh words and harsh treatment. Like the mass society perspective from which it sprang, but with more elaboration and hence with
more ideological precision, the Le Bonian position defends the status
quo by dismissing any protests against it as instances of pathology.”
Reicher (2001)
In fact, according to sociologist Vincent Rubio (2008), the very concept of a crowd is a longstanding ideological construct (its origins and characteristics can be traced back to Plato’s
Republic), which aims at legitimizing social control and strengthening political order over
recurrent mass movements. Indeed, there appear to be no parameters that would help define
or recognize crowds: a crowd is neither a precise set of n individuals, nor a density of
population within a delimited space. Rather, the word “crowd” might simply be a pejorative
term to designate a group of people we disagree with, and against whom we presume that
repression is justified. The fact that certain social policies in today’s world are indeed based
on traditional assumptions of crowd behavior (Drury, Novelli, & Stott, 2013; Drury, 2002;
Hoggett & Stott, 2010) supports this claim.
Does this mean that the key-characteristics proposed by traditional views are all to be
left behind? For David Schweingruber and Ronald T. Wohlstein (2005), accounts of crowd
behavior would be more accurate if freed from them. In emergency situations, people do
not lose their minds (as the “irrationality” and “emotionality” characteristics predict); they
most frequently react calmly, cooperate actively and attempt to evacuate buildings in an
orderly fashion (Bryan, 1980; Clarke, 2002; Drury, Cocking, & Reicher, 2009a; Johnson,
1987; Keating, 1982). Similarly, the characteristic of “destructiveness” has to be rejected.
Crowds are seldom violent, and antisocial behavior is often the work of isolated, small
groups within the crowd (Reicher, Stott, Cronin, & Adang, 2004; Stott & Reicher, 1998).
Moreover, crowd situations do not appear to favor anonymity, as crowds are often formed of
subgroups composed of individuals known to each other (Aveni, 1977). Finally, “unanimity”
might only be an illusion, resulting from observers having to deal with many individual
behaviors (McPhail, 1991).
Two characteristics, however, seem to resist thorough examination: “spontaneity” (emergency
situations necessarily emerge suddenly and unexpectedly) and “suggestibility”. This latter
characteristic is particularly relevant to the present work, as “suggestibility” has to do with
the power of emotions to spread. Although the concept has long been a pillar of popular
representations of emotional crowd behavior, the idea that suggestibility is a key crowd
characteristic is seriously undermined by modern accounts of crowd behavior. In fact, it
appears that traditional views on crowd behavior have generally neglected social aspects
of the situation (the fact that crowd members share a common social identity that can
decisively modulate the spread of emotions), and put too much stress on the irrepressibility
of emotional transmission in crowds, and its role in structuring crowd behavior.
Are crowd members suggestible?
As set out in the first chapter, three main factors were thought to contribute to the emergence
of emotional crowd behavior: firstly, “deindividuation” hinders fully-fledged self-monitoring
processes in crowd members; secondly, “mental contagion” results in mental and emotional
unanimity among crowd members; finally, “suggestion” restricts the range of ideas and emotions that can be shared within the group. The role played by these factors in the emergence
of crowd behavior has been severely criticized by more recent accounts of such behavior.
Firstly, the idea that being part of a crowd entails deindividuation needs to be revised.
Anonymity can well lead to generous and pro-social behaviors (Johnson & Downing, 1979).
On more theoretical grounds, while advocates of deindividuation consider people as having
one single identity and set of norms, a meta-analysis of sixty studies has proved them wrong
by showing that people in groups take on situation-specific identities and norms. Grouping
does not therefore necessarily entail anti-social behavior; crowd members can, alternatively,
adopt a social identity favoring pro-social conduct (Postmes & Spears, 1998). Be that as it
may, the belief that the collective threatens individuality is still alive and well in academic as
well as non-academic circles.
The second general cause, that of mental contagion, has also been discussed by later works,
and the criticisms made can be summarized as follows: even if it is true that individuals
within crowds could adopt their neighbors’ behavior, they are only likely to do so if they share
the same ideological motives and endorse a similar social identity. In this respect, an analysis
of the St Paul’s riots in Bristol (United Kingdom) in 1980 confirms that rioters reacted
selectively to other group members’ actions, in accordance with their conformity to a shared
ideology and identity. St Paul’s rioters stoned banks and police officers, but immediately
reprimanded isolated crowd members who were targeting public transport buses and other
ideologically irrelevant targets (Reicher, 1984). In fact, far from being anarchic, crowd actions
are complexly structured; mental contagion would therefore be restricted or selective. One
could argue that, as far as fear and joy-based collective behaviors are concerned, it might
not be obvious where and when the sharing of a social identity could play a role in
structuring crowd members’ personal behavior (e.g., why I would choose any other option
but to escape in emergency situations). Again, it has been shown, surprisingly, through field
studies and virtual reality experiments, that social identity in fact promotes coordination
and cooperation in emergency cases and prevents people from panicking (Drury, Cocking,
& Reicher, 2009a, 2009b; Drury et al., 2009). For example, public safety agencies could
encourage solidarity between crowd members in evacuation situations, by priming people
with a collective identity label to be used in public announcements (such as "Parisians",
"Paris Saint-Germain fans", etc.). All in all, mental and emotional transmission
appears to be a more flexible and repressible process in crowds than was
previously thought.
Finally, the third cause, that of “suggestion”, is dubious. More modern accounts of crowd
behavior (such as Reicher, 2001) argue that the range of possible actions is not fixed (as the
so-called racial unconscious described by Gustave Le Bon [1896] suggests) but ultimately
determined by the situation at stake, as well as by the social identity adopted by crowd
members. Being contaminated by your crowd neighbors’ anger is largely dependent on your
own assessment of the situation and how far you approve of their actions (whether or not they
share your political ideas, for instance).
All in all, the whole narrative of crowd behavior needs to be revised, to read as follows: a
newcomer, upon entering the crowd, may or may not adopt the collective identity that has
contributed to its formation; having adopted this identity, he will be contaminated by passing
ideas and emotions (a form of mental “contagion”, if we may say so) that are consistent with
the endorsed social identity. This is the reason why crowds can act pro-socially as well as
anti-socially. Modern accounts of crowd behavior stress the idea that social identity plays an
important role in shaping emotion-based collective behavior. When reading contemporary
crowd psychologists, it would even appear that social identity plays the prominent role in the
emergence of emotional crowd behavior: social identity structures emotions by facilitating
or inhibiting their spread.
For social neuroscientists, this state of affairs can sound surprising as it conflicts with the
somewhat primitive aspect of emotional transmission (Hatfield et al., 1994), a process that
is well beyond voluntary and conscious control. How could social identity shape such a
process? This issue is difficult to tackle, especially because crowd psychologists and social
neuroscientists work at different levels of analysis. However, as I have argued in chapter 2
and in Dezecache, Mercier & Scott-Phillips (2013), emotional transmission is not a mandatory and non-repressible process. Part of the explanation may therefore lie in the presence
of inhibitory mechanisms that operate to slow down or stop the process of emotional transmission when it becomes too costly for receivers. In such cases, inhibition would help receivers select
complementary responses, instead of reacting congruently with the emitter.
Be that as it may, one conclusion reached by modern accounts of emotional crowd behavior seems literally to contradict one of our own. As stated above, people seldom panic
in emergency situations, and yet, we found that fear is spontaneously and unintentionally
transmitted beyond dyads (chapter 2 and Dezecache et al., 2013). Moreover, people seem
to share spontaneously the information that something must be avoided, by unintentionally intensifying their emotional facial activity, when confronted by threatening elements
(chapter 3 and Dezecache et al., in prep.). Our experimental results thus suggest that fear
should spread rapidly, widely and intensely in groups. This would ultimately lead to panic
in crowds, i.e., situations where, because they see or foresee a major physical danger as
imminent and know that escape routes are limited (Quarantelli, 1954), people react with
excessive fear and self-preservation behaviors (such as immediate flight, a behavior that is
selfish in nature) (Mawson, 2005).
The “myth” of crowd panics
The idea that crowds do not panic in emergency situations is counter-intuitive. At a personal
level, I have tried to convince people of this state of affairs on many occasions. Reactions
are always immediate and virulent. People are skeptical, although they cannot remember
ever having been stuck themselves in a panicking crowd. They prefer to try and to prove
you wrong on the basis of the many mass media reports they can recall of ‘stampedes’
occurring all over the world. Needless to say, these reports are ideologically oriented, in the
sense that they rely on traditional views of crowd behavior (Tierney, Bevc, & Kuligowski,
2006). In fact, deaths in crowds are very often due to compressive asphyxia, which itself is
due to space limitations (Fruin, 1993; Helbing, Johansson, & Al-Abideen, 2007; Zhen, Mao,
& Yuan, 2008). Emotional states of “panic” are therefore not necessary elements to explain
deaths in crowd disasters. The laws of physics governing forces are sufficient to explain
how a large group of people can exert physical force that could result in others being fatally
crushed. Moreover, the judgment that panic occurred is often made by observers without the
participants’ point of view being taken into consideration (Fahy, Proulx, & Aiman, 2009).
Participants’ reports concerning mass emergency events do indeed paint a very different
picture of collective reactions to threat.
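Returning to the point about physical forces above, a purely illustrative back-of-the-envelope calculation (the figures below are assumptions chosen for the sake of the example, not measurements taken from the studies cited) shows why no collective emotional state is needed for a dense crowd to become lethal. If each of $N$ tightly packed people leans or pushes forward with a modest force $F_{\text{push}}$, the load borne by a person pressed against a wall or barrier at the front is roughly

$$F_{\text{front}} \;\approx\; N \times F_{\text{push}} \;\approx\; 6 \times 300\,\mathrm{N} \;\approx\; 1800\,\mathrm{N},$$

that is, on the order of two to three times an adult’s body weight applied to the chest; sustained loads of that magnitude are plausibly enough to prevent the chest from expanding, whatever the emotional state of the people involved.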
In this respect, Guylène Proulx and Rita F. Fahy (2004) analyzed 745 first-hand reports
from 435 survivors of the attacks on the World Trade Center in September 2001 (New York,
USA). These personal reports were collected during the year following the attacks, from
newspaper articles, television programs, personal websites, and through email exchanges
with survivors. They were examined through content analysis, where a set of questions
allowed for “interviewing” the report (Johnson, 1987). Questions focused on the means of
exit, the types of cues that gave the survivors information about the disaster, the starting
time and progress of the evacuation, their perception of others, help received from other
survivors and help given. These personal reports revealed that, although the perception of
risk was high (84% of the participants had moderate or full knowledge of what had happened:
they knew something major was happening without necessarily knowing exactly what it was),
and half of the participants reported obstruction during evacuation (e.g., debris, jammed doors,
overcrowding, smoke), mutual help between survivors was frequent (found in 46% of the
reports) and people perceived their fellows as reacting calmly and in an orderly way (57%).
Only one third of the survivors reported that others were upset (crying, shouting, or showing
signs of anxiety or nervousness). In fact, as far as panic-related behavior is concerned, panic
was individual, not collective. It is interesting to note that, even if people sometimes
describe their own behavior as panicky in emergency contexts, objective examination in fact
shows that behavior in such contexts is often rational and prudent (Brennan, 1999).
Many case studies confirm this view. These include Norris R. Johnson’s reassessment of
survivors’ reports from “The Who concert stampede” in Cincinnati, USA in 1979 (Johnson, 1987), attended by more than 18,000 people. Although the 11 deaths by crushing were
reported by the media as being the outcome of a general panic, Johnson showed that the 46
statements (from police officers, employees and private security guards) did not report competition between crowd members for gaining access to safer locations (as could be expected
when collective panic occurs), but instead revealed that people were frequently helpful and
tried to prevent others from being crushed (40% of the reports). This suggests that “social
norms”, rather than being extinguished in crowds, continue to prevail widely and to structure
crowd behavior.
Why don’t crowds panic? Tentative explanations
Emotional crowd behavior is regulated by emerging social norms
How can social norms still prevail in emergency situations? According to social psychologists
John Drury and Stephen Reicher (Drury & Reicher, 2010), “people in a crowd develop a
shared social identity based on their common experience during an emergency. This promotes
solidarity which results in coordinated and beneficial actions”.
Evidence comes from a collection of 45-to-90-minute interviews with survivors of mass emergency events, i.e. situations involving a large number of people, including the presence of a
clear threat, and with limited exit possibilities (Drury et al., 2009a). Major events included
the sinking of the Jupiter in Greece (1988) – where more than 400 people had to escape
a boat which was sinking; four people died in this tragedy –, the evacuation of Canary
Wharf in London, United Kingdom (September 11th, 2001) – where people quickly evacuated skyscrapers after news of the attacks on the World Trade Center in New York, as
similar attacks were foreseen in the business districts of other major cities throughout the
world –, and the Bradford football stadium fire in the United Kingdom (1985) – where fifty-six people died following burns or smoke inhalation. All eleven major events reported in
the study presented ideal conditions for people to panic on a massive scale, i.e., to display
selfish self-preserving behaviors.
After giving free accounts of their story, participants were questioned about their own behavior as well as that of other participants: what they themselves did, how quickly others
evacuated, whether evacuation was smooth, whether people cooperated or helped each other,
or whether they behaved selfishly. Crucially, they were also asked about their identification
with other crowd members: how they felt towards them and whether they felt any sense of
unity with each other.
Coding of the interviews revealed, as expected, that helping was more commonly reported
than selfish, self-preservation behaviors (forty-eight instances of helping behavior reported
vs. nine instances of selfish behaviors). Interestingly, most participants identified strongly
with other crowd members (twelve out of twenty-one). A survivor from the Hillsborough
stadium disaster in Sheffield, United Kingdom (1989), where ninety-six people died,
reported:
“All of a sudden everyone was one in this situ- when when a disaster
happens when a disaster happens, I don’t know, say in the war somesomewhere got bombed it was sort of that old that old English spirit
where you had to club together and help one another, you know, you
had to sort of do what you had to do, sort of join up as a team,
and a good example of that would be when some of the fans got the
hoardings and put the bodies on them and took them over to the
ambulance.”
It should be noted that explicit reference is made to the “old English spirit” which is supposed
to bond crowd members together. As predicted by social identity theorists (Hogg & Williams,
2000; S. Reicher, 2001; Reicher, 1987; Tajfel, 1978), people adopt a specific social identity
depending on the situation. Among these high-social identification reports, 92% reported
a feeling of shared or common fate (vs. 67% for people who weakly identified). They were
also more likely (compared to people who identified weakly) to report a feeling of being
personally endangered (67% vs. 56%). Interestingly, helping was more frequently reported in
high-identification survivors (34 recollections of helping events vs. 14 for low-identification
survivors).
Altogether, these results suggest that the adoption of a social identity, linked to a greater
sense of shared or common fate, favors mutually-supportive behavior between crowd members. Similar results have also been found for survivors of the London bombings in 2005
(Drury et al., 2009b).
Causality between a shared sense of danger and pro-social tendencies is, however, very difficult to establish. Being a survivor of a dramatic event, and having felt a sense of togetherness
with other crowd members could lead to reinterpreting behavior that was self-preserving as
being more altruistically oriented. In other words, survivors could retain a rosier picture
of what happened, merely because they felt good towards others, or because they survived
the disaster. This brings us back to a set of methodological issues which might well explain
why social neuroscientists and social psychologists may disagree when trying to explain why
people do not succumb to massive panic in emergency contexts.
Modern crowd psychologists face serious methodological issues
While social neuroscientists study emotional transmission in an implicit fashion, by measuring largely involuntary responses (such as emotional facial activity,
skin conductance, cardiac rhythm etc.), social psychologists rely on participants’ reports.
They may base their accounts on interviews in newspapers, using partial responses to questions they know nothing about. When they do directly confront survivors, interviews are
conducted many years after the disaster occurred. During this lapse in time, the survivors’
interpretation of events could become greatly distorted. Memory distortion following traumatic events is indeed known to occur (e.g., Schmolck, Buffalo, & Squire, 2000). Because
they have survived a major disaster, participants also tend to give a more positive view of
what actually happened. This would also explain why, in Drury et al.’s study cited above,
people who felt a sense of togetherness with others reported more pro-social behaviors between crowd members. In fact, they might have interpreted behavior in a way which made
their fellow disaster victims appear nicer.
John Drury and colleagues used virtual reality to challenge such a claim. They examined
whether spontaneous behavior in virtual crowds could be consistent with explicit reports
made by crowd disaster survivors (Drury et al., 2009). In three experiments, participants
were immersed in a video game and were required to evacuate, as quickly as possible, a tube
station on fire. They could be hindered in this process by the rest of the crowd. Fortunately,
they were able to push (as many times as they wanted to) in order to make their way to the
exit more quickly. They were even encouraged to do so as exit time was limited: a “danger
of death” gauge on the top of the screen showed time running out. During the course of
their escape, they were confronted by four injured virtual characters that they could help
(at a cost for their escape time), or ignore (at no cost). The overall behavior of participants
was rated according to the number of “pushing” and “helping decisions” they took during
the course of the escape. Results of the three experiments showed that perception of threat
could indeed enhance identification with the group as a whole, and that people who identified
strongly with others pushed less and helped more than those with low identification.
Regrettably, however, no financial incentive was offered to motivate participants to escape
as quickly as possible. On a more general note, using virtual reality experiments to simulate
events where people risk death might only partially reveal how people would actually behave
in real-life situations. Such experimental protocols might therefore fall short of encouraging
the kind of spontaneous and implicit behaviors triggered in actual emergency situations.
One methodological possibility to pursue would be to explore emotional crowd behavior in
emergency contexts using video recordings of real events. Data exist (such as for the pilgrimage of Makkah in Saudi Arabia: Johansson, Helbing, Al-Abideen, & Al-Bosta, 2008)
but, as far as I know, they have not yet been coded to investigate individual behavior
or pro-social and selfish self-preservation tendencies in crowd members. Opportunities for
studying collective reactions to threat could also be found in video recordings of haunted house attractions. The Fear Factory at Niagara Falls, in Canada (see:
http://www.nightmaresfearfactory.com/) uploads photos of people’s reactions to a threatening element each week. Studying and coding people’s behavior towards others (whether they
grip others, whether they are gripped back, and whether they try to escape or seek social
comfort) could provide invaluable data on how people react collectively when confronted
with an immediate threat.
Different levels of analysis at the source of the dilemma
Apart from the methodological issues faced by modern crowd psychologists, another reason
for the discrepancy between social neuroscientists’ perception of emotional “contagiousness”
(or the capacity of emotions to be transmitted in a spontaneous and wide-spread fashion)
and the quasi-absence of emotionally-driven collective behavior in actual crowds reported
by modern accounts of crowd psychology, might lie in the difference in levels of
analysis explored by scholars of each discipline.
When conducting the experiment on emotional transmission beyond dyads (chapter 2; Dezecache et al., 2013), we recorded the activity of certain facial muscles as well as the skin conductance
response. The increased responses of these indices during emotional content only serve to
confirm that emotional information can indeed be transmitted beyond dyads, not that participants will react in a panicky way at the behavioral level. In fact, regulation of affect can
occur, through the operation of inhibitory mechanisms (Kim & Hamann, 2007; Ochsner,
Bunge, Gross, & Gabrieli, 2002; Sagaspe, Schwartz, & Vuilleumier, 2011), and emotional information of fear can well be received and have an impact on the recipient’s facial muscular
and autonomic nervous system activities, but nonetheless lead to a fully-fledged non-self-preserving response at the macroscopic level.
Natural reactions to threat: affiliative tendencies vs. self-preservative
responses
Other possibilities worth exploring are the natural reactions to threat. When re-examined,
they could give clues about the reasons why people do not take flight en masse in emergency
contexts.
According to Anthony R. Mawson (2005), and despite the fact that classical views assume
that typical individual responses to threat are self-preserving in nature (the well-known
“fight or flight” motto, where flight is spontaneously directed towards a safer place, with no
special consideration for others’ behavior) (Quarantelli, 2001), it appears that responses to
threat (in non-human as well as in human animals) are primarily affiliative (people seek out
familiar individuals, Bowlby, 1975) and are not solely guided by the will to flee towards a
safer environment.
During natural disasters (such as a tornado: Form & Nosow, 1958), people were shown to turn
towards loved ones before deciding to flee (Fitzpatrick & Mileti, 1991); in fire emergencies,
people tend to form clusters of familiar individuals (Bryan, 1980, 1985), and, again, it is often
difficult to get people to evacuate, as they tend to wait for all their familiar individuals to be
reunited before considering evacuation (Sime, 1983). Over and above the ties of familiar
individuals, there is ample evidence that people in emergency situations continue to act
in their social role (by, for example, helping weaker people) before evacuating themselves
(Feinberg & Johnson, 2001).
In fact, it appears that crowd members, even if they do take into account the exit possibilities,
tend to move towards familiar persons as well as towards the exits, both of which are signals
of safety (Sime, 1985). Evacuation movements thus tend to be a complex interaction between
movements away from danger and towards places and figures that appear safer.
Such data could resolve our dilemma, as they would explain why, instead of leading to widespread
collective flight, emotional transmission of fear could well be effective by promoting the social
bonding and concern for others that are typically observed in emergency situations.
Summary
This epilogue has revisited the postulates of the traditional investigation of crowd behavior.
By examining the conclusions of modern accounts of crowd behavior, we have seen that, far
from being irrational and overwhelmed by fear and anxiety, people in emergency situations
tend to stay calm and even show pro-sociality towards their fellows. They are even more
prone to do so if they identify with the group as a whole.
This state of affairs appears to contradict the idea that fear can be widely transmitted, and,
more generally, the basic intellectual project of this thesis. This dilemma can, however, be
resolved by stressing that (i) modern crowd psychologists and social neuroscientists study
behavior at different levels of analysis, and that (ii) the idea that fear spreads among large
groups does not mean that people will react with self-preserving behavior (such as fleeing),
as natural fearful responses are mostly affiliative (seeking out familiar places and people).
General references
Aronfreed, J. (1970). The socialization of altruistic and sympathetic behavior: Some theoretical and experimental analyses. Altruism and Helping Behavior, 103–126.
Aveni, A. F. (1977). The Not-So-Lonely Crowd: Friendship Groups in Collective Behavior.
Sociometry, 40(1), 96-99. doi:10.2307/3033551
Bandura, A. (1969). Principles of behavior modification. New York: Holt, Rinehart & Winston.
Bargh, J. A., & Williams, E. L. (2006). The automaticity of social life. Current Directions
in Psychological Science, 15(1), 1–4.
Barrett, L. F. (2011). Was Darwin wrong about emotional expressions? Current Directions
in Psychological Science, 20(6), 400–406.
Bavelas, J. B., Black, A., Lemery, C. R., & Mullett, J. (1986). "I show how you feel": Motor
mimicry as a communicative act. Journal of Personality and Social Psychology, 50(2), 322–329. doi:10.1037/0022-3514.50.2.322
Bernieri, F. J. (1988). Coordinated movement and rapport in teacher-student interactions.
Journal of Nonverbal behavior, 12(2), 120–138.
Bernieri, F. J., & Rosenthal, R. (1991). Interpersonal coordination: Behavior matching and
interactional synchrony. Fundamentals of Nonverbal Behavior, 401.
Bernstein, I. (1990). The New York City Draft Riots: Their Significance for American Society
and Politics in the Age of the Civil War. Oxford University Press.
Bowlby, J. (1975). Attachment and loss, Vol. II: Separation: Anxiety and anger. Penguin.
Brennan, P. (1999). Victims and survivors in fatal residential building fires. Fire and Materials, 23(6), 305–310.
Bryan, J. L. (1980). An examination and analysis of the dynamics of the human behavior
in the MGM Grand Hotel fire, Clark County, Nevada, November 21, 1980. NFPA.
Bryan, J. L. (1985). Convergence clusters: A phenomenon of human behavior seen in selected
high-rise building fires. Fire Journal, 79(6), 27–30.
Bush, L. K., Barr, C. L., McHugo, G. J., & Lanzetta, J. T. (1989). The effects of facial control
and facial mimicry on subjective reactions to comedy routines. Motivation and Emotion,
13(1), 31–52.
Cannavale, F. J., Scarr, H. A., & Pepitone, A. (1970). Deindividuation in the small group:
further evidence. Journal of Personality and Social Psychology, 16(1), 141.
Cappella, J. N. (1981). Mutual influence in expressive behavior: Adult–adult and infant–adult
dyadic interaction. Psychological Bulletin, 89(1), 101.
Cappella, J. N. (1997). Behavioral and judged coordination in adult informal social interactions: Vocal and kinesic indicators. Journal of Personality and Social Psychology, 72(1),
119.
Cappella, J. N., & Planalp, S. (1981). Talk and silence sequences in informal conversations
III: Interspeaker influence. Human Communication Research, 7(2), 117–132.
Chartrand, T. L., & Bargh, J. A. (1999). The chameleon effect: The perception–behavior
link and social interaction. Journal of personality and social psychology, 76(6), 893.
Chovil, N. (1991). Social determinants of facial displays. Journal of Nonverbal Behavior,
15(3), 141-154. doi:10.1007/BF01672216
Chovil, N. (1997). Facing others: A social communicative perspective on facial displays. The
Psychology of Facial Expression, 321.
Clarke, L. (2002). Panic: myth or reality? Contexts, 1(3), 21–26.
Cook, A. (1974). The armies of the streets: the New York City draft riots of 1863. University
Press of Kentucky.
Couch, C. J. (1968). Collective Behavior: An Examination of Some Stereotypes. Social Problems, 15(3), 310-322. doi:10.2307/799787
Couzin, I. (2007). Collective minds. Nature, 445(7129), 715–715.
Couzin, I. D. (2009). Collective cognition in animal groups. Trends in Cognitive Sciences,
13(1), 36–43.
Crockford, C., Wittig, R. M., Mundry, R., & Zuberbühler, K. (2012). Wild Chimpanzees Inform Ignorant Group Members of Danger. Current Biology, 22(2), 142–146. doi:10.1016/j.cub.2011.11.053
De Gelder, B. (2006). Towards the neurobiology of emotional body language. Nature Reviews
Neuroscience, 7(3), 242-249. doi:10.1038/nrn1872
De Gelder, B., Snyder, J., Greve, D., Gerard, G., & Hadjikhani, N. (2004). Fear fosters
flight: a mechanism for fear contagion when perceiving emotion expressed by a whole body.
Proceedings of the National Academy of Sciences of the United States of America, 101(47),
16701–16706.
De Tarde, Gabriel. (1903). La Philosophie Pénale. A. Storck & cie.
De Vignemont, F., & Jacob, P. (2012). What Is It like to Feel Another’s Pain? Philosophy
of Science, 79(2), 295–316.
Derks, D., Fischer, A. H., & Bos, A. E. (2008). The role of emotion in computer-mediated
communication: A review. Computers in Human Behavior, 24(3), 766–785.
Dezecache, G., Conty, L., Chadwick, M., Philip, L., Soussignan, R., Sperber, D., & Grèzes, J.
(2013). Evidence for Unintentional Emotional Contagion Beyond Dyads. PLoS ONE, 8(6),
e67371. doi:10.1371/journal.pone.0067371
Dezecache, G., Mercier, H., & Scott-Phillips, T. C. (2013). An evolutionary approach to
emotional communication. Journal of Pragmatics. doi:10.1016/j.pragma.2013.06.007
Diener, E. (1977). Deindividuation: Causes and consequences. Social Behavior and Person-
ality: an International Journal, 5(1), 143–155.
Diener, E., Fraser, S. C., Beaman, A. L., & Kelem, R. T. (1976). Effects of deindividuation
variables on stealing among Halloween trick-or-treaters. Journal of Personality and Social
Psychology, 33(2), 178.
Dimberg, U. (1982). Facial reactions to facial expressions. Psychophysiology, 19(6), 643–647.
Dimberg, U., Hansson, G. Ö., & Thunberg, M. (1998). Fear of snakes and facial reactions:
A case of rapid emotional responding. Scandinavian Journal of Psychology, 39(2), 75–80.
Dimberg, Ulf, & Thunberg, M. (1998). Rapid facial reactions to emotional facial expressions.
Scandinavian Journal of Psychology, 39(1), 39–45. doi:10.1111/1467-9450.00054
Dimberg, Ulf, Thunberg, M., & Elmehed, K. (2000). Unconscious Facial Reactions to Emotional Facial Expressions. Psychological Science, 11(1), 86–89. doi:10.1111/1467-9280.00221
Dondi, M., Simion, F., & Caltran, G. (1999). Can newborns discriminate between their own
cry and the cry of another newborn infant? Developmental Psychology, 35(2), 418–426.
Drury, J., Cocking, C., & Reicher, S. (2009a). Everyone for themselves? A comparative study
of crowd solidarity among emergency survivors. British Journal of Social Psychology, 48(3),
487–506.
Drury, J., Cocking, C., & Reicher, S. (2009b). The nature of collective resilience: Survivor
reactions to the 2005 London bombings. International Journal of Mass Emergencies and
Disasters, 27(1), 66–95.
Drury, John. (2002). ‘When the mobs are looking for witches to burn, nobody’s safe’: Talking
about the reactionary crowd. Discourse & Society, 13(1), 41–73.
Drury, John, Cocking, C., Reicher, S., Burton, A., Schofield, D., Hardwick, A., . . . Langston,
P. (2009). Cooperation versus competition in a mass emergency evacuation: A new laboratory simulation and a new theoretical model. Behavior Research Methods, 41(3), 957–970.
Drury, John, Novelli, D., & Stott, C. (2013). Representing crowd behaviour in emergency
planning guidance: ‘mass panic’ or collective resilience? Resilience, 1(1), 18–37.
Drury, John, & Reicher, S. (2010). Crowd Control. Scientific American Mind, 21(5), 58–65.
Dunbar, R. I., & Spoors, M. (1995). Social networks, support cliques, and kinship. Human
Nature, 6(3), 273–290.
Ehrenreich, B. (2007). Dancing in the streets: A history of collective joy. Macmillan.
Ekman, P. (1994). Moods, emotions, and traits. The nature of emotion: Fundamental Questions, 56–58.
Ekman, P. (2007). Emotions revealed: Recognizing faces and feelings to improve communication and emotional life. Macmillan.
Evans, H., & Bartholomew, R. E. (2009). Outbreak!: The Encyclopedia of Extraordinary
Social Behavior. Anomalist Books, LLC.
Fahy, R. F., Proulx, G., & Aiman, L. (2009). “Panic” and human behaviour in fire. National
Research Council Canada.
Feinberg, W. E., & Johnson, N. R. (2001). The Ties that Bind: A Macro-Level Approach to
Panic. International Journal of Mass Emergencies and Disasters, 19(3), 269-295.
Fitzpatrick, C., & Mileti, D. S. (1991). Motivating public evacuation. International Journal
of Mass Emergencies and Disasters, 9(2), 137–152.
Form, W. H., & Nosow, S. (1958). Community in disaster. Harper.
Fowler, J. H., & Christakis, N. A. (2008). Dynamic spread of happiness in a large social
network: longitudinal analysis over 20 years in the Framingham Heart Study. BMJ: British
Medical Journal, 337. doi:10.1136/bmj.a2338
Fredrickson, B. L. (1998). What good are positive emotions? Review of General Psychology,
2(3), 300.
Fredrickson, Barbara L. (2004). The broaden-and-build theory of positive emotions. Philosophical Transactions-Royal Society of London Series B Biological Sciences, 1367–1378.
Freud, S. (1912). Recommendations to physicians practising psycho-analysis. In J. Strachey
(Ed.) (Trans. J. Reviene). The standard edition of the complete psychological works of
Sigmund Freud (Vol. 12, p. 115). London: Hogarth Press.
Frick, R. W. (1985). Communicating emotion: The role of prosodic features. Psychological
Bulletin, 97(3), 412.
Fridlund, A. J. (1991). Sociality of solitary smiling: Potentiation by an implicit audience.
Journal of Personality and Social Psychology, 60(2), 229.
Fridlund, Alan J. (1994). Human facial expression: An evolutionary view (Vol. 38). Academic
Press San Diego, CA.
Frijda, N. H. (1993). Moods, emotion episodes, and emotions. In M. Lewis & J. M. Haviland
(Eds.), Handbook of emotions (pp. 381–403). New York, NY, US: Guilford Press.
Frodi, A. M., Lamb, M. E., Leavitt, L. A., Donovan, W. L., Neff, C., & Sherry, D. (1978).
Fathers’ and mothers’ responses to the faces and cries of normal and premature infants.
Developmental Psychology, 14(5), 490.
Fruin, J. J. (1993). The causes and prevention of crowd disasters. Engineering for crowd
safety. New York, 1–10.
Goldenthal, P., Johnston, R. E., & Kraut, R. E. (1981). Smiling, appeasement, and the silent
bared-teeth display. Ethology and Sociobiology, 2(3), 127–133. doi:10.1016/0162-3095(81)90025-X
Green, S., & Marler, P. (1979). The analysis of animal communication. Handbook of Behavioral Neurobiology, 3, 73–158.
Grèzes, J., Pichon, S., & de Gelder, B. (2007). Perceiving fear in dynamic body expressions.
NeuroImage, 35(2), 959?967. doi:10.1016/j.neuroimage.2006.11.030
Grèzes, Julie, & Dezecache, G. (in press). How do shared-representations and emotional pro-
cesses cooperate in response to social threat signals? Neuropsychologia. doi:10.1016/j.neuropsychologia.2013.09.0
Grèzes, Julie, Philip, L., Chadwick, M., Dezecache, G., Soussignan, R., & Conty, L. (2013).
Self-Relevance Appraisal Influences Facial Reactions to Emotional Body Expressions. PLoS
ONE, 8(2), e55885. doi:10.1371/journal.pone.0055885
Hatfield, E., Cacioppo, J. T., & Rapson, R. L. (1994). Emotional contagion. Cambridge
Univ Pr.
Hatfield, Elaine, Cacioppo, J. T., & Rapson, R. L. (1993). Emotional contagion. Current
Directions in Psychological Science, 2(3), 96–99.
Hatfield, Elaine, & Hsee, C. K. (n.d.). The Impact of Vocal Feedback on Emotional Experience and Expression (SSRN Scholarly Paper No. ID 1133985). Rochester, NY: Social Science
Research Network.
Headley, H. J. (1873). The Great Riots of New York.
Hecker, J. F. C., & Babington, B. G. (1977). The dancing mania of the Middle Ages. University Microfilms.
Helbing, D., Johansson, A., & Al-Abideen, H. Z. (2007). Crowd turbulence: the physics of
crowd disasters (arXiv e-print No. 0708.3339).
Hill, A. L., Rand, D. G., Nowak, M. A., & Christakis, N. A. (2010). Emotions as infectious
diseases in a large social network: the SISa model. Proceedings of the Royal Society B:
Biological Sciences. doi:10.1098/rspb.2010.1217
Hogg, M. A., & Williams, K. D. (2000). From I to we: Social identity and the collective self.
Group Dynamics: Theory, Research, and Practice, 4(1), 81-97. doi:10.1037/1089-2699.4.1.81
Hoggett, J., & Stott, C. (2010). The role of crowd theory in determining the use of force in
public order policing. Policing and Society, 20(2), 223-236. doi:10.1080/10439461003668468
Hsee, C. K., Hatfield, E., & Chemtob, C. (1992). Assessments of the Emotional States of
Others: Conscious Judgments Versus Emotional Contagion. Journal of Social and Clinical
Psychology, 11(2), 119–128. doi:10.1521/jscp.1992.11.2.119
Izard, C. E. (1971). The face of emotion. Appleton-Century-Crofts.
Izard, C. E. (1977). Human Emotions. New York: Plenum Press.
Johansson, A., Helbing, D., Al-Abideen, H. Z., & Al-Bosta, S. (2008). From crowd dynamics
to crowd safety: A video-based analysis. Advances in Complex Systems, 11(04), 497-527.
doi:10.1142/S0219525908001854
Johnson, N. R. (1987). Panic at “The Who concert stampede”: An empirical assessment.
Social Problems, 362–373.
Johnson, R. D., & Downing, L. L. (1979). Deindividuation and valence of cues: Effects
on prosocial and antisocial behavior. Journal of Personality and Social Psychology, 37(9),
1532-1538. doi:10.1037/0022-3514.37.9.1532
Jung, C. G. (1968). Lecture five. Analytical psychology: Its theory and practice (pp. 151–160). New York: Random House.
Keating, J. P. (1982). The myth of panic. Fire Journal, 76(3), 57–61.
Kerckhoff, A. C., & Back, K. W. (1968). The June bug; a study of hysterical contagion.
Appleton-Century-Crofts New York.
Kim, S. H., & Hamann, S. (2007). Neural Correlates of Positive and Negative Emotion Regulation. Journal of Cognitive Neuroscience, 19(5), 776-798. doi:10.1162/jocn.2007.19.5.776
Kraut, R. E., & Johnston, R. E. (1979). Social and emotional messages of smiling: An
ethological approach. Journal of Personality and Social Psychology, 37(9), 1539.
Krebs, J. R., Dawkins, R., & others. (1984). Animal signals: mind-reading and manipulation.
Behavioural Ecology: An Evolutionary Approach, 2, 380–402.
Laird, J. D. (1984). The real role of facial response in the experience of emotion: A reply to
Tourangeau and Ellsworth, and others. Journal of Personality and Social Psychology, Vol
40(2), Feb 1981, 355-357.
Lanzetta, J. T., & Orr, S. P. (1980). Influence of facial expressions on the classical conditioning of fear. Journal of Personality and Social Psychology, 39(6), 1081.
Le Bon, G. (1895). Psychologie des foules. Macmillan.
Luminet, O., Bouts, P., Delie, F., Manstead, A. S. R., & Rimé, B. (2000). Social sharing of emotion following exposure to a negatively valenced situation. Cognition & Emotion, 14(5), 661–688. doi:10.1080/02699930050117666
Rankin, A. M., & Philip, P. J. (1963). An epidemic of laughing in the Bukoba district of Tanganyika. Central African Journal of Medicine, 9(5), 167–170.
Mackay, C. (2004). Extraordinary popular delusions and the madness of crowds. Barnes &
Noble Publishing.
Madsen, E. A., Tunney, R. J., Fieldman, G., Plotkin, H. C., Dunbar, R. I. M., Richardson,
J.-M., & McFarland, D. (2007). Kinship and altruism: A cross-cultural experimental study.
British Journal of Psychology, 98(2), 339–359. doi:10.1348/000712606X129213
Malatesta, C. Z., & Haviland, J. M. (1982). Learning display rules: The socialization of
emotion expression in infancy. Child Development, 991–1003.
Marcoccia, M. (2000). Les smileys: une représentation iconique des émotions dans la communication médiatisée par ordinateur. Les émotions dans les interactions communicatives.
Lyon: Presses universitaires de Lyon, 249–263.
Mawson, A. R. (2005). Understanding mass panic and other collective responses to threat and disaster. Psychiatry, 68(2), 95–113.
Maynard-Smith, J., & Harper, D. (2004). Animal signals. Oxford University Press, USA.
McPhail, C. (1991). The myth of the madding crowd. Transaction Books.
Mehu, M., & Dunbar, R. I. M. (2008). Relationship between Smiling and Laughter in Humans (Homo sapiens): Testing the Power Asymmetry Hypothesis. Folia Primatologica, 79(5), 269–280. doi:10.1159/000126928
Mehu, M., Little, A. C., & Dunbar, R. I. M. (2007). Duchenne smiles and the perception
of generosity and sociability in faces. Journal of Evolutionary Psychology, 5(1), 183-196.
doi:10.1556/JEP.2007.1011
Moody, E. J., McIntosh, D. N., Mann, L. J., & Weisser, K. R. (2007). More than mere
mimicry? The influence of emotion on rapid facial reactions to faces. Emotion (Washington,
D.C.), 7(2), 447-457. doi:10.1037/1528-3542.7.2.447
Moscovici, S. (1993). La crainte du contact. Communications, 57(1), 35–42.
Moscovici, S. (2005). L’âge des foules. Hachette.
Mujica-Parodi, L. R., Strey, H. H., Frederick, B., Savoy, R., Cox, D., Botanov, Y., . . .
Weber, J. (2009). Chemosensory Cues to Conspecific Emotional Stress Activate Amygdala
in Humans. PLoS ONE, 4(7), e6415. doi:10.1371/journal.pone.0006415
Ochsner, K. N., Bunge, S. A., Gross, J. J., & Gabrieli, J. D. E. (2002). Rethinking feelings:
an FMRI study of the cognitive regulation of emotion. Journal of Cognitive Neuroscience,
14(8), 1215-1229. doi:10.1162/089892902760807212
Postmes, T., & Spears, R. (1998). Deindividuation and antinormative behavior: A meta-analysis. Psychological Bulletin, 123(3), 238.
Pratt, J. B. (1920). Crowd psychology and revivals. New York, NY, US: MacMillan Co.
Proulx, G., & Fahy, R. F. (2004). Account analysis of WTC survivors. In Proceedings of
the 3rd International Symposium on Human Behaviour in Fire, Belfast, UK, September (p.
01–03).
Quarantelli, E. L. (2001). The Sociology Of Panic. Smelser and Baltes (eds) International
Encyclopedia of the Social and Behavioral Sciences.
Quarantelli, Enrico L. (1954). The nature and conditions of panic. American Journal of
Sociology, 267–275.
Raafat, R. M., Chater, N., & Frith, C. (2009). Herding in humans. Trends in Cognitive
Sciences, 13(10), 420–428.
Reed, L. I., Zeglen, K. N., & Schmidt, K. L. (2012). Facial expressions as honest signals of cooperative intent in a one-shot anonymous Prisoner's Dilemma game. Evolution and Human Behavior, 33(3), 200–209. doi:10.1016/j.evolhumbehav.2011.09.003
Reicher, S. (1996). 'The Crowd' century: Reconciling practical success with theoretical failure.
British Journal of Social Psychology, 35(4), 535–553.
Reicher, S. (2001). The psychology of crowd dynamics. Blackwell Handbook of Social Psychology: Group Processes, 182–208.
Reicher, S. D. (1984). The St. Pauls’ riot: An explanation of the limits of crowd action
in terms of a social identity model. European Journal of Social Psychology, 14(1), 1–21.
doi:10.1002/ejsp.2420140102
Reicher, S., & Potter, J. (1985). Psychological theory as intergroup perspective: A comparative analysis of "scientific" and "lay" accounts of crowd events. Human Relations, 38(2),
167–189.
Reicher, S., Stott, C., Cronin, P., & Adang, O. (2004). An integrated approach to crowd
psychology and public order policing. Policing: An International Journal of Police Strategies
& Management, 27(4), 558–572.
Reicher, S. D. (1987). Crowd behaviour as social action. In Rediscovering the social group: A self-categorization theory (pp. 171–202). Cambridge, MA, US: Basil Blackwell.
Rendall, D., Owren, M. J., & Ryan, M. J. (2009). What do animal signals mean? Animal
Behaviour, 78(2), 233–240.
Rendall, D., & Owren, M. J. (2010). Chapter 5.4 – Vocalizations as tools for influencing the affect and behavior of others. In S. M. Brudzynski (Ed.), Handbook of Behavioral Neuroscience (Vol. 19, pp. 177–185). Elsevier.
Rimé, B., Corsini, S., & Herbette, G. (2002). Emotion, verbal expression, and the social
sharing of emotion. In The verbal communication of emotions: Interdisciplinary perspectives
(p. 185-208). Mahwah, NJ, US: Lawrence Erlbaum Associates Publishers.
Robert, A. (2012). Contagion morale et transmission des maladies: histoire d’un chiasme
(xiiie-xixe siècle). Tracés, (2), 41–60.
Rubio, V. (2008). La foule: un mythe républicain? Vuibert.
Rubio, V. (2010). La Foule. Réflexions autour d’une abstraction. Conserveries mémorielles.
Revue Transdisciplinaire de Jeunes Chercheurs, (#8).
Sagaspe, P., Schwartz, S., & Vuilleumier, P. (2011). Fear and stop: a role for the amygdala in motor inhibition by emotional signals. NeuroImage, 55(4), 1825–1835. doi:10.1016/j.neuroimage.2011.01.027
Sauter, D. A., Eisner, F., Ekman, P., & Scott, S. K. (2010). Cross-cultural recognition
of basic emotions through nonverbal emotional vocalizations. Proceedings of the National
Academy of Sciences. doi:10.1073/pnas.0908239106
Scherer, K. R. (2009). Emotions are emergent processes: they require a dynamic computational architecture. Philosophical Transactions of the Royal Society B: Biological Sciences,
364(1535), 3459–3474.
Schmolck, H., Buffalo, E. A., & Squire, L. R. (2000). Memory Distortions Develop Over Time: Recollections of the O.J. Simpson Trial Verdict After 15 and 32 Months. Psychological Science, 11(1), 39–45. doi:10.1111/1467-9280.00212
Schoenewolf, G. (1990). Emotional contagion: Behavioral induction in individuals and groups.
Modern Psychoanalysis, 15(1), 49-61.
Schweingruber, D., & Wohlstein, R. T. (2005). The Madding Crowd Goes to School: Myths about Crowds in Introductory Sociology Textbooks. Teaching Sociology, 33(2), 136–153.
Scott-Phillips, T. C. (2008). Defining biological communication. Journal of Evolutionary
Biology, 21(2), 387–395.
Scott-Phillips, T. C. (2010). Animal communication: insights from linguistic pragmatics.
Animal Behaviour, 79(1), 1.
Seyfarth, R. M., Cheney, D. L., & Marler, P. (1980). Vervet monkey alarm calls: semantic
communication in a free-ranging primate. Animal Behaviour, 28(4), 1070–1094.
Sighele, S. (1901). La foule criminelle: Essai de psychologie collective. F. Alcan.
Sime, J. D. (1983). Affiliative behaviour during escape to building exits. Journal of Environmental Psychology, 3(1), 21–41.
Sime, J. D. (1985). Movement toward the Familiar Person and Place Affiliation in a Fire Entrapment Setting. Environment and Behavior, 17(6), 697–724. doi:10.1177/0013916585176003
Simner, M. L. (1971). Newborn’s response to the cry of another infant. Developmental
Psychology, 5(1), 136-150. doi:10.1037/h0031066
Smith, A. (2010). The theory of moral sentiments. Penguin.
Smith, W. J. (1980). The behavior of communicating: an ethological approach. Harvard
University Press.
Soussignan, R., Chadwick, M., Philip, L., Conty, L., Dezecache, G., & Grèzes, J. (2013). Self-relevance appraisal of gaze direction and dynamic facial expressions: Effects on facial electromyographic and autonomic reactions. Emotion, 13(2), 330–337. doi:10.1037/a0029892
Stahl, S. M., & Lebedun, M. (1974). Mystery Gas: An Analysis of Mass Hysteria. Journal
of Health and Social Behavior, 15(1), 44-50. doi:10.2307/2136925
Stepper, S., & Strack, F. (1993). Proprioceptive determinants of emotional and nonemotional feelings. Journal of Personality and Social Psychology, 64(2), 211–220. doi:10.1037/0022-3514.64.2.211
Sterelny, K. (2011). From hominins to humans: how sapiens became behaviourally modern. Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1566), 809–822. doi:10.1098/rstb.2010.0301
Stott, C., & Reicher, S. (1998). Crowd action as intergroup process: Introducing the police
perspective. European Journal of Social Psychology, 28(4), 509–529.
Susskind, J. M., Lee, D. H., Cusi, A., Feiman, R., Grabski, W., & Anderson, A. K. (2008).
Expressing fear enhances sensory acquisition. Nature Neuroscience, 11(7), 843–850.
Tajfel, H. E. (1978). Differentiation between social groups: Studies in the social psychology
of intergroup relations. Academic Press.
Tarde, G. (1890). Les lois de l’imitation: étude sociologique. Félix Alcan.
Tierney, K., Bevc, C., & Kuligowski, E. (2006). Metaphors matter: Disaster myths, media
frames, and their consequences in Hurricane Katrina. The Annals of the American Academy
of Political and Social Science, 604(1), 57–81.
Uchino, B., Hsee, C. K., Hatfield, E., Carlson, J. G., & Chemtob, C. (1991). The effect of
expectations on susceptibility to emotional contagion. Unpublished manuscript, University
of Hawaii, Hawaii.
Vermeulen, N., Godefroid, J., & Mermillod, M. (2009). Emotional Modulation of Attention: Fear Increases but Disgust Reduces the Attentional Blink. PLoS ONE, 4(11), e7924.
doi:10.1371/journal.pone.0007924
Wessely, S. (1987). Mass hysteria: two syndromes? Psychological Medicine, 17(01), 109–120.
Wiesenfeld, A. R., Malatesta, C. Z., & Deloach, L. L. (1981). Differential parental response to
familiar and unfamiliar infant distress signals. Infant Behavior and Development, 4, 281-295.
doi:10.1016/S0163-6383(81)80030-6
Zhen, W., Mao, L., & Yuan, Z. (2008). Analysis of trample disaster and a case study – Mihong bridge fatality in China in 2004. Safety Science, 46(8), 1255–1270. doi:10.1016/j.ssci.2007.08.002
Zimbardo, P. G. (1969). The human choice: Individuation, reason, and order versus deindividuation, impulse, and chaos. In Nebraska Symposium on Motivation.
Appendix
GUILLAUME DEZECACHE
Born 29 June 1987 | French citizenship | +33 (0)6 86 50 00 21 | [email protected]
EDUCATION
Université Pierre & Marie Curie (Paris, France)
PhD in Cognitive Science
2010 - present
University of Oxford (Oxford, UK)
Visiting Student in Cognitive & Evolutionary Anthropology
2011 - 2012
Ecole des Hautes Etudes en Sciences Sociales (Paris, France)
MA of Cognitive Science with highest honours
2008 - 2010
Université Lille 3 (Lille, France)
BA of Philosophy with honours
2005 - 2008
RESEARCH
Institute of Cognitive Studies – Ecole Normale Supérieure (Paris, France), 2010 - present
PhD position in Cognitive Science (36 months)
Topic: Studies on basic emotional signaling systems in human crowds
Supervision: Julie Grèzes & Pierre Jacob, in collaboration with Dan Sperber
Techniques: Electromyography and physiological recordings, behavioral experiments, theoretical work

Chimfunshi Wildlife Orphanage (Chimfunshi, Zambia), 2012
Research intern in Primatology (1 month)
Topic: Multimodal communication in semi-wild chimpanzees
Supervision: Marina Davila-Ross
Technique: Video recording
Institute of Cognitive and Evolutionary Anthropology – University of Oxford (Oxford, United Kingdom), 2011 - 2012
Visiting student in Cognitive & Evolutionary Anthropology (5 months)
Topic: Laughter and social bonding in humans
Supervision: Robin Dunbar
Technique: Direct observation
Institute of Cognitive Studies – Ecole Normale Supérieure (Paris, France)
Research intern in Social Neuroscience (9 months)
Topic: Transitive emotional contagion in humans
Supervision: Julie Grèzes, Dan Sperber & Laurence Conty
Technique: Electromyography and physiological recordings
2009 - 2010
Institute of Cognitive Studies – Ecole Normale Supérieure (Paris, France)
Research intern in Social Neuroscience (6 months)
Topic: The spatiotemporal integration of visual social cues
Supervision: Laurence Conty & Julie Grèzes
Technique: EEG under fMRI
2009
Institute of Cognitive Studies – Ecole Normale Supérieure (Paris, France)
Research intern in Philosophy of Psychology (5 months)
Topic: The cognitive mechanisms of collective action
Supervision: Elisabeth Pacherie
Technique: Documentary work
2009
Institute of Cognitive Studies – Ecole Normale Supérieure (Paris, France)
Research intern in Developmental Psychology (5 months)
Topic: The sense of justice in young children (3-to-5 y.o.)
Supervision: Nicolas Baumard & Dan Sperber
Technique: Paper-and-pencil
2009
AWARDS & GRANTS
Fyssen postdoctoral fellowship (Fyssen Foundation), 2014 - 2016
Full Ph.D. scholarship (French Ministry of Defense), 2010 - present
Mobi'Doc International mobility fund (Île-de-France Regional Council), 2012
Ph.D. offer (London School of Economics), 2010
Merit scholarship (French Ministry of National Education), 2008
PUBLICATIONS
Grèzes J. & Dezecache G. (in press). How do shared-representations and emotional processes cooperate
in response to threat signals? Neuropsychologia.
Dezecache G., Mercier H. & Scott-Phillips T. (2013). An evolutionary perspective on emotional communication. Journal of Pragmatics.
Dezecache G., Conty L. & Grèzes J. (2013). Social affordances: is the mirror neuron system involved?
Commentary on target article of Schilbach and colleagues. Behavioral and Brain Sciences, 36(4), 417-418.
Dezecache G., Conty L., Philip L., Chadwick M., Soussignan R., Sperber D. & Grèzes J. (2013).
Evidence for unintentional emotional contagion beyond dyads. PLoS ONE, 8(6):e67371.
doi:10.1371/journal.pone.0067371
Dezecache G. & Grèzes J. (2013). La communication émotionnelle ou le jeu des affordances sociales.
Santé Mentale. 177, 26-31.
Soussignan R., Chadwick M., Philip L., Conty L., Dezecache G. & Grèzes J. (2013). Self-relevance appraisal
of gaze direction and dynamic facial expressions: effects on facial electromyographic and autonomic
reactions. Emotion, 13(2), 330-337. doi:10.1037/a0029892
Grèzes J., Philip L., Chadwick M., Dezecache G., Soussignan R. & Conty L. (2013). Self-relevance appraisal
influences facial reactions to emotional body expressions. PLoS ONE, 8(2):e55885.
doi:10.1371/journal.pone.0055885
Dezecache G. & Dunbar R.I.M. (2012). Sharing the joke: the size of natural laughter groups. Evolution &
Human Behavior, 33(6), 775-779. doi:10.1016/j.evolhumbehav.2012.07.002
Grèzes J. & Dezecache G. (2012). Communication émotionnelle: mécanismes cognitifs et cérébraux. In P.
Allain, G. Aubin & D. Le Gall (Eds). Cognition Sociale et Neuropsychologie. Solal: Marseille.
Conty L., Dezecache G., Hugueville L. & Grèzes J. (2012). Early binding of gaze, gesture and emotion:
neural time course and correlates. Journal of Neuroscience, 32(13), 4531-4539.
doi:10.1523/JNEUROSCI.5636-11.2012
Dezecache G. (2009). Phénoménologie et sciences cognitives : une psychologie du cognitiviste?
Methodos, 9.
TALKS
The proximate mechanisms and biological function of emotional
transmission of fear and joy in humans
International Symposium on Vision, Action and Concepts
(Tourcoing, France)
10/2013
Mise en évidence de la contagion émotionnelle
Journée interdisciplinaire PSL (Paris, France)
05/2013
La contagiosité des émotions : études sur les basiques de partage émotionnel
chez l’humain
Séminaire doctoral d’études cognitives (Lyon, France)
03/2013
Studies on basic emotional signaling systems in humans
Psychology seminar of the University of Portsmouth (Portsmouth, UK)
09/2012
Sharing the joke: the size of natural laughter groups
Séminaire doctoral de l’Institut Jean Nicod (Paris, France)
06/2012
La contagiosité des émotions: études sur les systèmes basiques de
partage de l'information émotionnelle chez l'humain
Journée IHPST/IJN (Paris, France)
06/2012
Emotional contagion beyond dyads
CERE Emotion Conference (Canterbury, UK)
05/2012
w/ Olivier Morin Ce qui manque aux émotions contagieuses pour être
morales, et aux émotions morales pour être contagieuses
Journée d'étude Morale et Cognition: Les Emotions (Nanterre, France)
09/2011
Evidence for transitive emotional transmission in humans
9th Annual London Evolutionary Research Network Conference.
(London, UK)
11/2011
Evidence for transitive emotional transmission through facial signaling
23rd Annual Conference of the Human Behavior and Evolution Society.
(Montpellier, France)
06/2011
w/ Julie Grèzes Bases cérébrales de la communication émotionnelle
Colloque L'Empathie (Cerisy-la-Salle, France)
06/2011
w/ Julie Grèzes Emotional signalling as cooperation
Workshop Coordination and Cooperation: Game-Theoretical and
Cognitive Perspectives (Paris, France)
05/2011
Transitive emotional contagion
Workshop LNC (Paris, France)
02/2011
w/ Hugo Mercier Communication and emotion: an evolutionary
approach
Communication and Cognition (Neuchâtel, Switzerland)
01/2011
POSTER PRESENTATIONS
Dezecache G., Hobeika L., Conty L. & Grèzes J. “Is communication the biological function of spontaneous
emotional facial reactions?”. Minds in Common: 2nd Aarhus-Paris Conference on Coordination and
Common Ground. Paris (France). June 2013.
Dezecache G., Hobeika L., Conty L. & Grèzes J. “Is communication the biological function of spontaneous
emotional facial reactions?”. Embodied Inter-subjectivity: the 1st-person and the 2nd-person
perspective: an interdisciplinary summer school. Aegina (Greece). June 2013.
Dezecache G., Gay F., Conty L. & Grèzes J. “Emotions' contagiosity and cognitive interference: what can
we learn from a modified STROOP task?”. IPSEN Conference New Frontiers in Social Neuroscience. Paris
(France). April 2013.
Dezecache G. & Dunbar R.I.M. “Size and structure of spontaneous laughter groups”. CERE Emotion
Conference 2012. Canterbury (UK). May 2012.
Dezecache G. & Grèzes J. “'Take care of me!': what do emotional expressions mean?”. Colloque Le
Cerveau Social. Saint-Denis (France). May 2011.
Dezecache G., Conty L., Chadwick M., Philip L., Sperber D. & Grèzes J. "'That she makes you happy makes
me happy': an experiment of transitive emotional contagion (preliminary results)”. Colloque Le Cerveau
Social. Saint-Denis (France). May 2011.
Dezecache G., Grèzes J. & Jacob P. “An evolutionary perspective on emotional contagion”. Journée
scientifique des doctorants de l’ED3C. Paris (France). March 2011.
Dezecache G. & Mercier H. “Emotional vigilance: how to cope with the dangers of emotional signals?”
8th Annual Conference London Evolutionary Research Network Conference. London (UK). November
2010.
Dezecache G., Conty L., Chadwick M., Sperber D. & Grèzes J. “I fear your fear of her/his fear: an
experiment of transitive emotional contagion (preliminary results)” Conference on Shared Emotions,
Joint Attention and Joint Action. Aarhus (Denmark). October 2010.
Dezecache G. & Mercier H. “Emotional vigilance: how to cope with the dangers of emotional signals?”
1st Interdisciplinary Meeting of the DEC. Paris (France). October 2010.
Dezecache G., Conty L., Chadwick M., Sperber D. & Grèzes J. “I fear your fear of her/his fear: an
experiment of transitive emotional contagion (preliminary results)”. 1st Interdisciplinary Meeting of the
DEC. Paris (France). October 2010.
TEACHING
UVSQ (Versailles, France), 2012 & 2013
Evolution, Psychology & Culture (72h)
Undergraduate students

UCBL (Lyon, France), 2012 & 2013
Introduction to Evolutionary Psychology and Human Behavioral Ecology (6h)
Medical students

Université Paris 8 (Saint-Denis, France), 2012 & 2013
Emotional contagion (2h)
Undergraduate students

Université Paris 10 (Nanterre, France), 2012
Emotional contagion: cognitive mechanisms and neural basis (2h)
Postgraduate students
The Journal of Neuroscience, March 28, 2012 • 32(13):4531– 4539 • 4531
Behavioral/Systems/Cognitive
Early Binding of Gaze, Gesture, and Emotion: Neural Time
Course and Correlates
Laurence Conty (1,2), Guillaume Dezecache (2), Laurent Hugueville (4), and Julie Grèzes (2,3)
(1) Laboratory of Psychopathology and Neuropsychology, Université Paris 8, 93526 Saint-Denis, France; (2) Laboratory of Cognitive Neuroscience, Inserm, Unité 960, Ecole Normale Supérieure, 75005 Paris, France; (3) Centre de NeuroImagerie de Recherche, 75651 Paris, France; and (4) Université Pierre et Marie Curie-Paris 6, Centre de Recherche de l'Institut du Cerveau et de la Moelle Epinière, Unité Mixte de Recherche S975, 75013 Paris, France
Communicative intentions are transmitted by many perceptual cues, including gaze direction, body gesture, and facial expressions.
However, little is known about how these visual social cues are integrated over time in the brain and, notably, whether this binding occurs
in the emotional or the motor system. By coupling magnetic resonance and electroencephalography imaging in humans, we were able to
show that, 200 ms after stimulus onset, the premotor cortex integrated gaze, gesture, and emotion displayed by a congener. At earlier
stages, emotional content was processed independently in the amygdala (170 ms), whereas directional cues (gaze direction with pointing
gesture) were combined at ~190 ms in the parietal and supplementary motor cortices. These results demonstrate that the early binding
of visual social signals displayed by an agent engaged the dorsal pathway and the premotor cortex, possibly to facilitate the preparation
of an adaptive response to another person’s immediate intention.
Introduction
During social interactions, facial expressions, gaze direction, and
gestures are crucial visual cues to the appraisal other people’s
communicative intentions. The neural bases for the perception of
each of these social signals has been provided but mostly separately (Haxby et al., 2000; Rizzolatti et al., 2001; Hoffman et al.,
2007). However, these social signals can take on new significance
once merged. In particular, processing of these social signals will
vary according to their self-relevance, e.g., when coupled with
direct gaze, angry faces are perceived to be more threatening
(Sander et al., 2007; Hadjikhani et al., 2008; N'Diaye et al., 2009;
Sato et al., 2010). So far, it remains unclear how these social
signals are integrated in the brain.
At the neural level, there is some evidence that emotion and
gaze direction interact in the amygdala (Adams and Kleck, 2003;
Hadjikhani et al., 2008; N'Diaye et al., 2009; Sato et al., 2010), a
key structure for the processing of emotionally salient stimuli
(Adolphs, 2002). The amygdala may thus sustain early binding of
visually presented social signals. Electroencephalography (EEG)
studies suggest that the interaction between emotion and gaze
direction occurs at ~200–300 ms (Klucharev and Sams, 2004; Rigato et al., 2010), but direct implication of the amygdala in such a mechanism has yet to be provided.

Received Nov. 8, 2011; revised Jan. 16, 2012; accepted Jan. 18, 2012.
Author contributions: L.C. and J.G. designed research; L.C. and G.D. performed research; L.H. contributed unpublished reagents/analytic tools; L.C. and J.G. analyzed data; L.C. and J.G. wrote the paper.
This work was supported by the European Union Research Funding NEST Program Grant FP6-2005-NEST-Path Imp 043403, Inserm, and Ecole de Neuroscience de Paris and Région Ile-de-France.
The authors declare no competing financial interests.
Correspondence should be addressed to either of the following: Dr. Laurence Conty, Laboratory of Psychopathology and Neuropsychology, EA 2027, Université Paris 8, 2 rue de la Liberté, 93526 Saint-Denis, France, E-mail: [email protected]; or Dr. Julie Grèzes, Cognitive Neuroscience Laboratory, Inserm, Unité 960, Ecole Normale Supérieure, 29 Rue d'Ulm, 75005 Paris, France, E-mail: [email protected].
DOI:10.1523/JNEUROSCI.5636-11.2012
Copyright © 2012 the authors 0270-6474/12/324531-09$15.00/0
It has also been established that, when one observes other
people’s bodily actions, there is activity in motor-related cortical
areas (Grèzes and Decety, 2001; Rizzolatti et al., 2001) and that
activity reaches these areas 150 –200 ms after the onset of a perceived action (Nishitani and Hari, 2002; Caetano et al., 2007;
Tkach et al., 2007; Catmur et al., 2010). Its activity being modulated by social relevance (Kilner et al., 2006) and by eye contact
(Wang et al., 2011), the motor system is thus another good neural
candidate for the integration of social cues.
Here, we set out to experimentally address whether the emotional system or the motor system sustains early binding of social
cues and when such an operation occurs. We manipulated three
visual cues that affect the appraisal of the self-relevance of social
signals: gaze direction, emotion, and gesture. To induce a parametric variation of self-involvement at the neural level, our experimental design capitalized on the ability to change the number
of social cues displayed by the actors toward the self (see Fig. 1a),
i.e., one (gaze direction only), two (gaze direction and emotion or
gaze direction and gesture), or three (gaze direction, emotion,
and gesture) visual cues. We then combined functional magnetic
resonance imaging (fMRI) with EEG [recording of event-related
potentials (ERPs)] to identify the spatiotemporal characteristics
of social cues binding mechanism. First, we analyzed the ERPs to
identify the time course of early binding of social cues. We expected a temporal marker of their integration at ~200 ms
(Klucharev and Sams, 2004; Rigato et al., 2010). Then, we quantified the parametric variation of self-involvement on the neural
sources of the ERPs by combining the ERPs with fMRI data.
Materials and Methods
Participants. Twenty-two healthy volunteers (11 males, 11 females; mean
age, 25.0 ± 0.5 years) participated in an initial behavioral pretest to validate the parametric variation of self-involvement in our paradigm. Twenty-one healthy volunteers participated in the final experiment (11 males, 10 females; mean age, 23.4 ± 0.5 years). All participants had normal or corrected-to-normal vision, were right-handed, and had no neurological or psychiatric history.

Figure 1. Experimental design and stimuli examples. a, Factorial design. The actors displayed direct or averted gaze, angry or neutral facial expression, and a pointing gesture or not. From the initial position, one (gaze direction only), two (gaze direction and emotional expression or gaze direction and gesture), or three (gaze direction, emotional expression, and gesture) visual cues could change. b, Time course of a trial. Before stimuli presentation, a central fixation area appeared for 500 ms at the same level as that at which the actor's head subsequently appeared. Participants were instructed to focus their attention on the actor's face, to avoid saccades and eyeblinks during the duration of the trial, and to judge whether or not the actor's nonverbal behavior was directed at them.
Stimuli. Stimuli consisted of photographs of 12 actors (six males). For
each actor, three social parameters were manipulated: (1) gaze direction
[head, eye gaze, and bust directed toward the participant (direct gaze
condition) or rotated by 30° to the left (averted gaze condition)]; (2)
emotion (neutral or angry); and (3) gesture (pointing or not pointing).
This manipulation resulted for each actor in eight conditions of interest.
For each of the actors, we created an additional photograph in which they
had a neutral expression, arms by their sides, and an intermediate eye
direction of 15°. This position was thereafter referred to as the “initial
position.” For all stimuli, right- and left-side deviation was obtained by
mirror imaging. Thus, each actor was seen under 16 conditions: 2 gaze
directions (direct/averted) × 2 emotions (anger/neutral) × 2 gestures (pointing/no pointing) × 2 directions of gaze deviation (rightward/leftward), resulting in 192 stimuli. For each photograph, the actor's body was cut and pasted on a uniform gray background and displayed in 256 colors. Each stimulus was shown in such a way that the actor's face covered the participant's central vision (<6° of visual angle both horizontally and vertically) while the actor's body covered a visual angle
inferior to 15° vertically and 12° horizontally.
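For illustration, the factorial structure described above can be enumerated in a few lines; the condition labels simply restate the design, and the actor identifiers are hypothetical (this is a sketch, not the authors' stimulus-generation code).

```python
# Illustrative sketch of the 2 x 2 x 2 x 2 design applied to 12 actors.
# Condition labels restate the text; actor names are placeholders.
from itertools import product

actors = [f"actor_{i:02d}" for i in range(1, 13)]   # 12 actors (6 males, 6 females)
gaze = ["direct", "averted"]
emotion = ["anger", "neutral"]
gesture = ["pointing", "no_pointing"]
deviation = ["rightward", "leftward"]                # obtained by mirror imaging

stimuli = [
    {"actor": a, "gaze": g, "emotion": e, "gesture": ge, "deviation": d}
    for a, g, e, ge, d in product(actors, gaze, emotion, gesture, deviation)
]
assert len(stimuli) == 192                           # 12 actors x 16 conditions
```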
Procedure. Each trial was initiated for 500 ms by a fixation area consisting of a central red fixation point and four red angles delimiting a
square of 6° of central visual angle in the experimental context. This
fixation area remained on the screen throughout the trial, until the appearance of a response screen. The participant was instructed to fixate the
central point and to keep his/her attention inside the fixation area at the
level of the central point during the trial, avoiding eye blinks and saccades
(for additional details about instructions, see Conty and Grèzes, 2012).
Given the importance of an ecologically valid approach (Zaki and
Ochsner, 2009; Schilbach, 2010; Wilms et al., 2010), we kept our design as
naturalistic as possible. To do so, an apparent movement was created by
the consecutive presentation of two photographs on the screen (Conty et
al., 2007). The first photograph showed an actor in the initial position
during a random time, ranging from 300 to 600 ms. This was immediately followed by a second stimulus presenting the same actor in one of
the eight conditions of interest (Fig. 1). This second stimulus remained
on the screen for 1.3 s. Throughout the trial, the actor’s face remained
within the fixation area.
An explicit task on the parameter of interest, i.e., to judge the direction
of attention of the perceived agent (Schilbach et al., 2006), was used.
Thus, after each actor presentation, the participant was instructed to
indicate whether the actor was addressing them or another. This was
signified by a response screen containing the expressions “me” and
“other.” The participant had to answer by pressing one of two buttons
(left or right) corresponding to the correct answer. The response screen
remained until 1.5 s had elapsed and was followed by a black screen of
0.5 s preceding the next trial.
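The trial structure just described (fixation, initial-position photograph with a jittered duration, test photograph, response screen, inter-trial interval) can be summarized in a minimal timing sketch; all durations are taken from the text, while the function names and console output are hypothetical stand-ins for the actual presentation software.

```python
# Minimal sketch of one trial's timeline (durations from the text;
# show() is a hypothetical placeholder for the real display routine).
import random
import time

def show(screen_name, duration_s):
    # Placeholder: the real experiment would draw a stimulus image here.
    print(f"{screen_name}: {duration_s * 1000:.0f} ms")
    time.sleep(duration_s)

def run_trial():
    show("fixation area (central point + red angles)", 0.500)
    show("initial position photograph", random.uniform(0.300, 0.600))  # 300-600 ms jitter
    show("test photograph (1 of 8 conditions)", 1.300)
    show("response screen ('me' / 'other')", 1.500)
    show("black screen (inter-trial interval)", 0.500)

run_trial()
```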
Behavioral and EEG/fMRI experiments. In a behavioral pretest, the
above procedure was used, with the exception that each actor stimulus
was presented in either the left or right side of deviation (the assignment
was reversed for half of the participants). Moreover, following the "me–other" task, participants had to judge the degree of self-involvement they
felt on a scale of 0 to 9 (0, “not involved”; 9, “highly involved”). The
response screen remained visible until the participant had responded.
Conty et al. • Early Binding of Gaze, Gesture, and Emotion
In the scanner, the 192 trials were presented in an 18 min block, including 68 null events (34 black screens of 4.1 s and 34 of 4.4 s). The block
was then repeated with a different order of trials within the block.
Behavioral data analyses. During both the behavioral pretest and the
EEG/fMRI experiment, participants perfectly performed the me–other task (behavioral: mean of reaction time = 622 ± 23 ms; mean of correct responses = 97 ± 0.8%; EEG/fMRI: mean of reaction time = 594 ± 18 ms; mean of correct responses = 99 ± 0.4%). These data were not further
analyzed. For the behavioral pretest, repeated-measures ANOVA was
performed on percentage of self-involvement, with gaze direction (direct/averted), emotion (anger/neutral), and gesture (pointing/no pointing) as within-subjects factors.
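A repeated-measures ANOVA with these three within-subject factors can be reproduced, for example, with statsmodels; the file name and column names below are assumptions, and the data frame is assumed to hold one self-involvement score per subject and condition (a sketch, not the authors' analysis script).

```python
# Sketch of the repeated-measures ANOVA on self-involvement ratings
# (gaze x emotion x gesture as within-subject factors). Column names
# and the CSV file are hypothetical.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Expected columns: subject, gaze ('direct'/'averted'), emotion ('anger'/'neutral'),
# gesture ('pointing'/'no_pointing'), self_involvement (percentage).
df = pd.read_csv("pretest_ratings.csv")

aov = AnovaRM(
    data=df,
    depvar="self_involvement",
    subject="subject",
    within=["gaze", "emotion", "gesture"],
    aggregate_func="mean",   # average repeated observations of the same cell
).fit()
print(aov)
```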
EEG data acquisition, processing, and analyses. In the fMRI, EEGs were
recorded at a sampling frequency of 5 kHz with an MR-compatible amplifier (Brain Products) placed inside the MR scanner. The signal was
amplified and bandpass filtered online at 0.16 –160 Hz. Participants were
fitted with an electrode cap equipped with carbon wired silver/silver–
chloride electrodes (Easycap). Vertical eye movement was acquired from
below the right eye; the electrocardiogram was recorded from the subject’s clavicle. Channels were referenced to FCz, with a forehead ground
and impedances kept <5 kΩ. EEGs were downsampled offline to 2500
Hz for gradient subtraction and then to 250 Hz for pulse subtraction
(using EEGlab version 7; sccn.ucsd.edu/eeglab). After recalculation to
average reference, the raw EEG data were downsampled to 125 Hz and
low-pass filtered at 30 Hz. Trials containing artifacts or blinks were manually rejected. To study the ERPs in response to the perception of the
actor’s movement, ERPs were computed for each condition separately
between 100 ms before and 600 ms after the second photograph and
baseline corrected.
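A comparable epoching chain (average reference, downsampling, 30 Hz low-pass, epochs from 100 ms before to 600 ms after the second photograph, baseline correction) could be sketched today with MNE-Python; the file name and event coding below are assumptions, and the original analysis used EEGLAB, not this code.

```python
# Sketch of the ERP computation described above using MNE-Python.
# File name and event codes are hypothetical; the original pipeline used EEGLAB.
import mne

raw = mne.io.read_raw_fif("subject01_gradient_pulse_corrected.fif", preload=True)
raw.set_eeg_reference("average")        # recalculate to average reference
raw.resample(125)                       # downsample to 125 Hz
raw.filter(l_freq=None, h_freq=30.0)    # low-pass at 30 Hz

events = mne.find_events(raw)           # triggers at the onset of the second photograph
event_id = {"direct/anger/pointing": 1} # one of the eight conditions of interest

epochs = mne.Epochs(
    raw, events, event_id,
    tmin=-0.1, tmax=0.6,                # 100 ms before to 600 ms after onset
    baseline=(None, 0),                 # baseline correction on the pre-stimulus period
    preload=True,
)
evoked = epochs["direct/anger/pointing"].average()  # condition-specific ERP
```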
P100-related activity was measured by extracting the mean activity
averaged on four occipito-parietal electrodes around the wave peak between 112 and 136 ms in each hemisphere (PO7/PO3/P7/P5, PO8/PO4/
P8/P6). Early N170-related activity was measured by extracting the mean
activity averaged on four electrodes around the peak between 160 and
184 ms in each hemisphere (P5/P7/CP5/TP7, P6/P8/CP6/TP8). Late
N170-related activity was measured similarly around the peak of the
direct attention condition between 176 and 200 ms. P200-related activity
was measured by extracting the mean activity averaged on six frontal
electrodes around the peak between 200 and 224 ms (F1/AF3/Fz/AFz/F2/
AF4). Repeated-measures ANOVA was performed on each measure with
gaze direction (direct/averted), emotion (anger/neutral), gesture (no
pointing/pointing), and, when relevant, hemisphere (right/left) as
within-subjects factors (the analyses pooled over rightward and leftward
sides of actor’s deviation).
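The window-averaging step (e.g., P100 measured as the mean amplitude over four occipito-parietal electrodes between 112 and 136 ms) amounts to cropping the evoked response and averaging over channels and samples. A hedged sketch, continuing the hypothetical `evoked` object from the previous block:

```python
# Mean P100-related amplitude over the left-hemisphere electrode cluster,
# 112-136 ms (window and electrodes as listed in the text).
p100_left_channels = ["PO7", "PO3", "P7", "P5"]

def window_mean(evoked, channels, tmin, tmax):
    """Mean amplitude (in volts) over the given channels and time window."""
    data = evoked.copy().pick(channels).crop(tmin=tmin, tmax=tmax).data
    return data.mean()

p100_left = window_mean(evoked, p100_left_channels, 0.112, 0.136)
```

The per-subject values obtained this way would then feed the gaze × emotion × gesture (and, when relevant, hemisphere) repeated-measures ANOVA described above.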
fMRI data acquisition and processing. Gradient-echo T2*-weighted
transverse echo-planar images (EPIs) with blood oxygen-level dependent
(BOLD) contrast were acquired with a 3 T Siemens whole-body scanner.
Each volume contained 40 axial slices (repetition time, 2000 ms; echo
time, 50 ms; 3.0 mm thickness without gap yielding isotropic voxels of 3.0
mm³; flip angle, 78°; field of view, 192 mm; resolution, 64 × 64), acquired in an interleaved manner. We collected a total of 1120 functional
volumes for each participant.
Image processing was performed using Statistical Parametric Mapping
(SPM5; Wellcome Department of Imaging Neuroscience, University
College London, London, UK; www.fil.ion.ucl.ac.uk/spm) implemented
in MATLAB (MathWorks). For each subject, the 1120 functional images
acquired were reoriented to the anterior commissure–posterior commissure line, corrected for differences in slice acquisition time using the
middle slice as reference, spatially realigned to the first volume by rigid
body transformation, spatially normalized to the standard Montreal
Neurological Institute (MNI) EPI template to allow group analysis, resampled to an isotropic voxel size of 2 mm, and spatially smoothed with an
isotropic 8 mm full-width at half-maximum Gaussian kernel. To remove
low-frequency drifts from the data, we applied a high-pass filter using a
standard cutoff period of 128 s.
Joint ERP–fMRI analysis. Statistical analysis was performed using
SPM5. At the subject level, all the trials taken into account in the EEG
analyses were modeled at the appearance of the second photograph with
a duration of 0 s. Trials rejected from EEG analyses were modeled separately. The times of the fixation area (192 trials of 500 ms duration) of the
first photograph (192 trials of between 300 and 600 ms) and of the response (192 trials of 1.5 s duration) as well as six additional covariates
capturing residual movement-related artifacts were also modeled. To
identify regions in which the percentage signal change in fMRI correlated
with the ERP data, we extracted the mean amplitude of each ERP peak,
trial by trial, subject by subject, and introduced them as parametric modulators of the trials of interest into the fMRI model. This resulted in four
parametric modulators (P100, early N170, late N170, and P200) that
were automatically orthogonalized by the software. Effects of the ERP
modulators were estimated at each brain voxel using a least-squares algorithm to produce four condition-specific images of parameter estimates. At the group level, we performed four t tests, corresponding to
P100, early N170, late N170, and P200 image parameter estimates obtained at the subject level.
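The key idea of this joint analysis is that trial-by-trial ERP amplitudes enter the single-subject fMRI model as parametric modulators. Outside SPM5, a conceptually equivalent design matrix could be built with Nilearn, where a "modulation" column scales each trial's regressor; the onsets, trial names, and modulator values below are placeholders, not the study's actual data or software.

```python
# Sketch of trial-wise ERP amplitudes as parametric modulators of the fMRI design
# (conceptually equivalent to the SPM5 model described above). Values are placeholders.
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

tr = 2.0                                      # repetition time in seconds
n_scans = 1120
frame_times = np.arange(n_scans) * tr

onsets = np.array([10.0, 24.5, 39.0])         # placeholder trial onsets (s)
p200_amplitude = np.array([1.2, -0.4, 0.7])   # placeholder trial-wise P200 amplitudes

events = pd.DataFrame({
    "onset": onsets,
    "duration": 0.0,                          # trials modeled with 0 s duration
    "trial_type": "trial_P200mod",
    "modulation": p200_amplitude,             # parametric modulator column
})

design = make_first_level_design_matrix(frame_times, events, hrf_model="spm")
```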
A significance threshold of p ≤ 0.001 (uncorrected for multiple comparisons) for the maximal voxel level and of p < 0.05 at the cluster level (corresponding to an extent threshold of 150 contiguously active voxels) was applied for late N170 and P200 contrasts. A small volume correction (p < 0.05 corrected for familywise error) approach was also applied to
bilateral amygdala using an anatomical mask from SPM Anatomy Toolbox (version 17) for P100, early N170, late N170, and P200 contrasts. The
Anatomy Toolbox (version 17) was also used to identify the localization
of active clusters. Coordinates of activations were reported in millimeters
in the MNI space.
Results
Behavioral pretest
As expected, we found that our stimuli were judged more self-involving when displaying direct compared with averted gaze (F(1,21) = 56.7, p < 0.001), angry compared with neutral facial expression (F(1,21) = 9.2, p < 0.01), and pointing compared with no pointing (F(1,21) = 21.7, p < 0.001). Interestingly, interactions were also observed between gaze direction and emotion (F(1,21) = 8.5, p < 0.01) and between gaze direction and gesture (F(1,21) = 4.6, p < 0.05). Post hoc analyses showed that the effect of emotion was greater when the participant was the target of attention (F(1,21) = 12.4, p < 0.01; mean effect = 15.4 ± 5.2%) than when this was not the case (F(1,21) = 4.3, p < 0.05; mean effect = 6.1 ± 2.1%). Pointing actors were also judged more self-involving when the participant was the target (F(1,21) = 23.2, p < 0.001; mean effect = 12.1 ± 1.5%) than when this was not the case (F(1,21) = 6.2, p < 0.05; mean effect = 5.5 ± 1.4%). The triple interaction between gaze direction, emotion, and gesture failed to reach significance (F(1,21) = 3.6, p < 0.07). However, post hoc analyses revealed, as expected, that the feeling of self-involvement increased with the number of self-relevant cues (all t(1,21) > 2.4, all p < 0.05; see Fig. 3). As a result, we succeeded in creating a
parametric paradigm in which the self-relevance increased with
the number of self-oriented social signals.
Time course of social visual cue processing and integration
Our first step in analysis was to address the time course of social
signal processing and their integration. The sequence of short
electric brain responses was indexed by three classical and successive generic ERP components: the occipital P100, the occipitotemporal N170, and the frontal P200 (Ashley et al., 2004;
Vlamings et al., 2009). As also reported in the literature (Puce et
al., 2000; Conty et al., 2007), we observed that N170 in response
to direct attention peaked later than in the other conditions (184
ms vs a mean of 168 ms). Thus, N170 was divided into an early
component and a late component.
We observed a main effect of each factor of interest on P100
activity. Direct gaze (F(1,20) = 4.52, p < 0.05), anger (F(1,20) = 9.16, p < 0.01), and pointing (F(1,20) = 17.62, p < 0.001) conditions induced greater positive activity than the averted gaze, neutral emotion, and no-pointing conditions, respectively. However, no interactions between factors were observed (all F < 1).

Figure 2. ERP modulation by social signals: main effects and interactions. The scalp potential maps of each ERP are represented at the top of the figure. The white point indicates the localization of the electrode on which the time course of the different experimental conditions is displayed below each scalp. a, The P100 amplitude was significantly modulated by each social signal but independently. No interaction between factors was observed. b, Early N170 (top) was modulated by emotional expression and gesture but independently. No interaction between factors was observed. Late N170 (bottom) revealed the first interaction between gaze direction and gesture: the condition in which the actor looked and pointed at the participant induced greater activity than other conditions. c, The integration between all social signals was achieved during the P200 formation: the condition in which the actor looked and pointed at the participant with an angry expression triggered higher positive activity than all other conditions.
Analysis on the early N170 revealed first greater activity in the
right than the left hemisphere (F(1,20) = 10.55, p < 0.01). Moreover, anger (F(1,20) = 13.27, p < 0.01) and pointing gesture (F(1,20) = 29.53, p < 0.001) induced greater negative activity when compared with the neutral and no-pointing gesture conditions, respectively. However, no interactions between factors were observed (F < 1).
We thus confirmed that not only gaze direction and emotion
but also body gesture are independently processed at early stages
(Rigato et al., 2010) (Fig. 2).
Analyses run on late N170 revealed a main effect of all the
factors. The activity was globally greater in the right than in the
left hemisphere (F(1,20) = 6.56, p < 0.05). Direct gaze (F(1,20) = 4.52, p < 0.05), anger (F(1,20) = 25.94, p < 0.001), and pointing condition (F(1,20) = 19.78, p < 0.001) induced greater negative activity than, respectively, averted gaze, neutral, and no-pointing condition. The first interaction between gaze direction and gesture emerged on this component (F(1,20) = 12.27, p < 0.005). The condition in which the actor pointed and looked toward the subject induced greater activity than all other conditions (all t > 3.6, all p < 0.01). The late N170 on temporo-parietal sites thus
marked the integration of directional social cues (Fig. 2).
On the frontal P200, we observed a main effect of angry expressions (F(1,20) = 5.51, p < 0.03) and direct attention (F(1,20) = 5.02, p < 0.05). Importantly, however, a triple interaction between gaze direction, emotional expressions, and pointing gesture was detected (F(1,20) = 4.71, p < 0.05). The most self-relevant condition, in which the actor expressed anger, looked, and pointed toward participants, induced greater positive activity than all other conditions (all t(1,20) > 2.15, all p < 0.05). Moreover, P200 activity tended to increase with the number of self-directed social cues (Fig. 3). Thus far, our data suggest that the
integration between three main social signals is achieved just after
200 ms in frontal sites, yet they do not provide information about
the neural source of such integration.
Brain network involved in the integration of self-relevant
visual social cues
To explore the brain sources that positively covary with the amplitude of previously identified ERPs, we performed a joint EEG–
fMRI analysis. At the subject level, mean amplitudes of P100,
early N170, late N170, and P200 peaks (extracted trial × trial)
were introduced as four parametric modulators in the fMRI
model. This method enables us to search for brain regions in which the percentage signal change in fMRI is correlated with the ERP data without a priori assumptions regarding the location (Ritter and Villringer, 2006). At the group level, we calculated t tests for P100, early N170, late N170, and P200 and looked for brain areas in which the percentage signal change in fMRI correlated with ERP amplitudes.

Figure 3. Parametric variation of self-involvement at the neural level. The percentage of subjective self-involvement (with SE; top graphs) and P200 activity (with SE; bottom graphs) is represented as a function of gaze direction (left graphs) and the number of self-directed cues for the direct gaze condition (right graphs). *p < 0.05; **p < 0.01; ~p < 0.08. ns, Nonsignificant.
The goal of the present study was to identify the spatiotemporal course of social visual signal binding. Thus, we first concentrated on late N170 and the P200 when integration occurred. The
right operculum parietal cortex (PFop) extending to somatosensory cortex SII (labeled from here on in as PF/SII) and right
supplementary motor area (SMA), extending to primary motor
area 4a, positively covaried with late N170 amplitude implicated
in the integration of self-relevant directional signals (attention
and gesture pointing toward the self). The source of P200 modulations involved in the integration of all available self-relevant
cues (directional signals toward the self with emotional expression) was found in the right premotor cortex (PM) (Fig. 4, Table
1). In humans, the border between the ventral PM and dorsal PM
is located with 40 < z coordinates < 56 (Tomassini et al., 2007). The present source of P200 ranges from z = 34 to z = 58 and is
thus located in the dorsal part of the ventral PM. This region is
likely equivalent to the macaque area F5c (Rizzolatti et al., 2001).
It is strongly connected to the SMA, primary motor area M1,
PFop, and SII (Luppino et al., 1993; Rozzi et al., 2006; Gerbella et
al., 2011) and hosts visuomotor representations (Rizzolatti et al.,
2001).
To assess whether the emotional system also participates in
early binding of gaze, emotion, and gesture, we tested whether
ERP components modulated by the emotional content of stimuli (P100, N170, and
P200) (Batty and Taylor, 2003; Blau et al.,
2007; van Heijnsbergen et al., 2007) were
associated with activity in the amygdala,
known to be highly involved in threat
(Adolphs, 1999) and self-relevance processing (Sander et al., 2007; N'Diaye et al.,
2009; Sato et al., 2010). To do so, we used
that structure bilaterally as a region of
interest. BOLD responses in the left
amygdala significantly covaried with
changes in the early component of N170
(Fig. 4, Table 1). This finding validates our
approach by replicating previous results
using intracranial ERPs (Krolak-Salmon
et al., 2004; Pourtois et al., 2010) and surface EEG (Pourtois and Vuilleumier,
2006; Eimer and Holmes, 2007), showing
that information about the emotional
content of a perceived facial expression
quickly reaches the amygdala (140 –170
ms), in parallel with the processing of
other facial cues within the visual cortex.
Here, we show that emotional processing
in the amygdala occurs just before the integration of directional social signals (gaze
and pointing toward the self) detected on
the late component of N170.
Discussion
By coupling fMRI with EEG, we demonstrate for the first time that the integration
of gaze direction, pointing gesture, and
emotion is completed just after 200 ms in the right PM, possibly
to facilitate the preparation of an adaptive response to another’s
immediate intention. We confirm that activity within motor-related cortical areas arises 150–200 ms after the onset of a perceived action (Nishitani and Hari, 2002; Caetano et al., 2007; Tkach et al., 2007; Catmur et al., 2010) and that the interaction between gaze direction and emotion takes place at ~200–300 ms (Klucharev and Sams, 2004; Rigato et al., 2010). However, in contrast to recent accounts of human amygdala function in social cue integration (Sander et al., 2007; N'Diaye et al., 2009; Cristinzio et al., 2010; Sato et al., 2010), we found that emotional content
is processed earlier within the amygdala and independently of
other cues.
Early binding of social cues in the PM 200 ms after stimulus
onset may relate to an embodied response that serves evaluative
functions of others’ internal states (Jeannerod, 1994; Gallese,
2006; Keysers and Gazzola, 2007; Sinigaglia and Rizzolatti, 2011).
The emotional convergence between the emitter and the observer
enhances social and empathic bonds and thus facilitates prosocial
behavior and fosters affiliation (Chartrand and Bargh, 1999;
Lakin and Chartrand, 2003; Yabar et al., 2006; Schilbach et al.,
2008), yet strict motor resonance processing cannot explain the
present activation in the PM. Indeed, anger expressions directed
at the observer are perceived as clear signals of non-affiliative
intentions and are thus less mimicked than averted anger expressions (Hess and Kleck, 2007; Bourgeois and Hess, 2008).
Figure 4. Joint ERP–fMRI results and summary. a, Left amygdala revealed as a neural source of early N170 modulation. Its activity is projected on a coronal section of the MNI template. b, Sources of late N170 modulation: parieto-somatosensory area (PF/SII) activity projected on a coronal section (left) and right SMA projected on a sagittal section (right) of the MNI template. c, Source of P200 modulation: right PM activity projected on a coronal section of the MNI template. d, Summary of the early binding mechanisms of social cues allowing for a complete representation of other's disposition toward the self. AMG, Amygdala.

Table 1. Brain sources covarying with ERP modulations (MNI coordinates)
Anatomical region                  L/R     x     y     z   Z value   Cluster size
Sources of early N170 modulations*
  Amygdala                          L    -22    -2    -8    3.69         12
Sources of late N170 modulations
  SMA                               R      8   -28    56    4.13       3152
  Middle cingulate cortex           R      4   -10    50    3.41       3152
  Supramarginal gyrus (PF/SII)      R     60   -22    26    4.29        156
Sources of P200 modulations
  Right PM                          R     58     6    34    3.61        582
p ≤ 0.001 uncorrected, p < 0.05 uncorrected at the cluster level. *Small volume correction, p < 0.05 familywise error corrected. L, Left; R, right.

Activity in the PM may relate to the estimation of prior expectations about the perceived agent's immediate intent. Hierarchical models of motor control purport that higher and lower motor modules are reciprocally connected to each other (Wolpert and
Flanagan, 2001; Kilner et al., 2007). Within such perspectives, the
generative models used to predict the sensory consequences of
one’s own actions are also used to predict another’s behavior.
Backward connections inform lower levels about expected sensory consequences, i.e., the visual signal corresponding to the
sight of another’s action. Conversely, the inversion of the generative models allows for the inference of what motor commands
have caused the action, given the visual inputs. The extraction of
prior expectations about another’s intention corresponds to the
inverse model (Wolpert et al., 2003; Csibra and Gergely, 2007),
which needs to be estimated from available cues. Crucially, this
estimation is proposed to be implemented in the bottom-up path
from the temporal cortex to the inferior parietal lobule (PF) to
the PM during the observation of the beginning of an action
(Kilner et al., 2007). Thus, the present activity in the PM may
reflect prior expectations about another’s communicative intention, first built from directional cues (gaze and pointing gesture)
in the dorsal pathway before integrating the emotional content in
the PM. Only then could prior expectations influence, through
feedforward mechanisms, the perception of ongoing motor acts
via a top-down activation of perceptual areas, generating expectations and predictions of the unfolding action (Wilson and
Knoblich, 2005; Kilner et al., 2007). The above-mentioned mechanisms won’t be relevant for novel, unexpected and complex actions for which the goal needs to be estimated from the context
without the involvement of low-level motor systems (Csibra,
2007; Csibra and Gergely, 2007). Indeed, these mechanisms rely
on the equivalence assumption that the observed actor shares the
same motor constraints as the observer, and may thus only apply
to actions that are in the observer’s motor repertoire, such as
those manipulated in the present study.
The question arises as to why P200 and PM activity was greater
when the actor expressed anger, looked, and pointed toward participants. One possible explanation for this pattern of activity is
that information is filtered as a function of its social salience
(Kilner et al., 2006; Schilbach et al., 2011; Wang et al., 2011)
before the estimation of prior expectations. An alternative and
complementary hypothesis is related to the role of the PM in
using sensory information to specify currently available actions
to deal with an immediate situation (Cisek, 2007). Prior expectations about the perceived agent’s immediate intent would thus
afford the perceiver specific types of interactions (Gangopadhyay
and Schilbach, 2011; Schilbach et al., 2011). Hence, the highest
level of activity in the PM reflects the highest degree of potential
social interaction, which corresponds here to facing an angry
person pointing and looking toward oneself. Indeed, the expression of direct anger signals a probable physical and/or symbolic
attack (Schupp et al., 2004), is perceived as threatening (Dimberg
and Ohman, 1983; Dimberg, 1986; Strauss et al., 2005), and triggers adaptive action in the observer (Frijda, 1986; Pichon et al.,
2008, 2009, 2012; Grèzes et al., 2011; Van den Stock et al., 2011).
In accordance with such a view, defensive responses in monkeys
are elicited by electrical stimulation at the border between the
ventral and dorsal PM (Cooke and Graziano, 2004; Graziano and
Cooke, 2006) and are supposed, in humans, to be facilitated
within a 250 ms timeframe after the perception of a danger signal
(Williams and Gordon, 2007). Here, emotional signals were processed first in the amygdala at ~170 ms. Interestingly, a substantial number of studies have shown that lesions of the amygdala
not only disrupt the ability to process fear signals (LeDoux, 2000)
but can also abolish characteristic defensive behavior in primates
(Emery et al., 2001). In this model, the amygdala plays a critical
role in initiating adaptive behavioral responses to social signals
via its connections with subcortical areas and the PM (Avendano,
1983; Amaral and Price, 1984). Thus, we propose that, after having been processed in the amygdala, emotional information is
integrated with self-directed directional cues in the PM, enabling
prior expectations to be developed about another’s intentions
and the preparation of one’s own action.
At ~170 ms, emotional processing occurs in the amygdala, independently of self-directed directional cues (gaze direction and pointing gesture). The activation of the amygdala while observers perceived bodily expressions of anger replicates previous studies (Pichon et al., 2009) and supports its proposed role in the automatic detection of threat (Emery and Amaral, 2000; LeDoux, 2000; Amaral et al., 2003; Feinstein et al., 2011). Amygdala damage diminishes the brain's response to threatening faces at both the ~100–150 and ~500–600 ms time ranges (Rotshtein et al., 2010), and, in both infants and adults, the interaction between gaze direction and emotion takes place at ~200–300 ms (Klucharev and Sams, 2004; Rigato et al., 2010). Furthermore, previous fMRI studies manipulating self-involvement during face perception revealed that facial expression and gaze direction are integrated in the medial temporal poles (Schilbach et al., 2006; Conty and Grèzes, 2012) or in amygdala (Adams and Kleck, 2003; Hadjikhani et al., 2008; N'Diaye et al., 2009; Sato et al., 2010). Here, we show that the binding of emotion with gaze direction and pointing gesture arises at ~200 ms in the PM. This suggests that
the pattern of integration revealed previously using fMRI could
reflect later rather than early processes.
Before being integrated with emotional content in the PM,
self-directed directional cues (gaze direction and pointing gesture) are firstly merged within 190 ms in the parietal areas (PF/
SII) and in the SMA. Could the absence of interaction at an early
stage between directional cues and emotion have been attributable to some feature of the present stimuli and task? First, when
present, pointing gesture always indicated the same direction of
attention as did gaze. Second, the participant’s task was to judge
the actor’s direction of attention (toward the self or another)
regardless of the emotional content. This may have led participants to prioritize task-relevant directional cues and thus their
integration in the PF/SII and in the SMA for response selection
and preparation (Passingham, 1993; Rushworth et al., 2003),
independently of emotion. However, higher activity in the
PF/SII and in the SMA for self-directed compared with other-directed social cues, and right-lateralized activations for right-handed participants, do not fully support such an explanation.
Rather, right-lateralized activations suggest processing related
to representation of another’s action (Decety and Chaminade,
2003).
In conclusion, the current data clearly demonstrate that the
early binding of visual social cues displayed by a congener is
achieved in the motor system rather than in the emotional system. We propose that this would allow one to expedite the preparation of an adaptive response, particularly for self-relevant
social cues—in this case, another’s threatening intention toward
oneself.
References
Adams RB Jr, Kleck RE (2003) Perceived gaze direction and the processing
of facial displays of emotion. Psychol Sci 14:644 – 647.
Adolphs R (1999) Social cognition and the human brain. Trends Cogn Sci
3:469 – 479.
Adolphs R (2002) Neural systems for recognizing emotion. Curr Opin Neurobiol 12:169 –177.
Amaral DG, Price JL (1984) Amygdalo-cortical projections in the monkey
(Macaca fascicularis). J Comp Neurol 230:465– 496.
Amaral DG, Behniea H, Kelly JL (2003) Topographic organization of projections from the amygdala to the visual cortex in the macaque monkey.
Neuroscience 118:1099 –1120.
Ashley V, Vuilleumier P, Swick D (2004) Time course and specificity of
event-related potentials to emotional expressions. Neuroreport
15:211–216.
Avendano C (1983) A fine structural study of the cells that proliferate in the
partially denervated dentate gyrus of the rat. Anat Embryol (Berl)
166:317–332.
Batty M, Taylor MJ (2003) Early processing of the six basic facial emotional
expressions. Brain Res Cogn Brain Res 17:613– 620.
Blau VC, Maurer U, Tottenham N, McCandliss BD (2007) The face-specific
N170 component is modulated by emotional facial expression. Behav
Brain Funct 3:7.
Bourgeois P, Hess U (2008) The impact of social context on mimicry. Biol
Psychol 77:343–352.
Caetano G, Jousmäki V, Hari R (2007) Actor’s and observer’s primary motor cortices stabilize similarly after seen or heard motor actions. Proc Natl
Acad Sci USA 104:9058 –9062.
Catmur C, Mars RB, Rushworth MF, Heyes C (2011) Making mirrors: premotor cortex stimulation enhances mirror and counter-mirror motor
facilitation. J Cogn Neurosci 23:2352–2362.
Chartrand TL, Bargh JA (1999) The chameleon effect: the perceptionbehavior link and social interaction. J Pers Soc Psychol 76:893–910.
Cisek P (2007) Cortical mechanisms of action selection: the affordance
competition hypothesis. Philos Trans R Soc Lond B Biol Sci
362:1585–1599.
Conty L, Grèzes J (2012) Look at me, I’ll remember you: the perception of
self-relevant social cues enhances memory and right hippocampal activity. Hum Brain Mapp. Advance online publication. Retrieved February 5,
2012. doi:10.1002/hbm.21366.
Conty L, N'Diaye K, Tijus C, George N (2007) When eye creates the contact! ERP evidence for early dissociation between direct and averted gaze motion processing. Neuropsychologia 45:3024–3037.
Cooke DF, Graziano MS (2004) Sensorimotor integration in the precentral
gyrus: polysensory neurons and defensive movements. J Neurophysiol
91:1648 –1660.
Cristinzio C, N'Diaye K, Seeck M, Vuilleumier P, Sander D (2010) Integration of gaze direction and facial expression in patients with unilateral
amygdala damage. Brain 133:248 –261.
Csibra G (2007) Action mirroring and action interpretation: an alternative
account. In: Sensorimotor foundations of higher cognition. Attention
and performance (Haggard P, Rosetti Y, and Kawato M, eds), pp 435–
459. Oxford, UK: Oxford UP.
Csibra G, Gergely G (2007) “Obsessed with goals”: functions and mechanisms of teleological interpretation of actions in humans. Acta Psychol
(Amst) 124:60 –78.
Decety J, Chaminade T (2003) When the self represents the other: a new
cognitive neuroscience view on psychological identification. Conscious
Cogn 12:577–596.
Dimberg U (1986) Facial reactions to fear-relevant and fear-irrelevant stimuli. Biol Psychol 23:153–161.
Dimberg U, Ohman A (1983) The effects of directional facial cues on electrodermal conditioning to facial stimuli. Psychophysiology 20:160 –167.
Eimer M, Holmes A (2007) Event-related brain potential correlates of emotional face processing. Neuropsychologia 45:15–31.
Emery NJ, Amaral DG (2000) The role of the amygdala in primate social
cognition. In: Cognitive neuroscience of emotion (Lane RD, Nadel L,
eds), pp 156 –191. New York: Oxford UP.
Emery NJ, Capitanio JP, Mason WA, Machado CJ, Mendoza SP, Amaral DG
(2001) The effects of bilateral lesions of the amygdala on dyadic social
interactions in rhesus monkeys (Macaca mulatta). Behav Neurosci
115:515–544.
Feinstein JS, Adolphs R, Damasio A, Tranel D (2011) The human amygdala
and the induction and experience of fear. Curr Biol 21:34 –38.
Frijda NH (1986) The emotions. London: Cambridge UP.
Gallese V (2006) Intentional attunement: a neurophysiological perspective
on social cognition and its disruption in autism. Brain Res 1079:15–24.
Gangopadhyay N, Schilbach L (2011) Seeing minds: a neurophilosophical
investigation of the role of perception-action coupling in social perception. Soc Neurosci. doi:10.1080/17470919.2011.633754.
Gerbella M, Belmalih A, Borra E, Rozzi S, Luppino G (2011) Cortical connections of the anterior (F5a) subdivision of the macaque ventral premotor area F5. Brain Struct Funct 216:43– 65.
Graziano MS, Cooke DF (2006) Parieto-frontal interactions, personal
space, and defensive behavior. Neuropsychologia 44:845– 859.
Grèzes J, Decety J (2001) Functional anatomy of execution, mental simulation, observation, and verb generation of actions: a meta-analysis. Hum
Brain Mapp 12:1–19.
Grèzes J, Adenis MS, Pouga L, Chadwick M, Armony J (2011) Who are you
angry with? The influence of self-relevance on anger processing. Presented at the 17th Annual Meeting of the Organization for Human Brain
Mapping, Quebec City, Quebec, Canada, June 26 –30.
Hadjikhani N, Hoge R, Snyder J, de Gelder B (2008) Pointing with the eyes:
the role of gaze in communicating danger. Brain Cogn 68:1– 8.
Haxby JV, Hoffman EA, Gobbini MI (2000) The distributed human neural
system for face perception. Trends Cogn Sci 4:223–233.
Hess U, Kleck RE (2007) Looking at you or looking elsewhere: the influence
of head orientation on the signal value of emotional facial expressions.
Motiv Emot 31:137–144.
Hoffman KL, Gothard KM, Schmid MC, Logothetis NK (2007) Facialexpression and gaze-selective responses in the monkey amygdala. Curr
Biol 17:766 –772.
Jeannerod M (1994) The representing brain: neural correlates of motor intention and imagery. Behav Brain Sci 17:187–245.
Keysers C, Gazzola V (2007) Integrating simulation and theory of mind:
from self to social cognition. Trends Cogn Sci 11:194 –196.
Kilner JM, Marchant JL, Frith CD (2006) Modulation of the mirror system
by social relevance. Soc Cogn Affect Neurosci 1:143–148.
Kilner JM, Friston KJ, Frith CD (2007) The mirror-neuron system: a Bayesian perspective. Neuroreport 18:619 – 623.
Klucharev V, Sams M (2004) Interaction of gaze direction and facial expressions processing: ERP study. Neuroreport 15:621– 625.
Krolak-Salmon P, Hénaff MA, Vighetto A, Bertrand O, Mauguière F (2004)
Early amygdala reaction to fear spreading in occipital, temporal, and frontal cortex: a depth electrode ERP study in human. Neuron 42:665– 676.
Lakin JL, Chartrand TL (2003) Using nonconscious behavioral mimicry to
create affiliation and rapport. Psychol Sci 14:334 –339.
LeDoux JE (2000) Emotion circuits in the brain. Annu Rev Neurosci
23:155–184.
Luppino G, Matelli M, Camarda R, Rizzolatti G (1993) Corticocortical connections of area F3 (SMA-proper) and area F6 (pre-SMA) in the macaque
monkey. J Comp Neurol 338:114 –140.
Nishitani N, Hari R (2002) Viewing lip forms: cortical dynamics. Neuron
36:1211–1220.
N'Diaye K, Sander D, Vuilleumier P (2009) Self-relevance processing in the
human amygdala: gaze direction, facial expression, and emotion intensity. Emotion 9:798 – 806.
Passingham RE (1993) The frontal lobes and voluntary action. Oxford, UK:
Oxford UP.
Pichon S, de Gelder B, Grèzes J (2008) Emotional modulation of visual
and motor areas by dynamic body expressions of anger. Soc Neurosci
3:199 –212.
Pichon S, de Gelder B, Grèzes J (2009) Two different faces of threat. Comparing the neural systems for recognizing fear and anger in dynamic body
expressions. Neuroimage 47:1873–1883.
Pichon S, de Gelder B, Grèzes J (2012) Threat prompts defensive brain responses independently of attentional control. Cereb Cortex 22:274 –285.
Pourtois G, Vuilleumier P (2006) Dynamics of emotional effects on spatial
attention in the human visual cortex. Prog Brain Res 156:67–91.
Pourtois G, Spinelli L, Seeck M, Vuilleumier P (2010) Temporal precedence
of emotion over attention modulations in the lateral amygdala: intracranial ERP evidence from a patient with temporal lobe epilepsy. Cogn Affect
Behav Neurosci 10:83–93.
Puce A, Smith A, Allison T (2000) ERPs by viewing facial movements. Cogn
Neuropsychol 17:221–239.
Rigato S, Farroni T, Johnson MH (2010) The shared signal hypothesis and
neural responses to expressions and gaze in infants and adults. Soc Cogn
Affect Neurosci 5:88 –97.
Ritter P, Villringer A (2006) Simultaneous EEG-fMRI. Neurosci Biobehav
Rev 30:823– 838.
Rizzolatti G, Fogassi L, Gallese V (2001) Neurophysiological mechanisms
underlying the understanding and imitation of action. Nat Rev Neurosci
2:661– 670.
Rotshtein P, Richardson MP, Winston JS, Kiebel SJ, Vuilleumier P, Eimer M,
Driver J, Dolan RJ (2010) Amygdala damage affects event-related potentials for fearful faces at specific time windows. Hum Brain Mapp
31:1089 –1105.
Rozzi S, Calzavara R, Belmalih A, Borra E, Gregoriou GG, Matelli M, Luppino
G (2006) Cortical connections of the inferior parietal cortical convexity
of the macaque monkey. Cereb Cortex 16:1389 –1417.
Rushworth MF, Johansen-Berg H, Gobel SM, Devlin JT (2003) The left parietal and premotor cortices: motor attention and selection. Neuroimage
20 [Suppl 1]:S89 –S100.
Sander D, Grandjean D, Kaiser S, Wehrle T, Scherer KR (2007) Interaction
effects of perceived gaze direction and dynamic facial expression: evidence for appraisal theories of emotion. Eur J Cogn Psychol 19:470 – 480.
Sato W, Kochiyama T, Uono S, Yoshikawa S (2010) Amygdala integrates
emotional expression and gaze direction in response to dynamic facial
expressions. Neuroimage 50:1658 –1665.
Schilbach L (2010) A second-person approach to other minds. Nat Rev
Neurosci 11:449.
Schilbach L, Wohlschlaeger AM, Kraemer NC, Newen A, Shah NJ, Fink GR,
Vogeley K (2006) Being with virtual others: neural correlates of social
interaction. Neuropsychologia 44:718 –730.
Schilbach L, Eickhoff SB, Mojzisch A, Vogeley K (2008) What’s in a smile?
Neural correlates of facial embodiment during social interaction. Soc
Neurosci 3:37–50.
Schilbach L, Eickhoff SB, Cieslik E, Shah NJ, Fink GR, Vogeley K (2011)
Eyes on me: an fMRI study of the effects of social gaze on action control.
Soc Cogn Affect Neurosci 6:393– 403.
Schupp HT, Ohman A, Junghöfer M, Weike AI, Stockburger J, Hamm AO
(2004) The facilitated processing of threatening faces: an ERP analysis.
Emotion 4:189 –200.
Sinigaglia C, Rizzolatti G (2011) Through the looking glass: self and others.
Conscious Cogn 20:64 –74.
Strauss MM, Makris N, Aharon I, Vangel MG, Goodman J, Kennedy DN,
Gasic GP, Breiter HC (2005) fMRI of sensitization to angry faces. Neuroimage 26:389 – 413.
Tkach D, Reimer J, Hatsopoulos NG (2007) Congruent activity during action and action observation in motor cortex. J Neurosci 27:13241–13250.
Tomassini V, Jbabdi S, Klein JC, Behrens TE, Pozzilli C, Matthews PM, Rushworth MF, Johansen-Berg H (2007) Diffusion-weighted imaging
tractography-based parcellation of the human lateral premotor cortex
identifies dorsal and ventral subregions with anatomical and functional
specializations. J Neurosci 27:10259 –10269.
Van den Stock J, Tamietto M, Sorger B, Pichon S, Grèzes J, de Gelder B
(2011) Cortico-subcortical visual, somatosensory, and motor activations
for perceiving dynamic whole-body emotional expressions with and
without striate cortex (V1). Proc Natl Acad Sci USA 108:16188 –16193.
van Heijnsbergen CC, Meeren HK, Grèzes J, de Gelder B (2007) Rapid
detection of fear in body expressions, an ERP study. Brain Res
1186:233–241.
Vlamings PH, Goffaux V, Kemner C (2009) Is the early modulation of brain
activity by fearful facial expressions primarily mediated by coarse low
spatial frequency information? J Vis 9:12 11–13.
Wang Y, Newport R, Hamilton AF (2011) Eye contact enhances mimicry of
intransitive hand movements. Biol Lett 7:7–10.
Williams LM, Gordon E (2007) Dynamic organization of the emotional
brain: responsivity, stability, and instability. Neuroscientist 13:349 –370.
Wilms M, Schilbach L, Pfeiffer U, Bente G, Fink GR, Vogeley K (2010) It’s in
your eyes: using gaze-contingent stimuli to create truly interactive paradigms for social cognitive and affective neuroscience. Soc Cogn Affect Neurosci 5:98–107.
Wilson M, Knoblich G (2005) The case for motor involvement in perceiving
conspecifics. Psychol Bull 131:460 – 473.
Wolpert DM, Flanagan JR (2001) Motor prediction. Curr Biol 11:
R729 –R732.
Wolpert DM, Doya K, Kawato M (2003) A unifying computational framework for motor control and social interaction. Philos Trans R Soc Lond B
Biol Sci 358:593– 602.
Yabar Y, Johnston L, Miles L, Peace V (2006) Implicit behavioral mimicry:
investigating the impact of group membership. J Nonverbal Behav
30:97–113.
Zaki J, Ochsner K (2009) The need for a cognitive neuroscience of naturalistic social cognition. Ann NY Acad Sci 1167:16 –30.
Evolution and Human Behavior xx (2012) xxx – xxx
Original Article
Sharing the joke: the size of natural laughter groups
Guillaume Dezecache a,b,⁎, R.I.M. Dunbar c,⁎
a Laboratory of Cognitive Neuroscience (LNC)-INSERM U960 & IEC-Ecole Normale Superieure (ENS), 75005 Paris, France
b Institut Jean Nicod (IJN)-UMR 8129 CNRS & IEC-Ecole Normale Superieure & Ecole des Hautes Etudes en Sciences Sociales (ENS-EHESS), 75005 Paris, France
c Department of Experimental Psychology, University of Oxford, South Parks Rd, Oxford OX1 3UD, United Kingdom
Initial receipt 5 June 2012; final revision received 11 July 2012
Abstract
Recent studies suggest that laughter plays an important role in social bonding. Human communities are much larger than those of other
primates and hence require more time to be devoted to social maintenance activities. Yet, there is an upper limit on the amount of time that
can be dedicated to social demands, and, in nonhuman primates, this sets an upper limit on social group size. It has been suggested that
laughter provides the additional bonding capacity in humans by allowing an increase in the size of the “grooming group.” In this study of
freely forming laughter groups, we show that laughter allows a threefold increase in the number of bonds that can be “groomed” at the same
time. This would enable a very significant increase in the size of community that could be bonded.
© 2012 Elsevier Inc. All rights reserved.
Keywords: Laughter; Social group size; Social grooming; Endorphins; Social bonding
1. Introduction
Although by no means unique to humans (it occurs in
great apes: Davila-Ross, Owren, & Zimmermann, 2009;
Waller & Dunbar, 2005), laughter is one of the most
distinctively human behaviors (Gervais & Wilson, 2005;
Provine, 2001). While a number of (not necessarily mutually
exclusive) hypotheses have been suggested for its function
(signaling social or mating interest: Grammer, 1990;
Grammer & Eibl-Eibesfeldt, 1990; Li et al., 2009; Martin
& Gray, 1996; Mehu & Dunbar, 2008; emotional contagion:
Bachorowski & Owren, 2001, Owren & Bachorowski, 2003;
social bonding: Dunbar, 2004; Dunbar et al., 2012), laughter
in humans is characteristically highly social and intensely
contagious (Provine, 2001). The occurrence of laughter
during an interaction also significantly increases the
perceived satisfaction with the interaction (Vlahovic,
Roberts, & Dunbar, 2012).
⁎ Corresponding authors. Guillaume Dezecache is to be contacted at the
Laboratory of Cognitive Neuroscience, Ecole Normale Supérieure, 29 rue
d'Ulm, 75005 Paris, France or Robin Dunbar, University of Oxford, South
Parks Rd, Oxford OX1 3UD, United Kingdom.
E-mail addresses: [email protected] (G. Dezecache),
[email protected] (R.I.M. Dunbar).
1090-5138/$ – see front matter © 2012 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.evolhumbehav.2012.07.002
Anthropoid primates are characterized by an unusually
intense form of social bonding (Dunbar & Shultz, 2010;
Shultz & Dunbar, 2010) that is mediated by an endorphin-based psychopharmacological mechanism effected by social grooming (Curley & Keverne, 2005; Depue, Morrone-Strupinsky, et al., 2005; Machin & Dunbar, 2011). Social
grooming (the bimanual cleaning and manipulation of a
recipient's skin or fur) is limited to dyads since it is
physically difficult to groom several individuals at the same
time. Given this, its effective broadcast group size (the
number of individuals whose state of arousal can be
influenced in this way) is one. This, combined with limits
on the time available for social grooming (Dunbar,
Korstjens, & Lehmann, 2009; Lehmann, Korstjens, &
Dunbar, 2007), seems to set an upper limit on the size of
social group (or community) that can be bonded through this
mechanism (Dunbar, 1993).
Laughter is known to release endorphins in much the
same way as grooming does (Dunbar et al., 2012), and this
has led to the suggestion that the exaggerated forms of
laughter characteristic of humans might have evolved out of
conventional ape laughter (Davila-Ross et al., 2009; Davila-Ross, Allcock, Thomas, & Bard, 2011) as a device for
enlarging the effective size of grooming groups through a
form of “grooming-at-a-distance” (Dunbar, 2012). When
hominins evolved larger social communities than those
characteristic of the most social monkeys and apes, some
additional mechanism was required to make this possible.
Increasing grooming time was not an option because it was
already at its upper limit in primates (Dunbar, 1993, Dunbar
et al., 2009), but increasing the number of individuals who
could be “groomed” simultaneously is a plausible alternative. Laughter as a form of chorusing (sensu Burt &
Vehrencamp, 2005; Schel & Zuberbühler, 2012; Tenaza,
1976) seems to fill that role admirably because it allows
several individuals to be involved simultaneously. The fact
that human laughter shares close structural similarities with
ape laughter (Davila-Ross et al., 2009; Provine, 2001)
suggests that, if it was the solution to this problem, it may
have been an early adaptation, long predating the evolution
of speech and language (Dunbar, 2009, 2012).
This suggestion raises the question of laughter's efficiency as a bonding mechanism relative to social grooming.
Given that grooming has an effective broadcast group size of
one, just how large is the broadcast group size for laughter?
To determine this, we observed natural social groups in bars
and collected data on the number of people who laughed
together within these groups. We also sampled the size of the
whole social group as well as the size of conversational
groups (the number of people engaged in a conversation) to
provide benchmark measures that enable comparisons
between laughter and conversation (conversation groups
are known to have an upper limit of four individuals,
irrespective of the size of the social group: Dunbar, Duncan,
& Nettle, 1995).
2. Method
We censused natural social groups in bars in the United
Kingdom (Oxford; 80% of the observations), France (Calais,
Lille, and Paris; 14%), and Germany (Berlin; 6%),
distinguishing social group size (the total number of
individuals present as an interacting group), conversational
subgroup size (the number of individuals within the social
group taking part in a particular conversation, as evidenced
by speaking or obviously attending to the speaker, following
Dunbar et al., 1995), and laughter subgroup size (the number
of individuals laughing in an obviously coordinated way,
following the same definition as for conversational subgroups). Individuals were said to be laughing when they were
producing the vocalization which is characteristic of laughter
(i.e., a series of rapid exhalation–inhalation cycles: Davila-Ross et al., 2009; Provine, 2001). In total, 501 observations
of laughter events were sampled from 450 groups.
Groups of at least two people were covertly observed
from a close distance (maximum 5 m). A group was selected
if it was stable over time and the faces of all members were
visible to the observer. As soon as a burst of laughter was
produced within the group, the laughter subgroup size was
censused, defined as the number of people who produced at
least one laughter vocalization before laughter ceased within
the group. We also censused the size of the conversational
subgroups: individuals were scored as being a member of a
given conversational subgroup if they were speaking or
paying attention to the speaker (as indicated by direction of
eye gaze). Finally, we noted down the size of the social
group within which these were embedded (as evidenced by
the affiliative interactions among the members over the
whole period the group was under observation). While
laughter and conversational subgroup sizes could be
censused via rapid visual scans, group size censuses required
longer and more persistent observation. Groups were
censused at 30-min intervals to guarantee the statistical
independence of each sample. Nevertheless, groups could be
reconsidered for a census before the 30-min interval if they
permanently lost or gained a member.
2.1. Statistical analysis
Due to the small number of observations at larger social
group sizes, data were merged for groups of size 7 to 8, 9 to
10, and 11 to 14. To estimate the optimal size of
conversational and laughter subgroups, we performed a
series of regression analyses, using the Akaike information
criterion (AIC) (Akaike, 1974; Burnham & Anderson, 2002)
to select the function that gave the best fit.
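To make this model-selection step concrete, the following is a minimal Python sketch of fitting candidate functions and ranking them by AIC. It is not the analysis script used in the study: the data points are invented, the candidate function forms are standard textbook choices, and the least-squares AIC formula omits additive constants.

```python
# Minimal sketch of AIC-based model selection (illustrative only; the data
# below are made up and the candidate functions follow standard forms).
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical mean subgroup sizes per social group size (merged bins)
x = np.array([2, 3, 4, 5, 6, 7.5, 9.5, 12.5])
y = np.array([2.0, 2.5, 2.8, 3.1, 3.3, 3.4, 3.2, 3.0])

def linear(x, a, b):       return a + b * x
def quadratic(x, a, b, c): return a + b * x + c * x**2
def power(x, a, b):        return a * x**b
def s_curve(x, a, b):      return np.exp(a + b / x)   # SPSS-style "S" function

def aic(y_obs, y_fit, k):
    """AIC for least-squares fits: n * ln(RSS / n) + 2k."""
    n = len(y_obs)
    rss = np.sum((y_obs - y_fit) ** 2)
    return n * np.log(rss / n) + 2 * k

for name, f, k in [("linear", linear, 2), ("quadratic", quadratic, 3),
                   ("power", power, 2), ("S", s_curve, 2)]:
    params, _ = curve_fit(f, x, y, maxfev=10000)
    print(name, "AIC =", round(aic(y, f(x, *params)), 2))
# The function with the lowest AIC is retained as the best-fitting model.
```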
3. Results
Fig. 1 plots the frequency distribution of social,
conversational, and laughter subgroup sizes. Average
conversation subgroup size was 2.93±0.05 S.E. (N=501),
and average laughter subgroup size was 2.72±0.04 S.E. (N=
501). Conversational subgroups larger than 5 were rare
(2.8% of the observations), and none were larger than 10.
Similarly, laughter subgroups larger than four were rare
(5.6% of the observations), and none were larger than six.
Fig. 1. Frequency distribution of social groups, conversation subgroups, and
laughter subgroups.
Overall, approximately 91% of all conversational subgroups
contained four or fewer individuals, and 84% of all laughter
subgroups contained only two or three individuals.
Fig. 2 plots mean conversational and laughter subgroup
sizes against social group size. Mean group size is
significantly different between the three types of group
(social group, conversational subgroups, laughter subgroups) (Kruskal–Wallis: H2 = 151.441, N = 1503; p < .001). Pairwise comparisons reveal that social groups were larger than both conversational (Kruskal–Wallis: H1 = 236.347, N = 1002; p < .001) and laughter subgroups (Kruskal–Wallis: H1 = 306.204, N = 1002; p < .001). There was a significant but only modest correlation between social group size and each of the two other variables (Spearman rank correlations: rs = 0.546, N = 501, p < .001 with conversational subgroups; rs = 0.466, N = 501, p < .001 with laughter subgroups).
Conversational subgroups were significantly larger than laughter subgroups (Kruskal–Wallis: H1 = 69.856, N = 1002; p = .022), and both variables were strongly correlated (Spearman rank correlations: rs = 0.875, N = 501, p < .001), suggesting that an increase in conversational
subgroup size predicts an increase in laughter subgroup
size (and vice versa).
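For readers wishing to reproduce this kind of comparison, the sketch below shows how such Kruskal–Wallis tests and Spearman rank correlations might be computed with SciPy. The arrays are synthetic stand-ins, not the census data collected in the study.

```python
# Illustrative sketch of the group-size comparisons described above;
# the arrays are hypothetical stand-ins for the observed census data.
import numpy as np
from scipy.stats import kruskal, spearmanr

rng = np.random.default_rng(0)
social = rng.integers(2, 12, size=501)                         # social group sizes
conversational = np.minimum(social, rng.integers(2, 5, size=501))
laughter = np.minimum(conversational, rng.integers(2, 4, size=501))

# Omnibus comparison of the three group types, then pairwise comparisons
H, p = kruskal(social, conversational, laughter)
print(f"Kruskal-Wallis (3 groups): H = {H:.3f}, p = {p:.4f}")
for name, sub in [("conversational", conversational), ("laughter", laughter)]:
    H, p = kruskal(social, sub)
    print(f"social vs {name}: H = {H:.3f}, p = {p:.4f}")

# Spearman rank correlations with social group size
for name, sub in [("conversational", conversational), ("laughter", laughter)]:
    rs, p = spearmanr(social, sub)
    print(f"social ~ {name}: rs = {rs:.3f}, p = {p:.4f}")
```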
The distributions in Fig. 2 suggest that conversational
subgroups reach an asymptotic value (cf. Dunbar et al.,
1995), whereas the distribution of laughter subgroups has a
more explicitly humped shape, suggesting that there may be
an optimal group size for laughter to occur. To explore this in
more detail, we ran regression analyses on the two
distributions, testing a range of alternative functions and
using AIC as the criterion of best fit. The best fit with the lowest AIC for conversational subgroups was an S function (r² = 0.797, p = .003, AIC = −13.24), which was significantly better than its best rival (power function: r² = 0.564, p = .032, AIC = −10.56) and considerably better than the null hypothesis of a linear fit (r² = 0.325, p = .140, AIC = −2.04). Conversational subgroup sizes have an asymptotic value of 4.21, a value close to that found in previous studies (Dunbar et al., 1995). In contrast, the best fit for laughter subgroups was a quadratic function (r² = 0.897, p = .003, AIC = −7.96), which was significantly better than a linear model (r² = 0.054, p = .581, AIC = −2.78). Laughter subgroups are best fit by a humped distribution, with an optimal value of 3.35 for social groups of size about seven.
Fig. 2. Conversational and laughter subgroup sizes plotted against social group size.
4. Discussion
Our results confirm, with a considerably larger sample,
the upper limit of N≈4 on conversation group size reported
by Dunbar et al. (1995). In addition, they suggest that there is
a similar limit on the number of individuals that can be
involved in a laughter event. Ours was, of course, a
naturalistic study and thus benefits by all the advantages of
ecological validity that this offers. While it might have been
possible to run the study in the laboratory with convened
groups of predetermined size, it is questionable as to what
the advantages of doing so would be since it is difficult to
trigger laughter in artificial settings. In retrospect, the
seeming intimacy of laughter that emerges from this study
makes it especially important that the study was naturalistic.
Laughter subgroups are very close to, albeit slightly
smaller than, conversational subgroups in size. Laughter
subgroups may, however, be more constrained than
conversational subgroups in that, unlike the latter, laughter
subgroups have an optimal size that depends on the size of
the whole social group. Laughter, it seems, is not triggered so
easily in very large social groups. The fact that laughter
subgroups are smaller than conversational subgroups is
surprising because laughter is highly contagious. Unlike
conversation, which requires effort and mental concentration
to be engaged, laughter can be triggered merely by seeing
someone else laugh (Provine, 1992) and is typically much
louder, which should make it more easily discerned over
greater distances.
The limits on conversation subgroup size are thought to
arise from acoustical constraints, in particular reflecting
ambient noise levels (Webster, 1965), the distance between
speaker and hearer (Beranek, 1954), the discriminability of
speech sounds (Cohen, 1971; Legget & Northwood, 1960),
and visual access to the speaker (Kendon, 1967; Steinzor,
1950). These constraints make the maintenance of large
conversational subgroups costly because following a
conversation in a large group requires enhanced cognitive
effort that one might not be prepared to pay if more fruitful
interactions are available. This gives rise to the commonly
observed phenomenon that conversation groups readily
split into several subgroups once they get too large (Dunbar
et al., 1995).
The fact that laughter subgroups are of similar size to
conversational subgroups might reflect the fact that, in the
contemporary context at least, laughter depends on jokes,
and hence speech, and will thus be sensitive to the same
factors as speech, including the physical distance between
the interactants (Chapman, 1975), the relationship between
them (Platow et al., 2005), and the similarity in sense of
humor (Lynch, 2010). However, this cannot be the whole
explanation because we do not need to hear the joke to
laugh when everyone else is laughing: under these
circumstances, we simply cannot help laughing even if
we do not understand the joke (Provine, 2001). Laughter
per se does not depend on speech detection and has
acoustic (Bachorowski, Smoski, & Owren, 2001; Provine
& Yong, 1991; Szameitat, Darwin, Szameitat, Wildgruber,
& Alter, 2011) as well as visual (Petridis & Pantic, 2008;
Ruch & Ekman, 2001) properties—including the fact that it
is invariably much louder—that make it detectable over
much greater distances than speech. This makes the fact that
laughter subgroups are slightly smaller than conversational
subgroups puzzling. It may be relevant that laughter, speech,
and nonvocal sounds appear to be processed in quite distinct
parts of the auditory cortex (Meyer, Zysset, von Cramon, &
Alter, 2005), suggesting that laughter and speech may share
only limited properties. This might reflect the fact that they
have different functions and dynamics.
An alternative explanation for the small size of laughter
subgroups (and, in particular, the fact that they are smaller than
conversational subgroups) may derive from the fact that
laughter is intrinsically more spontaneous and intimate than
conversation, and so depends more explicitly on the dynamics
of the group and coordination in the mind states of the
individuals involved (Weisfeld, 1993). This may make it
challenging to have large numbers of people involved. This
may relate explicitly to laughter's role in social bonding as a
form of chorusing that long predates language (Dunbar, 2012).
One could argue that physical constraints (noise,
disposition of the tables, number of chairs around the
tables) might have constrained the size of the social groups
and the associated conversational and laughter subgroups,
thereby biasing our observations. However, this seems
unlikely since the sizes of our conversational subgroups are
identical to those reported by Dunbar et al. (1995) whose
observations were collected in contexts where such physical
constraints did not hold (large evening receptions, gatherings during fire drills).
The fact that hominins evolved social groups that are
considerably larger than those of other primates (Aiello &
Dunbar, 1993; Dunbar, 2009; Gowlett, Gamble, & Dunbar,
in press) has raised the possibility that laughter may have
evolved into its present human form specifically to break
through the ceiling imposed by more conventional primate
bonding processes (Dunbar, 2012). Laughter might fill that
role both because it seems to be an effective way of
triggering endorphin activation (Dunbar et al., 2012) and
because it can be triggered in several individuals simultaneously. Our findings suggest that the “grooming” group for
laughter is a little over three individuals. Since all members
of the laughter group gain an endorphin surge (unlike the
grooming dyad, where endorphins are triggered only in the
groomee), this would make laughter three times as efficient
as grooming, which would in turn allow a very significant
increase in the size of the community that could be bonded
(though probably not a trebling of community size since
social group size is not a monotonic function of grooming
clique size: see Dunbar, 2012; Kudo & Dunbar, 2001).
Language, when it finally evolved, clearly gave a new
impetus to laughter because it allowed laughter to be
triggered by the telling of jokes, instead of being triggered by
nonlinguistic means (e.g., social play, tickling, or socially
incongruous situations: Gervais & Wilson, 2005; Vettin &
Todt, 2005). Whether this increased the frequency of
laughter or simply allowed its timing to be managed more
effectively is an interesting question, but not one that can be
answered here. However, what perhaps does need to be
emphasized is the distinction between natural laughter
groups (with their typically small size) and the ways in
which laughter can now be engineered on very large scales
(e.g., in comedy clubs). Whereas the first remains an intimate
conversational phenomenon, the second requires the rules of
the public lecture (i.e., one person is allowed to hold the
floor, and the rest must remain silent and pay attention). This
certainly increases the size of the broadcast group dramatically but is possible only with the capacity to agree on
collective cultural rules of behavior, and that in turn is
dependent on language. Meanwhile, it seems that the
prelinguistic features of laughter as a form of chorusing
continue to be maintained and play an important role in
facilitating everyday social interaction and bonding.
References
Aiello, L. C., & Dunbar, R. I. M. (1993). Neocortex size, group size, and the
evolution of language. Current Anthropology, 34(2), 184–193.
Akaike, H. (1974). A new look at the statistical model identification. IEEE
Transactions on Automatic Control, 19(6), 716–723, http://dx.doi.org/
10.1109/TAC.1974.1100705.
Bachorowski, J. A., & Owren, M. J. (2001). Not all laughs are alike: voiced
but not unvoiced laughter readily elicits positive affect. Psychological
Science, 12(3), 252.
Bachorowski, J. A., Smoski, M. J., & Owren, M. J. (2001). The acoustic
features of human laughter. The Journal of the Acoustical Society of
America, 110(3 Pt 1), 1581–1597.
Beranek, L. L. (1954). Acoustics (vol. 6). New York: McGraw-Hill; 1954.
Burnham, K. P., & Anderson, D. R. (2002). Model selection and
multimodel inference: a practical information–theoretic approach.
Berlin: Springer Verlag.
Burt, J. M., & Vehrencamp, S. L. (2005). Dawn chorus as an interactive
communication network. In P. K. McGregor (Ed.), Animal communication networks. Cambridge: Cambridge University Press.
Chapman, A. J. (1975). Eye contact, physical proximity and laughter: a reexamination of the equilibrium model of social intimacy. Social
Behavior and Personality, 3(2), 143–155, http://dx.doi.org/10.2224/
sbp.1975.3.2.143.
Cohen, J. E. (1971). Casual groups of monkeys and men. Cambridge:
Cambridge University Press.
Curley, J. P., & Keverne, E. B. (2005). Genes, brains and mammalian social
bonds. Trends in Ecology & Evolution, 20(10), 561–567.
Davila-Ross, M., Owren, M. J., & Zimmermann, E. (2009). Reconstructing
the evolution of laughter in great apes and humans. Current Biology,
19(13), 1106–1111, http://dx.doi.org/10.1016/j.cub.2009.05.028.
Davila-Ross, M., Allcock, B., Thomas, C., & Bard, K. A. (2011). Aping
expressions? Chimpanzees produce distinct laugh types when responding to laughter of others. Emotion, 11(5), 1013–1020, http://dx.doi.org/
10.1037/a0022594.
Depue, R. A., & Morrone-Strupinsky, J. V. (2005). A neurobehavioral
model of affiliative bonding: implications for conceptualizing a human
trait of affiliation. The Behavioral and Brain Sciences, 28(3),
313–349.
Dunbar, R. I. M. (1993). Coevolution of neocortex size, group size and
language in humans. The Behavioral and Brain Sciences, 16,
681–735.
Dunbar, R. I. M. (2004). The human story. London: Faber & Faber.
Dunbar, R. I. M. (2009). Why only humans have language. In R. Botha,
& C. Knight (Eds.), The prehistory of language. Oxford: Oxford University Press.
Dunbar, R. I. M. (2012). Bridging the bonding gap: the transition from
primates to humans. Philosophical Transactions of the Royal Society B:
Biological Sciences, 367(1597), 1837–1846, http://dx.doi.org/10.1098/
rstb.2011.0217.
Dunbar, R. I. M., Duncan, N., & Nettle, D. (1995). Size and structure of
freely forming conversational groups. Human Nature, 6(1), 67–78,
http://dx.doi.org/10.1007/BF02734136.
Dunbar, R. I. M., Korstjens, A. H., & Lehmann, J. (2009). Time as an
ecological constraint. Biological Reviews, 84(3), 413–429.
Dunbar, R. I. M., & Shultz, S. (2010). Bondedness and sociality. Behaviour,
147(7), 775–803.
Dunbar, R. I. M., Baron, R., Frangou, A., Pearce, E., van Leeuwen, E. J. C.,
& Stow, J., et al (2012). Social laughter is correlated with an elevated
pain threshold. Proceedings of the Royal Society B: Biological Sciences,
279(1731), 1161–1167.
Gervais, M., & Wilson, D. S. (2005). The evolution and functions of
laughter and humor: a synthetic approach. The Quarterly Review of
Biology, 80(4), 395–430.
Gowlett, J. A. J., Gamble, C., & Dunbar, R. I. M. (in press). Human evolution
and the archaeology of the social brain. Current Anthropology.
Grammer, K. (1990). Strangers meet: laughter and nonverbal signs of
interest in opposite-sex encounters. Journal of Nonverbal Behavior,
14(4), 209–236, http://dx.doi.org/10.1007/BF00989317.
Grammer, K., & Eibl-Eibesfeldt, I. (1990). The ritualisation of laughter.
Natürlichkeit der Sprache und der Kultur, 192–214.
Kendon, A. (1967). Some functions of gaze-direction in social interaction.
Acta Psychologica, 26(1), 22–63.
Kudo, H., & Dunbar, R. I. M. (2001). Neocortex size and social network size
in primates. Animal Behaviour, 62(4), 711–722, http://dx.doi.org/
10.1006/anbe.2001.1808.
Legget, R. F., & Northwood, T. D. (1960). Noise surveys of cocktail parties.
Journal of the Acoustical Society of America, 32, 16–18.
Lehmann, J., Korstjens, A. H., & Dunbar, R. I. M. (2007). Group size,
grooming and social cohesion in primates. Animal Behaviour, 74(6),
1617–1629.
Li, N. P., Griskevicius, V., Durante, K. M., Jonason, P. K., Pasisz, D. J., &
Aumer, K. (2009). An evolutionary perspective on humor: sexual selection
or interest indication? Personality and Social Psychology Bulletin, 35(7),
923–936, http://dx.doi.org/10.1177/0146167209334786.
Lynch, R. (2010). It's funny because we think it's true: laughter is augmented
by implicit preferences. Evolution and Human Behavior, 31(2), 141–148,
http://dx.doi.org/10.1016/j.evolhumbehav.2009.07.003.
Machin, A. J., & Dunbar, R. I. M. (2011). The brain opioid theory of social
attachment: a review of the evidence. Behaviour, 148(10), 985–1025.
Martin, G. N., & Gray, C. D. (1996). The effects of audience laughter on men's
and women's responses to humor. The Journal of Social Psychology,
136(2), 221–231, http://dx.doi.org/10.1080/00224545.1996.9713996.
Mehu, M., & Dunbar, R. I. M. (2008). Naturalistic observations of smiling and
laughter in human group interactions. Behaviour, 145(12), 1747–1780.
Meyer, M., Zysset, S., von Cramon, D. Y., & Alter, K. (2005). Distinct fMRI
responses to laughter, speech, and sounds along the human peri-sylvian
cortex. Cognitive Brain Research, 24(2), 291–306, http://dx.doi.org/
10.1016/j.cogbrainres.2005.02.008.
Owren, M. J., & Bachorowski, J. A. (2003). Reconsidering the evolution of
nonlinguistic communication: the case of laughter. Journal of Nonverbal
Behavior, 27(3), 183–200, http://dx.doi.org/10.1023/A:1025394015198.
Petridis, S., & Pantic, M. (2008). Audiovisual discrimination between
laughter and speech. Acoustics, Speech and Signal Processing, 2008
ICASSP, 5117–5120.
Platow, M. J., Haslam, S. A., Both, A., Chew, I., Cuddon, M., & Goharpey,
N., et al. (2005). “It's not funny if they're laughing”: self-categorization,
social influence, and responses to canned laughter. Journal of
Experimental Social Psychology, 41(5), 542–550, http://dx.doi.org/
10.1016/j.jesp.2004.09.005.
Provine, R. R. (1992). Contagious laughter: laughter is a sufficient stimulus
for laughs and smiles. Bulletin of the Psychonomic Society, 30(1), 1–4.
Provine, R. (2001). Laughter: a scientific investigation. Harmondsworth:
Penguin Press.
Provine, R. R., & Yong, Y. L. (1991). Laughter: a stereotyped human
vocalization. Ethology, 89(2), 115–124, http://dx.doi.org/10.1111/
j.1439-0310.1991.tb00298.x.
Ruch, W., & Ekman, P. (2001). The expressive pattern of laughter. In A. W.
Kaszniak (Ed.), Emotion, qualia, and consciousness (pp. 426–443). Hackensack, NJ: World Scientific.
Schel, A. M., & Zuberbühler, K. (2012). Dawn chorusing in guereza colobus
monkeys. Behavioral Ecology and Sociobiology, 66, 361–373.
Shultz, S., & Dunbar, R. (2010). Encephalization is not a universal
macroevolutionary phenomenon in mammals but is associated with
sociality. Proceedings of the National Academy of Sciences of the
United States of America, 107(50), 21582–21586, http://dx.doi.org/
10.1073/pnas.1005246107.
Steinzor, B. (1950). The spatial factor in face to face discussion groups.
Journal of Abnormal and Social Psychology, 45(3), 552.
Szameitat, D. P., Darwin, C. J., Szameitat, A. J., Wildgruber, D., & Alter, K.
(2011). Formant characteristics of human laughter. Journal of Voice,
25(1), 32–37.
Tenaza, R. R. (1976). Songs, choruses and countersinging of Kloss' gibbons
(Hylobates klossii) in Siberut Island, Indonesia. Zeitschrift für
Tierpsychologie, 40, 37–52.
Vettin, J., & Todt, D. (2005). Human laughter, social play, and play
vocalizations of non-human primates: an evolutionary approach. Behaviour, 142(2), 217–240.
Vlahovic, T., Roberts, S., & Dunbar, R. I. M. (2012). Effects of duration and
laughter on subjective happiness within different modes of communication. Journal of Computer-mediated Communication, 17, 436–450.
Waller, B. M., & Dunbar, R. I. M. (2005). Differential behavioural effects of
silent bared teeth display and relaxed open mouth display in
chimpanzees (Pan troglodytes). Ethology, 111(2), 129–142.
Webster, J. C. (1965). Speech communications as limited by ambient noise.
The Journal of the Acoustical Society of America, 37, 692.
Weisfeld, G. E. (1993). The adaptive value of humor and laughter. Ethology
and Sociobiology, 14(2), 141–169, http://dx.doi.org/10.1016/0162-3095(93)90012-7.
Emotion
Self-Relevance Appraisal of Gaze Direction and Dynamic
Facial Expressions: Effects on Facial Electromyographic
and Autonomic Reactions
Robert Soussignan, Michèle Chadwick, Léonor Philip, Laurence Conty, Guillaume Dezecache,
and Julie Grèzes
Online First Publication, September 17, 2012. doi: 10.1037/a0029892
Emotion
2012, Vol. 12, No. 6, 000
© 2012 American Psychological Association
1528-3542/12/$12.00 DOI: 10.1037/a0029892
Self-Relevance Appraisal of Gaze Direction and Dynamic Facial
Expressions: Effects on Facial Electromyographic and
Autonomic Reactions
Robert Soussignan, Université de Bourgogne
Michèle Chadwick and Léonor Philip, Ecole Normale Supérieure, Paris
Laurence Conty, Université Paris 8, Saint-Denis
Guillaume Dezecache and Julie Grèzes, Ecole Normale Supérieure, Paris
What processes or mechanisms mediate interpersonal matching of facial expressions remains a debated
issue. As theoretical approaches to underlying processes (i.e., automatic motor mimicry, communicative
intent, and emotional appraisal) make different predictions about whether facial responses to others’
facial expressions are influenced by perceived gaze behavior, we examined the impact of gaze direction
and dynamic facial expressions on observers’ autonomic and rapid facial reactions (RFRs). We recorded
facial electromyography activity over 4 muscle regions (Corrugator Supercilii, Zygomaticus Major, Lateral Frontalis, and Depressor Anguli Oris), skin conductance response, and heart rate changes in participants passively exposed to virtual characters displaying approach-oriented (anger and happiness) and avoidance-oriented (fear and sadness) emotion expressions with gaze either directed at or
averted from the observer. Consistent with appraisal theories, RFRs were potentiated by mutual eye
contact when participants viewed happy and angry expressions, while RFRs occurred only to fear
expressions with averted gaze. RFRs to sad expressions were not affected by gaze direction. The
interaction between emotional expressions and gaze direction was moderated by participants’
gender. The pattern of autonomic reactivity was consistent with the view that salient social stimuli
increase physiological arousal and attentional resources, with gaze direction, nature of emotion, and
gender having moderating effects. These results suggest the critical role of self-relevance appraisal
of senders’ contextual perceptual cues and individual characteristics to account for interpersonal
matching of facial displays.
Keywords: emotional expression, gaze, gender, mimicry, self-relevance appraisal
Robert Soussignan, Centre des Sciences du Goût et de l’Alimentation, Université de Bourgogne, Dijon, France; Michèle Chadwick, Léonor Philip, Guillaume Dezecache, and Julie Grèzes, Laboratoire de Neurosciences Cognitives and Institut d’Etude de la Cognition, Ecole Normale Supérieure, Paris, France; Laurence Conty, Laboratoire de Psychopathologie et Neuropsychologie, Université Paris 8, Saint-Denis, France.
This research was supported by a grant from the French National Research Agency (ANR 11 EMCO 00902). We are grateful to Sylvie Berthoz (INSERM U669) for administrative support.
Correspondence concerning this article should be addressed to Robert Soussignan, Centre des Sciences du Goût et de l’Alimentation, CNRS UMR 6265, INRA UMR 1324, Université de Bourgogne, Dijon, France. E-mail: [email protected] or [email protected]
Humans often spontaneously match conspecifics’ behaviors, a phenomenon typically termed mimicry (Hess, Philippot, & Blairy, 1999). There is substantial evidence that mimicry is ubiquitous, serves important social functions (e.g., affiliation), and is automatic and nonconscious (Chartrand & van Baaren, 2009), because, for example, the perception of facial expressions may elicit congruent rapid facial reactions (RFRs) (e.g., Dimberg & Thunberg, 1998). Yet what processes underlie interpersonal behavior matching remains unclear. In particular, whether mimicry accounts for all forms of behavior matching is debated, and especially for RFRs to others’ emotional expressions (e.g., Moody, McIntosh, Mann, & Weisser, 2007). More specifically, when we spontaneously smile or frown upon seeing a person who is smiling or frowning, is it a motor mimicry, an intent to communicate, or an emotional response?
One way to resolve this issue is to investigate the impact of gaze direction and facial expressions on RFRs. Indeed, major theoretical approaches make distinct predictions about whether RFRs to others’ facial displays are influenced by perceived gaze behavior, and eye contact has been suggested to be a potent trigger of embodied simulation (Niedenthal, Mermillod, Maringer, & Hess, 2010). According to the mimicry hypothesis, the perception of some behaviors directly and automatically activates our own motor representation of these behaviors (Chartrand & van Baaren, 2009), probably via the so-called mirror neurons system. Although attention may enhance observation–execution matching (Chong, Cunnington, Williams, & Mattingley, 2009), mimicry as an automatic response should also occur when the sender’s gaze is not directed toward an observer. By contrast, according to the communicative
act hypothesis, facial matching is primarily an interpersonal process reflecting some representation and understanding of the sender’s internal state and should mainly occur if an observer is the
target of the sender’s attention (Bavelas, Black, Lemery, & Mullett, 1986). Finally, emotional perspectives, and more specifically
appraisal theories of emotion, stress the importance of appraisal
dimensions (e.g., pleasantness, self-relevance, coping potential,
event compatibility with social/personal norms or values) to account for the differentiation of emotional responses (Scherer,
Schorr, & Johnstone, 2001). Within this framework, the meaning
of emotional cues perceived by the self is critical and varies as a
function of features, such as gaze direction, which may signal that
the observer is the target of sender’s attention or that she or he
detects the sender is reacting to a salient event in the shared
environment (Sander, Grandjean, Kaiser, Wehrle, & Scherer,
2007). For example, angry faces directed toward a receiver, by
signaling that he or she is the target of hostility, were perceived as
less affiliative (Hess, Adams, & Kleck, 2007) and induced higher
corrugator (Schrammel, Pannasch, Graupner, Mojzisch, & Velichkovsky, 2009) and amygdala (N'Diaye, Sander, & Vuilleumier,
2009) activity than did angry faces with averted gaze. On the other
hand, fearful faces with averted gaze, by signaling a potential
source of danger, were perceived as more intense and negative
(Hess et al., 2007; Sander et al., 2007), and elicited higher
amygdala activity (N'Diaye et al., 2009) than did fearful faces
directed at the observer.
To our knowledge, only two studies have tested the effects of
both gaze behavior and facial expression on RFRs (Mojzisch et al.,
2006; Schrammel et al., 2009). However, besides conflicting results, these studies were limited by (i) including only approach-oriented emotions (happiness and anger), (ii) using a judgment task
that may affect electromyography (EMG) activity as a result of the
cognitive load of the task (Lishner, Cooter, & Zald, 2008) or the
encoding of emotional concepts (Halberstadt, Winkielman, Niedenthal, & Dalle, 2009), (iii) manipulating face orientation rather
than gaze direction, making unclear whether different amounts of
information conveyed in the direct versus averted condition contributed to findings.
The aim of this study was to test the differing assumptions
concerning the underlying mechanisms of interpersonal matching
of facial expressions by examining whether the passive observation of approach-oriented (happiness and anger) and avoidance-oriented (fear and sadness) emotions elicited congruent RFRs and
differentiated autonomic responses as a function of gaze direction.
Autonomic responses, such as skin conductance response (SCR)
and heart rate (HR) deceleration, were recorded because they index
sympathetic arousal and attention/orienting responses, respectively
(Andreassi, 2000). While motor mimicry and communicative accounts predict that congruent RFRs should be either little influenced by gaze direction or solely affected by eye contact, appraisal
models make predictions depending on the self-relevance of facial
expressions as a function of gaze direction and gender. For happiness, more congruent RFRs were expected for faces with direct
versus averted gaze, as the affiliative value of smiles would be
increased if the observer is the object of another’s attention, and
more particularly in women because they smile more or are more
affiliative than men (Dimberg & Lundquist, 1990; LaFrance,
Hecht, & Paluck, 2003). For direct-gaze anger expressions, congruent or incongruent RFRs (i.e., anger or fear) may occur (Moody
et al., 2007; Schrammel et al., 2009), and these effects should be
moderated by gender, since men display more anger or are more
dominant than do women (Brody & Hall, 2008; Hess, Adams, &
Kleck, 2005). Concerning autonomic reactivity, the few studies
manipulating both happy/anger expressions and gaze behavior
failed to find an effect of eye-to-eye contact, as measured by
pupil dilation (Mojzisch et al., 2006; Schrammel et al., 2009).
However, since emotionally neutral face studies using SCR
showed that perceiving a direct gaze elicited higher reactivity
than perceiving an averted gaze (e.g., Helminen, Kaasinen, &
Hietanen, 2011), we hypothesized higher SCRs and larger HR
decelerations in response to facial expressions with direct as
opposed to averted gaze.
Regarding avoidance-oriented emotions, appraisal models predict that fear faces with averted gaze should be more self-relevant
because they signal more clearly the location of a potential danger
in the shared environment (Sander et al., 2007). From this perspective, observers should display larger RFRs when exposed to
fearful faces with averted in contrast to direct gaze. Finally, for
sadness, as their facial expressions signal both the loss of a
person/object of importance to the self (Bonanno, Goorin, &
Coifman, 2008) and a call for support or help from others (Fischer
& Manstead, 2008), averted gaze should more clearly signal “loss
and disengagement” (Adams & Kleck, 2005), whereas direct gaze
would remain self-relevant for signaling the sender’s need for help
or support. Thus, we anticipated congruent RFRs for sad faces
regardless of eye direction, with higher RFRs in women than in
men, as they have been shown to be more emotionally contagious
for this emotion (Doherty, Orimoto, Singelis, Hatfield, & Hebb,
1995). Finally, we predicted earlier RFRs for approach-oriented
(500–1000 ms) than for avoidance-oriented (1000–2000 ms) emotions following findings from previous studies (Oberman, Winkielman, & Ramachandran, 2009).
Method
Participants
Forty-two adults (21 women) participated in the study. Because
of technical problems, data from 11 participants were discarded,
leaving 17 women (M = 23.36 years, SD = 2.33) and 14 men (M = 24.78 years, SD = 3.47) in the final sample.
Stimuli
We used avatars that are highly controlled realistic stimuli able
to induce EMG reactivity and an experience of being with another
person (Bailenson, Blascovich, Beall, & Loomis, 2003; Weyers,
Mühlberger, Hefele, & Pauli, 2006). Movies depicting virtual
characters were created using Poser 9 software (Smith Micro,
Watsonville, CA). The facial expressions were obtained by manipulating polygon groups on a three-dimensional (3D) mesh that
made up the avatars’ facial structure. The polygon groups were
comparable to the action units (AUs) as described in the Facial
Action Coding System (FACS) (Ekman & Friesen, 1978). The
following codes were used: 6 + 12 + 25 + 26 for happiness, 4 + 5 + 24 for anger, 1 + 4 + 15 for sadness, and 1 + 2 + 4 + 5 +
20 for fear. Disgust expressions were also created for pretests
using AU9/10. Neutral faces were used as control stimuli. Avatars
(2 men, 2 women) had either direct or averted gaze. Gaze direction
was created by angular deviation of the iris structure, in relation to
the axis of the head, using a computational displacement of 15° to
either side (left/right) to generate counterbalanced conditions.
Each movie clip lasted 2 s, with the rise time of high-intensity
expression (apex) occurring at 500 ms and then followed by a
1500-ms static expression. Stimuli were presented using a 19-inch
LCD monitor with a resolution of 500 × 490 pixels. The visual
angles of stimuli were 22.92° in height and 21.92° in width.
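As a purely illustrative aside (not part of the original Poser workflow), the action-unit combinations listed above can be summarized in a small lookup structure; the dictionary, gaze offsets, and helper function below are hypothetical names introduced only for this sketch.

```python
# Illustrative encoding of the FACS action-unit combinations listed above
# (Ekman & Friesen, 1978); the rendering context is not reproduced here.
ACTION_UNITS = {
    "happiness": [6, 12, 25, 26],
    "anger":     [4, 5, 24],
    "sadness":   [1, 4, 15],
    "fear":      [1, 2, 4, 5, 20],
    "disgust":   [9, 10],        # created for pretests only
    "neutral":   [],             # control stimuli
}
GAZE_OFFSETS_DEG = {"direct": 0, "averted_left": -15, "averted_right": 15}

def describe_stimulus(emotion: str, gaze: str) -> str:
    """Return a human-readable label for one 2-s movie clip condition."""
    aus = "+".join(str(au) for au in ACTION_UNITS[emotion]) or "none"
    return f"{emotion} (AUs {aus}), gaze {gaze} ({GAZE_OFFSETS_DEG[gaze]} deg)"

print(describe_stimulus("fear", "averted_left"))
```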
Pretests
Three groups of adults (N = 21–32) rated facial features of avatars’ emotion expressions. The pretests revealed that (i) emotional expressions were accurately decoded regardless of gaze direction (from 80.7% to 95.3%), F(5, 115) = 4.01, p = .001; (ii) gaze direction was accurately decoded regardless of the type of emotion (from 92.6% to 100%), F(5, 130) = 2.05, p > .05; (iii) anger faces with direct gaze were judged more hostile than anger faces with averted gaze (77.34% vs. 47.66%), t(31) = 5.24, p < .0001, whereas fearful faces with averted gaze signaled more clearly a danger in the environment than those with direct gaze (75.78% vs. 64.84%), t(31) = 2.52, p = .02. Finally, for sadness
expressions, about half of participants accurately selected either
“loss/disengagement” or “help/support” information, regardless of
gaze direction.
3
with electrolyte gel were placed and secured using adhesive collars
and sticky tape. Following Fridlund and Cacioppo’s (1986) guidelines, the two electrodes of a pair were placed at a distance of about
1.5 cm over muscle regions associated with emotion expressions
(Ekman & Friesen, 1978). Lateral Frontalis muscle activity, which
raises outer brow, was used to measure fear expression. Corrugator Supercilii muscle activity, which lowers brows, was used to
measure anger expression. Zygomaticus Major muscle activity,
which pulls lip corners, was used to measure happy expression.
Depressor Anguli Oris muscle activity, which pulls the lips downward, was used to measure sad expression. The ground electrode
was placed in the upper part of the forehead. The EMG signals
were recorded with a 10-Hz to 500-Hz bandpass filter and a 50-Hz
notch filter, rectified and smoothed online using a 500 ms time
constant.
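The EMG conditioning chain described above (10–500 Hz band-pass, 50-Hz notch, rectification, smoothing with a 500-ms time constant) can be approximated offline as in the following sketch. This is illustrative only, assuming a fourth-order Butterworth band-pass, an arbitrary notch quality factor, and a single-pole exponential smoother standing in for the amplifier's analog time constant; it is not the acquisition software actually used.

import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 2000.0  # Hz, reported sampling rate of the bioelectrical signals

def preprocess_emg(raw, fs=FS):
    """Approximate offline version of the reported EMG conditioning."""
    b, a = butter(4, [10.0, 500.0], btype="bandpass", fs=fs)   # 10-500 Hz band-pass
    x = filtfilt(b, a, raw)
    bn, an = iirnotch(w0=50.0, Q=30.0, fs=fs)                  # 50 Hz mains notch
    x = filtfilt(bn, an, x)
    x = np.abs(x)                                              # full-wave rectification
    # Single-pole exponential smoother emulating a 500-ms time constant (assumption).
    alpha = 1.0 / (1.0 + 0.5 * fs)
    out = np.empty_like(x)
    acc = x[0]
    for i, s in enumerate(x):
        acc += alpha * (s - acc)
        out[i] = acc
    return out

# Quick check on synthetic noise standing in for a raw EMG trace.
rng = np.random.default_rng(0)
print(preprocess_emg(rng.normal(size=4 * int(FS))).shape)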
SCR (in microsiemens) was recorded using bipolar finger electrodes and ADInstruments Model ML116 GSR Amp connected to
the PowerLab system. The electrodes were attached with a Velcro
strap on the palmar surfaces of the middle segments of phalanges
of the second and third fingers of the nondominant hand. Heart
activity was recorded from 2 electrocardiogram (ECG) electrodes
placed above the right and left wrists. A digital input on the
computer detected the R-waves and displayed HR online in beats
per minute (bpm) on a separate channel.
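The online conversion from R-waves to heart rate in beats per minute amounts to taking 60 divided by the inter-beat interval. The sketch below illustrates that computation with an elementary peak detector; the amplitude threshold and 300-ms refractory period are illustrative assumptions, not parameters of the acquisition system used in the study.

import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(ecg, fs=2000.0):
    """Detect R-waves and convert inter-beat intervals to heart rate (bpm)."""
    threshold = ecg.mean() + 2 * ecg.std()                      # assumed detection threshold
    peaks, _ = find_peaks(ecg, height=threshold, distance=int(0.3 * fs))
    ibi = np.diff(peaks) / fs                                   # inter-beat intervals (s)
    return 60.0 / ibi                                           # instantaneous HR in bpm

# Toy trace: one spike every 0.8 s should yield 75 bpm throughout.
ecg = np.zeros(20000)
ecg[1000::1600] = 1.0
print(heart_rate_bpm(ecg))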
Data Analysis
Procedure
On arrival, participants sat in a comfortable chair and were
separated by two screens from the experimenter. Following the
placement of sensors, they were instructed that involuntary reactions (facial temperature, HR, and SCR) would be recorded in response to avatars' faces. A cover story was used for facial EMG
to minimize demand characteristics and avoid voluntary control of
facial muscles (Fridlund & Cacioppo, 1986). Following the completion of a familiarization trial, participants viewed 4 avatars
displaying 4 facial expressions (angry, fear, happy, and sad) plus
a neutral face, with either a direct or averted gaze. Each trial began
with a warning beep (250 ms) followed by a central fixation cross
(1000 ms), and then by the avatar movie for 2 s. A blank screen
was displayed during 18- to 23-s intertrial intervals. The order of
stimuli presentation was randomized across participants using
E-Prime software.
Psychophysiological Measures
They were recorded using AD Instruments PowerLab data acquisition system connected to a PC. The bioelectrical signals were
filtered, amplified, and sampled at a rate of 2000 Hz under the
control of the LabChart 7 software. The stimulus onset was automatically signaled on the LabChart channels by a Quatech PCMCIA card. As part of the LabChart software, the Video Capture
module was used with a Webcam to record visible facial movements of participants to enable a later visual inspection of movement artifacts.
Before attaching the electrodes, the target sites of the skin of the
left side of the face were cleaned with alcohol and gently rubbed,
and then four pairs of 4-mm shielded Ag/AgCl electrodes filled
Because of technical difficulties and consistent electric noise,
data of 11 participants were excluded. Movies of the remaining
sample (N ⫽ 31) were then inspected to verify the presence of
movements unrelated to the activity of muscle regions of interest.
No more than 0.5% of trials related to irrelevant movements (e.g.,
gaping, yawning) were dropped from subsequent analyses. Following visual inspection, EMG amplitudes were calculated during
the 300-ms window preceding stimulus onset (baseline) and during
20 time intervals of 100-ms stimulus presentation. The mean EMG
amplitudes during subsequent 100-ms time intervals were expressed as a percentage of the mean amplitude of the baseline.
Percentage scores were used to standardize the widely differing
absolute EMG amplitudes of participants and enable meaningful
comparisons between individuals and across sites (Delplanque et
al., 2009).
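Concretely, the percentage-of-baseline scores described above can be obtained by averaging the EMG over the 300-ms pre-stimulus window and dividing each of the twenty 100-ms post-stimulus bins by that value. The function below is a sketch assuming one continuous single-trial array sampled at 2000 Hz, with the baseline immediately preceding stimulus onset; the array layout and function name are illustrative assumptions.

import numpy as np

def emg_percent_of_baseline(trial, fs=2000.0):
    """Express post-stimulus EMG in twenty 100-ms bins as a percentage of the
    300-ms pre-stimulus baseline."""
    n_base = int(0.3 * fs)                       # 300-ms baseline window
    baseline = trial[:n_base].mean()
    post = trial[n_base:n_base + int(2.0 * fs)]  # 2-s stimulus period
    n_bin = int(0.1 * fs)                        # 100-ms bins
    bins = post.reshape(20, n_bin).mean(axis=1)
    return 100.0 * bins / baseline

# Example on a synthetic smoothed-EMG trace.
rng = np.random.default_rng(1)
trial = np.abs(rng.normal(loc=1.0, scale=0.1, size=int(2.3 * 2000)))
print(emg_percent_of_baseline(trial).round(1))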
SCR was defined as change in the amplitude occurring 1 to 3 s
after the stimulus onset (Dawson, Schell, & Filion, 2000). We
calculated temporal changes of SCR by subtracting the 500-ms SC
baseline preceding stimulus onset (prestimulus) from the maximum amplitude in the six subsequent 500-ms intervals after stimulation onset. SCR data were then log transformed to normalize the
distribution of SCR scores (Dawson et al., 2000). HR change was
computed off-line by subtracting the 500-ms baseline level prior to
each stimulus onset (prestimulus) from the mean of HR over each
500-ms interval of the 4-s window after stimulus onset.
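The same logic applies to the autonomic change scores: each post-stimulus interval is referenced to the 500-ms pre-stimulus baseline. The sketch below assumes a continuous skin-conductance trace sampled at 2000 Hz and a heart-rate trace already expressed in bpm; the log(1 + x) form of the transform, the clipping of negative SCR changes at zero, and the 100-Hz sampling of the bpm channel are assumptions, since the text only states that SCR data were log transformed.

import numpy as np

def scr_change(sc, onset, fs=2000.0):
    """SCR change scores: maximum amplitude in each of six 500-ms post-stimulus
    intervals minus the 500-ms pre-stimulus baseline, then log(1 + x)."""
    n = int(0.5 * fs)
    baseline = sc[onset - n:onset].mean()
    post = sc[onset:onset + 6 * n].reshape(6, n)
    delta = post.max(axis=1) - baseline
    return np.log1p(np.clip(delta, 0.0, None))   # assumed form of the log transform

def hr_change(hr, onset, fs=100.0):
    """HR change scores: mean bpm in each 500-ms interval of the 4-s
    post-stimulus window minus the 500-ms pre-stimulus baseline."""
    n = int(0.5 * fs)
    baseline = hr[onset - n:onset].mean()
    post = hr[onset:onset + 8 * n].reshape(8, n)
    return post.mean(axis=1) - baseline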
We conducted analyses of variance (ANOVAs) with emotion
(anger, fear, happiness, neutral, and sadness), gaze (direct,
averted), avatar’s sex (male, female), and time (20 intervals for
facial EMG, 6 intervals for SCR, and 8 intervals for HR) as
within-subjects factors and participant’s gender (men, women) as
a between-subjects factor.¹ Following the significance of any
overall F test, we used Tukey’s honestly significant difference
(HSD) tests to compare differences between means.
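As an illustration of the within-subject part of this design, a long-format table with one EMG score per participant, emotion and gaze condition can be analysed with a repeated-measures ANOVA, for instance with statsmodels' AnovaRM. The sketch below uses synthetic data and deliberately omits the time factor, the between-subjects gender factor and the Tukey HSD step, so it is a simplified stand-in rather than the authors' analysis.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic long-format data standing in for the real EMG scores: one row per
# participant x emotion x gaze cell (scores averaged over the time intervals).
rng = np.random.default_rng(0)
rows = [{"participant": p, "emotion": emo, "gaze": gz,
         "emg": rng.normal(loc=1.0 + 0.5 * (emo == "happy"), scale=0.3)}
        for p in range(1, 32)
        for emo in ("anger", "fear", "happy", "neutral", "sad")
        for gz in ("direct", "averted")]
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA on the within-subject factors only.
print(AnovaRM(data=df, depvar="emg", subject="participant",
              within=["emotion", "gaze"]).fit())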
Results and Discussion
Facial EMG
Happy faces. Significant effects of emotion, F(4, 116) = 10.41, p < .00001, ηp² = .26, gaze, F(1, 29) = 8.32, p = .007, ηp² = .22, and Emotion × Time interaction, F(76, 2204) = 2.29, p < .0001, ηp² = .07, were found on Zygomaticus activity, reflecting larger RFRs to avatars' happy faces from 700 to 2000 ms (all ps < .05). As predicted, the interaction between emotion and gaze was significant, F(4, 116) = 2.78, p = .03, ηp² = .09, indicating higher Zygomaticus activity to happy faces with direct than averted gaze (p = .02) (Figure 1a). A marginally significant interaction between gaze and gender was also detected, F(1, 29) = 3.45, p = .07, ηp² = .11, with men showing higher reactivity in the direct (4.54%) than in the averted (−0.23%) gaze condition (p = .02), whereas no effect was found in women (direct gaze: 2.20%; averted gaze: 1.30%).
Previous studies manipulating gaze behavior provided conflicting results, with one study reporting higher zygomatic activation in
response to happy expressions looking at observers (Schrammel et
al., 2009), while another found no effect of attention (Mojzisch et
al., 2006). Although it is unclear how these contradictory results
might be explained, it is interesting that we used—like Schrammel
et al. (2009) and unlike Mojzisch et al. (2006)—avatars displaying
Duchenne smiles, which are typically considered enjoyment smiles
(e.g., Soussignan, 2002). Because enjoyment smiles with eye contact are rewarding cues fostering intimacy and social interaction
(Niedenthal et al., 2010), it is possible that their social meaning
differs from that of enjoyment smiles with averted gaze. This could
have led to more congruent RFRs as part of an interpersonal
emotion transfer (Parkinson, 2011) promoting affiliative exchanges. Further studies are needed using RFRs to clarify the issue
of the social meanings attributed to different types of smiles as a
function of gaze direction (Niedenthal et al., 2010; Soussignan &
Schaal, 1996). With regard to participants' gender, we did not find the expected pattern of women displaying larger zygomatic activity than men (e.g., Dimberg & Lundqvist, 1990); we found only that men exhibited higher zygomatic activity when happy faces looked directly at them as opposed to when happy faces looked away. Thus, women's smiles in response to others' happy faces appeared less
affected by gaze direction. Although the reason for this finding is
unclear, it is possible that motives for affiliation are greater in
women than in men (Brody & Hall, 2008), potentiating the level of
women’s zygomatic activity regardless of gaze direction.
Anger faces. Significant effects of emotion, F(4, 116) = 5.96, p = .0002, ηp² = .17, and of Emotion × Time interaction, F(76, 2204) = 2.13, p < .0001, ηp² = .07, were found for Corrugator Supercilii activity, reflecting RFRs in participants exposed to anger faces, reaching significance from 700 ms onward (all ps < .05). The Emotion × Time × Gaze × Participants' gender interaction was significant, F(76, 2204) = 1.38, p = .02, ηp² = .04, as well as the Emotion × Time × Participants' gender × Avatar's sex interaction, F(76, 2204) = 1.46, p = .007, ηp² = .05. These findings reflected higher RFRs in men when angry faces looked at them than when angry faces looked away (Figure 1b, time windows: 900–1000 ms, all ps < .05), and higher RFRs in men exposed to angry expressions of male than female avatars (time windows: 600–900 ms, all ps < .05).
A previous study (Schrammel et al., 2009) also found a higher
Corrugator response to anger faces when virtual characters turned
toward observers in comparison with when they looked elsewhere.
Taken together, these findings may reflect implicit appraisal of the
social meaning of anger expressions that signal hostility, threat, or a
potential attack when directed toward the perceiver (Sander et al.,
2007), with men possibly more reactive than women, as part of a
defensive reaction related to power/status. As anger displays have
been linked to the power and dominance of the expresser, and social
stereotypes render the perception of these signals more appropriate in
men than in women (Hess et al., 2005), it is possible that our findings
reflect gender-based stereotypical expectations. Further research measuring both RFRs and social stereotypes in participants exposed to
angry expressions varying in gaze direction is required.
Fear faces. The following interactions were significant for Lateral Frontalis muscle activity: Emotion × Time, F(76, 2204) = 1.46, p = .006, ηp² = .05, Emotion × Gaze × Time, F(76, 2204) = 1.48, p = .005, ηp² = .05, and Emotion × Gaze × Time × Participant's gender, F(76, 2204) = 1.39, p = .01, ηp² = .04. Avatars' fear expressions, in comparison with other emotional expressions, induced an increase in Frontalis activity, beginning during the second 500-ms interval, and reaching significance between 1300 and 1700 ms (all ps < .05) after stimulus onset. Interestingly, fear faces with averted gaze induced more Frontalis activity than did fear faces with direct gaze (time intervals: 1300–1600 ms, all ps < .05), whereas no differences were
found for the other emotions. This finding might reflect the critical
role of self-relevance appraisal because fear faces combined with
averted gaze may more clearly signal a potential threat/danger in
the observer’s environment (Hadjikhani, Hoge, Snyder, & de
Gelder, 2008). From this perspective, matched RFRs may reflect
an interpersonal emotion transfer (Parkinson, 2011). Such an interpretation is strengthened by studies wherein fearful faces with
averted gaze induced greater increases in subjective reports and
amygdala activity than did fearful faces with direct gaze (Hadjikhani et al., 2008; N'Diaye et al., 2009; Sander et al., 2007).
Furthermore, as shown in Figure 1c, women displayed a higher
increase in Frontalis activity than did men when exposed to fear
faces with averted as opposed to direct gaze (time intervals: 1300–1600 ms, p < .05). Fear is believed to occur more in women than in men (Brody & Hall, 2008; Hess et al., 2000), and in studies using self-reports of emotional contagion, women scored higher on the fear subscale than men (Doherty et al., 1995). Thus, larger RFRs to fearful faces with averted gaze in women than in men suggest that these facial expressions might be more self-relevant for women, possibly as a result of both socialization and
gender stereotypes.
Sad faces. A significant main effect of emotion was found for Depressor Anguli Oris activity, F(1, 29) = 4.38, p = .002, ηp² =
Footnote 1: We also conducted ANOVAs using the type of muscle as a within-subjects factor for each type of emotion. The results of the Muscle × Time interaction revealed expression-appropriate muscles: F(57, 1653) = 2.26, p < .0001, for anger; F(57, 1653) = 1.95, p < .001, for happiness; F(57, 1653) = 1.34, p = .04, for fear; and F(57, 1596) = 1.36, p = .04, for sadness.
Figure 1. Mean facial electromyography (EMG) activity as a function of gaze direction, gender, and the nature
of emotional facial expressions: (a) Zygomatic Major activity; (b) Corrugator Supercilii activity to anger faces;
(c) Lateral Frontalis activity to fear faces; (d) Depressor Anguli Oris activity. Activity reflects average
activation during each 100-ms time interval.
.13, indicating higher reactivity in participants exposed to sad (1.90%) than to other facial expressions (anger: −0.33%; fear: −0.32%; happy: −0.62%; neutral: −0.83%; all ps < .05). Moreover, a significant Emotion × Time × Participant's gender interaction was detected (Figure 1d), F(76, 2204) = 1.28, p = .05, ηp² = .04, reflecting more activity in women in response to sad than to anger or fear faces (time windows: 1200–1400 ms; all ps < .05). We obtained larger RFRs to sad faces for both gaze conditions. This is not surprising since we predicted that sad faces with either direct or averted gaze convey self-relevant signals (disengagement due to a loss, call for social support), as confirmed by our pretest studies. Furthermore, our result showing that women displayed higher Depressor activity than men when exposed to sad faces is consistent with findings indicating that they scored higher than men on a sadness subscale of emotional contagion (Doherty et al., 1995), and that they usually expressed more sad expressions than men (Brody & Hall, 2008).
Autonomic Data
SCR. A main effect of time was found, F(5, 145) = 3.38, p = .006, ηp² = .10, indicating that observing human faces elicited a
significant increase in SCRs within 1 to 3 s after stimulus onset (p < .05). A Gaze × Time × Avatar's sex interaction was also found, F(5, 145) = 4.67, p = .0005, ηp² = .14, revealing higher SCRs to female avatars with direct than averted gaze within 2.5–3 s after stimulus onset (all ps < .05). Finally, a significant Emotion × Gaze × Time × Avatar's sex interaction was detected, F(20, 580) = 2.01, p = .006, ηp² = .06 (Figure 2a), revealing higher SCRs to female avatars' fear expressions with direct gaze than averted gaze (time windows: 2–3 s, all ps < .001).
These data are in line with previous studies showing an effect of
gaze direction on physiological arousal in response to emotionally
neutral faces (Helminen et al., 2011; Nichols & Champness, 1971).
However, in our study, mutual eye contact strongly potentiated
physiological arousal in participants exposed to fear expressions.
As the human amygdala, which directly influences electrodermal
activity (Mangina & Beuzeron-Mangina, 1996), is responsive to
the larger size of eye whites (i.e., sclera) of fear faces (Whalen et
al., 2004), our finding might reflect a higher effect of sclera with
direct as opposed to averted gaze on autonomic arousal. Concerning the effect of character’s gender, although we have no clear
explanation for this finding, a similar result has been reported by
Schrammel et al. (2009), who speculated, with regard to gender-specific norms, that higher arousal in response to female characters
might reflect the participant’s expectation of a more affiliative and
rewarding interaction from a female than a male partner.
HR. A significant effect of time was found, F(7, 203) = 9.56, p < .0001, ηp² = .25, as well as a significant interaction between time and avatar's sex, F(7, 203) = 2.50, p = .017, ηp² = .08, indicating that female avatars elicited larger HR deceleration than male avatars. Interestingly, a Gaze × Time × Participant's gender interaction was also detected, F(7, 203) = 1.81, p = .08, ηp² = .06, with men and women displaying cardiac deceleration reaching a minimum at about 3 s after stimulus onset, followed by a cardiac acceleration at 4 s in men in the averted in contrast to direct gaze condition (p = .007), whereas a decrease in heart rate was observed at 4 s after stimulus onset in both gaze conditions in women
(Figure 2b).
As predicted, the perception of facial stimuli induced heart rate
deceleration, consistent with previous studies using both positive and
negative facial expressions (e.g., Vrana & Gross, 2004). This suggests
that a heart rate decrease likely reflects the allocation of attentional
resources to salient stimuli. Interestingly, our findings highlighted that
gaze direction influenced heart rate only in men, suggesting that while
attentional resources were initially allocated to stimuli in both men
and women, men might be more susceptible to social disengagement
when the sender’ eyes looked elsewhere. Although further work is
required to confirm this result, it is interesting that neurophysiological
studies, using event-related potential (ERP), provided evidence that
gender influenced the N140 and P240 components of attention to cue
stimuli, with women allocating more attention resources to complete
the task (Feng et al., 2011).
General Discussion
To the best of our knowledge, this study is the first to investigate
the effects of both the senders’ gaze direction and facial expressions
related to approach-oriented (happiness and anger) and avoidance-oriented (fear and sadness) emotions on observers' RFRs and autonomic responses. It was designed to test assumptions about three
possible underlying processes (i.e., automatic motor mimicry, communicative intent, and emotional appraisal) accounting for matched
facial reactions to another’s emotional expressions. Taken together,
our findings indicate that when participants were not submitted to
judgment tasks and passively observed facial expressions, RFRs to
both approach- and avoidance-oriented emotions cannot be interpreted
as involving either direct motor matching or a need to communicate
the sender’s emotional expression. While these two perspectives predict that observers’ congruent RFRs should be little influenced by
gaze direction (automatic motor mimicry) or solely affected by eye
contact (communicative intent), participants in our study displayed
congruent RFRs as a function of the social meaning of perceived gaze
direction and facial expressions. Thus, our findings are more consistent with predictions from appraisal theories of emotion highlighting
the critical role of the detection of self-relevance to account for
Figure 2. Time course of (a) SCR magnitude and (b) heart rate changes in participants exposed to avatars as
a function of gaze direction, the nature of facial expressions, and gender. SCR = skin conductance response; bpm = beats per minute.
differentiated and adaptive responses to salient social events (Scherer
et al., 2001).
Because we focused on RFRs under limited social circumstances,
this does not exclude that communicative intent may be involved in
more complex social situations (e.g., Bavelas et al., 1986). Furthermore, a direct perception–behavior link possibly mediates numerous
manifestations of mimicry (e.g., mannerisms, nonexpressive mimicry)
(Chartrand & van Baaren, 2009). However, when someone passively
observes others’ facial expressions, we provide clear evidence that
facial muscle responses may be rapidly matched to the senders’ facial
displays depending on the significance of social cues that were implicitly appraised in relation to the self.
References
Adams, R. B., Jr., & Kleck, R. E. (2005). Effects of direct and averted gaze
on the perception of facially communicated emotion. Emotion, 5, 3–11.
doi:10.1037/1528-3542.5.1.3
Andreassi, J. L. (2000). Psychophysiology: Human behavior and physiological response. Mahwah, NJ: Erlbaum.
Bailenson, J. N., Blascovich, J., Beall, A. C., & Loomis, J. M. (2003). Interpersonal distance in immersive virtual environments. Personality and Social Psychology Bulletin, 29, 819–833. doi:10.1177/0146167203029007002
Bavelas, J. B., Black, A., Lemery, C. R., & Mullett, J. (1986). “I show how
you feel”: Motor mimicry as a communicative act. Journal of Personality and Social Psychology, 50, 322–329. doi:10.1037/0022-3514.50.2
.322
Bonanno, G. A., Goorin, L., & Coifman, K. G. (2008). Sadness and grief.
In M. Lewis, J. Haviland-Jones, & L. Feldman Barrett (Eds.), Handbook
of emotions (3rd ed., pp. 797– 810). New York, NY: Guilford Press.
Brody, L. R., & Hall, J. A. (2008). Gender and emotion in context. In M.
Lewis, J. Haviland-Jones, & L. Feldman Barrett (Eds.), Handbook of
emotions (3rd ed., pp. 395– 408). New York, NY: Guilford Press.
Chartrand, T. L., & van Baaren, R. (2009). Human mimicry. Advances in Experimental Social Psychology, 41, 219–274. doi:10.1016/S0065-2601(08)00405-X
Chong, T. T., Cunnington, R., Williams, M. A., & Mattingley, J. B. (2009).
The role of selective attention in matching observed and executed actions.
Neuropsychologia, 47, 786 –795. doi:10.1016/j.neuropsychologia.2008
.12.008
Dawson, M. E., Schell, A. M., & Filion, D. L. (2000). The electrodermal
system. In J. T. Cacioppo & L. G. Tassinary (Eds.), Principles of
psychophysiology: Physical, social, and inferential elements (2nd ed.,
pp. 200 –223). Cambridge, UK: Cambridge University Press.
Delplanque, S., Grandjean, D., Chrea, C., Coppin, G., Aymard, L., Cayeux,
I., . . . Scherer, K. R. (2009). Sequential unfolding of novelty and
pleasantness appraisals of odors: Evidence from facial electromyography and autonomic reactions. Emotion, 9, 316 –328. doi:10.1037/
a0015369
Dimberg, U., & Lundqvist, L.-O. (1990). Gender differences in facial
reactions to facial expressions. Biological Psychology, 30, 151–159.
doi:10.1016/0301-0511(90)90024-Q
Dimberg, U., & Thunberg, M. (1998). Rapid facial reactions to emotional
facial expressions. Scandinavian Journal of Psychology, 39, 39 – 46.
doi:10.1111/1467-9450.00054
Doherty, R. W., Orimoto, L., Singelis, T. M., Hatfield, E., & Hebb, J.
(1995). Emotional contagion: Gender and occupational differences. Psychology of Women Quarterly, 19, 355–371. doi:10.1111/j.1471-6402
.1995.tb00080.x
Ekman, P., & Friesen, W. V. (1978). Facial Action Coding System: A
technique for the measurement of facial movement. Palo Alto, CA:
Consulting Psychologists Press.
Feng, Q., Zheng, Y., Zhang, X., Song, Y., Luo, Y. J., Li, Y., & Talhelm,
T. (2011). Gender differences in visual reflexive attention shifting:
Evidence from an ERP study. Brain Research, 1401, 59 – 65. doi:
10.1016/j.brainres.2011.05.041
Fischer, A. H., & Manstead, A. S. R. (2008). Social functions of emotion.
In M. Lewis, J. Haviland-Jones, & L. Feldman Barrett (Eds.), Handbook
of emotions (3rd ed., pp. 456 – 468). New York, NY: Guilford.
Fridlund, A. J., & Cacioppo, J. T. (1986). Guidelines for human electromyographic research. Psychophysiology, 23, 567–589. doi:10.1111/j
.1469-8986.1986.tb00676.x
Hadjikhani, N., Hoge, R., Snyder, J., & de Gelder, B. (2008). Pointing with
the eyes: The role of gaze in communicating danger. Brain and Cognition, 68, 1– 8. doi:10.1016/j.bandc.2008.01.008
Halberstadt, J., Winkielman, P., Niedenthal, P. M., & Dalle, N. (2009).
Emotional conception: How embodied emotion concepts guide perception and facial action. Psychological Science, 20, 1254 –1261. doi:
10.1111/j.1467-9280.2009.02432.x
Helminen, T. M., Kaasinen, S. M., & Hietanen, J. K. (2011). Eye contact
and arousal: The effects of stimulus duration. Biological Psychology, 88,
124 –130. doi:10.1016/j.biopsycho.2011.07.002
Hess, U., Adams, R. B., & Kleck, R. E. (2005). Who may frown and who
should smile? Dominance, affiliation, and the display of happiness
and anger. Cognition and Emotion, 19, 515–536. doi:10.1080/
02699930441000364
Hess, U., Adams, R. B., Jr., & Kleck, R. E. (2007). Looking at you or
looking elsewhere: The influence of head orientation on the signal value
of emotional facial expressions. Motivation and Emotion, 31, 137–144.
doi:10.1007/s11031-007-9057-x
Hess, U., Philippot, P., & Blairy, S. (1999). Mimicry: Facts and fiction. In
P. Philippot & R. S. Feldman (Eds.), The social context of nonverbal
behavior. Studies in emotion and social interaction (pp. 213–241).
Cambridge, UK: Cambridge University Press.
Hess, U., Senécal, S., Kirouac, G., Herrera, P., Philippot, P., & Kleck, R. E.
(2000). Emotional expressivity in men and women: Stereotypes and
self-perceptions. Cognition and Emotion, 14, 609 – 642. doi:
10.1080/02699930050117648
LaFrance, M., Hecht, M. A., & Paluck, E. L. (2003). The contingent smile:
A meta-analysis of sex differences in smiling. Psychological Bulletin,
129, 305–334. doi:10.1037/0033-2909.129.2.305
Lishner, D. A., Cooter, A. B., & Zald, D. H. (2008). Rapid emotional contagion and expressive congruence under strong test conditions. Journal of Nonverbal Behavior, 32, 225–239. doi:10.1007/s10919-008-0053-y
Mangina, C. A., & Beuzeron-Mangina, J. H. (1996). Direct electrical
stimulation of specific human brain structures and bilateral electrodermal activity. International Journal of Psychophysiology, 22, 1– 8. doi:
10.1016/0167-8760(96)00022-0
Mojzisch, A., Schilbach, L., Helmert, J., Pannasch, S., Velichkovsky,
B. M., & Vogeley, K. (2006). The effects of self-involvement on
attention, arousal, and facial expression during social interaction with
virtual others: A psychophysiological study. Social Neuroscience, 1,
184 –195. doi:10.1080/17470910600985621
Moody, E. J., McIntosh, D. N., Mann, L. J., & Weisser, K. R. (2007). More
than mere mimicry? The influence of emotion on rapid facial reactions
to faces. Emotion, 7, 447– 457. doi:10.1037/1528-3542.7.2.447
N'Diaye, K., Sander, D., & Vuilleumier, P. (2009). Self-relevance processing in the human amygdala: Gaze direction, facial expression, and emotion intensity. Emotion, 9, 798–806. doi:10.1037/a0017845
Nichols, K. A., & Champness, B. G. (1971). Eye gaze and the GSR. Journal of Experimental Social Psychology, 7, 623–626. doi:10.1016/0022-1031(71)90024-2
Niedenthal, P. M., Mermillod, M., Maringer, M., & Hess, U. (2010). The
Simulation of Smiles (SIMS) model: Embodied simulation and the
meaning of facial expression. Behavioral and Brain Sciences, 33, 417–
433. doi:10.1017/S0140525X10000865
Oberman, L. M., Winkielman, P., & Ramachandran, V. S. (2009). Slow
echo: Facial EMG evidence for the delay of spontaneous, but not
voluntary emotional mimicry in children with autism spectrum disorders. Developmental Science, 12, 510 –520. doi:10.1111/j.1467-7687
.2008.00796.x
Parkinson, B. (2011). Interpersonal emotion transfer: Contagion and social
appraisal. Social and Personality Psychology Compass, 5, 428 – 439.
doi:10.1111/j.1751-9004.2011.00365.x
Sander, D., Grandjean, D., Kaiser, S., Wehrle, T., & Scherer, K. R. (2007).
Interaction effects of perceived gaze direction and dynamic facial expression: Evidence for appraisal theories of emotion. European Journal of
Cognitive Psychology, 19, 470 – 480. doi:10.1080/09541440600757426
Scherer, K. R., Schorr, A., & Johnstone, T. (2001). Appraisal processes in
emotion: Theory, methods, research. Series in affective science. New
York, NY: Oxford University Press.
Schrammel, F., Pannasch, S., Graupner, S., Mojzisch, A., & Velichkovsky,
B. (2009). Virtual friend or threat? The effects of facial expression and gaze
interaction on psychophysiological responses and emotional experience.
Psychophysiology, 46, 922–931. doi:10.1111/j.1469-8986.2009.00831.x
Soussignan, R. (2002). Duchenne smile, emotional experience and autonomic reactivity: A test of the facial feedback hypothesis. Emotion, 2,
52–74. doi:10.1037/1528-3542.2.1.52
Soussignan, R., & Schaal, B. (1996). Forms and social signal value of
smiles associated with pleasant and unpleasant sensory experience.
Ethology, 102, 1020 –1041. doi:10.1111/j.1439-0310.1996.tb01179.x
Vrana, S. R., & Gross, D. (2004). Reactions to facial expressions: Effects
of social context and speech anxiety on responses to neutral, anger, and
joy expressions. Biological Psychology, 66, 63–78. doi:10.1016/j
.biopsycho.2003.07.004
Weyers, P., Mühlberger, A., Hefele, C., & Pauli, P. (2006). Electromyographic responses to static and dynamic avatar emotional facial expressions. Psychophysiology, 43, 450 – 453. doi:10.1111/j.1469-8986.2006
.00451.x
Whalen, P. J., Kagan, J., Cook, R. G., Davis, F. C., Kim, H., Polis, S., . . .
Johnstone, T. (2004). Human amygdala responsivity to masked fearful
eye whites. Science, 306, 2061. doi:10.1126/science.1103617
Received February 21, 2012
Revision received June 20, 2012
Accepted July 31, 2012
Self-Relevance Appraisal Influences Facial Reactions to
Emotional Body Expressions
Julie Grèzes1*, Léonor Philip1, Michèle Chadwick1, Guillaume Dezecache1, Robert Soussignan2,
Laurence Conty1,3
1 Laboratoire de Neurosciences Cognitives (LNC) - INSERM U960 & IEC - Ecole Normale Supérieure (ENS), 75005 Paris, France, 2 Centre des Sciences du Goût et de
l’Alimentation (CSGA) UMR 6265 CNRS - 1324 INRA, Université de Bourgogne, 21000 Dijon, France, 3 Laboratoire de Psychopathologie and Neuropsychologie (LPN,
EA2027), Université Paris 8, Saint-Denis 93526 cedex, France
Abstract
People display facial reactions when exposed to others’ emotional expressions, but exactly what mechanism mediates these
facial reactions remains a debated issue. In this study, we manipulated two critical perceptual features that contribute to
determining the significance of others’ emotional expressions: the direction of attention (toward or away from the observer)
and the intensity of the emotional display. Electromyographic activity over the corrugator muscle was recorded while
participants observed videos of neutral to angry body expressions. Self-directed bodies induced greater corrugator activity
than other-directed bodies; additionally, corrugator activity was only influenced by the intensity of anger expressed by self-directed bodies. These data support the hypothesis that rapid facial reactions are the outcome of self-relevant emotional
processing.
Citation: Grèzes J, Philip L, Chadwick M, Dezecache G, Soussignan R, et al. (2013) Self-Relevance Appraisal Influences Facial Reactions to Emotional Body
Expressions. PLoS ONE 8(2): e55885. doi:10.1371/journal.pone.0055885
Editor: Andrea Serino, University of Bologna, Italy
Received March 13, 2012; Accepted January 7, 2013; Published February 6, 2013
Copyright: © 2013 Grèzes et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits
unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by the Agence Nationale de la Recherche (ANR) "Emotion(s), Cognition, Comportement" 2011 program (Selfreademo) and by
INSERM. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing Interests: The authors have declared that no competing interests exist.
* E-mail: [email protected]
the social context, the perceived emotion [29,30], and the
relationship between the expresser and the observer [31].
The present experiment manipulated the self-relevance of
stimuli to further verify the contribution of affective processes to
RFRs. Recent work converges toward the view that the ability to
initiate adapted behaviors in response to others’ emotional signals
mainly depends on the capacity to correctly evaluate the
functional significance of the emitted signal for the self [32].
Several factors can therefore influence how self-relevant a given
emotional signal is, thereby determining how an observer will
evaluate and respond to it. Direction of gaze and body posture are
among the most socially relevant cues through which we gain
information regarding the source of an individual’s emotional
reaction and the target of their impending actions. Such cues are
particularly significant for anger because of their prime importance in regulating social interactions in both human [33] and
non-human [34] primates. Facial expressions of anger have been
shown to be more accurately and quickly recognized, and judged
to be more intense, when coupled with direct gaze [35–40].
Additionally, Hess et al. [31] revealed an increase in the EMG
activity of the orbicularis oculi in response to funny films of
increasing intensity in the presence of friends but not of strangers;
strongly suggesting that both self-relevance appraisal and the
intensity of eliciting stimuli are important determinants of
emotional facial reactions.
Here we elaborated upon the above-mentioned results by
varying two independent critical cues in face-to-face interactions:
body orientation, proven to be important in determining to whom
social attention is directed (toward or away from the observer), and
Introduction
Emotional expressions are critical to the coordination of social
interactions by providing information about the emitter’s emotional states and behavioral intentions and by evoking reactions in
the observer [1–4]. There is broad agreement that when exposed to
emotional expressions, people display rapid facial reactions (RFRs)
detectable by electromyography (EMG) [5–9]. While viewing
static or dynamic happy faces elicits increased zygomaticus major
activity (pulling the corners of the mouth back and upwards into a
smile), angry faces evoke increased corrugator supercilii activity
(pulling the brows together) [10–20]. Nevertheless, exactly what
mechanism mediates these facial reactions remains a debated issue
[8,21–23].
One major theoretical framework proposes that these facial
reactions reflect the readout of emotional processing [6,24,25].
Within this framework, the appraisal perspective postulates that a
multimodal organization of response patterns (which includes
facial expressions and physiological reactions) is established
according to appraisal configurations (novelty, coping potential,
relevance, etc.) that are emotion-specific [1,26,27]. The emotional
readout framework implies that people would be disposed to react
with emotion-specific response patterns to biologically relevant
stimuli such as expressions of anger [6]; and also that a given facial
expression can elicit a different emotion and thus a divergent
reaction in the observer, such as, for instance, a posture of
submission in response to a threatening expression. This partly
explains why facial reactions are less automatic than first thought
[28], and why their production varies substantially as a function of
the intensity of the emotional display (different levels of angry body
expressions). We presented dynamic bodily expressions of anger of
increasing intensity, directed toward or away from the observer.
First, whether previous findings could be generalized to angry
body expressions remains to be established, but if affective
processes participate in facial reactions, RFRs should be elicited
for other forms of emotional communication signals than facial
expressions, such as bodily expressions. Second, the observer’s
facial EMG responses to emotional expressions as a function of
face direction has only been explored in two studies [17,41].
Besides presenting conflicting results, these studies were limited in that subjects were explicitly instructed to determine the presence or absence of eye contact. Thus, by potentially influencing the importance attributed to gaze direction, they might have biased facial EMG activity. Yet, if the relevance of others' emotional expressions impacts the observer's affective processing, being the target of an expression of anger is expected to implicitly trigger
more activity in the corrugator supercilii, as compared to being a
simple observer of that expression. Moreover, the level of muscle
activity is expected to fluctuate with the intensity of the displayed
expression.
Figure 1. 2 × 4 factorial design. Short movies of neutral (1), mild (2), moderate (3) and intense anger (4) oriented-to-Self and oriented-to-Other were presented.
doi:10.1371/journal.pone.0055885.g001
Validation of the stimuli
Methods
Two behavioral experiments were conducted on the selected 96
stimuli.
Identification of Anger. This study assessed the ability to
identify anger from dynamic body expressions. Participants
(n = 20) were requested to decide (forced-choice) for each video
whether the expression of the actor was ‘‘neutral’’, ‘‘angry’’ or
‘‘other’’. The order of the stimuli was fully randomized, as well as
the order of the response words on the response screen.
Categorization rates were percent transformed and submitted to
a repeated measures ANOVA with within-subject factors of
Target of Attention (Self or Other), Levels of Emotion (1, 2, 3, 4)
and Choice (Anger, Neutral, Other). Greenhouse-Geisser epsilons
(ε) and p values after correction were reported where appropriate.
Post-hoc comparisons (two-tailed t-tests) were performed for the
analysis of simple main effects when significant interactions were
obtained.
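Because the Greenhouse-Geisser correction is invoked repeatedly in what follows, it may help to recall that the epsilon is computed from the covariance matrix of the repeated measures and then used to deflate the degrees of freedom. The numpy sketch below implements the textbook one-way formula on synthetic ratings; it is illustrative only and is not the authors' analysis code.

import numpy as np

def greenhouse_geisser_epsilon(data):
    """Greenhouse-Geisser epsilon for a one-way repeated-measures design.
    `data` has shape (n_subjects, k_levels); epsilon ranges from 1/(k - 1)
    under a maximal sphericity violation to 1 when sphericity holds."""
    n, k = data.shape
    S = np.cov(data, rowvar=False)           # covariance of the k repeated measures
    C = np.eye(k) - np.ones((k, k)) / k      # centering matrix
    lam = np.linalg.eigvalsh(C @ S @ C)      # eigenvalues of the double-centered matrix
    return float(lam.sum() ** 2 / ((k - 1) * (lam ** 2).sum()))

# Synthetic ratings: 20 participants x 4 levels of anger intensity.
rng = np.random.default_rng(2)
ratings = rng.normal(size=(20, 4)) + np.array([0.0, 1.0, 2.0, 3.0])
print(greenhouse_geisser_epsilon(ratings))

The corrected p value is then obtained by evaluating the F statistic against (k − 1)ε and (n − 1)(k − 1)ε degrees of freedom.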
The ANOVA revealed a main effect of Choice, F(2,38) = 36.57; p < .001, but no main effect of Target (F(1,19) = 1.30; p = .26), nor a main effect of Levels of Emotion, F(3,57) = 2.42; p = .075; ε = 0.67; pcorr = .101. Of interest, only the interaction between Levels of Emotion and Choice, F(6,114) = 143.06; p < .001; ε = 0.55; pcorr < .001, reached significance. For both Self- and Other-directed expressions, level 1 was correctly categorized as "Neutral" (as compared to "Anger" and "Other", all ps < .001), and levels 3 and 4 as "Anger" (as compared to "Neutral" and "Other", all ps < .001). The response accuracy for these conditions was above 75% and differed from chance level (33%) at p < 0.001 (see Fig. 2). This was not the case for the mild levels of anger, where accuracy did not significantly differ from chance level (Other level 2 = 36%, p = .497; Self level 2 = 39%, p = .195). These mild levels were ambiguous, as participants responded "Neutral", "Angry" or "Other" equally often for both Self- and Other-directed expressions (all ps > .169; see Fig. 2 and Table S1).
Subjective Feelings. The second experiment assessed the
intensity of participants' (n = 20) feelings when confronted with
angry body expressions. Participants were requested to evaluate
the intensity of Felt Confusion, Surprise, Sadness, Threat and
Irritation on 5 graduated scales from 0 to 9. The five scales
appeared on the screen following each video, and their order was
randomized between subjects. The order of the stimuli was fully
randomized. Ratings were submitted to a repeated-measures
Ethics
The present study obtained ethics approval from the local
research ethics committees (CPP Ile de France III and Institut
Mutualiste Montsouris) at all institutions where participants were
recruited and human experimentation was conducted.
Stimuli
Eight professional actors (four males) were hired and instructed to begin at neutral and to increase their expression of anger in seven to nine 3-s increments, following the experimenter's signal, in front of a camera until the performance was deemed satisfactory. Performances were filmed with two cameras: one was facing the actor; the second was placed at a 45° angle relative to the first, creating the impression that the expression was aimed toward the observer (oriented-to-self condition) or toward another person (oriented-to-other condition).
Videos were edited using Windows Movie Maker, and several 2-s (25 frames per second) fragments were selected to obtain two extracts for each condition from neutral to extreme anger with two different viewpoints. Clips of actors seen from the side were flipped to obtain equal numbers of left and right videos, and faces were blurred using Adobe After Effects software to preclude extraction of any emotional cues conveyed by the face and to restrict information to the body.
Selection of the final material was based on the results of a
behavioral pilot study. A total of 312 edited video clips including
all the original steps from neutral to anger for each actor were
presented on a PC screen. Participants (n = 23) were instructed to
evaluate the intensity of the actor’s bodily expression on a
continuous scale from neutral to high anger. Two-tailed paired t-tests were used to compare increments, and the results permitted the selection of the most consistently convincing performances of each actor's range, corresponding to 4 significantly different steps in the degree of expressed anger (p < 0.05). We retained 96 videos corresponding to 8 actors, 4 levels of anger (neutral; mild; moderate; intense anger) and 2 points of view (oriented to self and other, both right and left viewpoints). A 2 × 4 factorial design was
built, with Target of Attention (Self or Other) and Levels of
emotion (neutral (1); mild (2), moderate (3) and intense anger (4)) as
factors (see Fig. 1).
when exposed to Self- as compared to Other-directed expressions
and increased their rating of intensity of feeling as a function of the
increased intensity of the stimuli, these effects were more marked
for feelings of Threat (see table S2). Together, these results
strongly suggest that the perception of Self-directed angry body
expressions mainly prompted a feeling of Threat in the observer,
as compared to other feelings (See Fig. 3).
Facial EMG experiment
Participants. Forty-four participants (21 women) participated in the physiological experiment. All had normal or corrected-to-normal vision, were right-handed, naive to the aim of the
experiment and presented no neurological or psychiatric history.
All provided written informed consent according to institutional
guidelines of the local research ethics committee and were paid for
their participation. Due to a bad signal-to-noise ratio in
physiological signals, four subjects (2 men) were excluded from
final analysis, leaving 40 participants (mean age = 24 ± 0.4 years).
Experiment. Participants had to fixate a white cross centered on
a 19-inch black LCD screen for a random duration of 800 to
1200 ms followed by a silent 2000 ms video showing an actor in
one of the eight experimental conditions. Each video was followed
by an inter-stimulus interval of 1000 ms. Additionally, 15 oddball
stimuli (upside-down video-clips; see below) and 38 null events
(black screen of 2 sec) were included pseudo-randomly within the
stimulus sequence. The order of the stimuli was fully randomized.
Subjects were instructed to press a button each time the upside-down video-clip appeared to ensure they paid attention to all the stimuli throughout the session. The participants performed at 100% accuracy (mean reaction time 648 ± 22 ms) in this oddball task.
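A pseudo-randomised sequence of this kind, mixing experimental videos with oddball clips and null events, can be generated as in the sketch below; the number of repetitions per condition is an illustrative assumption, since the exact trial counts per cell are not restated here.

import random

def build_sequence(n_repeats=2, n_oddballs=15, n_nulls=38, seed=0):
    """Randomised stimulus list mixing the 8 experimental conditions
    (2 targets x 4 anger levels), upside-down oddball clips and null events."""
    conditions = [(target, level) for target in ("self", "other")
                  for level in (1, 2, 3, 4)]
    trials = [("video", c) for c in conditions for _ in range(n_repeats)]
    trials += [("oddball", None)] * n_oddballs
    trials += [("null", None)] * n_nulls
    random.Random(seed).shuffle(trials)
    return trials

for kind, cond in build_sequence()[:8]:
    print(kind, cond)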
Data acquisition and analysis. Using the acquisition
system ADInstruments (ML870/Powerlab 8/30), EMG activity
was continuously recorded using Sensormedics 4 mm shielded
Ag/AgCl miniature electrodes (Biopac Systems, Inc). Fixation
cross and stimuli onset were automatically signaled on the
channels of the LabChart Pro software by a PCMCIA Parallel
Card (Quatech SPP-100). Before attaching the electrodes, the
target sites of the left face were cleaned with alcohol and gently
rubbed to reduce inter-electrode impedance. Two pairs of
electrodes filled with electrolyte gel were placed on the target
sites and secured using adhesive collars and sticky tape. Following
the guidelines proposed by Fridlund & Cacioppo [42], the two
electrodes of a pair were placed at a distance of approximately
1.5 cm over 2 muscle regions associated with different emotional
expressions. Activity over the left corrugator supercilii muscle, which
lowers brows, was used as a marker of negative emotional
expression [6]. Activity over the left zygomaticus major muscle, which
pulls lip corners up and indexes pleasure/happiness, was used as a
control recording site to verify that participants responded
selectively to anger expressions. The ground electrode was placed
on the upper right forehead. The signal was amplified, band-pass
filtered online between 10–500 Hz, and then integrated. EMG
trials containing artifacts were manually rejected. No more than
15% of the trials were dropped per muscle. Integral values were
subsampled offline at 10 Hz and log transformed to reduce the
impact of extreme values [9,23]. Values were then standardized
within participants and within muscle to allow comparisons.
Temporal profiles of facial EMG during the first 1000 ms
following stimulus onset were investigated by calculating mean
amplitudes during 10 time intervals of 100 ms. Pre-stimulus values
(computed over 200 ms before the stimuli onset) were then
subtracted from post-stimulus activity to measure the activity level
caused by viewing each stimulus (i.e., to calculate the change from
baseline). EMG activity was thus defined as the change from the
Figure 2. Results from the categorization task. Mean percentage
for each choice (Anger, Neutral or Other) of the categorization task
plotted as a function of the Levels of Emotion (1, 2, 3, 4).
doi:10.1371/journal.pone.0055885.g002
ANOVA with within-subject factors of Feelings (Confusion,
Surprise, Sadness, Threat and Irritation), Target of Attention
(Self or Other) and Levels of Emotion (1, 2, 3, 4). Greenhouse-Geisser epsilons (ε) and p values after correction were reported
where appropriate. Post-hoc comparisons (two-tailed t-tests) were
performed for the analysis of simple main effects when significant
interactions were obtained.
The ANOVA indicated a main effect of Feelings, F(4,76) = 16.09; p < .001, ε = 0.82; pcorr < .001, and a main effect of Levels of Emotion, F(3,57) = 48.59; p < .001; ε = 0.38; pcorr < .001, but no main effect of Target, F(1,19) = 2.64; p = .12. There was a significant interaction between Feelings * Levels of Emotion, F(12,228) = 19.57; p < .001; ε = 0.37; pcorr < .001. The intensity of the Feelings increased with the Levels of Emotion (Level 1 < Level 2 < Level 3 < Level 4; all t(19) > 36.22; all ps < .001), except for Sadness (Level 1 = Level 2 = Level 3 < Level 4) (see Table S2 and Fig. 3). Of interest here, there was a significant interaction between Feelings * Target, F(4,76) = 6.25; p < .001; ε = 0.68; pcorr = .001. Self- as compared to Other-directed expressions were perceived as more Threatening (t(19) = 2.67; p = .015) and more Irritating (t(19) = 2.54; p = .02). There was no difference for the other Feelings (ps > .23).
We then conducted a repeated-measures ANOVA with within-subject factors of Feelings (Threat and Irritation), Target of Attention (Self or Other) and Levels of Emotion (1, 2, 3, 4). This ANOVA revealed a main effect of Feelings, F(1,19) = 14.63, p = .001: participants felt more threatened than irritated when confronted with body expressions of anger (Mean (SEM): Threat = 3.24 (.19); Irritation = 2.42 (.23); see Figure 3). It also revealed a main effect of Target, F(1,19) = 7.84; p = .011; a main effect of Levels of Emotion, F(3,57) = 65.51; p < .001; ε = 0.42; pcorr < .001, a significant interaction between Feelings * Levels of Emotion, F(3,57) = 14.86; p < .001; ε = 0.55; pcorr < .001, a significant interaction between Feelings * Target, F(3,57) = 19.87; p < .001, but no triple interaction between Feelings * Levels of Emotion * Target, F(3,57) = .082; p = .97. Importantly here, while
participants rated their feeling of both Threat and Irritation higher
Figure 3. Intensity of felt emotions. The intensity of Felt Emotions (Threatened, Irritated, Surprised, Confused and Sad) with standard errors is plotted as a function of the Target of Attention (S for Self, O for Other) and the Levels of Emotion (1, 2, 3, 4). The grey asterisks on the right signal feelings that significantly increased with Levels of Emotion. Black asterisks on panels signal feelings that significantly increased for Self- as compared to Other-directed bodies.
doi:10.1371/journal.pone.0055885.g003
baseline occurring between 0 and 1000 ms after stimuli onset
[10,23]. Finally, mean levels of corrugator and zygomaticus
activity were computed separately for each experimental condition.
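Putting the preceding steps together, the per-trial profile analysed here is a change-from-baseline curve computed on the integrated, subsampled EMG. The sketch below assumes the signal has already been integrated and subsampled at 10 Hz, so that each sample corresponds to one 100-ms analysis bin; the log(1 + x) variant of the transform and the function names are assumptions.

import numpy as np

def standardise(integrated_emg):
    """Log-transform to tame extreme values, then z-score so that amplitudes
    are comparable across participants and muscles."""
    x = np.log1p(integrated_emg)          # assumed log(1 + x) variant
    return (x - x.mean()) / x.std()

def rfr_profile(trial, fs=10.0):
    """Change-from-baseline profile over the first second after stimulus onset.
    `trial` holds the standardised, 10-Hz-subsampled EMG with 200 ms of
    pre-stimulus baseline followed by at least 1 s of signal, so each sample
    already corresponds to one 100-ms analysis bin."""
    n_base = int(0.2 * fs)                # 200-ms pre-stimulus window
    baseline = trial[:n_base].mean()
    return trial[n_base:n_base + int(1.0 * fs)] - baseline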
Physiological data were first submitted, separately for each
muscle, to repeated measures ANOVA using Target of Attention
(Self or Other), Levels of Emotion (1, 2, 3, 4) and Time Windows
(10) as within-subject factors. Second, when the Time Windows
factor interacted with another factor of interest, we performed
post-hoc t-tests to determine the time windows for which the effect
occurred and submitted the mean activity of these windows to a
new ANOVA using Target of Attention (Self or Other) and Levels
of Emotion (1, 2, 3, 4) as within-subject factors. Greenhouse-Geisser epsilons (ε) and p values after correction were reported
when appropriate. Post-hoc comparisons (two-tailed t-tests) were
also performed for the analysis of simple main effects when
significant interactions were obtained.
Results
Corrugator activity
The ANOVA indicated significant effects of Target of Attention, F(1,39) = 11.05; p = .002, Levels of Emotion, F(3,117) = 2.71; p = .048, and Time Windows, F(9,351) = 45.55; p < .001; ε = 0.20; pcorr < .001 (see Table S3 and Fig. 4). The interaction between Target of Attention and Levels of Emotion, F(3,117) = 5.39; p = .002; ε = 0.77; pcorr = .004, was significant after correction, whereas the other interactions did not reach significance after correction: Time Windows × Target of Attention, F(9,351) = 2.63; p = 0.006; ε = 0.26; pcorr = .068, and Time Windows × Levels of Emotion, F(27,1053) = 1.66; p = .019; ε = 0.23; pcorr = .127. Yet, the triple interaction between Time Windows, Target of Attention and Levels of Emotion reached significance after correction, F(27,1053) = 1.67; p < .001; ε = 0.23; pcorr = .035.
We then submitted the data for each time window to a second
ANOVA with within-subject factors of Target of Attention (Self or
Other) and Levels of Emotion (1, 2, 3, 4). This analysis revealed
that the interaction between Target of Attention and Levels of
Emotion was significant between 300 and 700 ms Time windows,
all F(3,117) > 4.4; all pcorr < .01.
We thus computed the mean activity between 300 and 700 ms
and submitted these data to a second ANOVA with within-subject
factors of Target of Attention (Self or Other) and Levels of
Emotion (1, 2, 3, 4) (see Table S4 and Fig. 5). This second ANOVA revealed a main effect of Target of Attention: Self-directed bodies induced greater corrugator activity than Other-directed bodies, F(1,39) = 13.02; p < .001. An interaction between Target of Attention and Levels of Emotion was also observed, F(3,117) = 6.31; p < .001; ε = 0.75; pcorr = .002, revealing that the effect of Target of Attention increased with the Levels of Emotion: the effect of Target of Attention was not significant at level 1 (i.e., neutral stimuli; t(39) = −0.605; p = .548); failed to reach significance at level 2, t(39) = 1.855; p = .071; appeared significant at level 3, t(39) = 2.338; p = .025, and reached high significance at
signals were directed toward them as compared to averted gaze,
and the higher the intensity of displayed anger, the stronger their
facial reactions. We propose early RFRs to body expressions of
anger might be related to an emotional appraisal process [38].
Our data reveal the same influence of the direction of attention
in the RFRs to body expressions, as has been shown for faces
[35,38]. Using virtual avatars and manipulating face orientation,
Schrammel et al. [17] demonstrated significantly higher
corrugator activity for angry faces with direct gaze as compared to
angry faces with averted gaze. More recently, we further provided
facial EMG evidence of the critical role of attention on
interpersonal facial matching by manipulating gaze direction
rather than face orientation [43]. Here, even in the absence of
gaze information, self-directed body expressions of anger triggered
higher corrugator reactivity as compared to other-directed bodies.
Our data converge with the appraisal perspective which proposes
that the evaluation of emotional stimuli depends on the degree of
self-relevance of the event. Within such a framework, it is proposed
that anger should be rated as more intense when coupled with
direct gaze as it signals a direct threat for the observer [38,40,44].
Indeed, this was confirmed by our behavioural pre-tests revealing
that the perception of self-directed angry body expressions
specifically increased the subjective feelings of being threatened.
Crucially, we have demonstrated for the first time that the
intensity of bodily expressions of anger displayed by a congener
enhanced RFRs only when directed toward the self. The absence
of such an increase for averted bodies dismisses the possibility that
these findings are strictly related to the amount of movement
involved in body expressions. Together with the findings of Hess et
al. [31] of increased EMG reactivity to funny films of increasing
intensity in the presence of friends only, our results imply that it is
the interaction between these factors that influences how self-relevant an emitted signal is and determines the levels of RFRs
(here: direction of the emitters’ attention and the intensity of their
expression), rather than each factor individually. Moreover, our
results strongly suggest that the higher the potential for interaction
with another (positive in Hess et al., negative here), the higher the
facial reactions in the observer.
Recently, using EEG under fMRI, we revealed that the degree
of potential social interaction with another relies on the binding of
self-relevant social cues 200 ms after stimulus onset in motor-related areas [45]. The present early RFRs, beginning at 300 ms
after stimulus onset, may thus reflect the emotional motor response
to being threatened. Activity in the corrugator supercilii muscle is
largely accepted as a reflection of negative emotional reactions to
negative-valenced stimuli, such as spiders and snakes [6],
unpleasant scenes [46] or to negative facial expressions
[23,30,47], and has also been demonstrated in response to static
body expressions of fear [48,49]. The present activity in the
corrugator supercilii muscle triggered in response to body expressions of
anger may thus relate to the observer’s negative emotional
reaction. As anger displays are appraised as power and dominance
signals, which have been shown to trigger divergent rather than
convergent responses [50], one can speculate that these RFRs
convey a divergent fear response [23,30].
RFRs over the corrugator muscle occur in response to body
expressions in the absence of facial information, and regardless of
body orientation and of emotional content. Although it is
acknowledged that RFRs may result from multiple processes
[18,23], the presence of early RFRs in the absence of facial
expressions cannot be explained by strict motor mimicry as the
body expressions here did not provide the cues necessary for facial
motor matching. A strict motor mimicry process is indeed not
sufficient to explain why RFRs are displayed to non-facial and
Figure 4. Time course of the mean EMG activity. A) Over the
corrugator supercilii region as a function of the Target of Attention (S for
Self (green), O for Other (blue)) and the Levels of Emotion (1,2,3,4).
Activity reflects average activation during each 100-ms time interval. B)
Over the zygomaticus region as a function of the Target of Attention (S
for Self, O for Other) and the Levels of Emotion (1,2,3,4).
doi:10.1371/journal.pone.0055885.g004
level 4 of emotion, t(39) = 5.826; p < .001. Interestingly, for Self-directed bodies, level 1 was significantly different from level 2 (t(39) = −2.687; p = .011); level 2 and level 3 were not significantly different, t(39) = −0.134; p = .897; but level 3 appeared significantly different from level 4, t(39) = −2.342; p = .024. By contrast, the different levels of emotion did not significantly differ in the Other-directed condition, all ps > .434. Finally, post-hoc analyses revealed that activity between 300 and 700 ms significantly differed from 0 in all experimental conditions (all t(39) > 4.7; all p < .001), suggesting that all conditions triggered RFRs (see Fig. 5).
Zygomatic activity
Using zygomatic activity as a control recording site, the ANOVA
with Target of Attention (Self or Other), Levels of Emotion
(1,2,3,4) and Time Windows of analyses (10), as within-subject
factors, did not reveal any main effect nor significant interaction,
all F < 1.45 (Table S5, Fig. 4).
Discussion
Previous EMG experiments have consistently demonstrated that
people tend to produce facial reactions when looking at others'
facial expressions of emotion. Here, we found that participants
displayed early facial reactions to body expressions of anger, as
revealed by an increase of corrugator activity occurring 300 to
700 ms after stimulus onset. RFRs were stronger when anger
Figure 5. Mean activity over the corrugator supercilii region between 300 and 700 ms. The mean (SEM) activity is represented as a function
of A) the Target of Attention (Self (green), Other (blue)) and the Levels of Emotion (1,2,3,4) and, B) only for Self-oriented conditions for the 4 Levels of
Anger. *p < 0.05.
doi:10.1371/journal.pone.0055885.g005
non-social emotional pictures [6], emotional body expressions
[48,49] and auditory stimuli [51–53], nor why they are
occasionally incongruent with the attended signals [23]. Moreover,
our results are clearly at odds with the predictions that can be
derived from a motor mimicry perspective, i.e., that participants
should either display congruent RFRs to others' angry faces,
irrespective of the emitter's direction of attention [28], or
display less mimicry when anger is directed at the observer, as anger
conveys non-ambiguous signals of non-affiliative intentions
[29,37]. Yet, the fact that early RFRs were elicited in all experimental
conditions, including neutral bodies (Level 1), also rules out the
possibility that they reflect emotional processes only. It suggests
that RFRs could partly result from a mere orienting response to
the appearance of the stimuli, and/or from the observer's cognitive effort
[54] to decode an emotional expression in the absence of facial
information, and/or from the appraisal of goal-obstructiveness [54,55].
Also, as the present findings were obtained using body expressions
of anger only, we cannot rule out that motor-mimicry
processes would operate under other experimental circumstances,
nor can we specify how motor, emotional and appraisal processes might
interact. Further experiments are thus needed to determine
whether the present results can be generalized to a wider range
of emotions as well as whether (and to what extent) both motor
and affective processes operate when facial information is available
[18].
To conclude, we not only demonstrate that the corrugator supercilii
muscle can be triggered in response to angry expressions, but also
extend these findings to dynamic body expressions. The present findings
corroborate the emotional readout framework and further suggest
that rapid facial reactions reflect the appraisal of the context and
of its self-relevance, which varies as a function of the emitter's
direction of attention and the intensity of his/her anger.
Supporting Information
Table S1 Mean (SEM) recognition rate.
(DOC)
Table S2 Mean (SEM) intensity ratings of feelings.
(DOC)
Table S3 Mean (SEM) data from the Corrugator activity
submitted to a repeated measures ANOVA using Target
of Attention (Self or Other), Level of Emotion (1, 2, 3, 4)
and Time Windows (10) as within-subject factors.
(DOC)
Table S4 Mean activity (SEM) between 300 and 700 ms
for the Corrugator muscle region submitted to a
repeated measures ANOVA with within-subject factors
of Target of Attention (Self or Other) and Level of
Emotion (1, 2, 3, 4).
(DOC)
Table S5 Mean (SEM) data from the zygomatic activity
submitted to a repeated measures ANOVA using Target
of Attention (Self or Other), Level of Emotion (1, 2, 3, 4)
and Time Windows (10) as within-subject factors.
(DOC)
Acknowledgments
We are grateful to Sylvie Berthoz (INSERM U669 & IMM) for
administrative support and to the anonymous referees for their
constructive comments.
Author Contributions
Conceived and designed the experiments: JG LC RS LP. Performed the
experiments: LP MC GD RS LC. Analyzed the data: LP MC GD LC JG.
Wrote the paper: JG LC.
References
1. Frijda NH (1986) The emotions. Cambridge: Cambridge University Press.
2. Keltner D, Haidt J (2001) Social functions of emotions. In: Mayne T, Bonanno GA, editors. Emotions: Current issues and future directions. New York: Guilford Press. pp. 192–213.
3. Fischer AH, Manstead ASR (2008) Social functions of emotions. In: Lewis M, Haviland-Jones JM, Barrett L, editors. Handbook of emotions. New York: Guilford Press. pp. 456–470.
4. Frijda NH (2010) Impulsive action and motivation. Biol Psychol 84: 570–579. doi: 10.1016/j.biopsycho.2010.01.005.
5. Bush LK, McHugo GJ, Lanzetta JT (1986) The effects of sex and prior attitude on emotional reactions to expressive displays of political leaders. Psychophysiology 23: 427.
6. Dimberg U, Thunberg M (1998) Rapid facial reactions to emotional facial expressions. Scand J Psychol 39: 39–45.
7. Dimberg U, Thunberg M, Elmehed K (2000) Unconscious facial reactions to emotional facial expressions. Psychol Sci 11: 86–89.
8. Hess U, Blairy S (2001) Facial mimicry and emotional contagion to dynamic emotional facial expressions and their influence on decoding accuracy. Int J Psychophysiol 40: 129–141.
9. McIntosh GJ (2006) Spontaneous facial mimicry, liking, and emotional contagion. Polish Psychological Bulletin 37: 31–42.
10. Dimberg U (1982) Facial reactions to facial expressions. Psychophysiology 19: 643–647.
11. Wild B, Erb M, Bartels M (2001) Are emotions contagious? Evoked emotions while viewing emotionally expressive faces: quality, quantity, time course and gender differences. Psychiatry Res 102: 109–124.
12. Sonnby-Borgström M (2002) Automatic mimicry reactions as related to differences in emotional empathy. Scand J Psychol 43: 433–443.
13. de Wied M, van Boxtel A, Zaalberg R, Goudena PP, Matthys W (2006) Facial EMG responses to dynamic emotional facial expressions in boys with disruptive behavior disorders. J Psychiatr Res 40: 112–121. doi: 10.1016/j.jpsychires.2005.08.003.
14. Weyers P, Mehlberger A, Hefele C, Pauli P (2006) Electromyographic responses to static and dynamic avatar emotional facial expressions. Psychophysiology 43: 450–453. doi: 10.1111/j.1469-8986.2006.00451.x.
15. Schilbach L, Eickhoff SB, Rotarska-Jagiela A, Fink GR, Vogeley K (2008) Minds at rest? Social cognition as the default mode of cognizing and its putative relationship to the "default system" of the brain. Conscious Cogn 17: 457–467. doi: 10.1016/j.concog.2008.03.013.
16. Sato W, Fujimura T, Suzuki N (2008) Enhanced facial EMG activity in response to dynamic facial expressions. Int J Psychophysiol 70: 70–74. doi: 10.1016/j.ijpsycho.2008.06.001.
17. Schrammel F, Pannasch S, Graupner ST, Mojzisch A, Velichkovsky BM (2009) Virtual friend or threat? The effects of facial expression and gaze interaction on psychophysiological responses and emotional experience. Psychophysiology 46: 922–931. doi: 10.1111/j.1469-8986.2009.00831.x.
18. Moody EJ, McIntosh DN (2011) Mimicry of Dynamic Emotional and Motor-Only Stimuli. Social Psychological and Personality Science 2: 679–686.
19. Rymarczyk K, Biele C, Grabowska A, Majczynski H (2011) EMG activity in response to static and dynamic facial expressions. Int J Psychophysiol 79: 330–333. doi: 10.1016/j.ijpsycho.2010.11.001.
20. Dimberg U, Andreasson P, Thunberg M (2011) Emotional Empathy and Facial Reactions to Facial Expressions. Journal of Psychophysiology 25: 26–31.
21. Hess U, Philippot P, Blairy S (1998) Facial Reactions to Emotional Facial Expressions: Affect or Cognition? Cogn Emot 12: 509–531.
22. Moody EJ, McIntosh DN (2006) Mimicry and autism: bases and consequences of rapid, automatic matching behavior. In: Rogers S, Williams J, editors. Imitation and the social mind: Autism and typical development. New York: Guilford Press. pp. 71–95.
23. Moody EJ, McIntosh DN, Mann LJ, Weisser KR (2007) More than mere mimicry? The influence of emotion on rapid facial reactions to faces. Emotion 7: 447–457.
24. Buck R (1994) Social and emotional functions in facial expression and communication: The readout hypothesis. Biol Psychol 38: 95–115.
25. Cacioppo JT, Petty RP, Losch ME, Kim HS (1986) Electromyographic activity over facial muscle regions can differentiate the valence and intensity of affective reactions. J Pers Soc Psychol 50: 268.
26. Ellsworth PC, Scherer KR (2003) Appraisal processes in emotion. In: Davidson RJ, Goldsmith H, Scherer KR, editors. Handbook of Affective Sciences. New York: Oxford University Press. pp. 572–595.
27. Roseman IJ, Smith CA (2001) Appraisal Theory. In: Scherer KR, Schorr A, Johnstone T, editors. Appraisal Processes in Emotion: Theory, Methods, Research. Oxford: Oxford University Press. pp. 3–19.
28. Chartrand TL, Bargh JA (1999) The chameleon effect: The perception–behavior link and social interaction. J Pers Soc Psychol 76: 893–910.
29. Bourgeois P, Hess U (2008) The impact of social context on mimicry. Biol Psychol 77: 343–352. doi: 10.1016/j.biopsycho.2007.11.008.
30. van der Schalk J, Fischer A, Doosje B, Wigboldus D, Hawk S, et al. (2011) Convergent and divergent responses to emotional displays of ingroup and outgroup. Emotion 2: 298.
31. Hess WR, Akert K (1995) The intensity of facial expression is determined by underlying affective state and social situation. J Pers Soc Psychol 69: 280–288.
32. Loveland K (2001) Toward an ecological theory of autism. In: Burack CK, Charman T, Yirmiya N, Zelazo PR, editors. The Development of Autism: Perspectives from Theory and Research. New Jersey: Erlbaum Press. pp. 17–37.
33. Argyle M (1988) Bodily Communication (2nd edition). New York: Methuen.
34. Emery NJ, Amaral DG (2000) The role of the amygdala in primate social cognition. In: Lane RD, Nadel L, editors. Cognitive neuroscience of emotion. New York: Oxford University Press. pp. 156–191.
35. Adams RB Jr, Gordon HL, Baird AA, Ambady N, Kleck RE (2003) Effects of gaze on amygdala sensitivity to anger and fear faces. Science 300: 1536.
36. Adams RB Jr, Kleck RE (2005) Effects of direct and averted gaze on the perception of facially communicated emotion. Emotion 5: 3–11.
37. Hess U, Adams R, Kleck R (2007) Looking at You or Looking Elsewhere: The Influence of Head Orientation on the Signal Value of Emotional Facial Expressions. Motivation & Emotion 31: 137–144.
38. Sander D, Grandjean D, Kaiser S, Wehrle T, Scherer KR (2007) Interaction effects of perceived gaze direction and dynamic facial expression: Evidence for appraisal theories of emotion. Eur J Cognit Psychol 19: 470–480.
39. Bindemann M, Burton MA, Langton SRH (2008) How do eye gaze and facial expression interact? Visual Cognition 16: 733.
40. Cristinzio C, N'Diaye K, Seeck M, Vuilleumier P, Sander D (2010) Integration of gaze direction and facial expression in patients with unilateral amygdala damage. Brain 133: 248–261.
41. Mojzisch A, Schilbach L, Helmert JR, Pannasch S, Velichkovsky BM, Vogeley K (2006) The effects of self-involvement on attention, arousal, and facial expression during social interaction with virtual others: A psychophysiological study. Social Neuroscience 1: 184–195. doi: 10.1080/17470910600985621.
42. Fridlund AJ, Cacioppo JT (1986) Guidelines for Human Electromyographic Research. Psychophysiology 23: 567–589.
43. Soussignan R, Chadwick M, Philip L, Conty L, Dezecache G, Grèzes J (2012) Self-relevance appraisal of gaze direction and dynamic facial expressions: Effects on facial electromyographic and autonomic reactions. Emotion. doi: 10.1037/a0029892.
44. N'Diaye K, Sander D, Vuilleumier P (2009) Self-relevance processing in the human amygdala: gaze direction, facial expression, and emotion intensity. Emotion 9: 798–806.
45. Conty L, Dezecache G, Hugueville L, Grèzes J (2012) Early Binding of Gaze, Gesture, and Emotion: Neural Time Course and Correlates. The Journal of Neuroscience 32: 4531–4539.
46. Bradley MM, Lang PJ (2007) Emotion and motivation. In: Cacioppo JT, Tassinary JG, Bernston G, editors. Handbook of Psychophysiology. New York: Cambridge University Press. pp. 581–607.
47. Balconi M, Bortolotti A, Gonzaga L (2011) Emotional face recognition, EMG response, and medial prefrontal activity in empathic behaviour. Neurosci Res 71: 251–259. doi: 10.1016/j.neures.2011.07.1833.
48. Magnee MJCM, Stekelenburg JJ, Kemner C, de Gelder B (2007) Similar facial electromyographic responses to faces, voices, and body expressions. Neuroreport 18: 369–372.
49. Tamietto M, Castelli L, Vighetti S, Perozzo P, Geminiani G, et al. (2009) Unseen facial and bodily expressions trigger fast emotional reactions. Proceedings of the National Academy of Sciences 106: 17661–17666.
50. Tiedens LZ, Fragale AR (2003) Power moves: Complementarity in submissive and dominant nonverbal behavior. J Pers Soc Psychol 84: 558–568.
51. Bradley MM, Lang PJ (2000) Affective reactions to acoustic stimuli. Psychophysiology 37: 204–215.
52. Hietanen JK, Surakka V, Linnankoski I (1998) Facial electromyographic responses to vocal affect expressions. Psychophysiology 35: 530–536.
53. de Gelder B, Vroomen J, Pourtois G, Weiskrantz L (1999) Non-conscious recognition of affect in the absence of striate cortex. Neuroreport 10: 3759–3763.
54. Smith CA (1989) Dimensions of appraisal and physiological response in emotion. J Pers Soc Psychol 56: 339–353.
55. Aue T, Scherer KR (2011) Effects of intrinsic pleasantness and goal conduciveness appraisals on somatovisceral responding: Somewhat similar, but not identical. Biol Psychol 86: 65–73.
Guillaume DEZECACHE
Studies on emotional propagation in humans:
the cases of fear and joy
Etudes sur la propagation émotionnelle chez
l’humain :
les cas de la peur et de la joie
Crowd psychologists of the 19th and 20th centuries left us with the idea that emotions are so contagious that they can lead large numbers of individuals to rapidly and spontaneously adopt the same emotion. One thinks, for instance, of situations of crowd panic in which, in the absence of central coordination, collective flight movements can emerge. The work presented in this thesis investigates the propagation of two emotions regarded as particularly contagious, fear and joy. Their propagation is studied at two levels of analysis: first, at the proximal level (the "how" question), I discuss the potential mechanisms allowing emotion to propagate through a crowd, and I also raise the question of whether it is warranted to treat emotional transmission as a process of contagion. Second, at the evolutionary or ultimate level of analysis (the "why" question), I ask why crowd members appear so ready to share their emotional states of fear and joy with their neighbors. On this point, I present a study showing that the transmission of fear can be facilitated by the propensity of the human cognitive system to modulate the intensity of fear-related facial reactions according to the informational state of conspecifics. These results suggest that the biological function of spontaneous fearful facial reactions is to communicate survival-critical information to others. Finally, I discuss the implications of this work for our broader understanding of the links between emotions and crowd behavior.
Emotional transmission; emotional contagion; emotional communication; fear; joy; crowd psychology
Crowd psychologists of the 19th and 20th centuries have left us with the idea that emotions
are so contagious that they can cause large groups of individuals to rapidly and spontaneously converge on the same emotional state. Good illustrations of this claim include situations
of crowd panic, where large movements of escape are thought to emerge through local interactions and without any centralized coordination. Our studies sought to investigate the
propagation of two allegedly contagious emotions, i.e., fear and joy. This thesis presents two
theoretical and two empirical studies that have investigated, at two different levels of analysis, the phenomenon of emotional propagation of fear and joy: firstly, at a proximal level
of analysis (the how-question), I discuss the potential mechanisms underlying the transmission of these emotions in crowds, and the extent to which emotional transmission can be
considered analogous to a contagion process. Secondly, at an evolutionary/ultimate level of
analysis (the why-question), I ask why crowd members seem to be so inclined to share their
emotional experience of fear and joy with others. I present a study showing that the transmission of fear might be facilitated by a tendency to modulate one’s involuntary fearful facial
reactions according to the informational demands of conspecifics, suggesting that the biological function of spontaneous fearful reactions might be communication of survival-value
information to others. Finally, I discuss the implications of these studies for the broader
understanding of emotional crowd behavior.
Emotional transmission; emotional contagion; emotional communication; fear; joy; crowd
psychology