Money or Friends: Social Identity and Deception in Networks¶
Rong Rong*, Daniel Houser‡ and Anovia Yifan Dai§
28 October 2015
Abstract: Strategic communication occurs in virtually all committee decision environments.
Theory suggests that small differences in monetary incentives between committee members can
leave deception a strategically optimal decision (Crawford and Sobel, 1982; Galeotti et al, 2013).
At the same time, in natural environments social incentives can also play an important role in
determining the way people share or withhold truthful information. Unfortunately, little is known
about how monetary and social incentives interact to determine truth-telling. We investigate this
issue by first building a novel model and then testing its equilibrium predictions using laboratory
data. In the absence of social identity, the model’s predictions are supported: there is more
truthful communication between those who share monetary incentives than those who do not.
We find that the effect of identity is asymmetric: sharing the same identity does not promote
truth-telling but holding different identities reduces truthfulness. Overall, as compared to
environments lacking social identity, committees with both monetary and social incentives
exhibit truthful communication substantially less frequently.
JEL classification: D85, D02, C92
Keywords: social networks, deception, committee decision making, strategic information
transmission, parochial altruism, experiments
¶
We thank the NSF Dissertation Improvement Award program for financial support of this project. For helpful comments we
thank our colleagues at ICES, George Mason University and Goddard School of Business and Economics at Weber
State University, seminar participants at Chapman University (2013), Utah Valley University (2013) and the ESA
North-American meeting (2012). The authors are of course responsible for any errors in this paper.
* Corresponding author, Department of Economics, Weber State University, [email protected]
‡ Department of Economics, George Mason University, [email protected]
§ Department of Economics, University of Iowa, [email protected]
I. Introduction
People with different financial incentives often deceive each other. A familiar example is
department hiring decisions, where information may be shared strategically with an eye towards
raising the priority of hiring a colleague in one’s own preferred field. Similar examples exist in
all organizations. For example, cases of CEOs manipulating information to favor themselves
have been widely noted (Cloke and Goldsmith, 2000; Cowan, 2003; Tobak, 2008) and studied in
economics, organizational psychology and management (Kolb et al, 1992; Rahim, 2000; De Dreu
and Gelfand, 2007; Conrad and Poole, 2011; Miller, 2011).
Strategic manipulation of information has been studied within the context of sender-receiver games. Seminal work by Crawford and Sobel (1982) describes the case of one sender
and one receiver (also known as the strategic information transmission game, or cheap-talk
game).¹ One important variation enriches the theoretical framework by introducing players who
both send and receive cheap-talk messages (Hagenbach and Koessler, 2010; Galeotti et al., 2013).
In this paper we draw from this literature in modeling two “sub-groups”, where players share the
same payoff function within but not between sub-groups. Our model predicts that when
monetary incentives of the sub-groups are sufficiently misaligned, truthful communication
between sub-groups cannot occur in equilibrium.
Many past studies imply that social incentives substantially impact behavior. In particular,
social identity impacts decisions in group settings including contributions to public goods (Eckel
and Grossman, 2005; Chen and Li, 2009), punishment (Bernhard et al, 2006), cooperation (Goette
et al, 2006; Charness et al, 2006; Brewer, 1999) and self-esteem (Shih et al, 1999). This social
identity effect is reflected in the literature on parochial altruism where in-group members are
treated more favorably than out-group members (Bernhard et al 2006). Broadly speaking, the
findings of this literature are that people often sacrifice personal economic gain in order to
deliver benefits to in-group members. Similarly, our interest is in whether social identity (either
shared or different) impacts the propensity to misrepresent information when it is in one’s self-interest to do so. To do this, we build social identity into our model. Our approach is in the spirit
of Chen and Li (2009) and Chen and Chen (2011), in that other-regarding preference parameters
¹ Experimental studies supporting this prediction include Dickhaut et al (1995), Blume et al (1998), Blume et al
(2001), Cai and Wang (2006) and Wang et al (2010).
take different values according to whether one is interacting with a person holding the same
identity or different social identity. Our theory predicts that the extent of deception will increase
as more information receivers hold a different social identity.
We design and implement a laboratory experiment that assigns social identity and
monetary incentives independently in order to test the predictions of our theory. Laboratory
experiments are ideal for this topic. The reason is that in natural environments it can be difficult
to identify separately the effects of monetary incentives and social identity since (1) shared social
identity may form around similar monetary incentives; (2) people have multiple social identities
(e.g., gender, ethnicity, age) and it can be difficult to know which identity is relevant to any
particular decision process. Our experiment circumvents these problems by inducing identity
(see, e.g., Tajfel et al., 1971; Chen and Li, 2009). By randomly assigning players with different
identities to different incentive groups, we achieve variation that enables us to identify the
separate and joint effects of “money” and “friend” on decisions to deceive.
Our main findings are as follows. First, absent social identity, consistent with
theory, truth-telling occurs with high frequency (above 95%) among those with identical monetary
incentives. We also find substantial truth-telling when monetary incentives are misaligned in
the absence of social identity, indeed even more than the 50% rate predicted by theory.²
sharing an identity does not increase the frequency of truth-telling in relation to the baseline
environment. One is more willing to lie, however, to those holding a different identity, and this
occurs even when players face identical financial incentives.
This study contributes to the social identity literature by providing new theory and
evidence with respect to how social identity affects decisions to deceive in a group environment.
While many insights have been gleaned from one-sender-one-receiver environments, extending
strategic information transmission to a group context is important.³ In particular, much real
² The data from the “Control” treatment in this paper are a subset of the data reported in Rong and Houser (2014).
The data in that paper include three additional “Control” sessions not available when this paper was originally
drafted. To maintain consistency with early versions of this manuscript, we have elected not to include those
additional data here. Including those data, however, does not change any of the results reported in this paper.
³ A few studies consider environments with one sender and two receivers (Battaglini and Makarov, 2011) or two
senders and one receiver (Minozzi and Woon, 2011; Lai, Lim, and Wang, 2011). Such studies differ from ours in that
players in those experiments make decisions as either sender or receiver, but never both. Another literature features
players as both sender and receiver, but uses a public chat room to implement communication. The free-style language
makes it hard to examine deception in the observed communication. See Goeree and Yariv (2011) for an example.
world communication occurs in multi-person groups where each group member is able both to
send and to receive information. Our experiments shed light on the way social and financial
incentives interact to affect deception decisions in these environments, and may inform the
design of conflict-reducing institutions that foster truthful information transmission within
organizations.
The remainder of the paper is organized as follows: The next section briefly reviews the
theoretical and experimental literature. Section III lays out the theoretical background of the
study. Section IV presents the hypotheses. Section V describes the experimental design and procedures.
Section VI reports experimental results. Section VII concludes.
II. Literature on Deception and Social Identity
A number of economic theories and experimental tests of these theories have appeared in
the literature. We review key contributions and related experimental evidence, with particular
attention to the literature on deception and social identity.
II.1. Cheap Talk Games⁴
Information can be delivered strategically. When information holders and uninformed
decision makers have different incentives, information may be withheld or distorted in order to
gain advantage. In the seminal model by Crawford and Sobel (1982) a sender sends messages in
an effort to affect receivers’ beliefs and decisions. The receiver responds to these messages
strategically. The Crawford-Sobel model predicts that when payoff differences are sufficiently
large, senders send random messages in equilibrium (i.e., senders use a babbling strategy).
Recent models of strategic information transmission consider network environments
where each person acts as both sender and receiver (Hagenbach and Koessler, 2010; Galeotti et
al, 2013). Hagenbach and Koessler (2010) investigate an environment where the aggregation of
all private signals reveals the true state of the world. In their model, players send costless
messages prior to making their decision. Galeotti et al (2013) models group communication
Wilson (2012) also allows subjects to both send and receive messages; however, subjects have no incentive to
deceive. Instead, the study focuses on the impact of the monetary cost of sending information.
⁴ Sections 2.1, 2.2 and 3 below draw heavily from Rong and Houser (2014), which reports data from the “Control”
treatment. This information keeps the current paper self-contained.
under a slightly different payoff structure than Hagenbach and Koessler (2010). Nevertheless, the
predictions of the two models are quite similar: equilibrium play
implies that when payoff differences are sufficiently large, senders adopt a babbling strategy. Our
model draws from both frameworks in that we have a relatively simple payoff structure and
signal generating process (which laboratory participants can easily understand). Our model is
detailed in Section III.
II.2. Deception Experiments
The early experimental literature on cheap talk games tests predictions of Crawford and
Sobel (1982). In those games, senders can choose vague messages by sending a range of possible
states (e.g. sending anything from 1 to 3 when the signal is 2). Dickhaut, McCabe and Mukherji
(1995) report evidence supporting the key comparative statics of the model. Their data show that
receivers’ actions deviate more from the true state as preferences between senders and receivers
diverge. Cai and Wang (2006) replicated this finding and further showed that senders over-communicate and receivers over-trust. Wang, Spezio and Camerer (2010) investigate the source
of over-communication using eye-tracking data.
Recent experimental studies use sender-receiver games to study deception. Gneezy (2005)
analyzed an experiment with only two states of the world and found that people are sensitive to
both their own gain and others’ losses when deciding whether to lie. Lundquist et al (2009)
extended this result by finding that lie aversion increases with the size of the lie and the strength
of a promise.
Other studies related to ours include Wilson (2012) who provides both a theory and
experiment regarding how people communicate within groups. That study, which focuses on the
cost of sharing information, employs a signal generating process similar to ours but does not
consider cheating. Goeree and Yariv (2011) also reported a laboratory experiment on group
(committee) decision-making in an effort to understand the role of deliberation in such decisions.
II.3. Parochial Altruism & Social Identity
Whether social identity impacts one’s propensity to deceive has not been previously
studied. Many experiments on the effect of social identity, however, suggest that such effects
might exist and be economically important. For example, Chen and Li (2009) divided people into
identity groups and found people to be more altruistic towards those of the same group. In
particular, they show people reward more and punish less those who are members of their in-group. People with the same identity also choose more social-welfare-maximizing actions, which
results in higher expected earnings. “Parochial altruism” is also found among indigenous people
in Papua New Guinea (Bernhard, Fischbacher and Fehr, 2006). Their subjects tended to favor
people of the same tribe by giving a higher transfer amount in dictator games and punishing
more when the unfair dictator is from another tribe. Identity also affects cooperation. Eckel and
Grossman (2005) found that with strong identity priming, players of similar identity could
achieve higher levels of contributions in public goods experiments. Finally, Charness, Rigotti
and Rustichini (2007) found making social identity salient can lead to increased aggressiveness.
III. Theory Background
We develop the theoretical framework for this paper in two parts. First, we establish a
baseline model of communication between two sub-groups. Second, we model social identity as
part of the individual utility function and deduce the effect of identity on truth-telling.
III.1 Baseline Model
The communication game involves n players, N = {1, 2, …, n}. Players are divided into
two groups. There are n₁ players in Group 1 and n₂ players in Group 2, with n₁ + n₂ = n. Players in
the same group have the same bias: Group 1 has bias b₁ = … = b_{n₁} = b′, and Group 2 has bias
b_{n₁+1} = … = b_n = b″, with b′ ≤ b″; the pair (b′, b″) is common knowledge to all players. The
state of the world θ is randomly picked from {0, 1, …, n} with equal probability. At the
beginning of the game, every player i receives a private signal s_i ∈ {0, 1} on the realization of θ,
with ∑_{i=1}^{n} s_i = θ. Players do not observe others’ signals.
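This signal structure can be sketched in a few lines of Python (an illustration of ours, not code from the paper; we assume that which players hold the 1-signals is uniformly random once θ is drawn):

```python
import random

def draw_state_and_signals(n, rng=random):
    # State of the world: theta uniform on {0, 1, ..., n}.
    theta = rng.randrange(n + 1)
    # Private signals s_i in {0, 1} whose sum equals theta.
    # Assumption: the positions of the 1-signals are uniformly random.
    signals = [1] * theta + [0] * (n - theta)
    rng.shuffle(signals)
    return theta, signals

theta, s = draw_state_and_signals(5)
assert sum(s) == theta and all(si in (0, 1) for si in s)
```

Any player who pools all n signals thus learns θ exactly, which is what makes truthful communication valuable.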
Communication among players is restricted by a communication mode, which describes
to what extent messages can be targeted to a subset of other players. The communication mode
available to i is P_i, a partition of N₋ᵢ = N \ {i}, with the interpretation that player i must send the
same message m_{iJ} ∈ {0, 1} to all players j ∈ J, for any group of players J ∈ P_i; we refer to each
set J as an audience. A communication strategy for player i specifies, for every s_i ∈ {0, 1}, a
vector m_i(s_i) = (m_{iJ}(s_i))_{J∈P_i}. Let m = (m₁, m₂, …, m_n) denote a communication strategy profile.
After communication occurs, player i chooses an action x_i ∈ ℝ. Player i’s action strategy
depends on player i’s signal and the other players’ messages, i.e., x_i: {0, 1}ⁿ → ℝ. Let x =
(x₁, …, x_n) denote an action strategy profile. Given the state of the world θ, the monetary payoff
of player i facing a profile of actions x is

π_i(x | θ) = − ∑_{j∈N} (x_j − θ − b_i)².   (1)

That is, agent i’s payoff depends on how close her own action x_i and the actions taken by the other
players are to her ideal action θ + b_i.
We consider Perfect Bayesian Equilibrium and restrict attention to pure-strategy
equilibria.⁵ Hence, each agent’s communication strategy with an audience J ∈ P_i may take
one of only two forms: the truthful one, m_{iJ}(s_i) = s_i for all s_i, and the babbling one, m_{iJ}(0) = m_{iJ}(1).
Given her own signal and the received messages m₋ᵢ,ᵢ = {m_{j,i}}_{j∈N₋ᵢ}, by sequential rationality
player i chooses x_i to maximize her expected payoff. Player i’s optimization then reads

max_{x_i} E[−(x_i − θ − b_i)² | s_i, m₋ᵢ,ᵢ].   (2)

Hence, agent i chooses

x_i(s_i, m₋ᵢ,ᵢ) = b_i + E[θ | s_i, m₋ᵢ,ᵢ],   (3)

where the expectation is based on equilibrium beliefs: all the messages m_{j,i} received from an agent j
who adopts a babbling strategy are disregarded as uninformative, and all the messages m_{j,i}
received from an agent j who adopts a truthful strategy are taken as equal to s_j. Hereafter, whenever we
⁵ It is a tradition in strategic information transmission models to characterize the utility-maximizing equilibrium,
since babbling is an equilibrium solution but not meaningful in most contexts.
refer to a strategy profile (m, x), each element of x is assumed to satisfy Equation 3.
Players’ updating is based on the signal generating process. Suppose that an agent i
holds k signals, i.e., she holds her own signal s_i and k − 1 players truthfully reveal their signals to
her. Let l denote the number of these k signals that equal 1. Then

Prob(θ = l + t | k, l) = C(l + t, l) · C(n − l − t, k − l) / C(n + 1, k + 1),   t = 0, …, n − k,   (4)

where C(a, b) denotes the binomial coefficient, and

E[θ | k, l] = ∑_{t=0}^{n−k} (l + t) · Prob(θ = l + t | k, l) = ((n + 2)l + n − k) / (k + 2).   (5)
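Under the signal structure above (θ uniform on {0, …, n}, with the k observed signals an exchangeable sample), the posterior mean of θ after seeing l ones among k signals has the closed form ((n + 2)l + n − k)/(k + 2). The sketch below (our illustration, not code from the paper) verifies this by direct enumeration in exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def posterior_mean(n, k, l):
    # Enumerate theta = l + t for t = 0..n-k with the posterior weights
    # C(l+t, l) * C(n-l-t, k-l); the normalizer is the sum of the weights.
    weights = {l + t: comb(l + t, l) * comb(n - l - t, k - l)
               for t in range(n - k + 1)}
    total = sum(weights.values())
    return Fraction(sum(th * w for th, w in weights.items()), total)

# Matches ((n+2)l + n - k)/(k+2) for every (k, l) when n = 5.
n = 5
for k in range(1, n + 1):
    for l in range(k + 1):
        assert posterior_mean(n, k, l) == Fraction((n + 2) * l + n - k, k + 2)
```

Note that when k = n the formula returns exactly l: a player holding all signals knows θ.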
In the first stage of the game, in equilibrium, each agent i adopts either truthful
communication or babbling communication with each audience J ∈ P_i, correctly formulating the
expectation of the action chosen by each player j ∈ J as a function of her message m_{i,J} and with
knowledge of the equilibrium strategies m₋ᵢ of the opponents.
We first characterize equilibria for arbitrary modes of communication. A strategy profile
(m, x) induces a network in which a link from a player i to another player j is associated with
truthful communication from i to j. We refer to this network as the truthful network and denote it
by c(m, x). Formally, c(m, x) is a binary directed graph where c_{ij}(m, x) = 1 if and only if j
belongs to some element J ∈ P_i and m_{iJ}(s) = s for every s ∈ {0, 1}, with the convention that
c_{ii}(m, x) = 0. The in-degree of player j is the number of players who send a truthful message to j,
and it is denoted by k_j(c(m, x)). When (m, x) is an equilibrium, we refer to c(m, x) as the
equilibrium network. Theorem 1 below provides the equilibrium condition for truthful
communication of an agent i with an audience J (see Appendix A for a proof).
Theorem 1. Consider a collection of communication modes {P_i}_{i∈N}. The strategy profile (m, x)
is an equilibrium if and only if, for every truthful message from a player i to an audience J ∈ P_i,

(b″ − b′) ∑_{j∈J} 1/(k_j + 3) ≤ ∑_{j∈J} (n + 2)/(2(k_j + 3)²).   (6)
The theorem above provides an intuition similar to the classical Crawford and Sobel
(1982) model: as the difference in group biases widens, it becomes harder for truth-telling to
remain in equilibrium. The truth-telling condition can be further simplified when we
restrict the communication mode so that each player can send only two messages, one to her own
group members and one to the other group. With this restriction, Theorem 1 reduces to the
following two predictions. First, all within-group messages are always
truthful. Second, between-group truthfulness depends on the difference in group biases: if
(b″ − b′) ≤ 0.5, between-group communication is also completely truthful, while if (b″ − b′) > 0.5,
between-group communication consists only of babbling messages.
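This threshold can be checked numerically. The sketch below is an illustrative implementation of the truth-telling condition, assuming the form (b″ − b′) · ∑_{j∈J} 1/(k_j + 3) ≤ ∑_{j∈J} (n + 2)/(2(k_j + 3)²); with a fully informed audience (every receiver's in-degree is n − 1) it collapses to the 0.5 threshold just described. Exact fractions avoid rounding at the boundary:

```python
from fractions import Fraction

def truth_telling_ok(bias_gap, indegrees, n):
    # LHS: (b'' - b') * sum over the audience of 1/(k_j + 3).
    lhs = Fraction(bias_gap) * sum(Fraction(1, k + 3) for k in indegrees)
    # RHS: sum over the audience of (n + 2) / (2 * (k_j + 3)^2).
    rhs = sum(Fraction(n + 2, 2 * (k + 3) ** 2) for k in indegrees)
    return lhs <= rhs

# Restricted mode with an all-truthful network: in-degree n - 1 for everyone,
# so the condition collapses to (b'' - b') <= 1/2.
n = 5
other_group = [n - 1, n - 1]                              # audience of two
assert truth_telling_ok(Fraction(1, 2), other_group, n)   # gap 0.5: truthful
assert not truth_telling_ok(1, other_group, n)            # gap 1: babbling
```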
III.2 Model with Social Identity
To incorporate social identity, Chen and Li (2009) and Chen and Chen (2011) adopted an
other-regarding utility function that includes one’s own payoff and a weighted payoff from one’s
opponent, where the weights take different values for opponents who share the same identity and
those who have different identities. Here we adopt a similar approach by modifying the baseline
model above so that an individual’s utility function is a weighted average of her own and
other players’ monetary payoffs:

u_i(x | θ) = ∑_{j=1}^{n} w_{ij} π_j(x | θ),   (7)

where w_{ij} is the weight player i assigns to player j. The weight is further defined as

w_{ij} = ρ + I_{ij} ρα + (1 − I_{ij}) ρβ,   (8)
where the parameter ρ measures general altruistic concern. The identity indicator I_{ij} takes the
value 1 if players i and j belong to the same identity group, and 0 otherwise. Therefore, parameters α
and β capture the different social identity effects toward in-group and out-group members,
respectively. We normalize the weights so they sum to 1: ∑_{j=1}^{n} w_{ij} = 1.
This utility function specification indicates that an individual not only cares about her
own payoff but also holds other-regarding concerns towards the payoffs of others (ρ). Moreover,
this other-regarding concern is amplified towards those who share the same identity: the payoff
of a same-identity individual is weighted by ρ(1 + α). The concern for others with a
different identity is dampened: the payoff of those with a different identity is weighted by
ρ(1 + β). Based on parameter estimates from Chen and Li (2009), we assume that a
person cares about an in-group member’s payoff more than an out-group member’s payoff
(ρ(1 + α) > ρ(1 + β)).
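To make the weighting scheme concrete, the sketch below builds a normalized weight vector for one player from this specification. It is our illustration, not code or estimates from the paper: the convention that a player shares her own identity with herself, and the parameter values, are assumptions.

```python
def identity_weights(i, identities, alpha, beta, rho=1.0):
    # w_ij = rho*(1 + alpha) for same-identity j, rho*(1 + beta) otherwise,
    # then normalized so the weights sum to 1.
    raw = [rho * (1 + (alpha if identities[j] == identities[i] else beta))
           for j in range(len(identities))]
    total = sum(raw)
    return [w / total for w in raw]

# Player 0 is a "Klee"; in-group members receive more weight than out-group.
w = identity_weights(0, ["Klee", "Klee", "Klee", "Kandinsky", "Kandinsky"],
                     alpha=0.4, beta=-0.2)
assert abs(sum(w) - 1.0) < 1e-12
assert w[1] > w[3]
```

Raising α relative to β tilts the weighted average of biases toward the in-group, which is exactly the channel through which identity enters the truth-telling condition below.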
Replacing the payoff function with the identity-weighted utility function in the
maximization process, Theorem 2 below provides the equilibrium condition for truthful
communication of an agent i with an audience J, where we assume the identity of all group
members is public information (see Appendix B for a proof).

Theorem 2. Consider a collection of communication modes {P_i}_{i∈N}. The strategy profile (m, x)
is an equilibrium if and only if, for every truthful message from a player i to an audience J ∈ P_i
with common bias b_J,

| ∑_{j′=1}^{n} w_{ij′} b_{j′} − b_J | ∑_{j∈J} 1/(k_j + 3) ≤ ∑_{j∈J} (n + 2)/(2(k_j + 3)²).   (9)
Theorem 2 echoes the result from Theorem 1: the larger the difference in biases, the
harder it is to support truth-telling. Theorem 2 also implies that the larger the differences in
the weight parameters, the more difficult it is to support truth-telling as an equilibrium outcome.
Since identity determines the weight differences in our model, Theorem 2 also implies that it is
generally harder to support truth-telling between those who carry different social identities. We
make this point clear under the particular experimental parameterization in Section IV below.
IV. Experiment Parameterization and Hypotheses
Our experiment focuses on restricted communication with group-specific audiences: each
player can send two messages, one to her own group and one to the other group. We also choose
to fix the committee size at five players, with three subjects in Group 1 and two subjects in Group 2.
The true state of the world takes values in {0, 1, 2, 3, 4, 5}. We allow the bias levels b′ and b″ to
take values of either 0 or 1.
With this parameterization, and in the absence of social identity, Theorem 1 shows that
when the bias is unity everybody babbles in equilibrium. When the bias is zero, on the other
hand, truth telling by everyone is the equilibrium prediction. This leads to the following
hypotheses:
Hypothesis T1: Players tell the truth to those in the same group.
Hypothesis T2: Players tell the truth to those in a different group if the difference in the
group bias is zero, and send random messages (babble) if the difference in group bias is unity.
Hypothesis T3: Players trust messages sent by everyone if the difference in the group
bias is zero. Players disregard messages from the other group if the difference in the group bias is
unity.
We turn next to the impact of social identity on truth-telling in equilibrium. Theorem 2
details necessary and sufficient conditions on the parameters α, β and ρ such that truth-telling emerges in equilibrium, either within or between social identity and monetary incentive
groups. Appendix C states these conditions, and it is clear they vary according to the distribution
of identities among the five-member group. Moreover, the necessary and sufficient conditions
for the parameters to support a truth-telling equilibrium are nested, and we assume that a less
restrictive condition implies that a person is (weakly) more likely to be in a truth-telling
equilibrium.⁶ The nature of the nested conditions implies the following hypotheses
regarding the effect of identity on truth-telling.
Hypothesis S1: A sender is weakly more likely to tell the truth to her own group
members as the proportion of her own group members with the same social identity increases.
Hypothesis S2: A sender is weakly more likely to tell the truth to the other group when
the other group contains more members with the sender’s social identity.
⁶ This would occur, for example, if all groups’ parameters were drawn from the same distribution. In this case the
probability of realizing parameter values that support truth-telling cannot be larger as the conditions become more
restrictive.
Moreover, receivers are less likely to follow the messages when the group’s identity
composition implies a lower likelihood of receiving a truthful message. This is Hypothesis S3.
Hypothesis S3: A receiver is less likely to follow messages when the sender’s identity is
less represented in the receiver’s monetary incentive group.
V. Design and Procedures
V.1 Baseline treatment
The design of our baseline treatment follows our Baseline model (Section III.1).
Each session includes 15 subjects, randomly arranged into groups of five. All
subjects participate in three stage games. Each stage game consists of a random number of
rounds.⁷ Players know that the other four players are fixed during each stage game, and each of
them holds a unique ID: J, K, L, M or N. Players J, K and L belong to Group 1. Players M and N
belong to Group 2. Group 1 and Group 2 players differ in their payoff functions by only one
parameter: the bias, which is common knowledge among all players.
Each round of the experiment is a guessing game. Before a round starts, the computer
generates a random integer r between zero and five (including zero and five). The number is
unknown to all players. At the beginning of each round, each player receives a private signal of
either 0 or 1. Players do not see others’ signals. However, they are told that the sum of the five
signals received by all five players is equal to the randomly chosen integer. Before players guess
the number, they are given the opportunity to exchange “cheap-talk messages” with one
another. The messages must be either 0 or 1. Messages are group-specific, so each player decides
which message to send to all Group 1 players and, separately, all Group 2 players. After all
players submit their messages, they observe the messages that are sent to them and are asked to
guess the value r randomly chosen at the beginning of the round. They also choose a number x
based on their guess of r; the chosen values of x determine everyone’s payoff for that round. The payoff
functions for Group 1 and Group 2 players are as follows:
⁷ There are at least 4 rounds in each stage. After round 4, the game ends with probability 0.04 each round. We
randomly generated predetermined round lengths of 19, 28 and 32 for experimental stages I, II and III respectively,
and used these stopping times for all sessions. Participants were unaware of the number of rounds in each stage.
Payoff_{J,K,L} = 20 − ∑_{i=1}^{5} (x_i − r − b1)²   (10)

Payoff_{M,N} = 20 − ∑_{i=1}^{5} (x_i − r − b2)²   (11)
Players J, K and L share the same payoff function, as shown in Equation (10). The payoff
is maximized when all five players, including oneself, choose the number x that equals the
true value of the random number r plus the group-specific bias b1. Players M and N share the same
payoff function, as shown in Equation (11). The only difference between their payoff function and
that of Group 1 players is the group-specific bias b2.
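For concreteness, here is a small sketch of the round payoff (our illustration; the specific numbers are example inputs, not data from the experiment):

```python
def round_payoff(actions, r, bias):
    # 20 minus the squared distance of each of the five actions
    # from the player's ideal point r + bias (cf. Equations (10)-(11)).
    return 20 - sum((x - r - bias) ** 2 for x in actions)

# Suppose r = 2 and all five players choose x = 2. Group 1 players
# (bias b1 = 0) earn the maximum of 20, while Group 2 players
# (bias b2 = 1) lose one point for each of the five actions.
actions = [2, 2, 2, 2, 2]
assert round_payoff(actions, r=2, bias=0) == 20
assert round_payoff(actions, r=2, bias=1) == 15
```

The example makes the conflict of interest visible: with misaligned biases, no single action profile maximizes both groups' payoffs at once.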
As indicated in the theory above, this payoff structure incentivizes every player to (1)
choose a number x that is as close as possible to her best guess of r plus her own group’s bias
b and (2) induce other players, both in the same group and in the other group, to choose the
same x. The presence of cheap-talk messaging makes it possible for players in one group to
manipulate the choice of x made by players in the other group. In our experiment the pair (b1, b2)
can take only four values: (0, 0), (0, 1), (1, 0) or (1, 1).⁸ The structure of the
game and all payoff-related information, including the values of b1 and b2, are common
knowledge. Players also know that the values of b1 and b2 remain fixed within a stage game, but
change between stages.
To implement this design, first, subjects send messages using the “messaging screen”
(see Appendix D, Fig. 1). Then, subjects guess the random integer r and choose the
payoff-relevant value of x on the “guessing screen” (see Appendix D, Fig. 2). While subjects make
those two choices, the same screen also details the messages they received from others. Finally,
the “result screen” (see Appendix D, Fig. 3) reveals the true value of the random integer and
displays all the actions taken by the other four players as well as their current payoff.
Payoffs accumulate within, but not between, each of the three stage games. Players are
informed about their accumulated payoff at the end of each stage. They are also reminded that
they will be re-matched with a new set of players, and that their stage payoff will not be carried
⁸ (1, 1) appears only in the practice stage and is not included in our data analysis. The other three combinations
appear in random order for the three experimental stages.
over to the next stage. Each subject’s earnings for the experiment are determined by one
randomly-determined stage game according to a die roll at the end of the experiment.
V.2 Identity Treatment
The identity treatment differs from the baseline treatment described above in that it
includes an identity priming stage at the beginning of the experiment. We induce identity using
artist preferences, a method first introduced by Tajfel et al. (1971) and then reintroduced by
Chen and Li (2009). The procedure and the paintings we use follow the latter (see Appendix E
for a description of the paired paintings). Specifically, subjects are presented with five paired
paintings sequentially. Within each pair, one painting is a Kandinsky and the other a Klee.
Subjects indicate their preference for each pair and are told that they will be assigned an
“artist identity” according to the artist whose paintings they choose most frequently. After their
identity assignment, they are shown two new paintings by Kandinsky and Klee and are
asked to guess the correct artist within 5 minutes. Each correct answer earns an additional E$40.
People who are assigned the same artist identity can exchange free-form text through a chat
window.⁹ This form of chat was introduced by Chen and Li (2009) to strengthen the identity prime.
Once the identity priming is complete, instructions for the baseline game are distributed.
The only difference in the instructions from the baseline game is that subjects are told that their
artist identity will be publicly displayed during the entire “guessing number game”.
Note that players in different “groups” differ in their monetary incentives. Players assigned to different artist “identities” differ only in their preference over painters and are randomly allocated between the two incentive groups. Therefore, within an incentive group, there may be people with the same or different identities. In this study, we use the words “group” and “identity” to distinguish monetary (the former) from social (the latter) affiliation.
V.3 Procedures
Sessions were conducted between May 2012 and September 2012 in the ICES laboratory
at George Mason University. Subjects were recruited via email from registered students at
9 Subjects are told not to reveal information regarding their name, race, age or anything else that could reveal their identity.
George Mason University. Each subject participated in only one session and none had previously
participated in a similar experiment.
In total, 75 subjects participated in the computerized experiment programmed with z-Tree
(Fischbacher, 2007). Each experimental session lasted between 120 and 150 minutes. Subjects’
total earnings were determined by the Experimental Dollars (E$) earned at the end of the
experiment, which were converted at a rate of E$20 per US dollar. Average earnings before adding the $5 show-up fee were $18.40, ranging from a minimum of $4.80 to a maximum of $29.30 across all sessions.
In all treatments, before a session started, subjects were seated in separate cubicles to
ensure anonymity. They were informed of the rules of conduct and provided with detailed
instructions. The instructions were read aloud. To ensure there was no confusion, subjects were asked to complete a quiz after finishing the instructions. An experimenter checked their answers and corrected any mistakes one by one. Then the experimenter worked
through the quiz questions on a white board in front of all subjects. The experiment began after
all subjects confirmed they had no further questions.
We ran three sessions for the baseline condition and two sessions for the treated condition (9
subjects each session). Within each session, we obtained 97 message sending decisions for each
subject (excluding the practice stage). Our analysis (conservatively) assumes 45 independent
observations (27 in the baseline condition and 18 in the treated condition) unless otherwise
specified.
VI. Results
We describe the results following the order of hypotheses in Section IV. First, we discuss
the baseline data in comparison to theoretical predictions, addressing hypotheses T1-T3. Then,
we compare data from the social identity treatment against baseline data to inform hypotheses
S1-S3.
Result T1: Absent social identity, most within-group messages are truthful.
As detailed by Figure 1, we find that 95.3% of within-group messages are truthful in the
baseline treatment. Although the overall level of truth-telling is high, it is statistically
significantly lower than the predicted level of 100% (p=0.001). Consistent with the theory, the
bias of the opposing group does not affect within-group messages in a statistically significant
way (pairwise comparisons, all p-value greater than 0.856). Moreover, group size does not
impact truthfulness of within-group messages (96.2% for Group 1 and 94.4% for Group 2,
p=0.412).
Result T2: Absent social identity, players tell less truth to members of different groups
than to their own group members.
Our data partially support hypothesis T2. In the baseline treatment, 79.0% of messages
sent between two groups are truthful. This level of truth-telling is much lower than within-group
messages (p<0.001). This effect is larger when the bias is (0,1) or (1,0); however, the effect persists even when the bias is (0,0).
In the case where the bias is (0, 0), the groups share the same payoff function, so truth-telling is
predicted to be 100%. However, we find 87.9% of these between-group messages to be truthful,
significantly lower than predicted. It is also lower than the truthfulness for within-group
messages (p<0.001), suggesting that dividing subjects into two groups has an impact on their
truthfulness regardless of monetary incentives10.
When the bias is (0,1) or (1,0), the only equilibrium is to babble, implying 50% truthful
messages. We observe, however, 74.5% truthful messages between groups11, significantly higher
than predicted (p<0.001). Thus, we find over-communication in our environment. On the other
hand, under either bias truth-telling is significantly lower than the case where the bias is (0,0)
(pairwise comparisons, p=0.018 and p=0.047, respectively).
10 Eckel and Grossman (2005), however, find that minimal group identity does not affect subjects’ behavior in their experimental setting. Our data suggest that its effectiveness may be sensitive to the environment.
11 We pool the data from the unequal bias cases, as we find no significant difference between them (p=0.652).
Figure 1. The Percentage of Truthful Messages without Identity
[Bar chart of truthful-message percentages by bias level. To own group vs. to the other group: 0.95 vs. 0.88 for bias (0,0); 0.95 vs. 0.75 for bias (0,1) or (1,0); 0.95 vs. 0.79 overall.]
Result T3: Absent social identity, players overly trust messages they receive.
To assess whether a player believes the messages s/he receives, we compute the difference between a player’s guess and the sum of “1” messages s/he received. We find that 74.9% of all guesses submitted in the baseline treatment exactly equal the sum of “1” messages
received. When the bias is (0,0), 82.1% of guesses are consistent with the messages received,
which is significantly lower than the predicted 100% level of trust.
In equilibrium, when the bias is either (0,1) or (1,0), random choice will lead Group 1
players to appear completely trusting of between group messages 37.5% of the time. This can be
seen as follows. In equilibrium, each player in Group 1 faces four possible message
combinations sent at random by two Group 2 players: (0,0), (0,1), (1,0) and (1,1). Each of these
four outcomes is equally likely to appear. Group 1 players form beliefs about the true signals that Group 2 players hold, independent of these messages: again (0,0), (0,1), (1,0) and (1,1), each equally likely. Therefore, out of the 16 equally likely message-belief pairs, six have sums that coincide at random (6/16=37.5%). Similarly, Group 2 players may appear to be trusting 31.25% of the time even when choosing at random. Overall, then, random choice will lead 35% of choices to appear fully trusting. Our data show that
72.1% of guesses are fully trusting, significantly higher than 35%. Moreover, the trust levels
between bias (0,0) and biases (0,1) and (1,0) are significantly different (p<0.001).
We turn now to social identity effects, which we infer using two simple metrics. To understand these metrics, note first that any player in the game is related to each other player according to both group affiliation and artist identity. Intuitively, the first metric is the frequency with which a player interacts within each type of relationship. There are
six such relationships: (1) in the same group and sharing the same artist identity (denoted SGSI); (2) in the same group but with a different artist identity (SGDI); (3) in a different group but sharing the same artist identity (DGSI); (4) in a different group and with a different artist identity (DGDI); (5) in the same group with no artist identity (SGNI); and (6) in a different group with no artist identity (DGNI). Note that all baseline subjects fall into SGNI or DGNI, while all identity-treatment subjects belong to the first four categories. We construct this categorization for each of the five individuals in a game, then average across players, and finally sum the results over the number of rounds in each stage game.
For example, consider Player J, whose group also contains Players K and L, while Players M and N are in the other group. The relationship between J and K is SGDI if K prefers a different artist than J in the identity-priming stage, while J and L interact as SGSI if L shares J’s artist preference. In either case, K and L are SG in relation to J, while M and N are DG relative to J. Our procedure determines, for each player, how many times each of the six relationships occurs within each stage game.
The second metric builds upon the first and calculates, under each of the six scenarios,
how many times a particular player sent a true message. If the message is true, we code the
variable to be one, and otherwise zero. For each stage game, we average across all five
individuals and all rounds to determine the average number of truthful messages delivered under
each of the six scenarios.
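To make the construction concrete, the two tallies can be sketched in a few lines of code. The sketch is purely illustrative — the dictionary representation of players and the function names are our own, not part of the experimental software:

```python
from collections import defaultdict

def relationship(sender, receiver):
    """Classify a sender-receiver pair into one of the six labels.
    Players are dicts with a 'group' and an 'identity' ('identity'
    is None in the baseline, where no artist identity exists)."""
    g = "SG" if sender["group"] == receiver["group"] else "DG"
    if sender["identity"] is None or receiver["identity"] is None:
        i = "NI"
    else:
        i = "SI" if sender["identity"] == receiver["identity"] else "DI"
    return g + i

def tally(messages):
    """messages: iterable of (sender, receiver, truthful) tuples.
    Returns the first metric (interaction counts) and the second
    metric (truthful-message counts) per relationship label."""
    counts, truths = defaultdict(int), defaultdict(int)
    for s, r, truthful in messages:
        label = relationship(s, r)
        counts[label] += 1
        truths[label] += int(truthful)
    return counts, truths
```

Dividing `truths` by `counts` entry by entry then yields the truth-telling percentage for each relationship.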
Finally, we divide the second metric by the first in order to reveal the percentage of
truthful messages sent under each of the six possible relationships. The goal is to compare those
percentages across relationships. Our main findings are as follows:
Result S1: Compared to the baseline with no identity, sharing the same identity does not
increase within-group truth-telling, but holding different identities reduces the truthfulness
of within-group messages. Thus, the presence of social identity has a detrimental impact on
within-group honesty.
As shown in Figure 2, the only significant difference appears when comparing truth-telling by those with different identities (the right-most three bars) against the “no identity” and “same identity” conditions: holding different identities reduces truth-telling within a group. Indeed, we observe a significant 31.2-percentage-point drop in truth-telling (p<0.001) between the different-identity and no-identity cases.
Further, breaking down the data by equal and unequal bias reveals that the negative effect of holding different identities is driven by the unequal bias cases. Note that, theoretically, within-group messages should not depend on the other group’s bias. Our data, however, suggest otherwise: players do consider the other group’s incentive when deciding what message to send
to their own group. On the other hand, comparing situations of no identity to those with the same identity, we find no significant improvement (p=0.630): on average, 95.3% of within-group messages are truthful in the former (baseline) data, and 94.2% in the latter. Overall, the effect of introducing identity is to decrease the truthfulness of within-group messages by 16.2 percentage points, and this effect is significant (p=0.001).
Figure 2. Percentage of Truthful Messages Within Group
[Bar chart of within-group truthful-message percentages by identity condition, for bias (0,0), bias (0,1) or (1,0), and overall, respectively: no identity 0.95/0.95/0.95; same identity 0.96/0.93/0.94; different identity 0.80/0.56/0.64.]
Result S2: Holding different identities decreases truthfulness significantly. Having the same
identity increases truthfulness, but the effect is insignificant. Overall, there is no significant
change in honesty of between-group messages after introducing social identity.
Introducing different identities decreases the percentage of between-group truth-telling
significantly, from 79.0% to 56.2% (p=0.013). Holding the same identity seems to move the
measure in the predicted direction: truthfulness increases to 84.3%, but the effect is insignificant
(p=0.281). This result is driven by cases where the bias is unequal, indicating that decisions based on identity are not independent of the groups’ monetary incentives. Figure 3 illustrates this, showing the exact truth-telling percentages of between-group messages for all cases of bias and all relationships.
Figure 3. Percentage of Truthful Messages Between Groups
[Bar chart of between-group truthful-message percentages by identity condition, for bias (0,0), bias (0,1) or (1,0), and overall, respectively: no identity 0.88/0.75/0.79; same identity 0.93/0.78/0.84; different identity 0.80/0.45/0.56.]
Result S3: Introducing identity increases subjects’ trust in the messages they receive.
We use the same measure of “trust” as constructed for the analysis of Result T3. When the bias is (0,0), we find that 93.5% of guesses are consistent with fully believing the messages received, significantly higher than the level of trust in the baseline treatment (p<0.001).
When the bias is either (0,1) or (1,0), 81.3% of guesses are fully trusting, again significantly higher than the corresponding trust level in the baseline treatment (p<0.001).
VII. Conclusion
Members of groups often transmit information to each other strategically, sometimes
choosing to hide an underlying truth when they believe doing so serves their own self-interest.
Both monetary and social incentives may affect these strategic decisions, and our paper is a
critical examination of these effects. We first modeled an environment with separate monetary
and social incentives, deriving equilibrium predictions of these two effects on truthfulness. We
then tested these predictions using a laboratory experiment.
We found that, absent social identity, the message sending behavior of our subjects
generally conformed to theory.12 In particular, our evidence supported the model’s key prediction
that messages sent between monetary-incentive groups should be less truthful than those sent
within groups. While qualitatively consistent with theory, the frequency of true between-group messages is higher than our theory predicts. This “over-communication” result is consistent with findings from previous studies (see, e.g., Gneezy, 2005, among others). The reason for over-communication remains an important open question for future research. For example, our design cannot distinguish between explanations such as expectations-based honesty (Charness and Dufwenberg, 2006) and commitment-based honesty (Vanberg, 2008; Ismayilov and Potters, 2012).
Further, our data indicate that introducing social identity reduces truthful information transmission. Moreover, our data reveal that the effect of holding different social identities (which theory predicts should lead to less truthfulness) is stronger than the effect of sharing the same social identity (which theory predicts should lead to more truthfulness). The detrimental effect on truthfulness may reflect “moral wiggle room” (see, e.g., Dana et al., 2007). In particular, holding a different social identity may be a convenient reason to lie when facing a receiver who also holds a different monetary incentive. In the environment without social identity, guilt aversion may play a more prominent role in deterring lying for monetary gain. This effect occurs only in the case of different social identities because, when monetary incentives are aligned, any
12 This result was reported in Rong and Houser (2014).
positive effect of identity on truthfulness is inconsequential, as players have no incentive to lie regardless.
This asymmetric social identity effect may stem in part from the high levels of
truthfulness we observed in our baseline treatment, which may have created ceiling effects.
Another limitation of our design is that we did not elicit beliefs. Consequently, while we did find
that our participants followed the message they received more than theory predicts, it is not clear
that this behavior should be interpreted as “over-trusting”. Message following behavior can arise
for a number of reasons, depending on the beliefs and expectations group members hold over the
decisions of other group members.
A final limitation of our study worth noting is that social identity can in principle emerge
through any common affiliation, including perhaps holding the same monetary incentive. Neither our theory nor our experiment separates the social identity effects arising from being in the same monetary-incentive group from the effect of the monetary incentive itself. That said, some of the departures we observe in our data from the theory could reflect this phenomenon.13 It
would be profitable for future research to explore this issue.
13 For example, in the baseline treatment, truthful between-group messages when the bias is (0,0) occur at frequency 87.9%, significantly lower than the model’s prediction of 100%.
References:
Battaglini, M. (2002): Multiple Referrals and Multidimensional Cheap Talk, Econometrica, 70(4):
1379–1401
Bernhard, H., E. Fehr and U. Fischbacher (2006): Group Affiliation and Altruistic Norm
Enforcement, American Economic Review, 96 (2): 217–221
Blume, A., D. DeJong, Y. G. Kim and G. Sprinkle (1998): Experimental Evidence on the Evolution
of Meaning of Messages in Sender-Receiver Games, American Economic Review, 88: 1323-1340
Blume, A., D. DeJong, Y. G. Kim and G. Sprinkle (2001): Evolution of Communication with Partial
Common Interest, Games and Economic Behavior, 37: 79-120
Brewer, M. B. (1999): The Psychology of Prejudice: Ingroup Love and Outgroup Hate?, Journal of Social Issues, 55(3): 429-444
Cai, H and J.T. Wang (2006): Overcommunication in Strategic Information Transmission Games,
Games and Economic Behavior, 56 (1):7-36
Charness, G., L. Rigotti, and A. Rustichini (2007): Individual Behavior and Group Membership, American Economic Review, 97: 1340-1352
Chen, Y. and R. Chen (2011): The Potential of Social Identity for Equilibrium Selection, American Economic Review, 101(6): 2562-2589
Chen, Y. and S. X. Li, (2009): Group Identity and Social Preferences, American Economic
Review 99(1): 431-457
Cloke, K. and J. Goldsmith (2000): Resolving Personal and Organizational Conflict: Stories of Transformation and Forgiveness, Jossey-Bass
Conrad, Charles R., M.S. Poole (2011): Strategic Organizational Communication: In a Global
Economy, Wiley & Sons
Cowan, David. (2003): Taking Charge of Organizational Conflict: A Guide to Managing Anger and
Confrontation, Personhood Press
Crawford, V.P. and J. Sobel (1982): Strategic Information Transmission, Econometrica, 50(6): 1431-1451
Dana, J., R. A. Weber and J. X. Kuang (2007): Exploiting moral wiggle room: experiments
demonstrating an illusory preference for fairness, Economic Theory, 33: 67–80
De Dreu, C.K.W. and M. J. Gelfand (2007): The Psychology of Conflict and Conflict Management in Organizations, Psychology Press
Dickhaut, J.W., K.A. McCabe and A. Mukherji (1995): An Experimental Study of Strategic Information Transmission, Economic Theory, 6: 389-403
Eckel, C. and P. J. Grossman (2005): Managing Diversity by Creating Team Identity, Journal of
Economic Behavior & Organization, 58 (3): 371–392
Eckman, A., T. Lindlof (2003): Negotiating the Gray Lines: an ethnographic case study of
organizational conflict between advertorials and news, Journalism Studies, 4:65–77
Galeotti, A., C. Ghiglino and F. Squintani (2013): Strategic Information Transmission in Networks, Journal of Economic Theory, 148(5): 1751-1769
Gneezy, U. (2005): Deception: The Role of Consequences, The American Economic Review, 95(1):
384-394.
Goette, L., D. Huffman, and S. Meier (2006): The Impact of Group Membership on Cooperation and
Norm Enforcement: Evidence Using Random Assignment to Real Social Groups, American
Economic Review, 96 (2): 212–216.
Gupta, A. K., S. P. Raj, and D. L. Wilemon (1985): R&D and Marketing Dialogue in High-Tech
Firms. Industrial Marketing Management 14, 289–300
Hagenbach, J. and F. Koessler (2010): Strategic Communication Networks, Review of Economic
Studies, 77(3):1072-1099
Kolb, D. M., L.L. Putnam, J. M. Bartunek (1992): Hidden Conflict in Organizations: 1st
Edition, SAGE Publications
Krishna, V. and J. Morgan (2001a): A Model of Expertise, Quarterly Journal of Economics,
116(2):747-775
Krishna, V. and J. Morgan (2001b): Asymmetric Information and Legislative Rules: Some
Amendments, American Political Science Review, 95(2):435-452
Lai, E. K., W. Lim, and J. T.-Y. Wang (2011): Experimental Implementations and Robustness
of Fully Revealing Equilibria in Multidimensional Cheap Talk, working paper
Lundquist, T., T. Ellingsen, E. Gribbe and M. Johannesson (2009): The Aversion to Lying, Journal
of. Economic Behavior and Organization, 70(1–2):81–92
Marchewka, A., K. Jednorog, M. Falkiewicz, W. Szeszkowski, A. Grabowska and I. Szatkowska (2012): Sex, Lies and fMRI—Gender Differences in Neural Basis of Deception, PLoS ONE, August 2012
Miller, Katherine (2011): Organizational Communication: Approaches and Processes, Cengage
Learning
Minozzi, W., and J. Woon (2011): Competition, Preference Uncertainty, and Jamming: A Strategic
Communication Experiment, working paper
Pirnejad, H., Z. Niazkhani, M. Berg and R. Bal (2008): Intra-organizational Communication in Healthcare—Considerations for Standardization and ICT Application, Methods of Information in Medicine, 47(4): 336-345
Rahim, Afzalur (2000): Managing Conflict in Organizations: 3rd Edition, ABC-Clio, LLC
Rong, R. and D. Houser (forthcoming): Deception in Networks: A Laboratory Study, Journal of
Public Economic Theory
Shih, M., T. L. Pittinsky and N. Ambady (1999): Stereotype Susceptibility: Identity Salience and
Shifts in Quantitative Performance, Psychological Science, 10 (1):81–84
Snider, C., and T. Youle (2010): “Does the Libor Reflect Banks’ Borrowing Costs?”, Working Paper,
UCLA.
Sutter, M. (2009): Deception through Telling the Truth? Experimental Evidence from Individuals and Teams, Economic Journal, 119: 47-60
Tajfel, H., M. Billig, R. Bundy, and C. Flament (1971), Social Categorization and Inter-Group
Behavior, European Journal of Social Psychology 1, 149–177
Tobak, S. (2008): Marketing v. Sales: How To Solve Organizational Conflict, CBS,
MONEYWATCH
Wang, J.T., M. Spezio and C.F. Camerer (2010): Pinocchio's Pupil: Using Eyetracking and Pupil
Dilation to Understand Truth Telling and Deception in Sender-Receiver Games, American Economic
Review, 100 (3): 984-1007
Weinrauch, J. D., R. Anderson (1982): Conflicts Between Engineering and Marketing Units,
Industrial Marketing Management, 11(4): 291-301
Xiao, E. (forthcoming): Profit-Seeking Punishment Corrupts Norm Obedience, Games and Economic Behavior