Experimental Economics (2025), 1–19
doi:10.1017/eec.2025.6
ORIGINAL PAPER
Spoiling the party: Experimental evidence on the
willingness to transmit inconvenient ethical information
Jantsje M. Mol1,2, Ivan Soraperra3 and Joël J. van der Weele1,2
1Center for Research in Experimental Economics and Political Decision Making (CREED), University of Amsterdam, 1018
WB Amsterdam, Netherlands
2Tinbergen Institute, Gustav Mahlerplein 117, 1082 MS Amsterdam, the Netherlands
3Max Planck Institute for Human Development, Center for Humans and Machines, Lentzeallee 94, 14195 Berlin, Germany
Corresponding author: Jantsje M. Mol; Email: j.m.mol@uva.nl
(Received 5 December 2024; revised 20 February 2025; accepted 27 February 2025)
Abstract
Information about the consequences of our consumption choices can be unwelcome, and people sometimes avoid it. Thus, when people possess information that is inconvenient for another person, they may face a dilemma about whether to inform them. We introduce a simple and portable experimental game to analyze the transmission of inconvenient information. In this game, a Sender can, at a small cost, inform a Receiver about a negative externality associated with a tempting and profitable action for the Receiver. The results from our online experiment (N = 1,512) show that Senders transmit more information when negative externalities are larger and that Senders' decisions are largely driven by their own preferences towards the charity and their own use of information. We do not find evidence that Senders take the Receiver's preferences into account, as they largely ignore explicit requests for information, or ignorance, even if Receivers have the option to punish the Sender.
Keywords: Information avoidance; Lab experiment; Unethical behavior; Willful ignorance
JEL Codes: B41; C91; C93
1. Introduction
In many contexts, people have preferences over information and sometimes try to avoid it (Golman et al., 2017). Information avoidance often serves to protect cherished beliefs, for instance the protection of one's ego from bad feedback (Castagnetti & Schmacker, 2022) or the avoidance of bad financial news to reduce disappointment or stress (Sicherman et al., 2016). In particular, previous research has shown that some people try to escape responsibility for ethical decisions and maintain a good self-image by remaining uninformed about the consequences of their decisions (Dana et al., 2007; Grossman & van der Weele, 2017; Vu et al., 2023). Such willful ignorance may have important consequences for everyday consumption behavior, such as the decision to buy products that have adverse impacts on the environment or are manufactured in exploitative conditions (Ehrich & Irwin, 2005; Amasino et al., 2025).
© The Author(s), 2025. Published by Cambridge University Press on behalf of Economic Science Association. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.

Information avoidance also has an interpersonal side that has received much less attention. People often have information that is potentially inconvenient for others, and must decide whether to share it. For instance, a vegetarian may ponder whether to give her carnivorous friends detailed information about the environmental costs associated with meat eating. In doing so, she may weigh several
considerations. First, a concern for environmental consequences might motivate her to influence her friends' diets in the "right" direction. A second, more procedural reason to share would be to make sure her friends know the truth, whatever they end up doing. Finally, she may hold back information out of consideration for her friends' feelings. She may assess that transmitting information may make her friends feel judged, and even lead to confrontations that she may wish to avoid. Indeed, there is evidence that vegetarians and vegans sometimes experience backlash for sharing information about their diets, which causes some to keep a low profile (De Groeve & Rosenfeld, 2022; MacInnis & Hodson, 2017).
Other applications may occur within politics, organizations, or markets. For instance, politicians may have to decide whether to inform their voters about difficult trade-offs. Employees who have knowledge of organizational practices with negative external consequences must decide whether to pass it up the decision-making chain. In buyer-seller interactions, sellers may voluntarily disclose ethical information about their products.
To study the trade-os facing a sender of information, we designed an experiment that we call
the “Button Game.” e game involves two participants in the role of a Sender and a Receiver.
e Receiver can press a large red button on the screen, which yields a bonus payo of £1
for the Receiver, but may or may not degrade a fund destined for donation to a worthy cause.
If the Receiver does not press the button, there are no additional payos for the Receiver or
for the charity. e button is designed to be tempting; indeed, in the absence of specic infor-
mation about the externality, virtually all Receivers in our experiment press the button and
pocket 1.
Our primary interest is the decision of the Sender. Before the Receiver presses the button, the Sender can send information about the size of the externality at a small cost. In the Baseline treatment, Senders make multiple decisions for different sizes of the externality, where one of their decisions is randomly implemented. We find most Senders are willing to pay to send information, but only when externalities are relatively large. This indicates that some Senders trade off the payoff for the charity with the cost of sending. We also find evidence that personal preferences for inconvenient information, measured in a separate task, explain sharing. Finally, procedural concerns matter, as some Senders share even if it does not change the Receiver's decision, and almost 30% of the Senders that share information say explicitly that this is the right thing to do.
To further investigate the Sender's willingness to accommodate the Receiver, we designed a treatment in which we vary the Sender's information about the Receiver's preferences. Before the Sender makes a decision, the Receiver can request either information or ignorance. We find no evidence that Senders respond to the Receiver's preferences, as neither the request for ignorance nor the request for information significantly affects information sharing. To reinforce the power of the request and mimic the possibility of conflict, we add a treatment with an option for the Receiver to punish the Sender by denying part of the Sender's participation payment. The threat of punishment does not make either type of request more effective, even though we observe some punishment by Receivers.
The key takeaway that emerges from our dataset is that sharing of inconvenient information is driven by the Sender's personal attitudes towards information and the externality. To the extent the findings from our stylized setting capture behavior outside the lab, the prevalence of sharing shows most people are motivated to share information about unethical consequences when it can have a significant impact. Nevertheless, the results also indicate the limits of sharing. The central role of the Sender's own preferences for information in the decision to share suggests that sharing will be less prevalent for topics in which people are widely averse to information, like in the meat-eating example above (Epperson & Gerster, 2024; Onwezen & van der Weele, 2016).
Our paper contributes to a fast-growing experimental literature on information avoidance in ethical dilemmas (Dana et al., 2007; Grossman, 2014; Vu et al., 2023) and a smaller literature on how people share inconvenient information. Closest to our paper is Soraperra et al. (2023), who examine the demand and supply of willful ignorance in a market setup. Over multiple rounds, senders choose to release information or not, and decision-makers can choose to match with the sender they prefer. In this setting, senders suppress about 25% of inconvenient information on average, which correlates with their own preferences. However, the market setting is noisy, and there is not much control over the strategic incentives of the senders or their beliefs about the decision-makers' preferences, making it hard to disentangle various explanations for information transmission and suppression. Another closely related study is Vellani et al. (2024). In an online experiment, they examine the motives for sharing potentially unpleasant information about monetary losses for the receiver. The results, which are in line with ours, show that participants use their own information-seeking preferences when deciding to share such information with others. Our study instead focuses on information sharing in the ethical domain.
A number of further studies look at information transmission. Lind et al. (2019) allow senders to force ethical information on decision-makers after they made their own decision to avoid or obtain information. They find that the option to be "overruled" by the sender results in more information seeking by decision-makers. Lane (2022) investigates a setting in which subjects can inform others about the externalities of their actions after they have taken a decision, so the information has no instrumental value but may reduce the happiness of the decision-maker. Most senders reveal information, despite the potentially negative impact on the receiver. In our paper, information does have instrumental value, but our finding that senders do not cater to the preferences of the receiver is in line with Lane (2022).
Our paper also has a link to research on paternalism. In particular, Ambuehl et al. (2021) find that people engage in an "ideals-projective paternalism", where they assume their preferences are relevant to others and restrict others' options accordingly. While senders in our study do not restrict any options, we do find that the sender's own preferences for information and their evaluation of the charity are the main predictors of what they share with others.
The main contribution of our paper to this literature is to introduce a simple and portable setting to analyze the transmission of inconvenient information. We offer new evidence on the determinants of sharing decisions and the willingness of senders to accommodate the Receiver's information avoidance.
2. Method and experimental design
e experiment consists of two tasks and a nal survey. e rst task measures participants’
preferences for information in an adaptation of the binary dictator game in Dana et al. (2007,
DWK hereaer). e second and main task, a novel two-person game we call the “Button Game,
disentangles dierent motives to share information.
2.1. The DWK binary dictator game
Every participant played the binary dictator game, regardless of their role (Sender or Receiver) in the Button Game. The binary dictator game is inspired by the Hidden Information treatment proposed by Dana et al. (2007), with all participants acting as dictators and a charity as recipient, as in Lind et al. (2019). In this task, the participant has to choose between two options, namely, Option A and Option B, which have consequences for their payoff and for the donation to a charity, the Red Cross.
e payos of Option Aand Option Bfor the participant are £.6 and £.5, respectively. e payos
for the Red Cross, instead, depend on the scenario: in the conicting scenario Aand Bpay £.1 and
£.5 to the charity; in the aligned scenario, the payos for the charity are ipped, with Aand Bpaying
£.5 and £.1, respectively. Participants are informed that each scenario is randomly selected with equal probability, and they can find out the realized scenario by clicking a Reveal button. Alternatively, they can select their preferred option directly, without knowing whether the payoffs for the charity follow the aligned or the conflicting scenario.

Fig. 1 Decision screen for the receiver in the Button Game. (a) Uninformed (b) Informed
2.2. The Button Game
As the main task, we designed a two-person game in which a Receiver interacts with a Sender. e
Sender possesses superior information about the consequences of the Receiver’s action for a third
party. e Sender can inform the Receiver before the latter chooses an action. We consider three vari-
ants of the game that manipulate how the two parties interact and dene our treatments namely,
the Baseline treatment, the Request treatment, and the Request +Punishment treatment.
In all versions of the game, the Receiver has to decide whether to press a button; see Figure 1 for an example. The button is displayed for a total time of 30 seconds, during which the Receiver can press it. It was designed to be attractive to press: large and red. Pressing the button pays a bonus of £1 to the Receiver. In addition, it has consequences for a third party, the Red Cross, which range from +£.5 to −£2.5. Crucially, the Receiver has no information about the charity payoffs; neither about the actual value nor about the possible values. According to the instructions: "Pressing the button also has consequences for the total amount donated to the Red Cross. These consequences can be either positive or negative, but you are not informed about them. They are concealed by ???."
Not pressing the button means that the Receiver will not get a bonus payment, but it ensures that the Red Cross will not be affected. To avoid Receivers pressing the button in order to speed up the experiment, subjects still have to spend the remaining time on the page. During this time, the button press cannot be reversed. On top of the button, we displayed the bonus of £1, alongside the payoff consequences for the charity depending on the decision of the Sender.
The Sender is informed of the consequences for the charity, and this is common knowledge among the players. The Sender's task is to decide whether to share this piece of information with the Receiver before the latter makes their choice. The decision to pass information comes at a small cost of £.1 for the Sender.
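A minimal sketch of the monetary consequences may help fix ideas (illustrative Python; the function and its £.5 Sender participation payoff follow the design described in this section, but the names are ours):

```python
def button_game_payoffs(sender_sends: bool, receiver_presses: bool,
                        externality: float) -> tuple[float, float, float]:
    """Return (sender, receiver, charity) monetary consequences in GBP."""
    sender = 0.5 - (0.1 if sender_sends else 0.0)       # sending costs £.1
    receiver = 1.0 if receiver_presses else 0.0         # pressing pays a £1 bonus
    charity = externality if receiver_presses else 0.0  # externality hits only if pressed
    return sender, receiver, charity

# An informed Receiver who refrains from pressing at the worst externality:
print(button_game_payoffs(True, False, -2.5))   # → (0.4, 0.0, 0.0)
# An uninformed Receiver who presses:
print(button_game_payoffs(False, True, -2.5))   # → (0.5, 1.0, -2.5)
```

The two printed cases illustrate the Sender's core trade-off: paying £.1 can avert a £2.5 loss for the charity, but only if the informed Receiver then forgoes the £1 bonus.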
In the experiment, we implemented the Sender's decision using the strategy method (see Appendix E for screenshots). Each Sender had to choose whether to share information for three negative impact levels (−£2.5, −£1.0, and −£.5) and one positive impact level (+£.5). If the Sender decided to send
information for a certain impact level and that level was randomly selected for implementation, the Sender's payoff for participation was reduced from £.5 to £.4. We tested understanding of these consequences with a comprehension question. A complete set of screenshots of all instructions and comprehension questions can be found in Appendix E.1

Fig. 2 Timeline of the different variants of the Button Game. (All treatments start with Instructions + Practice Task for both players and end with Beliefs + Final Survey. In the Baseline, the Sender may send info to the Receiver, and the Receiver may then press the button. The Request treatment adds an initial stage in which the Receiver may request info; the Request + Punishment treatment further adds a final stage in which the Receiver may punish the Sender.)
The Button Game was designed to keep the strategic aspects of the Sender's decision relatively simple. In particular, inspired by the sender-receiver game in Gneezy (2005), we kept the information to the Receiver about the payoff consequences for the charity down to a minimum. This feature encourages the Receiver to press the button in the absence of information. Moreover, since the Receiver does not know what kind of information the Sender can communicate, it limits the degree to which the Receiver can form beliefs about the externality in the absence of information, or form higher-order beliefs about the Sender's intentions. This simplifies the analysis, where we will (mostly) abstract from such higher-order beliefs. It also simplifies the Sender's decision problem, as she can assume that Receivers will press the button without information. To make sure that the Sender understands the decision environment of the Receiver, both Senders and Receivers start the game with an (unincentivized) practice round, where they can choose to press the button as an uninformed Receiver.
2.3. Timeline and treatments
Figure 2 shows the timeline of the Button Game and highlights the differences between the treatments. The software randomly allocates participants between the roles of Senders and Receivers. At the start of the game, all participants play a test round as an uninformed Receiver. In the Baseline treatment, the Sender moves first and decides whether or not to inform the Receiver. After the Sender's decision, the software randomly selects one of the four possible consequences for the charity. The information about these consequences is transmitted to the Receiver (or not), depending on the Sender's decision. If it is transmitted, it is displayed at the top of the red button.

The Receiver thus decides whether to press the button with or without information about the consequences for the Red Cross, depending on the decision of the Sender for the selected consequence. The Request treatment extends the Button Game by adding a stage at the beginning where Receivers can either request information or ignorance about the payoffs for the charity. The Receiver selects from two pre-specified message options; there is no option not to send a request message. Finally, the Request + Punishment treatment extends the Request treatment by adding a stage at the end. In this final stage, the Receiver chooses whether to confirm or cancel the £.4 bonus payment of the Sender. In the experiment, this decision was neutrally framed as "a final choice" to avoid normative connotations related to the word "punishment". Finally, we administer a closing questionnaire.
1 The binary nature of the sending decision and the use of the strategy method may induce some experimenter demand effect by suggesting that sending is important, at least for some externality levels. We cannot test whether these elements affect senders' decisions, but since they are kept constant in the experiment, they should not affect our comparisons between treatments.
2.4. Hypotheses
Here, we explain how we interpret the treatment differences and discuss our hypotheses. We preregistered the hypotheses prior to data collection.2 Our hypotheses are not based on a formal model, but on a reasoned assessment of how subjects understand the game.3
Before diving into the hypotheses about the Sender's behavior, we briefly discuss what we expect for the Receiver. For the time being, we assume that these expectations reflect the Sender's beliefs about Receivers. As for the Receivers, we expect that virtually all of them press the button when uninformed, given that they earn a sure £1 bonus and the consequence for the Red Cross is ambiguous and possibly positive. Uninformed Receivers may also infer that the lack of information from the Sender means that the externality is small or even positive, although the lack of precise information about the outcome means that they cannot form a precise Bayesian posterior.
When informed, instead, we expect that some of the Receivers will decide not to press the button to avoid generating harm to the charity. Moreover, we expect the likelihood of pressing the button to decrease weakly with the size of the consequences. Intuitively, if someone is willing to give up £1 to avoid a given level of consequences, the same person should also be willing to give up £1 to avoid more serious consequences.
Since our main interest is in the Sender's decision to share inconvenient information, we will focus only on Senders' choices for the negative consequences for the Red Cross; namely, we (mostly) ignore Senders' decisions for the +£.5 consequence.4 Moreover, we expect Senders to understand that in the absence of information, Receivers will press the button. This means that sending information about positive consequences is unlikely to make a difference to the outcome for the charity, although it may help the Receiver feel better about her choice.
For each Sender, we define a "sender-index" that measures the point at which consequences for the Red Cross become too large not to share information. The index ranges from 0, when the Sender does not share information for any negative consequence, to 3, when the Sender shares information for all negative consequences. An index of 1 identifies those Senders that share information only for the most extreme (−£2.5) consequence, and an index of 2 identifies those Senders that share information for the −£1.0 and −£2.5 consequences but not for the least extreme (−£.5) consequence.
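Because the definition assumes sharing is monotone in severity, the index is simply the count of negative levels at which the Sender shares. A sketch (illustrative Python; the data layout is ours):

```python
def sender_index(shares: dict[float, bool]) -> int:
    """Number of negative externality levels at which the Sender shares.

    `shares` maps each negative consequence for the charity (in GBP)
    to the Sender's sharing decision at that level (strategy method).
    """
    return sum(shares[level] for level in (-2.5, -1.0, -0.5))

# A Sender who shares for -£2.5 and -£1.0 but not for -£.5 has index 2:
print(sender_index({-2.5: True, -1.0: True, -0.5: False}))  # → 2
```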
The Baseline treatment measures whether Senders have preferences for sharing inconvenient information that are strong enough to overcome the small cost of sharing. As mentioned in the introduction, such preferences could depend on various motives, for example, (1) a concern for the charity, (2) procedural reasons like the belief that Receivers ought to make an informed choice, or (3) the desire to help the Receiver, combined with a belief that the Receiver would like to be informed. Accordingly, our first hypothesis is that a non-negligible fraction of Senders decides to share inconvenient information.
Hypothesis 1. Senders send inconvenient information about the charity to their partners and are
more likely to do so as the externality becomes more negative.5
To hypothesize the impact of the Request treatment, we consider both requests for information and for ignorance. The former request is straightforward to interpret: the main reason Receivers would like information is to decide whether to push the button. A request for information is, therefore, a
signal that the information is likely to be used by the Receiver. Since sending helps both the Receiver and the charity, we thus expect the Sender to increase the likelihood of sending information compared to the Baseline.

2 For the preregistration, see Appendix F or https://aspredicted.org/X8Y_Q7T.
3 In particular, we cannot analyze communication as a fully Bayesian game, as we did not tell the Receiver the possible outcomes for the charity nor the probabilities associated with these outcomes.
4 The main reason for including a positive value is dictated by the need to truthfully tell the Receiver that consequences could be either positive or negative.
5 The second part of this hypothesis was not preregistered but is implied by our use of the "sender-index", as explained above.
e eect of a request for ignorance is more complex. First, it may change the Sender’s beliefs
about the impact of information on the charity. e request may be a signal that the Receiver will
not use the information, which may make the Sender less willing to send it. ere is a caveat to this
reasoning, however: e literature on moral wiggle room shows that a sizable fraction of subjects
who choose to avoid information would nevertheless use it when they are confronted by it (Dana
et al., 2007; Grossman & van der Weele, 2017). To the extent the Sender anticipates this, she may still
perceive the potential impact on the charity to outweigh the cost of sending. To better understand
how requests change beliefs, we therefore measure the Sender’s beliefs about the Receiver’s action in
each condition. In addition, if the Sender cares about helping the Receiver, who expressed a wish for
ignorance, one would expect the Sender to be more likely to suppress information.
Taken together, these considerations lead us to expect that Senders decisions follow the direction
of the request:
Hypothesis 2. Relative to the Baseline treatment, the likelihood of sharing inconvenient information increases with a request for information and decreases with a request for ignorance.
Finally, the comparison of the Request + Punishment treatment is meant to amplify the strength of the Request treatment. In the Request + Punishment treatment, the Receivers can actually harm the Senders when they are unhappy about the provided or hidden information. Since Receivers can affect the Senders' payoffs, we expect Senders to follow the request of the Receiver more often. Furthermore, to the degree that the request affects the perceived cost of sharing information, the Request + Punishment treatment provides a measure of the cost sensitivity of the supply of inconvenient information.

Hypothesis 3. The possibility of Receiver punishment amplifies the impact of the requests on the likelihood of sharing information.
Along with hypotheses about the Sender's behavior, we derive secondary hypotheses regarding the impact on the overall welfare of the charity. Based on the previous hypotheses about Receivers' and Senders' behavior, we expect that Receivers requesting information are motivated by a willingness to avoid harming the charity. Therefore, the nature of the request, when Senders oblige it, will be correlated with the final outcome for the charity. Specifically, we expect the following:

Hypothesis 4. A request for information is associated with higher earnings for the charity and a request for ignorance with lower earnings for the charity. These effects are amplified in the punishment treatment.
2.5. Procedure
The experiment started with the binary dictator game, followed by the Button Game and the final survey. In the Button Game, participants were matched in pairs by the software, which meant that they had to wait for another player to join. If no other player appeared within 5 minutes, the software moved on to the end of the experiment, and the bonus payment was based on the results of the binary dictator game. When a match was possible, players were randomly assigned to the roles of Sender and Receiver to start the Button Game.
Aer reading the instructions, both Senders and Receivers faced a practice round to experience
the decision of the Receiver button page. In the practice round, no information about the conse-
quences for the charity was communicated (see panel (a) of Figure 1). Aer the practice round,
Senders had to state their beliefs about the number of people pressing the button by moving a slider from 0 to 100 ("we ask you to think of 100 participants choosing as player A and give your best guess about how many of these chose to press the button"). To keep the game simple and payment quick, beliefs were not incentivized. Next, Senders answered a short set of comprehension questions. No comprehension questions were asked of the Receiver due to the simplicity of the button task.
At the end of the Button Game, Senders completed a belief elicitation page where they were asked to guess the likelihood that their Receiver pressed the button for each possible consequence, again unincentivized and on a scale from 0 to 100. In the Request + Punishment treatment, Senders were further asked to guess the likelihood of punishment. Receivers were also asked about their beliefs regarding other players A pressing the button. This page was identical to the Sender's first belief elicitation page, but it was placed after the Receiver's own choice to avoid spillover effects. At the very end of the experiment, all participants completed a demographics questionnaire, which included some open questions about their motivations in the Button Game and a 10-point slider to indicate how much they identified with the Red Cross (inspired by Ariely et al., 2009). Finally, each player was shown an overview of payoffs and was informed about the task that was randomly selected for the bonus payment.
The main study was run on Prolific in November 2022, where 1,796 participants started the study. Seventy-one of them dropped out before starting the DWK task, 5 could not be matched with another player, 171 dropped out during the DWK task, and 37 participants finished the study without a partner, leaving N = 1,512 responses (84.2% completion rate) for the analysis (nBaseline = 302, nRequest = 610, nRequest+Punishment = 600). Due to practical constraints of the live matching into Sender-Receiver pairs, the treatments were run sequentially. To account for time effects, we started each treatment session at approximately the same time of day. All participants gave informed consent before participation. Participants were rewarded a £1.3 show-up fee plus the bonus earned in one of the two tasks, which was randomly selected at the end of the experiment. On the first page of the study, participants were informed about the payoffs to the Red Cross.6 We did not inform participants about the size of the original fund (which was £100). The experiment was programmed and data was collected via oTree software (Chen et al., 2016). The analysis code can be found at https://osf.io/download/d3zcj/.
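The sample accounting above can be verified with a line of arithmetic (a quick check of the reported figures, not part of the analysis code):

```python
started = 1796
dropouts = 71 + 171   # before and during the DWK task
no_partner = 5 + 37   # never matched, or finished without a partner
analyzed = started - dropouts - no_partner

print(analyzed)                            # → 1512
print(round(100 * analyzed / started, 1))  # → 84.2 (completion rate in %)
print(302 + 610 + 600)                     # → 1512 (treatment cells sum to N)
```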
Several days aer the end of the main study, all participants who completed at least the Dictator
game (in one of the pilots or the mainstudy) were messaged7via Prolic with a proof of the donation
to the charity.
3. Results
3.1. Preliminaries
Before analyzing Sender behavior, we first conduct a randomization check and verify whether some key assumptions about the way both Receivers and Senders approach the game hold.
6 The experimenters have prepared a fund to donate to the Red Cross at the end of the experiment. Your decisions may affect the size of this fund, and can either increase or decrease the total donations to the Red Cross. These donations are real, as our ethical approval does not allow us to deceive participants. A proof of the charity donation will be available upon completion of the experiment.
7 Dear participant, In [month] 2022, you participated in our decision-making experiment on Prolific. As part of this experiment, we scheduled a donation to the British Red Cross. Based on the decisions in the experiment, positive and negative payoffs could be collected for the Red Cross. We would like to inform you that the donation to the Red Cross has been made. You can find the donation receipt and more details here: https://figshare.com/s/d684b47812a2585174f4 Thanks again for your participation. You do not need to respond to this message. The researchers.
https://doi.org/10.1017/eec.2025.6 Published online by Cambridge University Press
Experimental Economics 9
Table 1 Descriptive statistics by treatment
Baseline Request Req. + Pun. Overall p-value
Gender (%) .054
Female 136 (45) 316 (51.8) 323 (53.8) 775 (51.3)
Male 161 (53.3) 291 (47.7) 273 (45.5) 725 (47.9)
Non-binary/not say 5 (1.7) 3 (.5) 4 (.7) 12 (.8)
Age in years (SD) 39.5 (12.9) 39.1 (13.0) 39.0 (13.0) 39.1 (13.0) .825
Monthly income (%) .209
<£999 29 (9.6) 55 (9) 47 (7.8) 131 (8.7)
£1,000-£1,999 68 (22.5) 156 (25.6) 130 (21.7) 354 (23.4)
£2,000-£2,999 80 (26.5) 175 (28.7) 162 (27) 417 (27.6)
£3,000-£3,999 61 (20.2) 103 (16.9) 123 (20.5) 287 (19)
>£4,000 51 (16.9) 88 (14.4) 88 (14.7) 227 (15)
Rather not say 13 (4.3) 33 (5.4) 50 (8.3) 96 (6.3)
Identify with charity a (SD) .2 (2.9) .2 (2.7) .2 (2.8) .2 (2.8) .947
Browser type Desktop (%) 281 (93) 566 (92.8) 517 (86.2) 1364 (90.2) <.001
Reveal in DWK (%) 119 (39.4) 256 (42) 223 (37.2) 598 (39.6) .232
Press btn in the test round (%) 292 (96.7) 579 (94.9) 572 (95.3) 1443 (95.4) .477
Observations 302 610 600 1512
Notes: The table reports the means for the continuous and the counts for the categorical variables with, respectively, SD and percentages in
parentheses. a Response to the question "How much do you identify with the charity Red Cross?", ranging from -5 = not at all to 5 = very much.
The column "p-value" reports the results of a test comparing the different treatments. A chi-squared test is used for categorical variables and
an ANOVA for the continuous variables.
3.1.1. Randomization check
Due to the live matching procedure of the Button Game, treatments were run sequentially on the
Prolific platform. It may be the case that different user groups log into Prolific at different times and
days of the week. Table 1 provides summary statistics about the participants' demographics and other
variables. The table allows us to assess the quality of the randomization across treatments. Overall,
the sample is balanced regarding age, income, identification with the charity, button pressing in the
practice round, and most importantly, own preferences for information (measured by the decision to
reveal in the binary dictator game). Gender distribution (more women in the Request + Punishment
treatment) and the device used (more mobile devices in the Request + Punishment treatment) are
slightly unbalanced across treatments. To control for such differences, we added gender and device
type as covariates in all further analyses.
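The chi-squared balance tests reported in Table 1 can be reproduced directly from the table's counts. A minimal sketch in Python, using the gender-by-treatment counts; this is an illustration of the test, not the authors' analysis code:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Gender counts by treatment, taken from Table 1
# (rows: Female, Male, Non-binary/not say;
#  columns: Baseline, Request, Req. + Pun.).
counts = np.array([
    [136, 316, 323],
    [161, 291, 273],
    [5, 3, 4],
])

# Pearson chi-squared test of independence between gender and treatment.
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```

The resulting p-value reproduces the .054 reported in the first row of Table 1.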
3.1.2. Receiver's behavior
We check whether Receiver behavior broadly aligns with our assumptions. Indeed, almost all
Receivers press the button when uninformed (96.4%; n = 364) across all treatments. As mentioned
in the hypothesis section, this high rate is unsurprising, given the monetary payoff of pressing the
button and the absence of any information about the charity.
Moreover, all 61 Receivers who saw good news (namely, that the button increased the dona-
tion by an additional £.5) pressed the button. Finally, the likelihood of pressing the button decreases
with the severity of the negative consequence for the charity: 73.8% (n = 107) of the informed
Receivers clicked the button when the consequences were -£.5, 66.1% (n = 112) when they were
-£1, and 51.8% (n = 112) when they were -£2.5. This shows that, overall, the behavior of Receivers is
10 Jantsje M Mol et al.
Table 2 Senders' beliefs about Receivers' button pressing, by consequence and treatment.
Dependent variable: Beliefs about Receivers' button pressing
All tmts Baseline Request Req. + Pun.
(1) (2) (3) (4)
consequence +.5 5.344*** 4.715† 4.593** 6.423***
(.967) (2.496) (1.426) (1.511)
consequence -.5 −23.134*** −20.808*** −24.636*** −22.777***
(.981) (2.132) (1.517) (1.608)
consequence -1.0 −31.354*** −26.596*** −33.272*** −31.800***
(1.075) (2.360) (1.631) (1.776)
consequence -2.5 −40.458*** −36.172*** −42.466*** −40.573***
(1.210) (2.671) (1.868) (1.968)
Observations 3,780 755 1,525 1,500
R2 .465 .409 .490 .469
Adjusted R2 .331 .258 .361 .334
Notes: Dependent variable: Response to the statement "I believe ... in 100 players will press the button." Reference category: uninformed. Linear
model with individual level fixed effects and heteroscedasticity robust standard errors in parentheses († p<.10; * p<.05; ** p<.01; *** p<
.001).
in line with our predictions, suggesting that they trade off the consequences for the charity with the
cost of sharing.8 We will investigate the behavior of Receivers in more detail in Section 3.6.
3.1.3. Sender's beliefs
Before we study the Sender's decision in detail, we first verify key Sender beliefs about the Receiver's
behavior. In particular, to interpret the decision to share bad news as an attempt to help the charity, it
must be true that Senders believe that sharing bad news indeed leads to a lower likelihood of press-
ing the button. Before making any decisions, Senders believe on average that 80.0% (SD = 18.1) of
the Receivers press the button when not informed about the consequences. Table 2 regresses Senders'
beliefs that Receivers will press the button, conditional on being informed, for the different possible
consequences. It reveals that Senders' beliefs about Receivers pressing the button decline as the sever-
ity of negative consequences increases, compared to uninformed Receivers across all treatments. By
contrast, Senders expect Receivers to press the button about 5 percentage points more often when they
are informed about positive consequences. This shows that, on average, Senders (correctly) believed
that sharing information would be effective, and more so when the externality was more negative.
In Section 3.5, we provide more details on the role of Sender beliefs in decision-making.
3.2. Information sharing in the baseline treatment
In the aggregate, Senders' decisions to share information increase with the size of the consequence. A
positive consequence of £.5 is shared by 32.5% of Senders in the Baseline treatment. Sending informa-
tion about a positive consequence may reassure Receivers that the charity benefits from their decision
to press the button. However, there is a cost of sending information. Since most Senders (correctly)
expect uninformed Receivers to press the button regardless, this can explain why most Senders did not
share this information. We discuss Senders' motives to send positive information further in Appendix
B1.
Negative consequences of -£.5, -£1, and -£2.5 are shared by 40.4%, 57.6%, and 71.5% of
Senders, respectively. The differences between these proportions are statistically significant (pairwise
8 The pattern is also confirmed when looking at the Receiver's behavior separately by treatment.
Fig. 3 Distribution of the sender-index in the Baseline treatment. [Bar chart: percentage of Senders (0–50%) at sender-index values 0–3, with 26.5% at index 0, 15.0% at index 1, 19.0% at index 2, and 39.5% at index 3; hide/share choices shown for consequences −0.5, −1.0, and −2.5.]
McNemar tests: consequence -.5 vs. consequence -1.0: 𝜒2(1) = 19.5, p < .001; consequence -1.0 vs.
consequence -2.5: 𝜒2(1) = 17.4, p < .001). Moreover, almost all Senders act consistently with a strat-
egy where sharing small (negative) externalities implies sharing larger negative externalities. Only 40
out of 756 Senders decide to share information for less serious consequences and not for more serious
ones.
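Because each Sender makes a sharing decision at every consequence level, the comparisons above are within-subject, which is why McNemar's test for paired binary outcomes is the appropriate choice. A minimal sketch of the test statistic; the discordant-pair counts below are illustrative, not the study's:

```python
from math import erf, sqrt

def mcnemar(b, c):
    """McNemar chi-square (1 df, no continuity correction) for paired binary
    data. b and c are the two discordant-pair counts, e.g. Senders who shared
    at the larger externality but not at the smaller one, and vice versa."""
    stat = (b - c) ** 2 / (b + c)
    # Survival function of chi2(1): P(X > s) = 2 * (1 - Phi(sqrt(s)))
    p = 2 * (1 - 0.5 * (1 + erf(sqrt(stat) / sqrt(2))))
    return stat, p

# Illustrative counts: 30 Senders switch to sharing at the larger
# externality, 5 switch the other way.
stat, p = mcnemar(30, 5)
```

Only the discordant pairs enter the statistic; Senders who make the same choice at both consequence levels are uninformative about the within-subject difference.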
This provides a rationale for our (preregistered) use of a sender-index, which reflects the small-
est consequence for which the Senders decide to share information.9 Figure 3 shows the distribution
of the sender-index in the Baseline treatment. It shows that there is substantial heterogeneity in
the preferences for sharing information among our participants. On the one hand, 26.5% of the
Senders never share information with the Receiver (sender-index = 0); on the other hand, 39.5% of Senders
always share information about the consequences (sender-index = 3). The remaining Senders have
intermediate preferences and share only when consequences are sufficiently negative (sender-index
1 and 2).
Overall, these numbers show that the majority of the Senders trade off the cost of sharing with the
potential consequences for the charity, and provide support for Hypothesis 1.
3.3. The eect of requests
We now turn to the Request treatment, which allows us to investigate whether Senders take into
account the preferences of the Receivers when sharing information. In this treatment, the majority of
Senders (225; 73.8%) received a Request for information, while the rest (80; 26.2%) received a request
for ignorance. Figure 4 (the three middle panels) shows the distribution of the sender-index across
the various treatments. e Request and Punishment treatments are split by the nature of the request.
According to our Hypothesis 2, we should observe an increase in the sender-index when the request
is for information and a decrease when the request is for ignorance. However, the data do not show an
increase in the frequency of Senders with higher sender-index values when transitioning from the le
to the right panel in Figure 4. e average sender-index gives a similar picture, with an average of 1.73
9Following the preregistration protocol, we exclude the 40 non-monotonic participants from the analysis of the sender-
index. Table B3 examines the pre-registered robustness check of a binary sender-index (including these 40 participants with
non-monotonic sharing behavior). e appendix shows that results are robust to the inclusion of these participants. A detailed
analysis of the sender-index can be found in Appendix A.
Fig. 4 Distribution of the sender-index by treatment. [Bar charts: percentage of Senders (0%–50%) at sender-index values 0–3, in five panels: Punishment/Ignorance requested, Request/Ignorance requested, Baseline, Request/Information requested, and Punishment/Information requested.]
when information is requested, of 1.65 when ignorance is requested, and of 1.71 when the request is
not present. Indeed, a non-parametric Jonckheere-Terpstra trend test fails to reject the hypothesis of
no difference in the sender-index across different requests (z = .31, p = .377).10
Regression evidence.
To examine the sharing decision more closely and with additional statistical power, we regress the
sender-index on the treatments, as well as variables that measure the Senders' preferences for infor-
mation and their identification with the charity. We also include control variables such as gender,
age, income, type of device used, and the number of attempts needed to answer the comprehension
questions correctly. Since the sender-index is ordinal by nature, we employ an ordinal probit model to explore
the correlation between such variables and the decision to share information.
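An ordered probit treats the sender-index as a latent propensity to share that is cut into ordered categories by unknown thresholds. A self-contained sketch of the likelihood, fitted on simulated data; the single regressor and the cutpoints here are made up for illustration, and the paper's estimates come from a full specification with covariates:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def ordered_probit_nll(params, y, x):
    """Negative log-likelihood of an ordered probit with one regressor.
    params = [beta, c1, log_gap2, log_gap3, ...]; cutpoints are built as
    c1, c1 + exp(log_gap2), ... so they are strictly increasing."""
    beta = params[0]
    cuts = np.cumsum(np.concatenate([[params[1]], np.exp(params[2:])]))
    upper = np.concatenate([cuts, [np.inf]])   # upper cut for each category
    lower = np.concatenate([[-np.inf], cuts])  # lower cut for each category
    # P(y = k) = Phi(c_{k+1} - x*beta) - Phi(c_k - x*beta)
    p = norm.cdf(upper[y] - beta * x) - norm.cdf(lower[y] - beta * x)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

# Simulate a 4-category outcome (like the sender-index 0-3) with true slope 1.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = np.digitize(x + rng.normal(size=1000), [-0.5, 0.5, 1.5])

res = minimize(ordered_probit_nll, x0=[0.0, -0.5, 0.0, 0.0],
               args=(y, x), method="BFGS")
beta_hat = res.x[0]  # should recover a slope close to 1
```

The log-gap parameterization of the cutpoints is a common trick to keep the thresholds ordered during unconstrained optimization.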
Model (1) in Table 3 presents the results of the regression using the Baseline and Request data. It
shows that requests for information have little effect, but requests for ignorance have a negative impact
on the sender-index (compared to the Baseline without requests), although this is not statistically
significant. Furthermore, sharing is positively related to how close the Senders feel to the charity, and
whether they themselves revealed information in the DWK game. This result is highly statistically significant,
and shows that preferences about the information one would like to have for oneself play an important
role in sharing information with others.
To further understand the channels through which the request for ignorance affects the Sender, we
investigate whether Senders are more likely to oblige when the request aligns with their own prefer-
ences for information. To test this, we run the same model restricting the data to those who remained
ignorant in the DWK game (column 2) and those who informed themselves (column 3). The results
show no significant interaction between the Sender's preferences and the request: While Senders who
chose to remain ignorant are more likely to accommodate a request for ignorance, this effect is not
statistically significant.
10 Testing the more general assumption of a difference across distributions also does not support the idea that the request has
an effect on the decision to share information. A 𝜒2 test cannot reject the null hypothesis of no differences in the distributions
of the sender-index (𝜒2(6) = 2.38, p = .882).
Table 3 Ordered probit regressions of sender-index
Dependent variable: Sender-index
(1) (2) (3) (4) (5) (6)
Information preference (ref = Baseline)
Request info −.011 −.111 .085 −.028 −.072 .023
(.124) (.162) (.202) (.122) (.157) (.197)
Request ignorance −.130 −.255 .023 −.150 −.232 −.063
(.172) (.227) (.272) (.169) (.219) (.273)
Request info under punishment threat .096 −.048 .356
(.130) (.164) (.224)
Request ignorance under punishment threat .083 .097 −.008
(.167) (.228) (.245)
Control variables
Identify with charity .990*** 1.234*** .764* .731*** .884*** .555†
(.214) (.269) (.365) (.167) (.212) (.287)
Revealed in DWK .257* .323***
(.119) (.094)
Log likelihood -522 -312.5 -203.6 -842.8 -523.1 -311.9
Pseudo R2 (McFadden) .038 .048 .028 .029 .028 .026
Covariates Yes Yes Yes Yes Yes Yes
Revealed in DWK Yes No No Yes No No
Observations 412 244 168 667 402 265
Notes: Ordinal probit model of the sender-index. Covariates suppressed for brevity: gender, age, income, browser type, comprehension ques-
tions. Models 1, 2, and 3 include all participants in the Baseline and Request treatments. Models 4, 5, and 6 include all participants across all
treatments. Robust standard errors in parentheses († p<.10; * p<.05; ** p<.01; *** p<.001).
3.4. The eect of adding punishment
e Request +Punishment treatment allows us to test whether Senders stick to their preferences
for sharing even when they risk punishment for not following the request (Hypothesis 3). Note that
punishment rates were low, but not negligible. Requests were followed about half of the time, and
deviations in responses to information requests were more likely to be punished (32%) compared
to deviations aer requests for ignorance (14.6%).11 In line with the pre-registration, we test the
null hypothesis that punishment does not change the pressure to follow the request of the Receiver
against the alternative hypothesis that it increases the pressure to follow the request of the Receiver.
Specically, we test whether the threat of punishment increases the sender-index when information
is requested and decreases the sender-index when ignorance is requested compared to the Request
treatment.
As for the Senders' decision in the Request + Punishment treatment, the left- and rightmost panels
of Figure 4 show the distribution of the sender-index when a request for ignorance and for informa-
tion are received, respectively. Visually, these distributions do not differ substantially from the ones
observed in the Request treatment, which are reported in the second and fourth panel, respectively.
11 The distribution of requests observed in the punishment treatment is similar to the one observed in the treatment without
punishment. In the Request + Punishment treatment, 206 Senders (68.7%) received a request for information and 94 (31.3%)
received a request for ignorance. 52 (17.3%) of the 300 Receivers punished the Sender for (not) responding to their request.
Information requests were followed by 113 Senders and ignored by 93 Senders, of which 30 were punished (32.2%). Conversely,
48 Senders sent information when ignorance was requested, but only 7 of those were punished (14.6%). In a few cases, Senders
were punished even though they followed the request for information (6 cases) or ignorance (9 cases).
https://doi.org/10.1017/eec.2025.6 Published online by Cambridge University Press
14 Jantsje M Mol et al.
Indeed, a one-sided Wilcoxon rank-sum test fails to reject the null hypothesis that punishment has
no effect on the sender-index, both when ignorance is requested (p = .865) and when information is
requested (p = .164).12
We also use regressions to investigate whether requests combined with the threat of punish-
ment induce different patterns from the baseline. Column (4) of Table 3 provides results that are
in line with the graphical and non-parametric evidence: we do not observe any significant effect
of requests when punishment is present. Columns (5) and (6) further reveal that the Sender's
preferences for information do not interact with the request when the threat of punishment is
introduced. All treatment coefficients in these regressions are statistically insignificant at the 5%
threshold.
3.5. The role of Sender's beliefs and motives
We previously showed that, prior to making a decision or learning about the requests, most Senders
believed that Receivers will press the button when uninformed and that they are more likely to refrain
from doing so when informed about the impact of externalities (see Table 2). In this section, we
aim to better understand the Sender's motives in the decision to send information by analyzing text
responses and stated beliefs. Appendix D provides the visual and statistical evidence corresponding
to the claims in this section.
Motives in open-ended text responses.
In Appendix D1, we examine Senders' motives to share, elicited in open-ended text responses at the
end of the experiment. Overall, these responses support the idea that a majority of Senders who
revealed information did so out of a concern for the charity. Procedural motives are also prevalent, as
almost 30% of Senders who send information indicate that it is the "right thing to do." A few Senders
expressed concerns for the Receiver's preferences, but this does not appear to be a dominant motive, in
line with the muted impact of requests. Overall, these results are in line with evidence in Lane (2022),
which shows that most Senders will send information about negative externalities after Receivers
have already made a decision, and the information is likely to have negative hedonic value. It is also
consistent with Arrieta & Bolte (2023), who show that a majority of people think that having false
beliefs is detrimental to a person's welfare.
Did the nature of the request affect Sender's beliefs in the Request treatment?
We find that the nature of the request does affect Senders' beliefs about the impact of sharing infor-
mation on Receivers' behavior. Models (1) and (2) of Appendix Table D1 report regressions studying
how the belief about the effect of sharing information (measured by the difference between the belief
of pressing the button when informed and when uninformed) changes with the request and conse-
quence levels. Focusing on the Request treatment and the largest externality level (-£2.5), Senders
believe that sharing information when Receivers request it reduces the likelihood of Receivers press-
ing the button by almost 48 percentage points. This difference becomes smaller after receiving a
request for ignorance: The estimated drop in the coefficient ranges from 9 to 11 percentage points,
depending on the model, which is statistically significant at the 10% level.
While requests for ignorance decrease Senders' expectations that the Receiver will use the informa-
tion, Senders still expect information sharing to reduce button pressing by more than 37 percentage
points. Thus Senders expect that Receivers who prefer to remain ignorant may nevertheless refrain
12 A Jonckheere-Terpstra trend test using all five combinations of treatments and requests does not reject the null hypothesis
that the sender-index is not increasing in the pressure to follow the request (z = .38, p = .352). Similarly, a 𝜒2 test cannot
reject the null hypothesis of no differences in the distributions of the sender-index across all five combinations of requests and
treatments (𝜒2(12) = 7.58, p = .817).
from pushing the button when they are informed of the consequences. This expectation is partially
correct (see Section 3.6) and helps explain the lack of response to requests.
Did the presence of punishment affect Sender's beliefs?
In Models (5) and (6) of Table D1, we look at the difference in the Senders' beliefs about getting
punished when sharing and when not sharing information. Regression results show that Senders do
not perceive much difference in the likelihood of being punished when sharing or when withholding
information, as the constants in the model are not significantly different from zero (although expected
punishment is slightly higher after withholding information). Moreover, the nature of the request
barely moves this expectation, suggesting that Senders were not sure how to interpret the request
of the Receiver in this treatment. This may explain why punishment was not effective in enforcing
responses to the request.13
Do beliefs explain Sender's decisions?
To further understand the impact of beliefs on decisions, we regress the Senders' decisions on
their beliefs that information makes a difference to the charity. We measure this as the decrease
in the subjective Sender belief that the Receiver will press the button when informed, compared
to being uninformed. We control for the treatments, the nature of the request, and the externality
size.
The results of this exercise are presented in Appendix Table D2. There is a clear effect of
Sender beliefs: For the case where the externality is -£2.5, namely, Model (1), an increase of
1 percentage point in the Sender's belief that information will sway the Receiver's behavior is
associated with a statistically significant increase in the likelihood to send information of about
.138 percentage points. Similar effects are observed in Models (2) and (3) for the other negative
externalities.
Interestingly, the eect of the Sender’s identication with the charity remains statistically signi-
cant, even when controlling for beliefs about the impact on the charity. One interpretation of this
result is that Senders who care about the charity wish to signal to themselves that, regardless of
whether Receivers act on the information, they have fullled their duty” by providing information
about the consequences. e eect of the Senders own preferences for information, as measured by
revelation in the DWK task, is also robust, underscoring the conclusion that Senders want others to
have information that they value for themselves.
3.6. Receiver behavior and consequences for the charity
We now analyze the consequences of sharing information for the charity. Figure 5 shows the aver-
age Receiver impact on charitable donations in the different treatments. The left panel examines the
aggregate effect of the treatment. The results show relatively small differences in the average payoff of
the Red Cross. Statistically, we cannot reject the null hypothesis that the aggregated outcome is the
same across the three treatments (F(2,753) = .87, p = .419).14
In Hypothesis 4, we predicted that a request for information is associated with higher earnings for
the charity and a request for ignorance with lower earnings for the charity. To visually evaluate the
hypothesis, we split the results for the charity by the request of the Receiver. The right panel of Figure 5
suggests that Receivers who ask for ignorance cause more harm to the charity. A test that the distri-
bution of charity outcomes is the same across all five groups shows a significant difference in the
13 Moreover, the presence of punishment changes the Sender's beliefs about button pressing. As we discussed above, in the
Request treatment, Senders correctly expected that information was less likely to have an impact on Receivers who requested
ignorance. Models (3) and (4) in Table D1 and the left panel of Figure D2 show that this is no longer the case in the Request +
Punishment treatment.
14 Comparing the distribution of payoffs leads to the same conclusion (𝜒2(8) = 4.55, p = .804).
Fig. 5 Consequences for the charity. Average transfer of the Receiver to the charity fund (means and SE). [Left panel: average consequences for the Red Cross (in pounds, roughly −1.0 to −0.4) by treatment (Baseline, Request, Req. + Pun.); right panel: the same, split by treatment and request (Information vs. Ignorance requested).]
outcomes for the charity (𝜒2(16) = 34.13, p = .005). The OLS regressions in Table C1 show a nega-
tive effect of requesting ignorance (relative to Baseline) in both the Request and Request + Punishment
treatments, although the coefficients are only significant at p < .1. Moreover, we do not find evidence
that punishment amplifies the effect of requests.
Receiver selection and response to information.
Above, we have shown that there are no statistical differences in Sender behavior in response to the
request. Yet Figure 5 reveals that the impact on the charity varies with the request, suggesting that
the Receivers' behavior correlates with the request they make. To understand this, we investigate the
impact of both making a request for information and actually becoming informed on the behavior
of Receivers. In Table 4, we regress the decision to press the button on dummies capturing informa-
tion requested and received. To fully control for Senders' behavior, which varies with the size of the
externality, we run separate regressions for each level of the externality. Moreover, we pool the data of
the Request and Request + Punishment treatments to increase statistical power. Appendix Figure B1
shows the proportion of button clicks across treatment, type of request, and received message (again
pooling the Request and Request + Punishment treatments).
The results show that uninformed Receivers behave similarly independently of their
requests, as they universally press the button. Moreover, we observe that receiving information
has a strong impact on pressing the button in the Baseline, and that this impact increases with
the size of the externality, as the estimated parameter of "Information received" gets smaller
moving from Model (1) to Model (3). These results are in line with the raw data reported
in Section 3.1.2.
When looking at the interaction of information preference and information received, we con-
clude that supplying information to those Receivers who request information has the same effect as
supplying information in the Baseline. By contrast, supplying information to people who requested
ignorance has a smaller impact than supplying information in the Baseline, principally because
Receivers who request ignorance do not use the information as much. Nevertheless, Receivers who
request ignorance are not wholly unresponsive to information, as they do not press the button as
often as uninformed Receivers. This pattern is in line with previous research (Dana et al., 2007),
and Senders seem to anticipate it (see Section 3.5).
Table 4 Receivers' button pressing by request and information obtained.
Dependent variable: Button pressing
Cons. -2.5 Cons. -1.0 Cons. -.5
(1) (2) (3)
Constant 1.019*** 1.012*** 1.020***
(.061) (.056) (.057)
Information preference (ref = Baseline)
Request info .011 .021 .013
(.032) (.032) (.032)
Request ignorance .016 .019 .026
(.030) (.030) (.030)
Information (ref = Ignorance)
Information received −.546*** −.393*** −.192†
(.105) (.109) (.100)
Interactions
Request info × Information received .027 .028 −.126
(.123) (.127) (.117)
Request ignorance × Information received .362** .202 .101
(.134) (.130) (.114)
Observations 447 447 442
R2 .374 .223 .181
Adjusted R2 .358 .203 .160
Notes: Dependent variable: Button pressed. Covariates suppressed for brevity: revealed in DWK, age, income, browser type, comprehension
questions. Linear model with heteroscedasticity robust standard errors in parentheses († p<.10; * p<.05; ** p<.01; *** p<.001).
In summary, our data show that a) information has a clear impact, as on average, Receivers want
to avoid imposing negative externalities, and b) Receivers who request ignorance are less responsive
to information and act more selfishly. Nevertheless, even among this last group, information does
reduce button clicking.
4. Conclusion
We investigated the willingness to share inconvenient information in an online experiment. The key
take-away that emerges from our dataset is that Senders are willing to pay to share inconvenient news
out of concern for the otherwise negative consequences. The Senders' own preferences for informa-
tion also play a significant role, thus showing that people share information that they consider in
their own decision-making, in line with Vellani et al. (2024). We find little evidence that Senders try
to cater to the preferences of Receivers. In particular, we do not find that they respond to explicit
requests for ignorance or information, even when there is a threat of punishment. Indeed, Senders
correctly anticipate that sharing information will still lead to better results for the charity, even if
Receivers asked to remain ignorant.
If these results replicate in other settings, this implies that there is scope for inconvenient information
to spread in society or organizations, as long as there are enough people who care about the affected
third party. However, the results also point to the limits of information sharing. The fact that people
share what they think is valuable for themselves suggests that people may end up in information silos.
There is evidence that people dislike obtaining information that casts their behavior in a negative light
(Golman et al., 2017; Vu et al., 2023). If social networks are characterized by homophily, namely,
people interact with others who have similar preferences or behavior, this might lead to information
bubbles, in line with results in Soraperra et al. (2023). For instance, returning to the example in the
introduction, there is evidence that meat eaters do not like to receive information about the negative
consequences of meat (Epperson & Gerster, 2024; Onwezen & van der Weele, 2016). To the degree
that carnivores seek each other out, they are unlikely to hear about the negative impacts of meat
production.
There are a number of avenues for further research to address the limitations of the current study.
One interesting extension would be to consider less anonymous interactions between senders and
receivers, as in Foerster & van der Weele (2021). More generally, stronger forms of receiver punish-
ment or opportunities for conflict may induce more self-censorship by senders. Second, one could
look at different audiences: Perhaps people would be more motivated to share information with multi-
ple Receivers, as the potential impact is bigger. One could also look at more extensive sharing networks
to understand how inconvenient information spreads, perhaps testing predictions in Bénabou et al.
(2020). Finally, one might look at various formats for information sharing, perhaps also including
advice on the decision, which is the focus of Coffman & Gotthard-Real (2019).
Replication Packages. To obtain replication material for this article, visit https://doi.org/10.17605/OSF.IO/MZPTY.
Acknowledgements. We would like to thank Yves Le Yaouanq, Lenka Fiala, as well as conference participants at ESA Online
Global 2021, TIBER 2021, ICSD 2022, and ESA World 2023 for their valuable comments. Sam Walsh, Britt van der Elsken,
Ruben Bijl, and Paulina Schulte-Vels provided excellent research assistance. This research has received financial support from
the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant
agreement No. 865931) and the Dutch Research Council (NWO; Vi.Vidi.195.137 and 452.17.004).
Competing interests. The authors declare no conflict of interest.
References
Amasino, D. R., Oosterwijk, S., Sullivan, N. J., & van der Weele, J. J. (2025). Seeking or ignoring ethical certifications in
consumer choice. Ecological Economics, 229, 108467. https://doi.org/10.1016/j.ecolecon.2024.108467
Ambuehl, S., Bernheim, B. D., & Ockenfels, A. (2021). What motivates paternalism? An experimental study. American
Economic Review, 111(3), 787–830. https://doi.org/10.1257/aer.20191039
Ariely, D., Bracha, A., & Meier, S. (2009). Doing good or doing well? Image motivation and monetary incentives in behaving
prosocially. American Economic Review, 99(1), 544–555. https://pubs.aeaweb.org/doi/10.1257/aer.99.1.544
Arrieta, G., & Bolte, L. (2023). What you don't know may hurt you: A revealed preferences approach. Available at SSRN.
Bénabou, R., Falk, A., & Tirole, J. (2020). Narratives, imperatives, and moral persuasion. Working paper. https://scholar.
princeton.edu/sites/default/files/rbenabou/files/morals_september_15.pdf
Castagnetti, A., & Schmacker, R. (2022). Protecting the ego: Motivated information selection and updating. European Economic
Review, 142, 104007. https://doi.org/10.1016/j.euroecorev.2021.104007
Chen, D. L., Schonger, M., & Wickens, C. (2016). oTree: An open-source platform for laboratory, online, and field experiments.
Journal of Behavioral and Experimental Finance, 9(1), 88–97. http://dx.doi.org/10.1016/j.jbef.2015.12.001
Coffman, L. C., & Gotthard-Real, A. (2019). Moral perceptions of advised actions. Management Science, 65(8), 3904–3927.
http://pubsonline.informs.org/doi/10.1287/mnsc.2018.3134
Dana, J., Weber, R. A., & Kuang, J. X. (2007). Exploiting moral wiggle room: Experiments demonstrating an illusory preference
for fairness. Economic Theory, 33(1), 67–80. http://link.springer.com/10.1007/s00199-006-0153-z
De Groeve, B., & Rosenfeld, D. L. (2022). Morally admirable or moralistically deplorable? A theoretical framework for
understanding character judgments of vegan advocates. Appetite, 168, 105693. https://doi.org/10.1016/j.appet.2021.105693
Ehrich, K. R., & Irwin, J. R. (2005). Willful ignorance in the request for product attribute information. Journal of Marketing
Research, 42(3), 266–277. https://doi.org/10.1509/jmkr.2005.42.3
Epperson, R., & Gerster, A. (2024). Willful ignorance and moral behavior. Available at SSRN: 3938994.
Foerster, M., & van der Weele, J. J. (2021). Casting doubt: Image concerns and the communication of social impact. The
Economic Journal, 131(639), 2887–2919. https://doi.org/10.1093/ej/ueab014
Gneezy, U. (2005). Deception: The role of consequences. American Economic Review, 95(1), 384–394. https://doi.org/10.1257/
0002828053828662
Golman, R., Hagmann, D., & Loewenstein, G. (2017). Information avoidance. Journal of Economic Literature, 55(1), 96–135.
https://doi.org/10.1257/jel.20151245
Grossman, Z. (2014). Strategic ignorance and the robustness of social preferences. Management Science, 60(11), 2659–2665.
http://pubsonline.informs.org/doi/abs/10.1287/mnsc.2014.1989
Grossman, Z., & van der Weele, J. J. (2017). Self-image and willful ignorance in social decisions. Journal of the European
Economic Association, 15(1), 173–217. https://academic.oup.com/jeea/article-lookup/doi/10.1093/jeea/jvw001
Lane, T. (2022). Intrinsic preferences for unhappy news. Journal of Economic Behavior & Organization, 202, 119–130. https://
doi.org/10.1016/j.jebo.2022.08.006
Lind, J. T., Nyborg, K., & Pauls, A. (2019). Save the planet or close your eyes? Testing strategic ignorance in a charity context.
Ecological Economics, 161, 9–19. https://doi.org/10.1016/j.ecolecon.2019.02.010
MacInnis, C. C., & Hodson, G. (2017). It ain't easy eating greens: Evidence of bias toward vegetarians and vegans from both
source and target. Group Processes & Intergroup Relations, 20(6), 721–744. https://doi.org/10.1177/1368430215618253
Onwezen, M. C., & van der Weele, C. N. (2016). When indifference is ambivalence: Strategic ignorance about meat
consumption. Food Quality and Preference, 52, 96–105. https://doi.org/10.1016/j.foodqual.2016.04.001
Sicherman, N., Loewenstein, G., Seppi, D. J., & Utkus, S. P. (2016). Financial attention. The Review of Financial Studies, 29(29),
863–897. https://doi.org/10.1093/rfs/hhv073
Soraperra, I., van der Weele, J., Villeval, M. C., & Shalvi, S. (2023). The social construction of ignorance: Experimental evidence.
Games and Economic Behavior, 138, 197–213. https://doi.org/10.1016/j.geb.2022.12.002
Vellani, V., Glickman, M., & Sharot, T. (2024). Three diverse motives for information sharing. Communications Psychology,
2(1), 107–120. https://doi.org/10.1038/s44271-024-00144-y
Vu, L., Soraperra, I., Leib, M., van der Weele, J. J., & Shalvi, S. (2023). Ignorance by choice: A meta-analytic review of the
underlying motives of willful ignorance and its consequences. Psychological Bulletin, 149(9–10), 611–635. https://doi.org/
10.1037/bul0000398
Cite this article: Mol, J. M., Soraperra, I., & van der Weele, J. J. (2025). Spoiling the party: Experimental evidence on the
willingness to transmit inconvenient ethical information. Experimental Economics, 1–19. https://doi.org/10.1017/eec.2025.6