Who Said or What Said?
Estimating Ideological Bias in Views Among Economists
Mohsen Javdani
Ha-Joon Chang
September 2019
Abstract
There exists a long-standing debate about the influence of ideology in economics. Surprisingly,
however, there is no concrete empirical evidence to examine this critical issue. Using an online
randomized controlled experiment involving 2425 economists in 19 countries, we examine the
effect of ideological bias on views among economists. Participants were asked to evaluate
statements from prominent economists on different topics, while source attribution for each
statement was randomized without participants’ knowledge. For each statement, participants either
received a mainstream source, an ideologically different less-/non-mainstream source, or no
source. We find that changing source attributions from mainstream to less-/non-mainstream, or
removing them, significantly reduces economists’ reported agreement with statements. This
contradicts the image economists have of themselves, with 82% of participants reporting that in
evaluating a statement one should only pay attention to its content. Using a framework of Bayesian
updating we examine two competing hypotheses as potential explanations for these results:
unbiased Bayesian updating versus ideologically-/authority-biased Bayesian updating. While we
find no evidence in support of unbiased updating, our results are consistent with biased Bayesian
updating. More specifically, we find that changing/removing sources (1) has no impact on
economists’ reported confidence with their evaluations; (2) similarly affects experts/non-experts
in relevant areas; and (3) has substantially different impacts on economists with different political
orientations. Finally, we find significant heterogeneity in our results by gender, country, PhD
completion country, research area, and undergraduate major, with patterns consistent with the
existence of ideological bias.
Keywords: Ideology, ideological bias, authority bias, Bayesian updating, views among
economists.
JEL Codes: A11, A14.
We thank John List, Syngjoo Choi, Carey Doberstein, Julien Picault, Michele Battisti, Erik Kimbrough,
Ali Rostamian, Leonora Risse, Behnoush Amery, Sam van Noort, and Tony Bonen for helpful comments
and discussions. We would also like to thank Michele Battisti, Julien Picault, Taro Yokokawa, and Allan
Vidigal for their help in translating the survey into Italian, French, Japanese, and Brazilian Portuguese,
respectively. We are also grateful to our research assistant, Jay Bell, for his assistance. We would also like
to thank all the people who took the time to participate in this survey. Funding for this project was provided
by Social Sciences and Humanities Research Council of Canada (SSHRC) FAS# F16-00472. This study is
registered in the AEA RCT Registry and the unique identifying number is AEARCTR-0003246.
Corresponding Author. Department of Economics, University of British Columbia Okanagan, 3333
University Way, Kelowna, BC V1V 1V7, Canada. Tel: 250-807-9152, E-mail: mohsen.javdani@ubc.ca
Faculty of Economics, University of Cambridge, Cambridge, UK, E-mail: hjc1001@cam.ac.uk
1. Introduction
“[E]conomists need to be more careful to sort out, for ourselves and
others, what we really know from our ideological biases.”
(Alice Rivlin, 1987 American Economic Association presidential address)
One of the dominant views in mainstream (Neoclassical) economics emphasizes the
positivist conception of the discipline and characterizes economists as objective, unbiased, and
non-ideological. Friedman (1953) argues in his famous essay that “positive economics is, or can
be, an 'objective' science, in precisely the same sense as any of the physical sciences.”[1] Similarly,
Alchian asserts that “[i]n economics, we have a positive science, one completely devoid of ethics
or normative propositions or implications. It is as amoral and non-ethical as mathematics,
chemistry, or physics.”[2] Boland (1991) suggests that “[p]ositive economics is now so pervasive
that every competing view has been virtually eclipsed.” There exists, however, a long-standing
debate about the role of ideology in economics, which some argue has resulted in rigidity in the
discipline, rejection and isolation of alternative views, and narrow pedagogy in economic training
(e.g. Backhouse 2010, Chang 2014, Colander 2005, Dobb 1973, Fine and Milonakis 2009,
Fullbrook 2008, Frankfurter and McGoun 1999, Galbraith 1989, Harcourt 1969, Hoover 2003,
Krugman 2009, Morgan 2015, Robinson 1973, Romer 2015, Rubinstein 2006, Samuels 1992,
Stiglitz 2002, Thompson 1997, Wiles 1979, and others).
In this study, we are not directly investigating the credibility of the different arguments
about the influence of ideological bias in economics by checking the validity of their evidence and
the consistency of the conclusions drawn. We instead take an agnostic view on these
discussions and treat them as warning signs that raise important questions requiring
further investigation. We believe that the answer to whether there is an ideological bias among
economists has important intellectual implications, both theoretical and practical. Theoretically, it
will help us investigate the extent to which the theoretical arguments behind the positivist
methodology of neoclassical economics are consistent with empirical evidence. In terms of
[1] Interestingly, Coase (1994) suggests that Friedman’s essay is a normative rather than a positive theory and that “what we are given is not a theory of how economists, in fact, choose between competing theories, but [..] how they ought to choose.” (see van Dalen (2019) for more details).
[2] Letter from Armen Alchian to Glenn Campbell, January 20, 1969. See Freedman (2016).
practical implications, there is growing evidence that suggests value judgments and political
orientation of economists affect different aspects of their academic life including their research
(Jelveh et al. 2018, Saint-Paul 2018), citation networks (Önder and Terviö 2015), faculty hiring
(Terviö 2011), as well as their position on positive and normative issues related to both public
policy and economic methodology (e.g. Beyer and Pühringer 2019; Fuchs, Krueger and Poterba
1998; Mayer 2001; van Dalen 2019).
These results are consistent with the views of some prominent economists who emphasize
the role of value judgments and ideology in economic analysis. For example, Modigliani (1977:
10) suggests that “there is no question but that value judgments play a major role in the differences
between economists. And I think it is unfortunate, but true, that value judgments end up by playing
a role in your assessment of parameters and of the evidence we consider. […] And there is no
question that Milton [Friedman] and I, looking at the same evidence, may reach different
conclusions as to what it means. Because, to him, it is so clear that government intervention is bad
that there cannot be an occasion where it was good! Whereas, to me, government discretion can
be good or bad. I'm quite open-minded about that, and am therefore willing to take the point
estimate. He will not take the point estimate; it will have to be a very biased estimate, before he
will accept it.” Similarly, Tobin (1976: 336) argues that “[d]istinctively monetary policy
recommendations stem less from theoretical or even empirical findings than from distinctive value
judgments.” Therefore, the answer to whether economists are influenced by ideological bias will
further inform this debate about the various factors underpinning economists’ views and their
practical implications, and will also inform the discussion about the evolution of the mainstream
economics discourse and economic training.
In order to examine the effect of ideological bias on views among economists, we use an
online randomized controlled experiment involving 2425 economists in 19 countries.[3] More
specifically, we ask participants in our online survey to evaluate statements from prominent
(mainly mainstream) economists on a wide range of topics (e.g. fairness, inequality, role of
government, intellectual property, globalization, free market, economic methodology, women in
economics, etc.). All participants receive identical statements in the same order. However, source
[3] By economists we mean those with a graduate degree in economics who are either academics or work in government agencies, independent research institutions, or think tanks. The majority of economists in our sample (around 92%) are academics with a PhD degree in economics. See the data section and Table A3 in our online appendix for more details.
attribution provided for each statement is randomized without participants’ knowledge. For each
statement, participants randomly receive either a mainstream source (Control Group), a relatively
less-/non-mainstream source (Treatment 1), or no source attribution at all (Treatment 2).
We then measure whether economists agree/disagree with identical statements to different
degrees when statements are attributed to authors who are widely viewed to adhere to different
views (ideologies), which place them at different distances from mainstream economics, or when no
source attributions are provided for the statements. Implementing two different treatments
potentially allows us to distinguish between the influences of ideological bias and authority bias,
which could lead to similar results. For example, finding an effect only for treatment 2 would be
consistent with the existence of authority bias but not ideological bias. Similarly, finding an effect
only for treatment 1 would be in line with the existence of ideological bias but not authority bias.
Finding an effect for both treatments would be consistent with the existence of both ideological
bias and authority bias.
We find clear evidence that changing or removing source attributions significantly affects
economists’ level of agreement with statements. More specifically, we find that changing source
attributions from mainstream to less-/non-mainstream on average reduces the agreement level by
around one-fourth of a standard deviation. These results hold for 12 out of 15 statements evaluated
by participants, across a wide range of topics and ideological distances between sources. Similarly,
we find that removing mainstream source attributions on average reduces the agreement level by
more than one-third of a standard deviation. This result holds for all 15 statements evaluated by
participants.
We implement several tests to examine two competing hypotheses as potential
explanations for our results: unbiased Bayesian updating versus ideologically-/authority-biased
Bayesian updating (we also develop a model of Bayesian updating in our online appendix that
further informs the intuition behind these tests in distinguishing between the two hypotheses).
Under unbiased Bayesian updating, a higher level of agreement with statements that are attributed
to mainstream sources is justified by objective differences in credibility of mainstream sources
relative to less-/non-mainstream sources. In contrast, under ideologically-biased Bayesian
updating, economists interpret mainstream sources as more credible, not based on objective
evaluation but because they are more (less) likely to confirm (disconfirm) their prior views as
mainstream economists. Similarly, under authority-biased Bayesian updating, while economists
might not have any particular priors or ideological views, they are more likely to agree with
statements attributed to mainstream sources since these sources are considered authority figures in the
profession. Since these different mechanisms are likely to generate similar treatment effects, we
use our empirical tests to examine their validity.
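To fix intuition for the distinction between these hypotheses, a stylized expression can be written as follows; this is a simplified illustration in our own notation, not the model developed in the online appendix:

\[
A_{isj} = (1-\omega_{ij})\,\pi_{is} + \omega_{ij}\, v_{s},
\]

where $\pi_{is}$ is participant $i$'s prior view on the issue addressed by statement $s$, $v_s$ is the position taken by the statement itself, and $\omega_{ij}$ is the weight placed on the statement, increasing in the perceived credibility of the attributed source $j$. Under unbiased updating, $\omega_{ij} = \omega_j$ depends only on objective attributes of the source; under ideologically-biased updating, $\omega_{ij}$ also rises with the ideological proximity of source $j$ to participant $i$; and under authority-biased updating, it rises with $j$'s standing in the profession.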
While we find no evidence in support of unbiased Bayesian updating, our results are all
consistent with biased updating among economists. More specifically, and in contrast (consistent)
with the implications of unbiased (biased) Bayesian updating, discussed in more detail in Section
5.2, we find that changing/removing sources (1) has no impact on economists’ confidence with
their evaluations; (2) similarly affects experts/non-experts in relevant sub-fields of economics; and
(3) has substantially different impacts on economists with different political orientations.
Moreover, as discussed in more detail in Section 5.2.3, differences in our estimated effects of
treatment 1 and treatment 2, and in their heterogeneity patterns by political orientation, highlight
the distinct role of both ideological bias and authority bias in influencing views among economists.
More specifically, our results suggest that the reduction in agreement level caused by changing
sources (i.e. treatment 1) seems to be mainly driven by ideological bias while the reduction in
agreement due to removing sources (i.e. treatment 2) seems to be mainly driven by authority bias.
In addition to the aforementioned empirical tests that support the existence of
ideological/authority bias, participants’ own expressed views on how to evaluate a statement lend
further credibility to the hypothesis that biased updating is the driving mechanism behind our
estimated treatment effects. More specifically, in an accompanying questionnaire at the end of the
survey, a majority of participants (82 percent) report that a statement should be evaluated based on
its content only, as opposed to its author (0.5 percent), or a combination of both (around 18
percent), which is in sharp contrast with how they actually evaluate statements. This suggests that
perhaps part of the ideological/authority bias evident in our results operates through implicit or
unconscious modes (Bertrand and Duflo 2017).
We also use background information collected from participants to examine whether our
results vary systematically by characteristics such as gender, country, area of research, country
where PhD was completed, and undergraduate major. We find that the estimated ideological bias
among female economists is around 40 percent smaller than that among their male counterparts. Interestingly, on
one statement in our survey which examines the issue of the gender gap in economics, there is a clear
and significant disagreement between male and female economists, with women much more
strongly agreeing with the existence of a serious and persisting gender gap in the discipline. In
addition, on this specific statement, while men still exhibit strong ideological bias, women display
no signs of ideological bias. This is perhaps because, when it comes to the important
issue of the gender gap in economics, which involves female economists at a personal level, women
put aside ideology and focus on the content of the statement as opposed to its source.
We also find systematic and significant heterogeneity in ideological/authority bias by
country, area of research, country where PhD was completed, and undergraduate major, with some
groups of economists exhibiting no ideological bias (towards mainstream) and some others
showing very strong bias (towards mainstream). In addition, the heterogeneity patterns found in
our results remain consistent with the existence of ideological bias.
The remainder of the paper proceeds as follows. Section 2 provides a brief overview of the
discussion about economics and ideology. Section 3 describes our experimental design. Section 4
discusses our data and empirical methodology. Section 5 presents and discusses our results.
Section 6 concludes.
2. Economics and Ideology: a Brief Overview
Our hypothesis regarding the potential influence of ideological bias among economists is
rooted in a long-standing debate about the influence of ideology in economics. Therefore, a better
understanding of this literature will better inform our analysis and the interpretation of any results
associated with ideological bias. Milberg (1998) elegantly summarizes the long-standing debate
about the influence of ideology in economics by stating that “the history of economic thought can
in fact be read as a series of efforts to distance knowledge claims from the taint of ideology, a
continuing struggle to establish the field’s scientific merit.”
About a century ago, Irving Fisher, in his presidential address to the American Economic
Association, raised his concern about ideological bias in economics by stating that, “academic
economists, from their very open-mindedness, are apt to be carried off, unawares, by the bias of
the community in which they live.” (Fisher 1919). Other prominent economists such as Joseph
Schumpeter and George Stigler also made substantial contributions to this discussion over the next
few decades (see Schumpeter (1949) and Stigler (1959, 1960, 1965) for examples). However, the
change in the nature of economic discourse, the increasing use of mathematics and statistics, and
the increasing dominance of the positivist methodology, represented by Friedman’s “Methodology
of Positive Economics,” have reduced the concern with ideological bias in economics, which has
gradually given way to a consensus that “economics is, or can be, an objective science.”[4]
Due to this prevailing consensus, the issue of ideological bias has been largely ignored
within mainstream economics in the last few decades. Critics, however, argue that the increasing
reliance of economics on mathematics and statistics has not freed the discipline from ideological
bias; it has simply made it easier to disregard it (e.g. Myrdal 1954, Lawson 2012).
There also exists evidence that could suggest that economics has not successfully rid itself
of ideological bias. For example, Hodgson and Jiang (2007) argue that due to ideological bias in
economics, the study of corruption has been mainly limited to the public sector, when there is
abundant evidence of corruption in the private sector (sometimes in its relation to the public sector
but also internally). Jelveh et al. (2018) point to ideological overtones that could be identified in
public debates between prominent economists over public policy during the last financial crisis as
an example of ideological bias in economics. They also point out that these perceptions of
ideological bias among economists have even affected the selection of economists as experts for
different government positions.[5] Yet another example can be found in a 2006 interview with
David Card by the Minneapolis Fed.[6] Talking about his decision to stay away from the minimum
wage literature after his earlier work on the topic, which according to the article “generated
considerable controversy for its conclusion that raising the minimum wage would have a minor
impact on employment,” he laments that one of the reasons was that “it cost me a lot of friends.
People that I had known for many years, for instance, some of the ones I met at my first job at the
University of Chicago, became very angry or disappointed. They thought that in publishing our
work we were being traitors to the cause of economics as a whole.”
Other prominent manifestations of ideological bias in economics include the so-called
fresh-water/salt-water divide in macroeconomics (Gordon and Dahl 2013) and its impact on
citation networks (Önder and Terviö 2015) as well as faculty hiring (Terviö 2011), the conflicts
[4] See Friedman (1953).
[5] They point to the following two examples: “The rejection of Peter Diamond, a Nobel laureate in economics, by Senate Republicans, as the nominee to the Federal Reserve Board, with one of the top Republicans on the Banking Committee calling him “an old-fashioned, big government, Keynesian” at the nomination hearing (see here). And, the withdrawal of Larry Summers from his candidacy for the chairmanship of the Federal Reserve Board due to strong opposition from a coalition group over several issues related to ideology, including his role to “push to deregulate Wall Street”” (see here).
[6] Interview with David Card, by Douglas Clement, The Region, Minneapolis Fed, December 2006 (interview conducted October 17, 2006).
between liberal/conservative camps in economics (especially regarding the possible distribution-
efficiency trade-off), the Borjas versus Card debate on immigration, and the ideologically charged
debates over the controversial book by Thomas Piketty (2014) or over Paul Romer (2015) and his
criticism that “mathiness lets academic politics masquerade as science.” Finally, recent results
from the Professional Climate Survey conducted by the American Economic Association also
highlight some of the challenges in the profession that are potentially driven by ideological bias.
For example, 58% of economists feel that they are not included intellectually within the field of
economics.[7] In addition, 25% of economists report that they have been discriminated against or
treated unfairly due to their research topics or political views.
There also exists a long-standing charge laid mainly by non-neoclassical economists
regarding the prevalence of ideological bias among neoclassical economists (e.g. Backhouse 2010,
Fine and Milonakis 2009, Fullbrook 2008, Frankfurter and McGoun 1999, Morgan 2015, Samuels
1992, Thompson 1997, Wiles 1979). For example, summarizing the views of the Post-Autistic
economics movement in France, Fullbrook (2003) argues that the economic profession is the
“opposite of pluralistic” and is “dogmatically tied to value-laden neoclassical orthodoxy.” Samuels
(1980) suggests that economics is much more a system of belief than it is a corpus of verified
logical positivist knowledge and that many uses of economics may represent only “the clothing
of normativism with the garments of science”. Rothbard (1960) criticizes what Hayek calls
“scientism” in economics and argues that it is a “profoundly unscientific attempt to transfer
uncritically the methodology of the physical sciences to the study of human action.” McCloskey
(2017) asserts that economics has deliberately clad itself in a garb of positivism, even when
scholars knew the critical importance of the historical, social, and political embeddedness of their
interventions.
There are also studies that point to the ideological biases in economic training. Based on a
survey of graduate students in economics, Colander (2005) raises concerns regarding how graduate
training in economics may lead to biases in students’ views. For example, he argues that graduate
training in economics induces conservative political beliefs in students. Allgood et al. (2012) also
find evidence that suggests that “undergraduate coursework in economics is strongly associated
with political party affiliation and with donations to candidates or parties”. Using laboratory
experiments, other studies find that compared to various other disciplines, economics students are
[7] AEA Professional Climate Survey: Main Findings. Released on March 18, 2019.
more likely to be selfish (Frank et al. 1993 and 1996, Frey et al. 1993, Rubinstein 2006), to free-
ride (Marwell and Ames 1981), to be greedy (Wang et al. 2012), and to be corrupt (Frank and Schulze 2000).
Frey et al. (1993) attribute these results to the economic training which “neglects topics
beyond Pareto efficiency […] even when trade-offs between efficiency and ethical values are
obvious.” Frank et al. (1993) highlight the exposure of students to the self-interest model in
economics where “motives other than self-interest are peripheral to the main thrust of human
endeavor, and we indulge them at our peril.” Rubinstein (2006) argues that “students who come to
us to 'study economics' instead become experts in mathematical manipulation” and that “their
views on economic issues are influenced by the way we teach, perhaps without them even
realising.” Stiglitz (2002) also argues that “[economics as taught] in America’s graduate schools
… bears testimony to a triumph of ideology over science.”
Surprisingly, however, there is very thin empirical evidence to rule out or establish the
existence of ideological views among economists. We are only aware of a few studies that examine
this issue to some extent. Gordon and Dahl (2013) use data from a series of questions from the
IGM Economic Expert Panel to examine to what extent prominent economists (51 economists)
from the top seven economics departments disagree about key economic issues. Their results
suggest that “there is close to full consensus among these panel members when the past economic
literature on the question is large. When past evidence is less extensive, differences in opinions do
show up.”[8] They also find that “there is no evidence to support a conservative versus liberal divide
among these panel members, at least on the types of questions included so far in the surveys.”[9]
Van Gunten et al. (2016) suggest that Gordon and Dahl’s focus on testing factionalism,
which reduces the social structure of the discipline to discrete and mutually exclusive group
memberships, is a poor model for economics. They instead suggest an alignment model according
to which “the field is neither polarized nor fully unified but rather partially structured around the
state versus market ideological divide.” They use principal component analysis to reanalyze the
data used by Gordon and Dahl (2013) and find that, in contrast to their findings, there exists a
[8] The variable that measures the size of the economic literature related to a certain question is constructed based on judgment calls by Gordon and Dahl (2013).
[9] They use two approaches here. First, they use different distance-based clustering methods to examine whether panel members are clustered into “two or even a few roughly equal-sized camps” based on their responses. As their second approach, they identify a subset of questions that are likely to generate disagreement among panel members, and then classify answers to these questions as either consistent with “Chicago price theory” or consistent with concerns regarding distributional implications or market failures. They then test whether participants’ responses are homogenous as a panel or are divided into two groups. They find evidence that supports the former.
latent ideological dimension clearly related to contemporary political debates and, more
importantly, show that ideological distance between economists is related to partisan and
departmental affiliations—as well as to the similarity of respondents’ informal social networks.
They argue that “our results suggest that paradigmatic consensus does not eliminate ideological
heterogeneity in the case of the economics profession. Although there is indeed substantial
consensus in the profession, we show that consensus and ideological alignment are not mutually
exclusive.” Finally, they suggest that “one implication of our findings is that consumers of
economic expertise must exercise healthy skepticism faced with the claim that professional
opinion is free of political ideology.”
Jelveh et al. (2018) use purely inductive methods in natural language processing and
machine learning to examine the relationship between political ideology and economic research.
More specifically, using the member directory of the AEA, they identify the political ideology (i.e.
Republican versus Democrat) of a subset of these economists by (fuzzily) matching their
information to publicly disclosed campaign contributions and petition signings (35 petitions). Next,
using the set of JSTOR and NBER papers written by these economists with an identified political
ideology, they estimate the relationship between ideology and word choice to predict the ideology
of other economists.[10] Finally, examining the correlation between authors’ predicted ideology and
their characteristics, they find that predicted ideology is robustly correlated with field of
specialization as well as various department characteristics. They suggest that results are
“suggestive of substantial ideological sorting across fields and departments in economics.”
Van Dalen (2019) uses an online survey of Dutch economists to examine the effect of
personal values of economists on (1) their positive or normative economic views; (2) their attitudes
towards scientific working principles; and (3) their assessment of assumptions in understanding
modern-day society. To measure the personal values of economists, he uses 15 questions “as
formulated in the European Social Survey (2014) and in some cases the World Values Survey
(2012)” and employs factor analysis to summarize the results into three dominant categories:
achievement and power, conformity, and public interest. He finds a significant variation in
opinions and a clear lack of consensus among economists when it comes to their views of both
[10] These estimates rely on the strong assumption that the relationship between word choice and ideology is the same among economists whose political ideology is identified through their campaign contributions and petition signings and those whose political ideology is unidentified.
positive and normative economic issues as well as their views regarding scientific working
principles in economics and their adherence to specific assumptions in economic theory. He also
finds clear evidence that personal values of economists have significant impacts on their views
and judgments in all these three areas.
Finally, Beyer and Pühringer (2019) use petitions signed by economists as an indicator for
ideological preferences to analyze the social structure of the population of politically engaged
economists and to uncover potentially hidden political cleavages. Their sample includes 14,979
signatures from 6,458 signatories using 68 public policy petitions, addressing a wide range of
public policy issues, and 9 presidential anti-/endorsement letters from 2008 to 2017 in the US.
They find “a very strong ideological division among politically engaged economists in the US,
which mirrors the cleavage within the US political system” with three distinct clusters: a non-
partisan (13 petitions), a conservative (27 petitions), and a liberal cluster (37 petitions). Estimating
the closeness centrality and the clustering coefficients of the petitions, they also find that while
petitions on some public policy issues (e.g. benefits of immigration, preserving charitable
deductions, etc.) exhibit non-partisan status, others on issues such as tax policy, labour market
policy and public spending exhibit strong partisan status. Moreover, analyzing the network
structure of 41 fiscal policy petitions, they find evidence of a strong ideological divide among
economists. They conclude that their results support the hypothesis that “political preferences also
imprint on economic expert discourses.”
3. Experimental Design
It is well understood that examining issues such as the impact of bias, prejudice, or
discrimination on individual views and decisions is very challenging, given the complex nature of
these types of behaviour. For example, the issue of discrimination in the labour market has long
been an issue of importance and interest to labour economists. However, as Bertrand and Duflo
(2017) put it, “it has proven elusive to produce convincing evidence of discrimination using
standard regression analysis methods and observational data.” This has given rise to a field
experimentation literature in economics that has relied on the use of deception, for example
through sending out fictitious resumes and applications, to examine the prevalence and
consequences of discrimination against different groups in the labour market (see Bertrand and
Duflo (2017) and Riach and Rich (2002) for a review. Also see Currie et al. (2014) as another
example of experimental audit studies with deception).
Given that answering our question of interest is subject to the same challenges, we take
a similar approach, namely using fictitious source attributions, in order to produce reliable results
(see Section 1 in our online appendix for a more detailed discussion on the use of deception in
economics). More specifically, we employ a randomized controlled experiment embedded in an
online survey. Participants are asked to evaluate a series of statements presented to them by
choosing one of the following options: strongly agree, agree, neutral, disagree, and strongly
disagree. They are also asked to choose a confidence level on a scale from 1 to 5 for their selected
answer. These statements are on a wide range of topics in economics and while they are mainly
from prominent (mainstream) economists, most of them challenge, to different extents, certain
aspects of mainstream economics discourse (see Table A15 in our online appendix).
Our choice of critical statements, as opposed to neutral or supportive statements such as
those in the IGM Expert Panel analyzed by Gordon and Dahl (2013), is based on the idea that
ideological reactions are more likely to be invoked, especially through changing sources, when
one encounters views that are in contrast to one’s own views/ideologies. Changing sources on views that
one agrees with is less likely to induce an ideological reaction. Van Gunten et al. (2016) also
emphasize this issue by arguing that the IGM Expert Panel survey is not designed to elicit ideology
and “includes many didactic items that illustrate the implications of textbook economics, such as
the benefits of free trade (3/13/12), the impossibility of predicting the stock market (10/31/2011),
the disadvantages of the gold standard (1/12/12), the state-controlled Cuban economy (5/15/12),
and the proposition that vaccine refusal imposes externalities (3/10/2015). Although these items
usefully illustrate mainstream professional opinion on particular issues, for the purposes of strictly
testing the hypothesis of aggregate consensus, these softball items have dubious value, because
few observers doubt that there is a consensus on such issues within this mainstream. Had the survey
included heterodox or Austrian economists, it may have encountered a wider range of opinion.”
Another issue to highlight is that most of our statements are not clear-cut one-dimensional
statements. Given the complex nature of ideological bias, it is more likely to arise, or to be revealed
by individuals, in situations where the issues discussed are more dense, complex and multi-
dimensional. This is partly due to the fact that ideological arguments are more easily concealed
when it comes to more complex multi-faceted issues. Moreover, we believe that many of the issues
that are a subject of disagreement and ideological divide in economics are not simple clear-cut
issues.
All participants in our survey receive identical statements in the same order. However,
source attribution for each statement is randomized without participants’ knowledge.[11] For each
statement, participants randomly receive either a mainstream source (Control Group), or a
relatively less-/non-mainstream source (Treatment 1), or no source attribution (Treatment 2). See
Table 1 for a list of sources for each statement. Table A15 in our online appendix provides a complete
list of statements and sources.
Participants who are randomized into treatment 2 for the first statement remain there for
the entire survey. However, those who are randomized into control group or treatment 1 are
subsequently re-randomized into one of these two groups for each following statement. Moreover,
those randomized into treatment 2 were clearly informed, before starting to evaluate the
statements, that “All the statements that you are going to evaluate are made by scholars in
economics, and do not necessarily reflect the views of the researchers. We have not provided the
actual sources of these statements to make sure they are evaluated based on their content only.”
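For concreteness, the assignment scheme just described can be sketched as follows. This is a schematic reconstruction in Python, not the survey platform's actual code, and the uniform assignment probabilities are assumed for illustration only.

import random

GROUPS = ["control", "treatment1", "treatment2"]  # mainstream source / less-/non-mainstream source / no source

def assign_sources(n_statements: int, rng: random.Random) -> list[str]:
    """Schematic sketch of the statement-level randomization described in the text.

    A participant drawn into treatment 2 (no source) on the first statement stays
    there for the entire survey; otherwise the participant is re-randomized between
    control and treatment 1 for every subsequent statement.
    """
    first = rng.choice(GROUPS)  # assignment probabilities are illustrative, not those used in the survey
    if first == "treatment2":
        return ["treatment2"] * n_statements
    assignments = [first]
    for _ in range(n_statements - 1):
        assignments.append(rng.choice(["control", "treatment1"]))
    return assignments

# Example: one simulated participant over the 15 statements used in the survey.
print(assign_sources(15, random.Random(0)))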
Three points are worth highlighting here. First, our dichotomization of the sources into
“mainstream” and “less-/non-mainstream” is meant to simplify and summarize the relative
ideological differences between sources, even though we believe these differences are more
appropriately understood as a continuum rather than a dichotomy. Of course, it is well-understood
that this classification does not readily apply to some sources, such as older ones (e.g. Marx or
Engels) or sources from other disciplines (e.g. Sandel or Freud) in the same way it applies to others.
However, to remain consistent and to avoid confusion for the reader, we stick to the same naming
convention for all sources.
Second, statements were carefully selected so that their attribution to fictitious sources is
believable by participants. All selected statements were also relatively obscure so the
misattribution would not be easily noticed by participants. Although we cannot rule out the
possibility that some participants might have identified some of the misattributions, this seems to
have been a very rare occurrence, at least based on the emails we received.[12] Nevertheless, participants’
[11] For the most part, the randomization was done across countries. Participants were randomized into different groups upon visiting the online survey. Since in most cases the survey was run concurrently in different countries, this led to randomization of subjects across countries.
[12] We received fewer than a dozen emails from people who had recognized the misattribution of a statement to a source. In all but one of these cases, the statement identified as being misattributed was statement 13, which is perhaps the least obscure statement used in our survey. All the emails we received, however, made it clear that this was perceived as a mistake in our survey and not part of our survey design.
identification of misattributions will lead to one of the following two outcomes: some people will
stop completing the survey, while some others will continue the survey to its completion. As we
explain in more detail in our data section, the first group is not part of our analysis since we restrict
our sample to only those who completed the entire survey. Any attrition introduced as a result will
affect our control and treatment 1 groups similarly since randomization of real and fictitious sources occurs
at the statement level and not at the individual level. As for the second group, they will either
consider the misattribution a mistake and will therefore evaluate the statement based on the real
source, or will become aware that this is part of our survey design to examine bias. Both of these
scenarios will lead to an underestimation of the true bias. In the first case, any potential bias the
treatment could have revealed is eliminated by the discovery of the true source, while in the second
case the self-awareness about the purpose of the study will likely induce people to reveal less bias.
It is very hard to imagine people exhibiting stronger bias if they find out about the fictitious sources
and the true objective of the survey.
Third, the two sources for each statement were carefully paired such that they can be
easily associated with commonly known but different views (such as different schools of thought,
political leanings, disciplines, attitudes towards mainstream economics, etc.). In addition, for each
source, we also provide information on their discipline, their affiliation, and the title of one of their
publications. This is to further accentuate the ideological differences between each pair, especially
in cases where sources might be less well known.[13] We therefore assume that most of our participants,
who are mainly academic economists, are able to identify the ideological differences between each
pair of sources. We believe it is quite reasonable to expect those with reasonable knowledge
of economics, which we assume economists possess, to be able to identify the ideological
differences between Marx versus Smith, Mill versus Engels, Summers versus Varoufakis, Keynes
versus Arrow, Levine versus Wolff, Romer versus Keen, Rodrik versus Shaikh, and Coase versus
Milberg, especially given the additional information provided for each source.
We also included a few pairs where these ideological differences are arguably less distinct,
such as Rodrik versus Krugman or Deaton versus Piketty. This is to examine whether smaller
[13] For example, while some economists might not know Richard Wolff or Anwar Shaikh, knowing that they are affiliated with the University of Massachusetts Amherst or the New School for Social Research, two famous heterodox schools in economics, makes an ideological reaction more likely. Similarly, the titles of the selected publications for each source, such as “Rethinking Marxism”, “The Crisis of Vision in Modern Economic Thought”, or “What Money Can't Buy: The Moral Limits of Markets”, serve the same purpose.
ideological differences, especially between two economists who are both mainstream, can also
induce ideological reactions among economists. Finally, three of the less/non-mainstream sources
selected are not economists (i.e. Freud, Sandel, Gigerenzer), although they are prominent scholars
in their fields, as evidenced by their reputations and affiliations. These sources are carefully paired with
equally prominent economists (Hayek versus Freud, Sen versus Sandel, Thaler versus Gigerenzer)
to empirically test the common view that “economists tend to look down on other social scientists,
as those distant, less competent cousins.”[14] The statements used in these three cases were carefully
selected so that they do not favour expertise in economics and scholars from these other disciplines
can equally weigh in on the discussion.
Although we believe our assumption regarding the ability of participants to identify the
ideological differences between provided sources is quite reasonable, given that a strong majority
of participants (92%) are academics with a PhD in economics, we nevertheless implement several
empirical tests to provide further support for this assumption. These tests are based on the idea that
if ideological differences between our mainstream versus less-/non-mainstream sources are
identified by participants, then we should observe that participants with different views and
ideologies react differently to these ideologically-different sources.
Findings from these tests are discussed in detail in Section 2 in our online appendix. To
summarize the results, we find clear evidence that conditional on observed characteristics,
participants with different ideologies (measured using their self-reported political orientation, and
political/economic typology) respond differently to statements when they are attributed to
ideologically-different less-/non-mainstream sources. More specifically, those whose political
ideology is more oriented towards right, or are less progressive, or are stronger supporters of the
free market, are significantly less likely to agree with a statement, even with a mainstream source.
More importantly, this disagreement systematically and significantly widens when the source is
less-/non-mainstream, but stays unchanged when no source is provided. This suggests that
participants in our sample are able to locate the ideological coordinates of the “mainstream” versus
“less-/non-mainstream” sources, and react to them according to their own personal views and
ideologies. However, when the differences are eliminated by entirely removing the source,
differences by participants’ ideology also disappear.
[14] Dani Rodrik, 2013 interview with the World Economic Association.
4. Data
The target population for this study was economists from 19 different countries.[15] We
used the Economics Departments, Institutes and Research Centers in the World (EDIRC) website,
which is provided by the Research Division of the Federal Reserve Bank of St. Louis, to identify
economic institutions (economics departments, government agencies, independent research
institutions, and think tanks) in each target country. We then used the website of each institution
to manually extract the email addresses of economists in each institution. The extracted email
addresses were then used to send out invitations and reminders to ask economists to participate in
the survey. The survey was conducted between October 2017 and April 2018. While the survey’s
exact opening and closing dates were different for some countries, the survey was open in each
country for approximately two months.
In many cases during email extraction, especially in the case of multidisciplinary
departments, research institutions, and government agencies, it was not clear from the institution’s
website which listed faculty members or researchers were economists and which ones held a
degree from other disciplines. In these cases, we asked our team of research assistants to extract
all listed email addresses. Our rationale was that sending email invitations to some non-economists
was clearly better than risking the exclusion of some economists, especially since such exclusion could
be systematically related to the type of institution and lead to sample selection. We sought to ensure,
however, that non-economists who received the survey invitation filtered themselves out by making
it clear in our email invitation, as well as on the first page of the survey, that the target population
of the survey is economists.[16]
As a result, we are not able to provide a reliable estimate of the participation rate in our
survey since that would require the total number of economists in the target population, which is
considerably smaller than the total number of email addresses we extracted online, for the reason
discussed above. In addition, this calculation is further complicated by the fact that upon sending
email invitations we received a considerable number of auto-replies from people who had left their
institution, were on sabbatical, parental, or sick leave, or temporarily had no access to their email.
[15] These countries include Australia, Austria, Brazil, Canada, Denmark, Finland, France, Germany, Ireland, Italy, Japan, the Netherlands, New Zealand, Norway, South Africa, Sweden, Switzerland, the UK, and the US. The entire (English) survey was translated into French, Italian, Japanese, and Brazilian Portuguese to allow participants from the corresponding countries to complete the survey in their own native language if they chose to.
[16] As expected, we received many emails from faculty members who were not economists (historians, statisticians, sociologists, political scientists, engineers, etc.) asking us to remove them from the email list.
With these in mind, a very rough estimate of the participation rate in our survey is around 15%.[17]
Although we cannot measure a reliable participation rate for our survey for the reasons discussed
above, our summary statistics (Table A3 in our online appendix) suggest that we have a very
diverse group of economists in our final sample. We have also reported the distribution of
responses by institution of affiliation in the US, Canada, and the UK in figures A4 to A6 in our
online appendix as examples to show that participants in our survey come from a very diverse
group of institutions in each country and are not limited to certain types of institutions.
Participants in our survey were required to complete each page in order to proceed to the
next page. As a result, they could not skip evaluating some statements. However, participation in
the survey was entirely voluntary and participants could choose to withdraw at any point during
the survey, without providing any reason, by simply closing the window or quitting the browser.
Participants were assured that any responses collected up until the point of withdrawal would not be
included in the study. For this reason, we are not allowed, by the terms of our ethics approval, to
use data collected from people who did not complete the entire survey. As a result, we have
restricted our sample to participants who completed the entire survey.[18] Our final sample includes
2,425 economists from 19 different countries. We run several tests to ensure that our focus on
participants who completed the entire survey does not introduce sample selection bias in our results
and we find no evidence of such a bias. See Section 3 in our online appendix for more details.
The primary dependent variable in our analysis is the reported agreement level with each
statement. In our baseline analysis, we estimate linear regression models in which the agreement
variable is coded as 1 for “strongly disagree”, 2 for “disagree”, 3 for “neutral”, 4 for “agree”, and
5 for “strongly agree”. We also estimate ordered logit models as a robustness check. The agreement
level of participant $i$ with statement $s$ is represented by the variable $A_{is}$ and is modeled as:

$A_{is} = \beta_0 + \beta_1 T1_{is} + \beta_2 T2_{is} + X_i'\gamma + \varepsilon_{is}$        (1)

where $T1_{is}$ and $T2_{is}$ are indicators that are equal to one if for statement $s$ participant $i$ received a
less-/non-mainstream source, or no source, respectively. The estimated coefficients of interest are
[17] This estimate was obtained by simply dividing the number of participants by the total number of email addresses, without excluding non-economists or those with their emails set to auto-reply. It therefore underestimates the true participation rate in our survey.
[18] A total of 3,288 economists participated in our survey. There were 454 participants who quit the survey at the very beginning (in the questionnaire section where they were asked to provide background information). Another 409 people withdrew from the survey at some point after they started evaluating the statements. See Table A5 in our online appendix for more details.
$\beta_1$ and $\beta_2$, which measure the average difference in agreement level between those who randomly
received a less-/non-mainstream source or no source, respectively, compared to those who
received a mainstream source. We also include several individual-level control variables ($X_i$) in
some of our specifications.[19] However, if our randomization is carried out properly, including
these control variables should not affect our results (and as reported later on we find that they
don’t).
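To make the estimation of equation (1) concrete, a minimal sketch in Python (statsmodels) follows, assuming a hypothetical long-format file evaluations.csv with one row per participant-statement evaluation and illustrative column names; it is not the code used to produce the tables.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant-statement evaluation.
# 'agreement' is the 1-5 agreement score; 't1' and 't2' are the indicators T1_is and T2_is;
# 'gender', 'country', and 'research_area' stand in for the control vector X_i.
df = pd.read_csv("evaluations.csv")

# Baseline specification of equation (1): agreement regressed on the two treatment dummies plus controls.
model = smf.ols("agreement ~ t1 + t2 + C(gender) + C(country) + C(research_area)", data=df)

# Standard errors clustered at the participant level, as in the robustness check reported in Table A12.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["participant_id"]})

# beta_1 and beta_2: average change in agreement relative to the mainstream-source control group.
print(result.params[["t1", "t2"]])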
5. Results
5.1. Main Findings
Figure 1 displays the probability of different agreement levels for each statement as well
as their relative entropy index which measures the comparative degree of consensus with each
statement. The relative entropy index is derived from information theory and has a theoretical
range of 0 for perfect consensus and 1 for no consensus at all.[20] Results reported in Figure 1 suggest
that there is a significant dissensus on the wide variety of issues evaluated by economists. Despite
this consistent pattern of dissensus, there exists some variation across statements in the relative
entropy index with economists exhibiting the highest degree of consensus on statements 1 and 8
and the lowest degree of consensus on statements 5, 6, 9, 11, and 12. We find similar patterns if
we restrict the sample to economists who only received mainstream sources or no sources. As we
discussed before, our statements either deal with different elements of the mainstream economics
paradigm, including its methodology, assumptions, and the sociology of the profession (e.g.
Statements 3, 6, 7, 11, 14, 15), or issues related to economic policy (e.g. Statements 1, 2, 4, 5, 10).
Therefore, the significant disagreement evident in Figure 1 highlights the lack of both paradigmatic
and policy consensus among economists on evaluated issues.
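As a concrete illustration of how the relative entropy index reported in Figure 1 is computed (see footnote [20]), a minimal sketch follows; the response shares in the example are made up.

import numpy as np

def relative_entropy(shares):
    """Relative entropy index: the entropy of the five response shares divided by the
    maximum possible entropy, ln(5). Equals 0 under perfect consensus and 1 when
    responses are spread uniformly across the five categories."""
    p = np.asarray(shares, dtype=float)
    p = p[p > 0]  # treat 0 * ln(0) as 0
    return float(-np.sum(p * np.log(p)) / np.log(5))

# Made-up response shares (strongly disagree, ..., strongly agree) for two hypothetical statements.
print(relative_entropy([0.02, 0.03, 0.05, 0.30, 0.60]))  # more concentrated responses: lower index
print(relative_entropy([0.20, 0.20, 0.20, 0.20, 0.20]))  # uniform responses: index equals 1.0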
These results stand in sharp contrast with Gordon and Dahl (2013) who find strong
consensus among their distinguished panel of economists on different policy-oriented economic
questions from IGM Expert Panel survey. One potential explanation for this stark difference could
be the critical nature of our statements, as well as the fact that they are not clear-cut and one-
[19] Our primary control variables include gender, PhD completion cohort (15 categories), current status (8 categories), country (19 categories), and research area (18 categories). Additional control variables used in some specifications include age cohort (13 categories), country/region of birth (17 categories), English proficiency (5 categories), department of affiliation (8 categories), and country/region where PhD was completed (16 categories). See Table A3 in the online appendix for more detail on the different categories.
[20] The entropy index is given by $E = -\sum_{k=1}^{5} p_k \ln p_k$, where $p_k$ denotes the observed relative frequency of each of our five response categories. The relative entropy index is then calculated by dividing the entropy index by the maximum possible entropy (i.e. $\ln 5$).
dimensional. However, several other studies that use positive and clear-cut statements on both
public policy and economic methodology issues also find results that do not support the strong
consensus found by Gordon and Dahl (2013) and exhibit patterns of dissensus that are more similar
to our findings (e.g. Beyer and Pühringer 2019; Fuchs 1996; Frey et al. 1984; Fuchs, Krueger and
Poterba 1998; Mayer 2001; van Dalen 2019; Whaples 2009). Van Gunten et al. (2016) suggest
that the strong consensus found by Gordon and Dahl might be driven by what they refer to as
“softball questions”.[21] Consistent with this idea, Wolfers (2013) also suggests that rather than
testing the professional consensus among economists, “a founding idea of the IGM Expert Panel
seems to be to showcase the consensus among economists.” It is also important to note that Gordon
and Dahl (2013) use a very small and highly selective sample of economists, which is a serious
limitation for the purpose of demonstrating consensus among economists (Van Gunten et al. 2016).
Table 2 displays the results from linear models that estimate how these agreement levels
are influenced by our two treatments. Column (1) uses a simplified model with no additional
control variables, while columns (2) to (4) add personal and job characteristics as well as individual
fixed effects.[22] We find clear evidence that changing source attributions from mainstream to less-
/non-mainstream significantly reduces the agreement level by 0.26 points. This is around one-
fourth of a standard deviation or a 7.3 percent reduction in an average agreement level of 3.6 in
our control group. Our results also suggest that removing mainstream sources (i.e. providing no
source) significantly reduces the agreement level by 0.41 points (an 11.3 percent reduction
which is equal to 35% of a standard deviation).
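For reference, these effect-size conversions follow from simple arithmetic on the rounded estimates (0.26 and 0.41 points) and the control-group mean of 3.6; the figures in the text use the unrounded estimates, hence the small discrepancies:

\[
\frac{0.26}{3.6} \approx 7.2\%, \qquad \frac{0.41}{3.6} \approx 11.4\%, \qquad \frac{0.41}{0.35} \approx 1.17 \ \text{(implied standard deviation of the 1--5 agreement scale)}.
\]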
As estimates reported in Columns (2), (3), and (4) suggest, controlling for different
individual characteristics and individual fixed effects does not change our results, which provides
further support that our randomization protocol was implemented properly. In addition, results
from our specification with individual fixed effects suggest that our estimate of treatment 1 is
unlikely to suffer from sample selection bias due to non-random attrition across treatment groups.
Finally, as the results reported in Table A12 in our online appendix suggest, estimating the same
specifications while clustering the standard errors at the individual level does not have any
appreciable impact on our results. More specifically, clustering has virtually no impact on standard
21. Van Gunten et al. (2016) note that "overall, on 30 percent of [IGM Expert Panel] items, not a single respondent took a position opposite the modal view (i.e., agreed when the modal position was disagree or vice versa)."
22. Refer to Table A4 in our online appendix for the estimated coefficients of our control variables.
errors for treatment 1, while slightly increasing standard errors for treatment 2.[23]
However, the t-
statistics for estimates of treatment 2 are so large (around 27 before clustering) that this slight
increase has virtually no impact on the outcomes of our hypothesis testing.
While OLS estimates are perhaps easier to summarize and report, given the discrete
ordered nature of our dependent variable, a more appropriate model to use in this context
is an ordered logit model. Another advantage of using ordered logit is that it allows us to examine
whether our treatments have polarizing effects. More specifically, those who (strongly) agree with
a statement that is critical of mainstream economics are potentially more likely to be less-/non-
mainstream, and therefore treatment 1 might induce higher agreement among them. On the other
hand, those who (strongly) disagree with the same statement are potentially more likely to be
mainstream, and therefore treatment 1 might induce higher disagreement among them.[24]
These
polarizing effects will not be captured by our OLS model.
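A minimal sketch of such an ordered logit estimation, using the same hypothetical data layout as in the linear sketch above (OrderedModel is available in recent versions of statsmodels):

import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Same hypothetical data layout as above; agreement takes the ordered values
# 1 ("strongly disagree") to 5 ("strongly agree").
df = pd.read_csv("evaluations.csv")

# Treatment indicators; the mainstream-source control group is the omitted category.
exog = pd.get_dummies(df["treatment"], prefix="treat", drop_first=True).astype(float)

result = OrderedModel(df["agreement"], exog, distr="logit").fit(method="bfgs", disp=False)
print(result.summary())

# Average predicted probability of each of the five response categories by treatment arm,
# from which percentage-point changes of the kind discussed below can be computed.
probs = pd.DataFrame(result.predict(exog))
print(probs.groupby(df["treatment"].values).mean())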
Table 3 reports the estimates from our ordered logit model. Overall, we find results similar
to those reported in Table 2 using OLS.[25]
We find that changing sources from mainstream to less-
/non-mainstream, or to no source, significantly increases (decreases) the probability of
disagreement (agreement) with statements. More specifically, we find that providing a less-/non-
mainstream source on average increases the probability of “strong disagreement” by 2.2
percentage points or 44 percent, increases the probability of “disagreement” by 5 percentage points
or 30 percent, increases the probability of reporting “neutral” by 2.1 percentage points or 12.6
percent, reduces the probability of “agreement” by 3.6 percentage points or 9 percent, and reduces
the probability of “strong agreement” by 5.7 percentage points or 27 percent. This suggests that
regardless of the extent to which participants agree or disagree with a statement when the attributed
source is mainstream, changing the source to less-/non-mainstream significantly decreases
(increases) their agreement (disagreement) level. In addition, while the effect of treatment 1 on
23. These changes are consistent with Abadie et al. (2017), who suggest clustering is relevant "when clusters of units, rather than units, are assigned to a treatment."
24. This potential heterogeneous effect might also be responsible for the larger effect estimated for treatment 2. For example, providing a less-/non-mainstream source might generate biases in two directions, with the positive effect on agreement partially canceling out the negative effect. Removing the source, however, might only introduce a negative effect. As a result, our treatment 2 will have a larger effect than treatment 1. The ordered logit model will allow us to examine this possibility as well.
25. We also estimate multinomial logit models as a robustness check. Results from these models are very similar to those from ordered logit models.
increasing the probability of (strong) disagreement is larger relative to its impact on reducing the
probability of (strong) agreement, we find no evidence of polarizing effects.
Moreover, similar to our linear estimates, we also find larger effects in the same direction
when no sources are provided for the statements. Also, in line with our linear estimates, we get
almost identical results when we include control variables in our specification. Several other robustness checks (see Section 5 in our online appendix) also confirm that our results are robust.
One point is worth emphasizing here. The lack of polarizing effects, which we discussed
above, does not necessarily rule out the possibility of heterogeneous treatment effects. More
specifically, our treatments could theoretically induce two different effects in two directions. Some
participants might agree relatively less with the statements when sources are less-/non-mainstream
(i.e. a negative treatment effect), while some others might agree relatively more (i.e. a positive
treatment effect). However, since a strong majority of our participants are mainstream economists,
as most academics in the discipline are, the former effect is much more likely to be present than
the latter. We also examine the potential existence of positive treatment effects in our sample.
More specifically, we run regressions similar to those reported in Table 2 where we restrict our
sample to only self-reported heterodox economists, or economists who self-identify as very left. If
positive treatment effects are likely to occur, we should find some evidence of it among these
groups who are potentially more prone to it. However, we find no evidence of a positive treatment
effect among any of these subgroups.
Nevertheless, since our OLS and logit estimates are average treatment effects, they
measure the average of the two effects, if both are present. Therefore, to the extent that some
participants might agree more with less-/non-mainstream sources, although we find no evidence
of it, our estimated treatment effects should be interpreted as underestimations of the true effect.
Theoretically, the true effect of interest would be the sum of the absolute value of the two effects,
if one is interested in measuring potential bias both towards and against mainstream. Alternatively,
it could be only limited to the negative effect, if one is interested in specifically measuring bias
towards the mainstream.[26]
26. The justification for focusing on the negative effect as the effect of interest could be that, while there might also exist biases against mainstream views in economics, it is reasonable to focus on bias towards the mainstream since it is by far the most dominant and the most influential discourse in economics.
5.2. Ideological/Authority Bias or Unbiased Bayesian Updating?
As our results clearly suggest, the significant reduction in agreement level we find is
distinctly driven by changing the source attributions, which influences the way economists
perceive the statements. However, one could argue that these results might not necessarily be
driven by ideological bias induced by attributed sources and the extent to which their views are
aligned with mainstream economics. Therefore, in order to organize and examine different
potential explanations for our results, we use Bayesian updating as a guiding framework that fits
reasonably well with different elements of our experiment.
More specifically, our experiment involves evaluating statements in an environment with
imperfect information about the validity of the statements. This imperfect information could be
due to not having enough knowledge about the subject, lack of conclusive empirical evidence, the
statements being open to interpretation, etc. Bayesian updating models suggest that in such an
environment with imperfect information, individuals make judgement using a set of prior beliefs
that are updated using Bayes’ rule as new information arrives. In the context of our study, this
translates into prior beliefs held by economists on each statement's validity, which are then updated using a signal they receive regarding the validity of the statement in the form of an attributed source.
It is important to note, however, that the process of updating the priors could be either biased or unbiased. Bayes' Theorem does not say anything about how one should interpret the signals
received in the process of updating priors and therefore does not preclude the influence of cognitive
biases or ideological biases in interpreting signals and updating priors (Gerber & Green 1999,
Bartels 2002, Bullock 2009, MacCoun & Paletz 2009, Fryer et al. 2017). Unbiased Bayesian
updating requires the processing of information to be independent from one’s priors (Fischle 2000,
Taber and Lodge 2006, Bullock 2009). In contrast, under ideologically-biased Bayesian updating,
one selectively assigns more weight to information that is more likely to confirm one’s ideological
views (Bartels 2002, Taber and Lodge 2006, Gentzkow and Shapiro 2006).
In the context of our study, lower agreement level associated with less-/non-mainstream
sources could be attributed to unbiased Bayesian updating under the assumption that mainstream
sources systematically provide objectively more credible signals regarding the validity of the
statements compared to less-/non-mainstream sources. Alternatively, mainstream sources could be
perceived as more credible not based on objective evaluation unrelated to priors, but rather based
on the fact that mainstream sources are more likely to confirm a mainstream economist’s views.
22
The distinction with unbiased updating therefore is that this interpretation is not made objectively
and independent from prior beliefs, but that it is made based on the fact that mainstream (less-
/non-mainstream) sources are more likely to confirm (disconfirm) mainstream views.
There exists extensive evidence that suggests individuals tend to agree more with findings
or views that are more (less) likely to confirm (disconfirm) their beliefs (e.g. MacCoun 1998, Gerber and Green 1999, Bartels 2002, Bullock 2009, Hart et al. 2009, MacCoun and Paletz 2009, Fryer et al. 2017, and others). This is broadly referred to as confirmation bias. Beliefs that one seeks to
confirm have different natures and could be formed by ingrained, ideological or emotionally
charged views. If beliefs that an individual is trying to confirm or validate are shaped by her
ideological views, we are dealing with what is often referred to as ideological bias.[27]
For example, MacCoun and Paletz (2009) conducted an experiment to examine how
ordinary citizens evaluate hypothetical research findings on controversial topics. They find that,
when findings challenge their prior beliefs, people are more skeptical of the findings. Their results
also suggest that “citizens, especially those holding conservative beliefs, tended to attribute studies
with liberal findings to the liberalism of the researcher, but citizens were less likely to attribute
conservative findings to the conservatism of the researcher.” They interpret this as effects of
“partisanship and ideology”.
Determining whether less/non-mainstream sources are more or less objectively credible
than mainstream sources is of course extremely difficult since both groups include individuals who
are prominent scholars in their fields with views that put them at different distances, sometimes
relatively close and sometimes rather far, to mainstream economics. Therefore, one main problem
with unbiased Bayesian updating as a potential explanation is that there are no objective measures
that could be used to assess the credibility of our sources. Any claim of systematic differences
between these sources in terms of credibility is inevitably based on subjective metrics that correlate
with where one stands relative to mainstream views and its academic norms. It is exactly for this
reason that traditional norms of modern science suggest that any serious evaluation of an argument
27. As Eagleton (1991) suggests, the term "ideology" has been used in different ways by different social scientists. This is partly due to the complex and multi-dimensional nature of the concept, which does not yield itself very easily to a neat definition. We therefore see little advantage in providing a narrow definition by singling out one trait among a complex of traits. It is the complex itself that we are interested in, and in this paper, we examine a clearly-defined manifestation of this complex notion.
should be based on the content of the argument as opposed to the source attributed to it (Merton
1973, MacCoun 1998).
In fact, economists in our sample strongly agree with this view. More specifically, as part
of the questionnaire that appears at the end of the survey, we ask participants to express their own
views regarding several issues including how they believe “a claim or argument should be
rejected?” A strong majority of participants (around 82%) report that “a claim or argument should
be rejected only on the basis of the substance of the argument itself.” Around 18% of participants
report that “a claim or argument should be rejected based on what we know about the views of the
author or the person presenting the argument as well as the substance of the argument.” There
exists only a tiny minority (around 0.5%) who report “a claim or argument should be rejected
based on what we know about the views of the author or the person presenting the argument."[28]
Despite the aforementioned issue related to unbiased Bayesian updating as a potential
explanation, we nevertheless propose three empirical tests that would allow us to examine the
validity of biased versus unbiased updating. See Section 4 in our online appendix for a Bayesian
updating model we developed to more formally inform these empirical tests.
5.2.1. Test 1: Differences in Confidence Level
As our first test, we examine how changing or removing attributed sources influences the
participants’ reported level of confidence in their evaluations. More specifically, we estimate
models similar to those reported in Table 2 but instead use as our dependent variable the level of
confidence reported for one’s evaluation of a statement (on a scale from 1 to 5, with 1 being least
confident and 5 being most confident).[29]
If economists agree more with statements attributed to
mainstream sources because mainstream sources are objectively more credible/reliable, then this
higher degree of credibility should also inspire more confidence in the evaluation of statements
attributed to mainstream sources. In the language of Bayesian updating, assuming that mainstream
sources are objectively more credible would mean that they are also considered more likely to be
28. We also examine differences in our estimated treatment effects between those who claim statements should be evaluated only based on their content versus those who claim both the content and the views of the author matter. We find almost identical estimates for treatment 1 (-0.22 versus -0.20) and treatment 2 (-0.34 versus -0.35) for both groups. This suggests that our results are not driven by one of these two groups.
29. See Figure A7 in our online appendix for the probability of different confidence levels by statement.
unbiased and more precise.[30]
It therefore follows that the precision of the posterior beliefs, proxied
by reported confidence, will be higher when signals are considered more precise (more credible).[31]
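To see this logic more concretely, consider a standard normal-normal updating sketch (an illustration only, not necessarily identical to the model in Section 4 of the online appendix): with a prior belief about a statement's validity $\theta \sim N(\mu_0, 1/\tau_0)$ and a source signal $s \sim N(\theta, 1/\tau_s)$,

\[
\mu_{\text{post}} = \frac{\tau_0 \mu_0 + \tau_s s}{\tau_0 + \tau_s},
\qquad
\tau_{\text{post}} = \tau_0 + \tau_s .
\]

A source perceived as more credible corresponds to a larger signal precision $\tau_s$, which mechanically raises the posterior precision $\tau_{\text{post}}$ (the logic behind Test 1), while a larger prior precision $\tau_0$, as for experts, lowers the weight $\tau_s/(\tau_0+\tau_s)$ placed on the signal (the logic behind Test 2 below).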
As the results reported in Table 4 suggest, however, altering or removing source
attributions does not influence participants’ confidence in their evaluations (the estimated
coefficients are quantitatively very small and statistically insignificant).[32]
These results therefore
are in contrast with unbiased updating, which is expected to be associated with higher confidence
in evaluations when statements are attributed to mainstream sources. They are consistent, however,
with ideologically-/authority-biased updating that could affect the level of agreement with little or
no impact on confidence level. For example, an ideologically-biased individual is likely to put the
same level of confidence in accepting a story if it comes from Fox News and in rejecting the same story if it comes from the New York Times. That is, ideological bias affects one's judgement, often without casting doubt on one's confidence in that judgement.
5.2.2. Test 2: Differences Between Experts and Non-Experts
As our second test, we examine whether our estimated treatment effects are different
among experts and non-experts. It is reasonable to assume that experts have more precise priors
regarding the validity of a statement that is in their area of expertise. Therefore, under unbiased updating, experts should rely more heavily on their own priors, and less on the signals, than non-experts in forming their evaluations. As a result, changing the signal from
mainstream to less-/non-mainstream is expected to have a smaller effect on experts relative to non-
experts under unbiased updating.[33]
In contrast, under ideologically-biased updating, mainstream
experts’ stronger views on a subject, and/or their higher ability at interpreting how mainstream or
non-mainstream a source is, could create a stronger bias in favour of mainstream sources among
experts. Therefore, in contrast to unbiased updating, which results in smaller treatment effects for
experts, under biased updating the combination of the two opposing forces (i.e. more precise priors
which give rise to smaller treatment effects, and stronger mainstream biases which produce larger
30. While unbiasedness and precision are two separate concepts from a statistical point of view, the two concepts are likely to be inter-related in the context of our study. It seems reasonable to expect the perception regarding a source's degree of unbiasedness to also affect the perception regarding the source's degree of precision. For example, if a source is considered to be biased to some extent, it is hard to know the exact degree of bias to subtract from the signal in order to obtain an unbiased signal with which to update priors. Therefore, not knowing the exact level of bias is likely to affect the perception regarding the source's level of precision, with more biased sources being considered less precise.
31. See Implication 1 of the Bayesian updating model developed in Section 4 of the online appendix for a formal proof.
32. Results from ordered logit models (not reported here) also suggest that our treatments have no impact on confidence level. Our estimated differences in predicted probabilities are all small and statistically insignificant for all five categories.
33. See Implication 2 of the Bayesian updating model in Section 4 of the online appendix for a formal proof.
treatment effects) could result in similar or larger treatment effects among experts compared to
non-experts.
We create an indicator that is equal to 1 if a participant’s reported area of research is more
likely to be relevant to the area of an evaluated statement and zero otherwise (see Table A14 in
our online appendix for more details). If our categorization reasonably separates experts from non-
experts, one would expect that those categorized as experts will be on average more confident with
their evaluations, especially when the source is mainstream. We find this to be true in our sample,
where, conditional on observed characteristics, experts in the control group are 12% of a standard
deviation more confident with their evaluations compared to similar non-experts.
Next, we estimate linear models similar to those reported in Table 2 where we allow our
treatment effects to vary between experts and non-experts. Results from this model are reported in
Table 5. First, and conditional on observed characteristics, we find no difference in average
agreement levels between experts and non-experts in the control group. In other words, one’s
expertise in the subject area does not affect one's evaluation of a statement when sources are
mainstream. Second, we find that both treatments similarly reduce the agreement level among
experts and non-experts. These findings are therefore inconsistent with unbiased updating, which suggests that a higher level of expertise should result in smaller treatment effects. However, they
are consistent with biased updating which could result in similar treatment effects between experts
and non-experts as discussed above.
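A sketch of such an interacted specification, again with hypothetical variable names (the expert indicator is the one described above):

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: agreement, treatment (0/1/2), participant_id, expert
# (1 if the participant's reported research area is relevant to the statement),
# plus the controls used in the linear sketch earlier.
df = pd.read_csv("evaluations.csv")

# Table 5-style specification: interacting the treatment indicators with the expert
# dummy lets the treatment effects differ between experts and non-experts.
interacted = smf.ols(
    "agreement ~ C(treatment) * expert + C(gender) + C(phd_cohort) + C(country)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["participant_id"]})

# The C(treatment)[T.1]:expert and C(treatment)[T.2]:expert terms capture how much
# each treatment effect differs for experts relative to non-experts.
print(interacted.params.filter(like="expert"))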
One could argue however that, apart from the precision of the priors and stronger biases,
experts and non-experts may also systematically differ along other dimensions, which would make
the interpretation of the above results more complicated. For example, those categorized as experts
might have a different likelihood to be mainstream, which could in turn induce different treatment
effects, or their priors may be more centered, giving them more space to update up or down. We
address these potential caveats in Section 6 in our online appendix and find results similar to those
reported above.
5.2.3. Test 3: Differences by Political Orientation
As our third and perhaps most important test, we examine whether our estimated treatment
effects vary across different groups with different political orientations/ideologies. If the reduction
in agreement level associated with changes in sources is based on objective differences in
credibility of the sources, by definition this objective difference should not depend on one’s
political views. As a result, our estimates should not vary systematically by political orientation
under unbiased updating. In fact, if anything, those on the right should be less affected by changing
the source attributions since they are significantly more likely to report that a statement should be
evaluated based on its content only. More specifically, among those at the far right, 86.7 percent
of participants report that in evaluating a statement only its content matters, while 13.3 percent
report that both content and author matter. In contrast, among those at the far left, these numbers
are 73.8 percent and 25.1 percent, respectively.
In contrast, evidence suggesting that the effect of treatment 1 varies systematically by
political orientation is consistent with ideological bias. More specifically, our less-/non-
mainstream sources often represent views or ideologies that are (politically) to the left of
mainstream sources.[34] Therefore, if our results are driven by ideological bias, the reduction in agreement level should be larger among those more to the right of the political spectrum, since altering the sources creates a larger contrast with their prior beliefs, which will in turn induce a larger ideological reaction among this group.[35]
We estimate linear models similar to Equation (1) above where we allow the effect of each
treatment to vary by political orientation. Political orientation is reported by participants on a scale
from -10 (far left) to 10 (far right). We use the reported values to group people into 5 categories.[36]
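As an illustration, the grouping in footnote 36 and the interacted specification could be implemented along the following lines (file and variable names are hypothetical):

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: agreement, treatment, participant_id, and
# political_orientation on the reported -10 (far left) to 10 (far right) scale.
df = pd.read_csv("evaluations.csv")

# Grouping based on the cutoffs in footnote 36:
# Far left = [-10, -7], Left = [-6, -2], Centre = [-1, 1], Right = [2, 6], Far right = [7, 10].
df["pol_group"] = pd.cut(
    df["political_orientation"],
    bins=[-10.5, -6.5, -1.5, 1.5, 6.5, 10.5],
    labels=["far_left", "left", "centre", "right", "far_right"],
)

# Table 6-style specification: treatment effects allowed to vary by political orientation.
by_orientation = smf.ols("agreement ~ C(treatment) * C(pol_group)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["participant_id"]}
)
print(by_orientation.summary())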
Results from this model are reported in Table 6. Estimates reported in Column (1) suggest that
there exists a very significant difference in the average agreement level among economists with
different political orientations, even when the sources are all mainstream. For example, the
average agreement level among economists categorized as left is one-fourth of a standard deviation
lower than those categorized as far left. This already large difference increases consistently as we
move to the far right, reaching a difference of 60% of a standard deviation, which is an increase of
150 percent. This strong effect of political orientation, which does not change after controlling for
a wide set of observed characteristics, seems to be a clear manifestation of ideological bias. These
results are also consistent with other studies that suggest economists with different political
34. Our results, discussed in detail in Section 2 of our online appendix and briefly in Section 4 above, clearly demonstrate this.
35. See Implication 3 of the Bayesian updating model in Section 4 of the online appendix for a more formal treatment of this proposition.
36. Far left = [-10, -7], Left = [-6, -2], Centre = [-1, 1], Right = [2, 6], Far Right = [7, 10].
leanings adhere to different views regarding different economic issues and policies (Beyer and
Pühringer 2019; Horowitz and Hughes 2018; Mayer 2001).
Estimates reported in Column (2) suggest an even more drastic effect by political
orientation, which further reinforces the influence of ideological bias. More specifically, for those
on the far left, altering the source only reduces the average agreement level by 4.6% of a standard
deviation, which is less than one-fourth of the overall effect we reported in Table 2 (22% of a
standard deviation). However, moving from the far left to the far right of the political orientation
distribution consistently and significantly increases this effect, with the effect of altering the source
being almost eight times as large (a 678 percent increase) at the far right compared to the far left (-0.36 versus -0.046, respectively). We reject the null hypothesis that the effect at the far left (left) is equal to the
effect at the far right at the 0.1% (5%) significance level. We also reject the null that the effects are
equal across all five groups (F-statistic = 17.27) or across all four groups excluding the far left (F-
statistic = 3.12).
Our estimates reported in Column (3) suggest that for every given category of political
orientation, removing the sources has a larger effect on reducing the agreement level compared to
altering the sources. However, this difference becomes smaller as we move towards the right, suggesting that for those more oriented towards the right, receiving a less-/non-mainstream source or no source at all makes little difference, compared to those more oriented towards the left. Another
important pattern to highlight is that while the estimated effect of treatment 1 consistently and very
significantly increases as we move to the far right, we fail to reject the null that the estimated effect
of treatment 2 is similar across all five groups.
The lack of difference in the estimated effect of treatment 2 by political orientation could
be due to the fact that removing the source induces what is known as authority bias. Authority bias
is the tendency to assign more credibility to views that are attributed to an authority figure
(Milgram 1963). Under ideological bias, individuals’ interpretation of the signals and their level
of agreement with statements are influenced by their ideological views, including their political
orientation/ideology. Therefore, as discussed before, the significant effect of political orientation
on the estimated effect of treatment 1 is consistent with the existence of ideological bias. In
contrast, under authority bias, it is the presence/absence of an authority figure that affects
individuals’ interpretation of signals and agreement level, while they might not hold any particular
ideological views on the subject, and their perception of authority need not be ideological.[37]
This implies that, under
authority bias, political orientation does not necessarily influence the agreement level, which is
consistent with our estimated effect of treatment 2.
Given that we found a robust and significant estimated effect for both treatment 1 and
treatment 2, up until this point, we could not rule out that both ideological bias and authority bias
contribute to each treatment effect. However, we find that the estimated effect of treatment 2 does
not follow the same meaningful pattern by political orientation as treatment 1. This is consistent
with the aforementioned distinction between ideological and authority biases. This suggests that
there are important differences in underlying forces driving our estimated effects of treatment 1
and treatment 2, with the former (latter) more likely to be driven by ideological (authority) bias.
It is reasonable to argue, however, that the self-reported measure of political orientation
used to categorize people depends on political environments and contexts that could vary
significantly from one country to another. For example, someone who is considered a centrist or
centre-right in the UK could perhaps be categorized as left in the US. This could complicate the
interpretation of our results. To address this issue, we use participants’ answers to a series of
questions at the end of our survey that are designed to identify their political and economic
typology.[38]
More specifically, we regress our self-reported political orientation measure on a series
of indicators created based on answers to these questions. We then use predicted values from this
regression to categorize people into five groups based on the quintiles of their distribution. These results
are reported in Table A11 in our online appendix and remain similar to those discussed above.
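A sketch of this two-step procedure, with hypothetical names for the typology indicators:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: political_orientation (-10 to 10) and binary indicators
# typology_1, typology_2, typology_3 built from the end-of-survey typology questions.
df = pd.read_csv("evaluations.csv")

# Step 1: regress self-reported political orientation on the typology indicators.
typology_fit = smf.ols(
    "political_orientation ~ typology_1 + typology_2 + typology_3", data=df
).fit()

# Step 2: split the predicted values into quintile-based groups, which replace the
# self-reported categories in the Table A11-style specifications.
df["pol_group_predicted"] = pd.qcut(
    typology_fit.fittedvalues, q=5, labels=["q1", "q2", "q3", "q4", "q5"]
)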
Two points are worth highlighting in relation to the results discussed above. First, as an
alternative explanation for our overall findings, it could be argued that, given the low-stakes nature
of our survey, economists did not have the incentive to exert much effort and read each statement
carefully. Therefore, when the attributed source for a statement was a prominent mainstream
economist who they recognized and trusted as a scholar, they glossed over the statement and relied
on the source for their evaluation. As a result, statements attributed to mainstream sources received
37. From a theoretical perspective, the main distinction we draw between ideological bias and authority bias is that, under ideological bias, individuals are more likely to agree with views that confirm their own ideological views. However, under authority bias, individuals are more likely to agree with views that are attributed to an authority figure regardless of their own ideological position. For example, authority bias will result in higher admiration for a poem if it is attributed to a famous poet, but lower admiration if it is attributed to a school teacher, with neither of the two assessments influenced by ideology.
38. Participants were asked to read a series of binary statements and, for each pair, pick the one that comes closest to their view. See Table A15 in our online appendix for a list of these statements.
a higher level of agreement. However, results discussed in this section stand in contrast to this
hypothesis. These results suggest that even conditional on sources being mainstream, there still
exist significant differences in agreement level by political orientation. Furthermore, treatment 1
has systematically and significantly different impacts on people with different political ideologies.
In other words, in evaluating the statements, the identified ideological contours of different sources
clearly interact with participants’ own political ideologies.
Furthermore, if this alternative explanation is valid, then one of its implications is that
participants in our control group should have spent less time completing the survey compared to
those in the two treatment groups. However, our estimates reported in Table A7 in our online
appendix suggest that there are no differences in average survey completion time between control
group and treatment 1. We find that those randomized into treatment 2 on average took less time
to complete the survey, but the estimated difference is very small (less than a minute) and is to be
expected since people in this group had less text to read, given that there were no source
attributions provided. Consistent with these results, our estimates in Table A8 in our online
appendix suggest that restricting our sample to individuals with different survey completion times
(a potential proxy for different levels of effort exerted to read the statements) also has no impact
on our results.
The second point to highlight is that as discussed in Section 5.1, our treatments could
theoretically induce two types of bias: bias towards the mainstream (i.e. a negative treatment effect) and bias against the mainstream (i.e. a positive treatment effect). In addition to the previously
discussed potential impact this could have on our overall results (i.e. underestimation of the true
effect), the potential heterogeneous effects also have implications for results that compare treatment effects across different subgroups, depending on the true effect of interest. More
specifically, if the effect of interest is bias both towards and against the mainstream, then it should
be kept in mind that a smaller average treatment effect for one group compared to another does
not strictly imply that the former group is less biased.
The following example might make this point clearer. Consider two groups, A and B. For
group A, treatment 1 reduces the agreement level for everyone by 20%. However, for group B,
treatment 1 reduces the agreement level by 40% for half of the people, and increases it by 30% for the other half. Therefore, the estimated treatment effect for group A will show a 20% reduction in agreement level, while for group B this number will be only a 5% reduction. This is despite the fact that people in group A are, overall, less biased than those in group B.
At the same time, it should also be noted that since a strong majority of our participants are
mainstream economists, the existence of a positive treatment effect is not very likely. In fact, as
discussed before in Section 5.1, we find no empirical evidence of positive treatment effects among
participants who are expected to be positively affected (i.e. heterodox economists, or those who
identify as very left). In addition, if the effect of interest is bias towards mainstream, then
comparing estimated treatment effects as indicators of bias towards the mainstream won't be subject
to the aforementioned caveat (in the example above, group A is still more biased towards
mainstream than group B). A focus on bias towards mainstream views could be justified by the fact that we are examining a discipline that is strongly dominated by the mainstream discourse, where the tiny minority who are non-mainstream, even if biased, cannot exercise much influence.
Altogether, our results presented and discussed above are consistent with the existence of
ideological and authority biases in views among economists and are in contrast with the
implications of unbiased Bayesian updating. We would like to emphasize that, while some of our
results might be considered (at least by some of our readers) as more compelling than others, it is
the entirety of the body of evidence we have provided that should be considered when one ponders
the validity of these two competing hypotheses (i.e. biased versus unbiased updating), or other
potential hypotheses. This is of course consistent with the methodology of modern economics: the superiority of a hypothesis relative to alternative hypotheses is determined by its relative degree of success in explaining the observed patterns. Therefore, to suggest that
ideological/authority bias is not driving our results, one needs to posit an alternative hypothesis
that is on average more successful in explaining all the observed patterns, and not only a cherry-
picked few.
5.3. Statistical Power and Reproducibility of Findings
A growing body of literature raises concerns regarding the dominant practice in empirical studies of restricting attention to type-I error, and highlights the importance of statistical
power as a critical parameter in evaluating the scientific value of empirical findings (see Maniadis
and Tufano (2017) for a review). This is especially important since there is growing evidence that
suggests empirical findings in economics, and several other disciplines, are significantly
underpowered. For example, assessing more than 6,700 empirical studies in economics, Ioannidis
et al. (2017) find that “half of the areas of economics research assessed have nearly 90% of their
results under-powered. The median statistical power is 18%, or less.” As Maniadis and Tufano
(2017) suggest, “an important implication of the overall inadequate power of empirical research
in economics is that a sizable majority of its studies have less than 50% probability of detecting
the phenomenon under investigation.”
Following List et al. (2011) and Maniadis et al. (2017), we calculate the optimal sample
size for our control and treatment groups that allows us to detect a minimum economically
relevant treatment effect at what is considered a reasonable significance level (α=0.05) and
statistical power (1-β=0.80) in the literature (e.g. List et al. 2011, Ioannidis et al. 2017). We find
that for a given statement, in order to find a treatment effect equal to 15% of a standard deviation,
which we consider an economically relevant treatment effect in our study, we need to have
approximately 800 participants in each group, which is the number of participants we actually
have. This suggests that, given our sample size, the treatment effects we have estimated for each
individual statement meet a high standard in terms of statistical power and degree of
reproducibility. Doing the same exercise for our analysis of all statements combined, which is most
of our analysis, suggests that our sample size of 36,375 individual-statements allows us to find a
treatment effect as small as 6% of a standard deviation at α=0.01 and 1-β=0.99.
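As a back-of-the-envelope check, a standard two-sample power calculation gives a minimum detectable effect close to the 15% figure for 800 participants per group; our actual calculation follows List et al. (2011) and Maniadis et al. (2017), so the exact numbers may differ slightly.

from statsmodels.stats.power import TTestIndPower

# Minimum detectable effect (in standard-deviation units) for a two-group comparison
# with roughly 800 participants per group, alpha = 0.05 and power = 0.80.
mde = TTestIndPower().solve_power(
    effect_size=None, nobs1=800, alpha=0.05, power=0.80, ratio=1.0
)
print(round(mde, 3))  # roughly 0.14 standard deviations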
Another important and related question raised by Maniadis et al. (2017) is that, “given
publication of a newly discovered finding, how much confidence should we have that it is true?”
Maniadis et al. (2014) propose a measure of post-study probability (PSP) that allows one to
measure “the probability that a declaration of research finding, made upon reaching statistical
significance, is true." As Maniadis et al. (2014) show, the value of PSP depends on the significance level α, the statistical power 1-β, and the prior probability of a true association between two
phenomena, π. In the context of our study, π translates into priors regarding the probability that
the existence of ideological bias among economists is a true phenomenon. We consider a low, a
medium, and a high prior to measure how much these priors should change in light of our results.
We use a power of 0.99 (i.e. 1-β=0.99) and α=0.01, which we showed above are reasonable values given our sample size
and estimated effects when all statements are combined.
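A sketch of this calculation, using the textbook PSP formula PSP = (1-β)π / [(1-β)π + α(1-π)] and abstracting from the research-bias terms in Maniadis et al. (2014):

def post_study_probability(prior, alpha=0.01, power=0.99):
    """Probability that a statistically significant finding reflects a true effect."""
    return power * prior / (power * prior + alpha * (1 - prior))

for prior in (0.1, 0.5, 0.9):
    print(prior, round(post_study_probability(prior), 3))
# Prints roughly 0.917, 0.990, and 0.999, in line with the PSP values reported below.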
We find that, if one holds a prior that there is only a 10% probability that the existence of
ideological bias among economists is a true phenomenon (π=0.1), then the probability that our
estimated treatment effects are true is PSP=91.6%. This probability is 99% and 99.8%, for π=0.5
and π=0.9, respectively. This suggests significant updating in those prior beliefs in light of our
results.[39]
5.4. Heterogeneity Analysis
In this section, we examine how our estimated treatment effects vary by statement as well as by
different characteristics including gender, country of residence, country where PhD was
completed, undergraduate major, and main research area. It is interesting and important to
understand how the biases we have found in our analysis vary across different groups. This could
help to shed more light on some of the factors underlying ideological/authority bias. As is discussed in more detail in the following subsections, we find evidence of significant and systematic
heterogeneity in our estimated treatment effects. Consistent with our previous results, we cannot
square these findings with unbiased Bayesian updating since we cannot think of compelling
reasons to explain why unbiased updating would lead to systematically different treatment effects
for people from a certain gender, country, or research area, especially after controlling for a wide
set of observed characteristics. However, as we discuss in more detail in the following section,
these differences by personal characteristics remain consistent with the existence of
ideological/authority bias among economists.
We continue to use OLS to estimate treatment effects since its estimates are easier to
summarize and present, and since they are similar to estimates from ordered logit models as
discussed before.
5.4.1. Heterogeneity by Statement
First, we investigate the effect of our two treatments on agreement level separately for each
statement. These results are summarized in Figure 2. Consistent with our overall results, we find
that for all but three statements, changing source attributions to a less-/non-mainstream source
significantly reduces the agreement level. The estimated reductions range from around one-tenth
of a standard deviation to around half of a standard deviation. Interestingly, we find that the largest
reduction in agreement level for treatment 1 occurs for Statement 6, which is arguably the
statement that is most critical of mainstream economics and its methods, and also brings up the
issue of ideological bias in mainstream economics.[40]
This again is consistent with ideological bias
39. Even for a prior of 1%, the PSP is 50%.
40. The statement reads: "Economic discourse of any sort - verbal, mathematical, econometric - is rhetoric; that is, an effort to persuade. None of these discursive forms should necessarily be privileged over the others unless it is agreed by the community of scholars to be more compelling. Only when economists move away from the pursuit of universal knowledge of 'the economy' and towards an acceptance of the necessity of vision and the historical and spatial contingency of knowledge will the concern over ideological 'bias' begin to fade. Such a turn would have important implications for economic method as well, as knowledge claims would increasingly find support, not in models of constrained optimization, but with such techniques as case studies and historical analyses of social institutions and politics. Increasing reliance of economics on mathematics and statistics has not freed the discipline from ideological bias, it has simply made it easier to disregard."
where views that are more likely to disconfirm previously held beliefs are more strongly
discounted when the source is less-/non-mainstream.
Regarding the three statements with no reduction in agreement level (i.e. statements 1, 3,
and 7), one potential explanation is that the ideological distance between the sources is not large
enough to induce ideological bias. Taking a closer look at the sources for each statement seems to
suggest that this is indeed a plausible explanation. The sources for these statements are Dani Rodrik vs. Paul Krugman, Hayek vs. Freud, and Irving Fisher vs. Kenneth Galbraith. Interestingly
and consistent with authority bias, we find that, for the same three statements, removing the source
attribution significantly reduces the agreement level, highlighting again the difference in driving
forces behind the estimated effects of treatment 1 and treatment 2.
Results displayed in the right panel of Figure 2 suggest that removing the source
attributions significantly reduces the agreement level for all 15 statements. Similar to our results
reported in Table 2, the estimated effects of treatment 2 are larger than treatment 1 in almost all of
the statements. Results reported in Figure 2 also suggest that it is not just extreme differences in
views (e.g. Smith vs. Marx) that invoke ideological bias among economists. Even smaller
ideological differences (e.g. Deaton vs. Piketty or Sen vs. Sandel) seem to invoke strong
ideological reactions by economists.
5.4.2. Heterogeneity by Gender
Next, we examine gender differences in our estimated treatment effects. These results are
reported in Table 7 and suggest that on average female economists who are randomized into
control group agree more with our statements compared to their male counterparts in the control
group. The estimated difference in agreement level is around 6 percent of a standard deviation. In
addition, we find that the estimated ideological bias is 44% larger among male economists as
compared to their female counterparts (24% of a standard deviation reduction in agreement level
versus 14%, respectively), a difference that is statistically significant at the 0.1% level. These results are
consistent with studies from psychology which suggest that women exhibit less confirmation bias
than men (Meyers-Levy 1986, Bar-Tal and Jarymowicz 2010). Gordon and Dahl (2013) also find
evidence that suggests that male economists are less cautious in expressing an opinion. This seems
to be consistent with stronger ideological bias among male economists found in our results, since
ideological bias and assigning higher levels of certainty to one's own views usually work hand in
hand. Finally, these results are consistent with van Dalen (2019) who finds that female economists
are more likely to believe that economic research is not affected by one’s political views, perhaps
because they more strongly aspire to be less ideologically biased.
We find, however, that the gender difference in authority bias is much smaller (34% of a
standard deviation reduction for males versus 35% for females) and statistically insignificant. In
other words, removing mainstream sources seems to affect men and women in similar ways. These
results hold even after including our extensive set of indicators for political orientation and
political/economic typology to control for potential gender differences. We also find similar results
when we estimate gender differences in treatment effects separately for each statement. In 9 out of
15 statements, the estimated ideological bias is larger for men than for women, while the results
are more mixed for our estimates of authority bias (see Figure A3 in our online appendix).
We would like, however, to highlight the estimated gender difference in ideological bias
for Statement 5, which involves the issue of the gender gap in economics.[41]
Overall, and without
considering group assignment, there exists a very large difference in the level of agreement with
this statement between male and female economists. More specifically, conditional on observed
characteristics, the average agreement level among male economists is 0.78 points lower than that among female economists, a very large difference that is around two-thirds of a standard deviation and statistically significant at the 0.1% level. Taking group assignment into account, female economists who
randomly receive Carmen Reinhart as the statement source (i.e. control group) report an agreement
level that is on average 0.73 points higher compared to their male counterparts in the control group.
Moreover, while switching the source from Carmen Reinhart to the left-wing
economist/sociologist Diane Elson does not affect the agreement level among female economists
(estimated effect is 0.006 points), it significantly decreases the agreement level among male
41. The statement reads: "Unlike most other science and social science disciplines, economics has made little progress in closing its gender gap over the last several decades. Given the field's prominence in determining public policy, this is a serious issue. Whether explicit or more subtle, intentional or not, the hurdles that women face in economics are very real." The actual (mainstream) source of the statement is Carmen Reinhart, Professor of the International Financial System at Harvard Kennedy School and co-author of This Time is Different: Eight Centuries of Financial Folly (2011). The altered (less-/non-mainstream) source of the statement is Diane Elson, British economist and sociologist, Professor Emerita at the University of Essex, and the author of Male Bias in the Development Process (1995).
economists by 0.175 points (around 15% of a standard deviation). It seems that when it comes to
the important issue of the gender gap in economics, which involves female economists at a
personal level, women put aside ideology and merely focus on the content of the statement as
opposed to its source.
These results also highlight a large divide between male and female economists in their
perception and concerns regarding the gender gap in economics.[42]
This is of critical importance
since the discussion around the gender problem in economics has recently taken centre stage.
During the recent 2019 AEA meeting, and in one of the main panel discussions, titled "How can economics solve its gender problem?", several top female economists talked about their own
struggles with the gender problem in economics. In another panel discussion, Ben Bernanke, the
current president of the AEA, suggested that the discipline has “unfortunately, a reputation for
hostility toward women…."[43]
This follows the appointment of an Ad Hoc Committee by the Executive Committee of the AEA in April 2018 to explore "issues faced by women […] to improve the professional climate for women and members of underrepresented groups."[44] The AEA also recently conducted a climate survey to provide "more comprehensive information on the extent and nature of these [gender] issues." It is well-understood that approaching and solving the gender
problem in economics first requires a similar understanding of the problem by both men and
women. However, our results suggest that there exists a very significant divide between male and
female economists in their recognition of the problem.
5.4.3. Heterogeneity by Country of Residence/PhD Completion
Next, we examine how our estimated effects vary by country of residence. These results
are reported in Table 8. Estimates reported in Column (1) suggest that even when sources are
mainstream, and conditional on observed characteristics, there are significant differences in
average agreement level by country (we reject the null of equality at the 0.1% level). On one side, we have
economists in South Africa, France and Italy who hold the highest level of agreement with the
statements when sources are mainstream, while on the other side we have economists in Austria
and the US who hold the highest level of disagreement. These results are consistent with Frey et
42. Gender differences in the perception of gender discrimination are not limited to economics and have been documented in other studies and in various contexts (e.g. Fisman and O'Neill 2008, Miller and Katz 2018, Ragins et al. 1998).
43. Reported in a New York Times article titled "Female Economists Push Their Field Toward a #MeToo Reckoning", published on January 10, 2019.
44. American Economic Association, Ad Hoc Committee on the Professional Climate in Economics, Interim Report, April 6, 2018.
al. (1984), who also find significant differences across five countries in views among economists.
Similarly, and for both treatments, we find that the estimated effects vary significantly across
countries, ranging from around half of a standard deviation to zero. We also reject the null that the
estimated effects of treatment 1/treatment 2 are the same across countries at the 0.1% significance
level. More specifically, we find that economists in Austria, Brazil, and Italy exhibit the smallest
ideological bias (for Brazil and Austria, the estimated effects are also statistically insignificant).
On the other side of the spectrum we find that economists in Ireland, Japan, Australia, and
Scandinavia exhibit the largest ideological bias. Economists in countries such as Canada, the UK,
France, and the US stand in the middle in terms of the magnitude of the estimated ideological bias.
In addition, when we examine the effect of authority bias, these countries maintain their positions
in the distribution, although the estimated effects of authority bias remain larger than the estimated
effects of ideological bias for most of the countries.
Table 9 reports results that examine heterogeneity by country/region where PhD was
completed. We find that economists who completed their PhD in Asia, Canada, Scandinavia, and
the US exhibit the strongest ideological bias, ranging from 39% to 25% of a standard deviation.
On the opposite end we find that economists with PhD degrees from South America, Africa, Italy,
Spain, and Portugal exhibit the smallest ideological bias (statistically insignificant for South
America and Africa). These results are somewhat consistent with those reported in Table 8 and
suggest that some of the countries where economists exhibit the largest/smallest ideological bias
are also those that induce the strongest/weakest ideological bias in their PhD students (e.g. Brazil,
Italy, Scandinavia). In addition, we find that our estimated effects of authority bias, while larger
in size, largely follow the same patterns as our estimates of ideological bias.
5.4.4. Heterogeneity by Area of Research
In Table 10 we take up the issue of heterogeneity by the main area of research. Results
reported in Column (1) suggest that similar to previous heterogeneity patterns, and conditional on
observed characteristics, there are significant differences in agreement level among economists
from different research areas, even when attributed sources are all mainstream. Estimates reported
in Columns (2) and (3) suggest that economists whose main area of research is history of thought,
methodology, heterodox approaches; cultural economics, economic sociology, economic
anthropology; or business administration, marketing, accounting exhibit the smallest ideological
and authority bias.[45]
We find, however, that economists whose main area of research is
macroeconomics, public economics, international economics, and financial economics are among
those with the largest ideological bias, ranging from 33% to 26% of a standard deviation.
Another interesting point to highlight is that, while for economists in all research areas the
estimated effect of ideological bias is smaller than or similar to the estimated effect of authority bias, macroeconomists are the only group for whom the estimated ideological bias is significantly larger than the estimated authority bias (one-third versus one-fifth of a standard deviation). This is potentially
driven by the fact that our less-/non-mainstream sources induce a stronger reaction in
macroeconomists than when we remove the sources altogether.
5.4.5. Heterogeneity by Undergraduate Major
Lastly, we examine heterogeneity by undergraduate major. As we discussed before, there
exists growing evidence that suggests economic training, either directly or indirectly, could induce
ideological views in students (e.g. Allgood et al. 2012, Colander and Klamer 1987, Colander 2005,
Rubinstein 2006). Consistent with these studies, results reported in Table 11 suggest that
economists whose undergraduate major was economics or business/management exhibit the
strongest ideological bias (one-fourth of a standard deviation). However, we find that economists with
an undergraduate major in law; history, language, and literature; or anthropology, sociology,
psychology, exhibit the smallest ideological bias (statistically insignificant in all three cases).[46]
6. Conclusion
We use an online randomized controlled experiment involving 2425 economists in 19
countries to examine the influence of ideological and authority bias on views among economists.
Economists who participated in our survey were asked to evaluate statements from prominent
economists on different topics. However, source attribution for each statement was randomized
without participants’ knowledge. For each statement, participants either received a mainstream
source, a less-/non-mainstream source, or no source. We find that economists’ reported level of
agreement with statements is significantly lower when statements are randomly attributed to less-
/non-mainstream sources who hold widely-known views or ideologies that put them at different
distances to mainstream economics, even when this distance is relatively small. In addition, we
45. For the latter group, this could be driven by a lack of familiarity with where different sources stand in relation to mainstream economics and their ideology.
46. Of course, this systematic difference could be driven by self-selection of individuals into different undergraduate majors and is not necessarily causal.
find that removing the source attribution also significantly reduces the agreement level with
statements.
We use a Bayesian updating framework to organize and test the validity of two competing
hypotheses as potential explanations for our results: unbiased Bayesian updating versus
ideologically-/authority-biased Bayesian updating. While we find no evidence in support of
unbiased Bayesian updating, our results are all consistent with the existence of biased updating.
More specifically, and in contrast (consistent) with implications of unbiased (biased) Bayesian
updating, we find that changing/removing sources (1) has no impact on the precision of economists' posterior beliefs (proxied by their reported level of confidence in their evaluations); (2) similarly affects experts/non-experts in relevant areas; and (3) varies strongly and systematically with economists' political orientation. We also find systematic and significant
heterogeneity in our estimates of ideological/authority bias by gender, country, country/region
where PhD was completed, area of research, and undergraduate major, with patterns consistent
with ideological/authority bias.
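To make the logic of these tests concrete, the following is a minimal normal-normal sketch of the
unbiased benchmark (in the spirit of DeGroot 1970); it is an illustrative special case rather than the
full framework used in the paper, and the notation is introduced only for this illustration. Suppose a
participant holds a prior belief about the validity of a statement, $\theta \sim N(\mu_0, 1/\tau_0)$, and
reads the statement as a signal $x$ whose perceived precision $\tau_s$ depends on how informative the
attributed source is taken to be. Standard updating then gives

$$
\mu_{\mathrm{post}} \;=\; \frac{\tau_0 \mu_0 + \tau_s x}{\tau_0 + \tau_s},
\qquad
\tau_{\mathrm{post}} \;=\; \tau_0 + \tau_s .
$$

Under unbiased updating, changing or removing the source changes $\tau_s$, so a shift in the posterior
mean (reported agreement) should be accompanied by a shift in posterior precision (reported
confidence), and the size of the shift should generally differ between experts and non-experts, whose
prior precisions $\tau_0$ differ. The absence of both patterns in our data, together with the strong
dependence of the treatment effects on political orientation, is what points toward
ideologically-/authority-biased updating.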
Scholars hold different views on whether economics can be a ‘science’ in the strict sense
and free from ideological biases. However, perhaps there can be a consensus that the type of
ideological bias that leads to endorsing or denouncing an argument on the basis of its author's
views rather than its substance is unhealthy and in conflict with scientific norms and the
subject's scientific aspirations, especially when knowledge of the rejected views is limited.
Furthermore, it is hard to imagine that the biases that our results uncover will only manifest
themselves in a low-stakes environment, such as our experiment, without spilling over to other
areas of academic life.47
After all, political scientists, sociologists, and psychologists have long
established the widespread influence of such biases on various important domains of our lives.
Therefore, while due to data limitations we cannot examine the influence of these biases on various
academic outcomes, there already exists growing evidence that suggests value judgements and
political ideology of economists affect not just research (Jelveh et al. 2018, Saint-Paul 2018), but
also citation networks (Önder and Terviö 2015), faculty hiring (Terviö 2011), as well as
economists’ positions on positive and normative issues related to public policy (e.g. Beyer and
Pühringer 2019; Fuchs, Krueger and Poterba 1998; Mayer 2001; van Dalen 2019; Van Gunten,
Martin, and Teplitskiy 2016). It is therefore not a long stretch to suggest that the biases underlying our
results could play an important role in suppressing plurality, narrowing pedagogy, and delineating
biased research parameters in economics. We believe that recognizing their own biases, especially
given evidence suggesting that these biases can operate implicitly or unconsciously, is the first
step for economists who strive to be objective and ideology-free. This is also
consistent with the standard to which most economists in our study hold themselves.
47 A strong majority of experimental studies in economics and other disciplines are based on low-stakes experiments,
but we rarely discount the importance of their findings and their implications because of the low-stakes nature of the
experiments.
Another important step to minimize the influence of our ideological biases is to understand
their roots. As argued by prominent social scientists (e.g. Althusser 1976, Foucault 1969, Popper
1955, Thompson 1997), the main source of ideological bias is knowledge-based, influenced by the
institutions that produce discourse. Mainstream economics, as the dominant and most influential
institution in economics, propagates and shapes ideological views among economists through
different channels.
Economics education, through which economic discourses are disseminated to students
and future economists, is one of these important channels. It affects the way students process
information, identify problems, and approach these problems in their research. Not surprisingly,
this training may also affect the policies they favor and the ideologies they adhere to. For example,
Colander and Klamer (1987) and Colander (2005) survey graduate students at top-ranking
graduate economic programs in the US and find that, according to these students, techniques are
the key to success in graduate school, while understanding the economy and knowledge about
economic literature only help a little. This lack of depth in acquired knowledge, not only in
economics but in any discipline or among any group of people, makes individuals lean more
easily on ideology. Economics teaching influences students’ ideology not only in terms of
academic practice but also in terms of personal behavior. As we discussed in Section 2, there
already exists strong evidence that, compared to various other disciplines, students in economics
stand out in terms of views associated with greed, corruption, selfishness, and willingness to free-
ride.48
48 Even if this relationship is not strictly causal, it suggests that there exists something about economics education
that leads to a disproportionate self-selection of such students into economics.
Another important channel through which mainstream economics shapes ideological views
among economists is by shaping the social structures and norms in the profession. While social
structures and norms exist in all academic disciplines, economics seems to stand out in at least
several respects, resulting in the centralization of power and the creation of incentive mechanisms
for research, which in turn hinder plurality and encourage conformity and adherence to the
dominant (ideological) views.
For example, in his comprehensive analysis of pluralism in economics, Wright (2019)
highlights several features of the discipline that make the internal hierarchical system in economics
“steeper and more consequential” compared to most other academic disciplines. These features
include: (1) particular significance of journal ranking, especially the Top Five, in various key
aspects of academic life including receiving tenure (Heckman and Moktan 2018), securing
research grants, invitation to seminars and conferences, and request for professional advice; (2)
dominant role of “stars” in the discipline (Goyal et al. 2006, Offer and Söderberg 2016); (3)
governance of the discipline by a narrow group of economists (Fourcade et al. 2015); (4) strong
dominance of both editorial positions and publications in high-prestige journals by economists at
highly ranked institutions (Colussi 2018, Fourcade et al. 2015, Heckman and Moktan 2018, Wu
2007); and (5) the strong effect of the ranking of one’s institution, as a student or as an academic, on
career success (Han 2003, Oyer 2006).
As another example, in a 2013 interview with the World Economics Association, Dani
Rodrik highlights the role of social structure in economics by suggesting that “there are powerful
forces having to do with the sociology of the profession and the socialization process that tend to
push economists to think alike. Most economists start graduate school not having spent much time
thinking about social problems or having studied much else besides math and economics. The
incentive and hierarchy systems tend to reward those with the technical skills rather than
interesting questions or research agendas. An in-group versus out-group mentality develops rather
early on that pits economists against other social scientists.”49
Interestingly, a very similar picture
of the profession was painted in 1973 by Axel Leijonhufvud in his light-hearted yet insightful
article titled “Life among the Econ.”50
49 Interview with Dani Rodrik, World Economics Association Newsletter, Vol. 3, No. 2, April 2013:
https://www.worldeconomicsassociation.org/files/newsletters/Issue3-2.pdf
50 Leijonhufvud, A. (1973). Life among the Econ. Economic Inquiry, 11(3), 327-337.
Some economists might object that economists are human beings and therefore these biases
are inevitable. But economists cannot have it both ways! The strong influence of ideological bias
on views among economists that is evident in our empirical results cannot be reconciled with the
positivist methodology most economists claim allegiance to. Once one admits the existence of
ideological bias, the widely held view that “positive economics is, or can be, an 'objective' science,
in precisely the same sense as any of the physical sciences” (Friedman 1953) must be openly
rejected. This is clearly not the case today, with positive economics so pervasive that every
competing view has been virtually eclipsed (Boland 1991). Furthermore, the differences we find
in the estimated effects across personal characteristics such as gender, political orientation,
country, and undergraduate major clearly suggest that there are ways to limit these ideological
effects, and ways to reinforce them.
References
[1] Abadie, A., Athey, S., Imbens, G. W., & Wooldridge, J. (2017). When should you adjust
standard errors for clustering? (No. w24003). National Bureau of Economic Research.
[2] Allgood, S., Bosshardt, W., van der Klaauw, W., & Watts, M. (2012). Is economics
coursework, or majoring in economics, associated with different civic behaviors? The
Journal of Economic Education, 43(3), 248-268.
[3] Althusser, L. (1976). Essays on ideology. Verso Books.
[4] Backhouse, R. E. (2010). The puzzle of modern economics: science or ideology? Cambridge
University Press.
[5] Bartels, L. M. (2002). Beyond the running tally: Partisan bias in political perceptions.
Political behavior, 24(2), 117-150.
[6] Bar-Tal, Y., & Jarymowicz, M. (2010). The effect of gender on cognitive structuring: who
are more biased, men or women?. Psychology, 1(02), 80.
[7] Bertrand M., & E. Duflo (2017). “Field experiments on discrimination” In Handbook of
Economic Field Experiments, Vol. 1, pp. 309-393, North-Holland.
[8] Beyer, K. M., & Pühringer, S. (2019). Divided we stand? Professional consensus and
political conflict in academic economics (No. 94). ICAE Working Paper Series.
[9] Boland, L. A. (1991). Current views on economic positivism. Companion to contemporary
economic thought, 88-104.
[10] Carter, J. R., & Irons, M. D. (1991). Are economists different, and if so, why?. Journal of
Economic Perspectives, 5(2), 171-177.
[11] Chang, H. (2014). Economics: The User's Guide.
[12] Coase, R. H. (1994). How Should Economists Choose? in: Ronald H Coase (ed.) Essays on
Economics and Economists. Chicago: University of Chicago Press
[13] Colander, D. (2005). The making of an economist redux. The Journal of Economic
Perspectives, 19(1), 175-198.
[14] Colander, D., & Klamer, A. (1987). The making of an economist. The Journal of Economic
Perspectives, 1(2), 95-111.
[15] Colussi, T. (2018). Social ties in academia: A friend is a treasure. Review of Economics and
Statistics, 100(1), 45-50.
[16] Currie, J., Lin, W., & Meng, J. (2014). Addressing antibiotic abuse in China: An
experimental audit study. Journal of development economics, 110, 39-51.
[17] Dobb, M. (1973). Theories of value and distribution: ideology and economic
theory. Cambridge University Press, Cambridge.
[18] DeGroot, M. H. (1970). Optimal Statistical Decisions, New York: McGraw-Hill.
[19] Eagleton T. (1991). Ideology: an introduction. Cambridge.
[20] Fine, B., & Milonakis, D. (2009). From economics imperialism to freakonomics: The
shifting boundaries between economics and other social sciences. Routledge.
[21] Fischle, M. (2000). Mass Response to the Lewinsky Scandal: Motivated Reasoning or
Bayesian Updating?. Political Psychology, 21(1), 135-159.
[22] Fisher, I. (1919). Economists in Public Service: Annual Address of the President. The
American Economic Review, 9(1), 5-21.
[23] Fisman, R., & O’Neill, M. (2009). Gender differences in beliefs on the returns to effort
evidence from the world values survey. Journal of Human Resources, 44(4), 858-870.
[24] Foucault, M. (1972). The archeology of knowledge, trans. AM Sheridan Smith. London:
Tavistock.
[25] Fourcade, M., Ollion, E., & Algan, Y. (2015). The superiority of economists. Journal of
economic perspectives, 29(1), 89-114.
[26] Frank, B., & Schulze, G. G. (2000). Does economics make citizens corrupt?. Journal of
economic behavior & organization, 43(1), 101-113.
[27] Frank, R. H., Gilovich, T., & Regan, D. T. (1993). Does studying economics inhibit
cooperation?. Journal of economic perspectives, 7(2), 159-171.
[28] Frank, R. H., Gilovich, T. D., & Regan, D. T. (1996). Do economists make bad citizens?.
Journal of Economic Perspectives, 10(1), 187-192.
[29] Frankfurter, G. & Mcgoun, E. (1999). Ideology and the theory of financial economics.
Journal of Economic Behavior & Organization. 39. 159-177.
[30] Freedman C. (2016). In search of the two-handed economist: ideology, methodology and
marketing in economics. Springer.
[31] Frey, B. S., Pommerehne, W. W., & Gygi, B. (1993). Economics indoctrination or selection?
Some empirical results. The Journal of Economic Education, 24(3), 271-281.
[32] Frey, B. S., Pommerehne, W. W., Schneider, F., & Gilbert, G. (1984). Consensus and
dissension among economists: An empirical inquiry. The American Economic
Review, 74(5), 986-994.
[33] Friedman M. (1953). Essays in positive economics. University of Chicago Press.
[34] Fryer, R. G., Harms, P., & Jackson, M. O. (2018). Updating beliefs when evidence is open
to interpretation: Implications for bias and polarization.
[35] Fuchs, V. R. (1996). Economics, values, and health care reform. American Economic
Review, 86(1), 1-24.
[36] Fuchs, V. R., Krueger, A. B., & Poterba, J. M. (1998). Economists' views about parameters,
values, and policies: Survey results in labor and public economics. Journal of Economic
Literature, 36(3), 1387-1425.
[37] Fullbrook, E. (2008). Pluralist economics. Palgrave Macmillan.
[38] Fullbrook, E. (2003). The crisis in economics. London: Routledge.
[39] Galbraith, J. K. (1989). Ideology and economic reality. Challenge, 32(6), 4-9.
[40] Gentzkow, M., & Shapiro, J. M. (2006). Media bias and reputation. Journal of political
Economy, 114(2), 280-316.
[41] Gerber, A., & Green, D. (1999). Misperceptions about perceptual bias. Annual review of
political science, 2(1), 189-210.
[42] Gordon, R., & Dahl, G. B. (2013). Views among economists: professional consensus or
point-counterpoint? American Economic Review, 103(3), 629-635.
[43] Goyal, S., Van Der Leij, M. J., & Moraga-González, J. L. (2006). Economics: An emerging
small world. Journal of political economy, 114(2), 403-412.
[44] Halligan L. (2013). Time to stop this pretence economics is not science, The Telegraph,
October 19.
[45] Han, S. K. (2003). Tribal regimes in academia: A comparative analysis of market structure
across disciplines. Social networks, 25(3), 251-280.
[46] Harcourt, G. C. (1969). Some Cambridge controversies in the theory of capital. Journal of
Economic Literature, 7(2), 369-405.
[47] Hart, W., Albarracín, D., Eagly, A. H., Brechan, I., Lindberg, M. J., & Merrill, L. (2009).
Feeling validated versus being correct: a meta-analysis of selective exposure to information.
Psychological bulletin, 135(4), 555.
[48] Heckman, J. J., & Moktan, S. (2018). Publishing and promotion in economics: the tyranny
of the top five. Journal of Economic Literature, forthcoming.
[49] Hodgson, G. M., & Jiang, S. (2007). The economics of corruption and the corruption of
economics: an institutionalist perspective. Journal of Economic Issues, 41(4), 1043-1061.
[50] Hoover, K. R. (2003). Economics as ideology: Keynes, Laski, Hayek, and the creation of
contemporary politics. Rowman & Littlefield Publishers.
[51] Horowitz, M., & Hughes, R. (2018). Political identity and economists’ perceptions of
capitalist crises. Review of Radical Political Economics, 50(1), 173-193.
[52] Ioannidis, J. P., Stanley, T. D., & Doucouliagos, H. (2017). The power of bias in economics
research. The Economic Journal, 127(605), F236-F265.
[53] Jelveh, Z., Bruce K., & Naidu S. (2018). Political language in economics. Columbia
Business School Research Paper No. 14-57.
[54] Krugman, P. (2009). The conscience of a liberal. WW Norton & Company.
[55] Lawson, T. (2012). Mathematical modelling and ideology in the economics Academy:
competing explanations of the failings of the modern discipline? Economic Thought, 1(1,
2012).
[56] List, J. A., Sadoff, S., & Wagner, M. (2011). So you want to run an experiment, now what?
Some simple rules of thumb for optimal experimental design. Experimental Economics,
14(4), 439.
[57] MacCoun R. J., & Paletz S. (2009) Citizens' perceptions of ideological bias in research on
public policy controversies. Political Psychology 30(1),43-65.
[58] MacCoun, R. (1998). Biases in the interpretation and use of research results. Annual Review
of Psychology, 49, 259-287.
[59] Maniadis, Z., & Tufano, F. (2017). The research reproducibility crisis and economics of
science. The Economic Journal, 127(605), F200-F208.
[60] Maniadis, Z., Tufano, F., & List, J. A. (2017). To replicate or not to replicate? Exploring
reproducibility in economics through the lens of a model and a pilot study. The Economic
Journal, 127(605), F209-F235.
[61] Maniadis, Z., Tufano, F., & List, J. A. (2014). One swallow doesn't make a summer: New
evidence on anchoring effects. American Economic Review, 104(1), 277-90.
[62] Mayer, T. (2001). The role of ideology in disagreements among economists: A quantitative
analysis. Journal of Economic Methodology, 8(2), 253-273.
[63] Marwell, G., & Ames, R. E. (1981). Economists free ride, does anyone else?: Experiments
on the provision of public goods, IV. Journal of public economics, 15(3), 295-310.
[64] Merton R. K. (1973). The sociology of science. University of Chicago Press, Chicago.
[65] Meyers-Levy, J. (1986). Gender differences in information processing: A selectivity
interpretation (Doctoral dissertation, Northwestern University).
[66] McCloskey, D. (2017). The Many Transgressions of Deirdre McCloskey, Institute for New
Economic Thinking, June 28, 2017, accessed 30 January 2019 at
https://www.ineteconomics.org/perspectives/blog/the-many-transgressions-of-deirdre-
mccloskey
[67] Milberg W. S. (1998). “Ideology” in The Handbook of Economic Methodology.
Cheltenham: Edward Elgar, pp. 243-246.
[68] Miller, J., & Katz, D. (2018). Gender differences in perception of workplace experience
among anesthesiology residents. The Journal of Education in Perioperative Medicine, 20(1).
Moretti, E. (2011). Social learning and peer effects in consumption: Evidence from
movie sales. The Review of Economic Studies, 78(1), 356-393.
[69] Modigliani, F. (1977). The Monetarist Controversy, or, Should We Forsake Stabilization
Policies?. American Economic Review, 67:1-19.
[70] Morgan, J. (Ed.). (2015). What is neoclassical economics?: debating the origins, meaning
and significance. Routledge.
[71] Myrdal, G. (1954). The political element in the development of economic thought. Harvard
University Press.
[72] Offer, A., & Söderberg, G. (2016). The Nobel factor: The prize in economics, social
democracy, and the market turn. Princeton University Press.
[73] Önder, A. S., & Terviö, M. (2015). Is economics a house divided? Analysis of citation
networks. Economic Inquiry, 53(3), 1491-1505.
[74] Oyer, P. (2006). Initial labor market conditions and long-term outcomes for
economists. Journal of Economic Perspectives, 20(3), 143-160.
[75] Piketty, T. (2014). Capital in the twenty-first century. Harvard University Press.
[76] Popper, K. R. (1962). On the sources of knowledge and of ignorance.
[77] Rabin, M., & Schrag, J. L. (1999). First impressions matter: a model of confirmatory bias.
The Quarterly Journal of Economics, 114(1), 37-82.
[78] Ragins, B. R., Townsend, B., & Mattis, M. (1998). Gender gap in the executive suite: CEOs
and female executives report on breaking the glass ceiling. Academy of Management
Perspectives, 12(1), 28-42.
[79] Riach, P., & Rich J. (2002). Field experiments of discrimination in the market place. The
economic journal 112, no. 483.
[80] Rivlin, A. M. (1987). Economics and the political process. The American Economic Review,
1-10.
[81] Robinson, J. (1973). Ideology and analysis. Collected Economic Papers, 5, 254-61.
[82] Romer, P. M. (2015). Mathiness in the theory of economic growth. The American Economic
Review, 105(5), 89-93.
[83] Rothbard, M. (1960). “The mantle of science” In Scientism and Values. Princeton, NJ.
[84] Rubinstein, A. (2006). A sceptic's comment on the study of economics. The Economic
Journal, 116(510), C1-C9.
[85] Saint-Paul, G. (2018). The possibility of ideological bias in structural macroeconomic
models. American Economic Journal: Macroeconomics 10(1), 216-241.
[86] Samuels W. J. (1992). “Ideology in economics” In Essays on the Methodology and
Discourse of Economics, pp. 233-248. Palgrave Macmillan UK.
[87] Schumpeter, J. A. (1949). Science and ideology. The American Economic Review, 39(2),
346-359.
[88] Stigler, G. J. (1965). The economist and the state. The American Economic Review, 1-18.
[89] Stigler, G. J. (1960). The influence of events and policies on economic theory. The American
economic review, 36-45.
[90] Stigler, G. J. (1959). The politics of political economists. The Quarterly Journal of
Economics, 522-532.
[91] Stiglitz J. E. (2002). There is no invisible hand. The Guardian, 20 December, accessed 14
August 2018 at https://www.theguardian.com/education/2002/dec/20/highereducation.uk1.
[92] Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs.
American Journal of Political Science, 50(3), 755-769.
[93] Terviö, M. (2011). Divisions within academia: Evidence from faculty hiring and
placement. Review of Economics and Statistics, 93(3), 1053-1062.
[94] Thompson, H. (1997). Ignorance and ideological hegemony: A critique of neoclassical
economics. Journal of Interdisciplinary Economics, 8(4), 291-305.
[95] Tobin, J. (1976). Is Friedman a monetarist?, in J. Stein (ed.), Monetarism, Amsterdam,
North Holland, pp. 3326.
[96] van Dalen, H. P. (2019). Values of Economists Matter in the Art and Science of
Economics. Kyklos.
[97] Van Gunten, T. S., Martin, J. L., & Teplitskiy, M. (2016). Consensus, polarization, and
alignment in the economics profession. Sociological Science, 3, 1028-1052.
[98] Wang, L., Malhotra, D., & Murnighan, J. K. (2011). Economics education and greed.
Academy of Management Learning & Education, 10(4), 643-660.
[99] Wiles, P. (1979). Ideology, methodology, and neoclassical economics. Journal of Post
Keynesian Economics, 2(2), 155-180.
[100] Wright, J. (2019). Pluralism and social epistemology in economics. Doctoral thesis,
Cambridge University. https://doi.org/10.17863/CAM.37650
[101] Wolfers, J. (2013). Comments on Roger Gordon and Gordon Dahl’s ‘Views among
Economists: Professional Consensus or Point-Counterpoint’. Unpublished comments
delivered at the American Economic Association annual meeting, January 5, San Diego.
http://tinyurl.com/gs9eyep.
Tables and Figures
Figure 1: Probability of different agreement levels, by statement
Statement (short description)                                                             SD     D      N      A      SA     Avg.   Rel. entropy
1. Tone-deafness of economists about concerns regarding globalization                    0.03   0.14   0.14   0.54   0.13   3.60   0.805
2. Intellectual monopoly is a disease rather than a cure                                  0.14   0.45   0.18   0.19   0.05   2.55   0.875
3. Non-rational impulses are part of our reasoning process                                0.05   0.20   0.32   0.34   0.09   3.21   0.881
4. The interests of the very wealthy are not aligned with the society                     0.08   0.23   0.10   0.39   0.20   3.39   0.909
5. Economics has a serious gender problem which it has failed to address                  0.07   0.19   0.16   0.36   0.21   3.44   0.939
6. Shortcomings of current economic discourse                                             0.09   0.22   0.17   0.35   0.17   3.28   0.942
7. Communities economists are affiliated with can bias their views                        0.03   0.20   0.22   0.45   0.09   3.36   0.838
8. Market economy does not solely rely on profit maximization                             0.01   0.04   0.13   0.48   0.35   4.11   0.720
9. The laws of property have fostered inequality                                          0.09   0.25   0.23   0.31   0.12   3.13   0.944
10. There are alarming signs regarding the fairness of capitalism                         0.06   0.13   0.12   0.40   0.29   3.73   0.877
11. Current mathematical economics are a mere concoction and imprecise                    0.08   0.19   0.16   0.35   0.22   3.45   0.935
12. Neoclassical microeconomics should be replaced by behavioral economics and network theory   0.12   0.27   0.24   0.29   0.08   2.94   0.944
13. Division of labour hinders innovation and makes people ignorant                       0.10   0.31   0.25   0.27   0.07   2.90   0.918
14. Economic models make many bad predictions because they are populated with fictional agents   0.07   0.20   0.17   0.40   0.16   3.40   0.913
15. Sociology of economics and its socialization process makes economists arrogant and homogeneous   0.03   0.12   0.13   0.45   0.27   3.81   0.835
Note: Columns SD to SA report the fraction of responses in each agreement category (SD = strongly disagree,
D = disagree, N = neutral, A = agree, SA = strongly agree). Avg. is the average agreement level and Rel. entropy
is the relative entropy index. See Table A15 in our online appendix for a complete list of statements and sources.
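The relative entropy index reported above is consistent with the normalized Shannon entropy of the
five response shares, i.e., with $p_k$ denoting the fraction of respondents choosing category $k$,

$$
RE \;=\; \frac{-\sum_{k=1}^{5} p_k \ln p_k}{\ln 5},
$$

so that an index of 1 corresponds to responses spread evenly across the five categories. For Statement 9,
for example, the shares above give $-\sum_k p_k \ln p_k \approx 1.52$ and $1.52/\ln 5 \approx 0.944$,
matching the reported value.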
Figure 2: OLS estimates of differences in agreement level between control and treatment
groups, by statement
Note: Agreement level is z-normalized for each statement. Control variables include: gender, PhD completion
cohort, current status, country, and research area. Both 90% and 95% confidence intervals are displayed for
each estimate. The two horizontal lines on each confidence interval band mark where the 90%
confidence interval ends.
First (second) listed source for each statement is the actual (altered) source. Bold source for each pair refers to the
less-/non-mainstream source. See Table A15 in our online appendix for more details.
Table 1: List of Mainstream and Less-/Non-Mainstream Sources for Each Statement
1. Mainstream: Dani Rodrik, professor of international political economy at Harvard University and the author of The Globalization Paradox: Democracy and the Future of the World Economy (2012).
   Less-/non-mainstream: Paul Krugman, professor of economics at Princeton University, the 2008 recipient of the Nobel Prize in Economics, and the author of The Accidental Theorist and Other Dispatches from the Dismal Science (1999).
2. Mainstream: David Levine, professor of economics at Washington University in St. Louis and the author of Against Intellectual Monopoly (2008).
   Less-/non-mainstream: Richard Wolff, professor of economics emeritus at the University of Massachusetts, Amherst, and the author of Rethinking Marxism (1985).
3. Mainstream: Friedrich von Hayek (1899-1992), professor of economics at the University of Chicago and the London School of Economics, and the 1974 recipient of the Nobel Prize in Economics.
   Less-/non-mainstream: Sigmund Freud (1856-1939), the founder of psychoanalysis and the author of the book Civilization and Its Discontents (1929).
4. Mainstream: Angus Deaton, professor of economics at Princeton University, the 2015 recipient of the Nobel Prize in Economics, and the author of The Great Escape: Health, Wealth, and the Origins of Inequality (2013).
   Less-/non-mainstream: Thomas Piketty, professor of economics at the Paris School of Economics and the author of Capital in the Twenty-First Century (2013).
5. Mainstream: Carmen Reinhart, Professor of the International Financial System at Harvard Kennedy School and the author of This Time is Different: Eight Centuries of Financial Folly (2011).
   Less-/non-mainstream: Diane Elson, British economist and sociologist, Professor Emerita at the University of Essex, and the author of Male Bias in the Development Process (1995).
6. Mainstream: Ronald Coase (1910-2013), professor of economics at the University of Chicago Law School and the 1991 recipient of the Nobel Prize in Economics.
   Less-/non-mainstream: William Milberg, dean and professor of economics at the New School for Social Research and the author of The Crisis of Vision in Modern Economic Thought (1996).
7. Mainstream: Irving Fisher (1867-1947), professor of political economy at Yale University.
   Less-/non-mainstream: John Kenneth Galbraith (1908-2006), professor of economics at Harvard University and the author of The New Industrial State (1947).
8. Mainstream: Amartya Sen, professor of economics and philosophy at Harvard University and the author of Development as Freedom (1999).
   Less-/non-mainstream: Michael Sandel, American political philosopher and professor of government at Harvard University, and the author of What Money Can't Buy: The Moral Limits of Markets (2012).
9. Mainstream: John Stuart Mill (1806-1873), an English philosopher, political economist, and the author of On Liberty (1859).
   Less-/non-mainstream: Friedrich Engels (1820-1895), a German philosopher and the co-author of The Communist Manifesto (1848).
10. Mainstream: Larry Summers, professor of economics and president emeritus at Harvard University.
    Less-/non-mainstream: Yanis Varoufakis, Greek economist who also served as the Greek Minister of Finance (from January to July 2015, when he resigned), and the author of And the Weak Suffer What They Must? Europe's Crisis, America's Economic Future.
11. Mainstream: John Maynard Keynes (1883-1946), professor of economics at Cambridge and the author of The General Theory of Employment, Interest and Money (1936).
    Less-/non-mainstream: Kenneth Arrow, professor of economics at Stanford University and the 1972 recipient of the Nobel Prize in Economics.
12. Mainstream: Paul Romer, professor of economics at New York University and the author of The Troubles with Macroeconomics (forthcoming in the American Economic Review).
    Less-/non-mainstream: Steve Keen, post-Keynesian professor of economics at Kingston University (UK) and the author of Debunking Economics: The Naked Emperor Dethroned? (2011).
13. Mainstream: Adam Smith.
    Less-/non-mainstream: Karl Marx.
14. Mainstream: Richard Thaler, professor of behavioural science and economics at the University of Chicago Booth School of Business and the author of Misbehaving: The Making of Behavioural Economics (2015).
    Less-/non-mainstream: Gerd Gigerenzer, Director at the Max Planck Institute for Human Development, former professor of psychology at the University of Chicago, and the author of Gut Feelings: The Intelligence of the Unconscious (2007).
15. Mainstream: Dani Rodrik, professor of international political economy at Harvard University and the author of The Globalization Paradox: Democracy and the Future of the World Economy (2012).
    Less-/non-mainstream: Anwar Shaikh, professor of economics at the New School for Social Research (New York) and the author of Capitalism: Competition, Conflict, Crises (2016).
Note: Our dichotomization of the sources into “mainstream” and “less-/non-mainstream” is meant to simplify and summarize
the relative ideological differences between sources, even though we believe these differences are more appropriately
understood as a continuum rather than a dichotomy. Of course, it is well-understood that this classification does not readily
apply to some sources, such as older ones (e.g. Marx or Engels) or sources from other disciplines (e.g. Sandel or Freud) in the
same way it applies to others. However, to remain consistent and to avoid confusion for the reader, we stick to the same naming
convention for all sources.
Table 2: OLS Estimated Treatment Effects
                                             (1)                 (2)                 (3)                 (4)
A: In Units of Agreement Level
Treatment 1 (less-/non-mainstream source)    -0.264*** (0.014)   -0.261*** (0.014)   -0.262*** (0.014)   -0.268*** (0.014)
Treatment 2 (no source)                      -0.415*** (0.015)   -0.404*** (0.015)   -0.406*** (0.015)        †
B: In Units of Standard Deviation
Treatment 1 (less-/non-mainstream source)    -0.223*** (0.012)   -0.220*** (0.012)   -0.221*** (0.012)   -0.226*** (0.012)
Treatment 2 (no source)                      -0.350*** (0.012)   -0.341*** (0.012)   -0.343*** (0.012)        †
P-value: Treatment 1 = Treatment 2            0.000               0.000               0.000               NA
Controls                                      No                  Yes                 No                  No
More Controls                                 No                  No                  Yes                 No
Fixed Person Effects                          No                  No                  No                  Yes
Number of observations                        36375               36375               36375               25185
Note: Omitted category is receiving a mainstream source. Heteroskedasticity-robust standard
errors are reported in parentheses. The dependent variable is agreement level on a scale
from 1 (strongly disagree) to 5 (strongly agree). For panel (B), the dependent variable
is standardized to have mean zero and standard deviation of one. The average agreement
level in our sample is 3.35 with standard deviation of 1.185. Significance levels: *** <
1%, ** < 5%, * < 10%.
Controls include: gender, PhD completion cohort, current status, country, and research
area. More Controls include all the previously listed variables as well as age cohort,
country/region of birth, English proficiency, department of affiliation, and country
where PhD was completed.
† We cannot identify the effect of treatment 2 in models with individual fixed effects since
those who are sorted into treatment 2 receive all statements without a source and therefore
there is no variation in treatment within a person and across statements. We therefore exclude
these participants from the fixed effects model.
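As a quick consistency check on how panel (B) relates to panel (A): the note reports a sample standard
deviation of the agreement level of 1.185, and dividing the panel (A) estimates by 1.185 approximately
reproduces the standardized estimates in panel (B), e.g.

$$
\frac{-0.415}{1.185} \approx -0.350,
\qquad
\frac{-0.264}{1.185} \approx -0.223 .
$$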
Table 3: Ordered Logit Estimates of Treatment Effects
Outcome:                                     Strongly disagree   Disagree           Neutral            Agree               Strongly agree
Panel A: Without Controls
Predicted probability of outcome,
  control group (mainstream source)          0.050*** (0.001)    0.168*** (0.002)   0.166*** (0.002)   0.403*** (0.002)    0.212*** (0.003)
Difference in predicted probability,
  mainstream vs. less-/non-mainstream        0.022*** (0.001)    0.050*** (0.003)   0.021*** (0.001)   -0.036*** (0.002)   -0.057*** (0.003)
Difference in predicted probability,
  mainstream vs. no source                   0.039*** (0.001)    0.083*** (0.003)   0.029*** (0.001)   -0.067*** (0.002)   -0.085*** (0.003)
Panel B: With Controls
Predicted probability of outcome,
  control group (mainstream source)          0.048*** (0.001)    0.166*** (0.002)   0.169*** (0.002)   0.411*** (0.002)    0.206*** (0.002)
Difference in predicted probability,
  mainstream vs. less-/non-mainstream        0.021*** (0.001)    0.051*** (0.003)   0.022*** (0.001)   -0.038*** (0.002)   -0.056*** (0.003)
Difference in predicted probability,
  mainstream vs. no source                   0.037*** (0.001)    0.083*** (0.003)   0.030*** (0.001)   -0.068*** (0.002)   -0.082*** (0.003)
Number of observations                       36375 in all columns
Note: Robust standard errors are reported in parentheses. Significance levels: *** < 1%, ** < 5%, * < 10%.
The dependent variable is agreement level on a scale from 1 (strongly disagree) to 5 (strongly agree).
Controls include: gender, PhD completion cohort, current status, country, research area.
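The quantities in Table 3 can be illustrated with a minimal sketch in Python (this is not the code used
for the paper): fit an ordered logit of the 1-5 agreement level on treatment indicators and compare the
average predicted probability of each agreement category under control and treatment. The variable
names and the simulated data below are purely illustrative assumptions.

    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "treat1": rng.integers(0, 2, n),  # 1 = less-/non-mainstream source (illustrative assignment)
        "treat2": rng.integers(0, 2, n),  # 1 = no source (illustrative assignment)
    })
    # Simulate a 1-5 agreement outcome from a latent-variable ordered-logit process.
    latent = -0.3 * df["treat1"] - 0.5 * df["treat2"] + rng.logistic(size=n)
    df["agreement"] = pd.cut(latent, bins=[-np.inf, -2.0, -1.0, 0.5, 2.0, np.inf],
                             labels=[1, 2, 3, 4, 5]).astype(int)

    # Ordered logit of agreement on the treatment indicators (no constant: thresholds play that role).
    result = OrderedModel(df["agreement"], df[["treat1", "treat2"]], distr="logit").fit(
        method="bfgs", disp=False)

    # Average predicted probability of each agreement category: control vs. treatment 1.
    x_control = pd.DataFrame({"treat1": np.zeros(n), "treat2": np.zeros(n)})
    x_treat1 = pd.DataFrame({"treat1": np.ones(n), "treat2": np.zeros(n)})
    p_control = np.asarray(result.predict(x_control)).mean(axis=0)
    p_treat1 = np.asarray(result.predict(x_treat1)).mean(axis=0)
    print("control group probabilities:       ", np.round(p_control, 3))
    print("difference (control - treatment 1):", np.round(p_control - p_treat1, 3))

Standard errors for these differences, as reported in the table, would require an additional step such as
the delta method or a bootstrap.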
Table 4: OLS Estimates of Differences in Confidence Level
                                             (1)               (2)
A: In Units of Confidence Level
Treatment 1 (less-/non-mainstream source)     0.005 (0.011)     0.008 (0.010)
Treatment 2 (no source)                      -0.019 (0.012)
B: In Units of Standard Deviation
Treatment 1 (less-/non-mainstream source)     0.006 (0.012)     0.009 (0.011)
Treatment 2 (no source)                      -0.020 (0.013)
P-value: Treatment 1 = Treatment 2            0.037             NA
Controls                                      Yes               No
Fixed Person Effects                          No                Yes
Number of observations                        36088             24984
Note: Omitted category is Control Group (i.e. mainstream source).
Heteroskedasticity-robust standard errors are reported in
parentheses. The dependent variable is confidence level with
evaluation on a scale from 1 (least confident) to 5 (most
confident). For panel (B), the dependent variable is standardized
to have mean zero and standard deviation of one. The average
confidence level in our sample is 3.93 with standard deviation
of 0.928. Since reporting the confidence level was voluntary in our
survey, compared to the agreement level regressions we lose a small
number of observations where the confidence level is not reported.
Significance levels: *** < 1%, ** < 5%, * < 10%.
Controls include: gender, PhD completion cohort, current status,
country, research area.
† We cannot identify the effect of treatment 2 in fixed effects model
since those who are sorted into this group receive all statements
without a source and therefore there is no variation in treatment
within a person and across statements. We therefore exclude these
participants from the fixed effects model.
Table 5: OLS Estimated Treatment Effects, By Expertise
                                        (1) Control group    (2) Treatment 1      (3) Treatment 2
Expert                                  -0.002 (0.029)       -0.227*** (0.0184)   -0.344*** (0.0193)
Non-Expert                                                   -0.214*** (0.0160)   -0.338*** (0.0168)
P-value: equality of coefficients                             0.580                0.883
F-statistic: equality of coefficients                         0.30                 0.02
Number of observations                  36375
Note: Control group refers to receiving a mainstream source. Treatment 1 refers to
receiving a less-/non-mainstream source. Treatment 2 refers to receiving no source.
Omitted category is expert & control group. Expert is an indicator that is equal to 1
if a participant’s reported area of research is related to the area of an evaluated
statement and zero otherwise. See Table A14 in the Online Appendix for more details.
Heteroskedasticity-robust standard errors are reported in parentheses. The
dependent variable is agreement level on a scale from 1 (strongly disagree) to 5
(strongly agree) and is z-normalized. Significance levels: *** < 1%, ** < 5%,
* < 10%.
Controls include: PhD completion cohort, current status, country, research area.
Table 6: OLS Estimated Treatment Effects, By Political Orientation
Main results (author-created categories)
                                        (1) Control group    (2) Treatment 1      (3) Treatment 2
Far Left                                                     -0.046* (0.024)      -0.325*** (0.0280)
Left                                    -0.241*** (0.0217)   -0.229*** (0.018)    -0.330*** (0.0189)
Center                                  -0.408*** (0.025)    -0.280*** (0.026)    -0.402*** (0.0289)
Right                                   -0.564*** (0.028)    -0.319*** (0.032)    -0.337*** (0.0324)
Far Right                               -0.607*** (0.046)    -0.358*** (0.061)    -0.388*** (0.0648)
P-value of equality                      0.000                0.000                0.236
F-statistic of equality                 70.94                17.27                 1.38
Number of observations                  36315
Note: Control group refers to receiving a mainstream source. Treatment 1 refers to receiving
a less-/non-mainstream source. Treatment 2 refers to receiving no source. Omitted category
is Far Left & control group. Heteroskedasticity-robust standard errors are reported in
parentheses. The dependent variable is agreement level on a scale from 1 (strongly
disagree) to 5 (strongly agree) and is z-normalized. Political orientation is self-
reported by participants on a scale from -10 (far left) to 10 (far right). Significance
levels: *** < 1%, ** < 5%, * < 10%. Controls include: gender, PhD completion
cohort, current status, country, research area.
For Columns (1) to (3), we use self-reported political orientation to group participants
into 5 categories: Far left = [-10 -7], Left = [-6 -2], Centre = [-1 1], Right = [2 6],
Far Right = [7 10]. Results reported in Columns (4) to (9) are for robustness check.
For results reported in columns (4) to (6), we create the five political groups using the
quintiles of political orientation distribution. For results reported in columns (7) to
(9), we create the five political groups using the quintiles of the adjusted political
orientation distribution. Adjusted political orientation measure is created by running
a regression of self-reported political orientation on a series of indicators based on
questions asked from participants to identify their political typology. See Table A13
in our online appendix for more details.
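A minimal sketch (not the code used for the paper) of how the three groupings described in this note
could be constructed; the column names (pol for the -10 to 10 self-reported orientation, typology_*
for the typology indicators) and the use of fitted values for the adjusted measure are our assumptions.

    import pandas as pd
    import statsmodels.api as sm

    def political_groups(df: pd.DataFrame) -> pd.DataFrame:
        """Add the three political groupings described in the note to df."""
        labels = ["Far Left", "Left", "Center", "Right", "Far Right"]

        # Columns (1)-(3): author-created cut points on the -10..10 self-reported scale.
        bins = [-10.5, -6.5, -1.5, 1.5, 6.5, 10.5]
        df["group_author"] = pd.cut(df["pol"], bins=bins, labels=labels)

        # Columns (4)-(6): quintiles of the self-reported political orientation distribution
        # (assumes enough variation for five distinct quintile edges).
        df["group_quintile"] = pd.qcut(df["pol"], q=5, labels=labels)

        # Columns (7)-(9): quintiles of an "adjusted" orientation, here taken to be the fitted
        # values from regressing self-reported orientation on the political-typology indicators.
        typology_cols = [c for c in df.columns if c.startswith("typology_")]
        fitted = sm.OLS(df["pol"], sm.add_constant(df[typology_cols])).fit().fittedvalues
        df["group_adjusted"] = pd.qcut(fitted, q=5, labels=labels)
        return df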
Table 7: OLS Estimated Treatment Effects, By Gender
                                        (1) Control group    (2) Treatment 1      (3) Treatment 2
Male                                                         -0.244*** (0.013)    -0.338*** (0.014)
Female                                   0.0633*** (0.0197)  -0.137*** (0.0248)   -0.353*** (0.027)
P-value: equality of coefficients                             0.000                0.638
F-statistic: equality of coefficients                        14.13                 0.22
Number of observations                  36375
Note: Control group refers to receiving a mainstream source. Treatment 1 refers to
receiving a less-/non-mainstream source. Treatment 2 refers to receiving no source.
Omitted category is male & control group. Heteroskedasticity-robust standard errors
are reported in parentheses. The dependent variable is agreement level on a scale
from 1 (strongly disagree) to 5 (strongly agree) and is z-normalized.
Significance levels: *** < 1%, ** < 5%, * < 10%.
Controls include: PhD completion cohort, current status, country, research area.
Table 8: OLS Estimated Treatment Effects, By Country
                                        (1) Control group    (2) Treatment 1      (3) Treatment 2
Australia                                                    -0.325*** (0.055)    -0.536*** (0.059)
Austria                                 -0.201** (0.081)      0.021 (0.103)       -0.079 (0.101)
Brazil                                  -0.090 (0.074)        0.017 (0.086)        0.015 (0.108)
Canada                                  -0.012 (0.045)       -0.282*** (0.034)    -0.396*** (0.037)
France                                   0.195*** (0.047)    -0.217*** (0.040)    -0.358*** (0.041)
Germany                                  0.005 (0.055)       -0.178*** (0.060)    -0.233*** (0.064)
Ireland                                  0.010 (0.112)       -0.440*** (0.148)    -0.428*** (0.151)
Italy                                    0.118** (0.047)     -0.113*** (0.038)    -0.237*** (0.042)
Japan                                    0.037 (0.060)       -0.353*** (0.072)    -0.367*** (0.072)
Netherlands                             -0.087 (0.065)       -0.249*** (0.076)    -0.125* (0.075)
New Zealand                             -0.079 (0.070)       -0.237*** (0.082)    -0.356*** (0.087)
Scandinavia                              0.004 (0.048)       -0.295*** (0.044)    -0.385*** (0.048)
South Africa                             0.254*** (0.081)    -0.118 (0.107)       -0.330*** (0.097)
Switzerland                              0.073 (0.077)       -0.293*** (0.099)    -0.455*** (0.096)
UK                                       0.012 (0.051)       -0.221*** (0.050)    -0.378*** (0.050)
US                                      -0.082** (0.040)     -0.214*** (0.020)    -0.349*** (0.021)
P-value: equality of coefficients        0.000                0.000                0.000
F-statistic: equality of coefficients    9.07                 2.53                 3.40
Number of observations                  36375
Note: Control group refers to receiving a mainstream source. Treatment 1 refers to
receiving a less-/non-mainstream source. Treatment 2 refers to receiving no source.
Omitted category is Australia & control group. Heteroskedasticity-robust standard
errors are reported in parentheses. The dependent variable is agreement level on a
scale from 1 (strongly disagree) to 5 (strongly agree) and is z-normalized.
Significance levels: *** < 1%, ** < 5%, * < 10%.
Controls include: gender, PhD completion cohort, current status, research area.
Table 9: OLS Estimated Treatment Effects, By Country/Region Where PhD Was Completed
                                                            (1) Control group    (2) Treatment 1      (3) Treatment 2
Africa                                                                           -0.095 (0.118)       -0.280** (0.117)
Asia                                                         0.0230 (0.115)      -0.390*** (0.097)    -0.360*** (0.091)
Canada                                                       0.0437 (0.101)      -0.316*** (0.045)    -0.464*** (0.051)
Europe 1 (France, Belgium)                                   0.0966 (0.102)      -0.159*** (0.039)    -0.254*** (0.040)
Europe 2 (Germany, Austria, Netherlands,
  Switzerland, Luxembourg)                                  -0.0180 (0.101)      -0.198*** (0.042)    -0.264*** (0.040)
Europe 3 (Italy, Spain, Portugal)                           -0.0148 (0.101)      -0.109** (0.045)     -0.216*** (0.052)
Europe 4 (Denmark, Finland, Norway, Sweden)                  0.0936 (0.103)      -0.300*** (0.056)    -0.444*** (0.058)
Europe 5 (UK, Ireland)                                       0.0351 (0.0992)     -0.177*** (0.045)    -0.331*** (0.046)
Not Applicable                                              -0.128 (0.109)       -0.182*** (0.043)    -0.373*** (0.044)
Oceania                                                     -0.0232 (0.110)      -0.186** (0.080)     -0.329*** (0.079)
Other                                                       -0.273* (0.152)      -0.095 (0.197)       -0.967*** (0.227)
South America                                                0.133 (0.142)        0.013 (0.113)       -0.041 (0.128)
United States                                               -0.0578 (0.0957)     -0.251*** (0.018)    -0.372*** (0.019)
P-value: equality of coefficients                            0.000                0.004                0.000
F-statistic: equality of coefficients                        3.48                 2.40                 3.25
Number of observations                                      36375
Note: Control group refers to receiving a mainstream source. Treatment 1 refers to receiving a less-
/non-mainstream source. Treatment 2 refers to receiving no source. Omitted category is Africa &
control group. Heteroskedasticity-robust standard errors are reported in parentheses. The
dependent variable is agreement level on a scale from 1 (strongly disagree) to 5 (strongly agree)
and is z-normalized. Significance levels: *** < 1%, ** < 5%, * < 10%.
Controls include: gender, PhD completion cohort, current status, country, research area.
“Other” category includes Central America, Eastern Europe, Rest of Europe, Middle East, The
Caribbean. Due to very small cell size for these countries/regions (135 observations in total),
we have put them all in one category.
Table 10: OLS Estimated Treatment Effects, By Research Area
                                                            (1) Control group    (2) Treatment 1      (3) Treatment 2
Teaching                                                                         -0.138** (0.060)     -0.398*** (0.059)
History of Thought, Methodology,
  Heterodox Approaches                                       0.159*** (0.052)    -0.106** (0.047)     -0.224*** (0.051)
Mathematical and Quantitative Methods                       -0.115** (0.052)     -0.256*** (0.046)    -0.298*** (0.048)
Microeconomics                                              -0.161*** (0.050)    -0.229*** (0.043)    -0.379*** (0.043)
Macroeconomics and Monetary Economics                       -0.125*** (0.047)    -0.333*** (0.036)    -0.197*** (0.037)
International Economics                                     -0.031 (0.050)       -0.265*** (0.044)    -0.489*** (0.049)
Financial Economics                                         -0.143** (0.059)     -0.263*** (0.063)    -0.271*** (0.060)
Public Economics                                            -0.088* (0.052)      -0.310*** (0.046)    -0.323*** (0.049)
Health, Education, and Welfare                               0.027 (0.052)       -0.227*** (0.048)    -0.485*** (0.055)
Labor and Demographic Economics                             -0.028 (0.047)       -0.208*** (0.036)    -0.359*** (0.039)
Law and Economics                                            0.006 (0.083)       -0.237** (0.112)     -0.412*** (0.122)
Industrial Organization                                     -0.094* (0.054)      -0.246*** (0.053)    -0.326*** (0.058)
Economic Development, Innovation,
  Technological Change                                       0.080 (0.050)       -0.152*** (0.042)    -0.504*** (0.044)
Agricultural and Natural Resource Economics                  0.000 (0.050)       -0.167*** (0.042)    -0.363*** (0.044)
Urban, Rural, Regional, Real Estate,
  and Transportation Economics                              -0.054 (0.068)       -0.124 (0.078)       -0.329*** (0.079)
Cultural Economics, Economic Sociology,
  Economic Anthropology                                      0.104 (0.110)       -0.071 (0.160)        0.087 (0.169)
Business Administration, Marketing, Accounting               0.290*** (0.075)    -0.223** (0.096)     -0.465*** (0.112)
Other                                                       -0.002 (0.065)       -0.039 (0.073)       -0.025 (0.072)
P-value: equality of coefficients                            0.000                0.004                0.000
F-statistic: equality of coefficients                        7.82                 2.12                 4.77
Number of observations                                      36375
Note: Control group refers to receiving a mainstream source. Treatment 1 refers to receiving a less-/non-
mainstream source. Treatment 2 refers to receiving no source. Omitted category is Teaching & control
group. Heteroskedasticity-robust standard errors are reported in parentheses. The dependent
variable is agreement level on a scale from 1 (strongly disagree) to 5 (strongly agree) and is z-
normalized. Significance levels: *** < 1%, ** < 5%, * < 10%.
Controls include: gender, PhD completion cohort, current status, country.
Table 11: OLS Estimated Treatment Effects, By Undergraduate Major
                                                            (1) Control group    (2) Treatment 1      (3) Treatment 2
Other Social Sciences
  (Anthropology, Sociology, Psychology)                                          -0.062 (0.104)       -0.232* (0.122)
Business, Management                                        -0.036 (0.083)       -0.218*** (0.049)    -0.257*** (0.053)
Biology, Chemistry, Physics                                 -0.004 (0.094)       -0.141* (0.080)      -0.338*** (0.087)
Computer Science, Engineering                               -0.086 (0.095)       -0.147* (0.081)      -0.386*** (0.084)
Earth and space sciences, Geogr